Everything posted by Nytro
-
Packets in Packets: Orson Welles’ In-Band Signaling Attacks for Modern Radios
Travis Goodspeed (University of Pennsylvania), Sergey Bratus, Ricky Melgares, Rebecca Shapiro, Ryan Speers (Dartmouth College)
Abstract: Here we present methods for injecting raw frames at Layer 1 from within upper-layer protocols by abuse of in-band signaling mechanisms common to most digital radio protocols. This packet piggy-backing technique allows attackers to hide malicious packets inside packets that are permitted on the network. When these carefully crafted Packets-in-Packets (PIPs) traverse a wireless network, a bit error in the outer frame will cause the inner frame to be interpreted instead. This allows an attacker to evade firewalls, intrusion detection/prevention systems, user-land networking restrictions, and other such defenses. As packets are constructed using interior fields of higher networking layers, the attacker only needs the authority to send cleartext data over the air, even if it is wrapped within several networking layers. This paper includes tested examples of raw frame injection for IEEE 802.15.4 and 2-FSK radios. Additionally, implementation complications are described for 802.11 and a variety of other modern radios. Finally, we present suggestions for how this technique might be extended from wireless radio protocols to Ethernet and other wired links.
Download: http://static.usenix.org/events/woot11/tech/final_files/Goodspeed.pdf
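The core trick is easy to see in code. Below is a minimal C sketch (an illustrative reconstruction, not code from the paper): the attacker embeds a complete frame, including its own preamble and start-of-frame delimiter, inside the payload of an outer frame, so a receiver that loses sync on a damaged outer header re-synchronizes on the embedded preamble and decodes the inner bytes as a fresh Layer-1 frame.
[code]
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Illustrative packet-in-packet construction for an 802.15.4-style PHY:
 * 4-byte zero preamble, 0xA7 start-of-frame delimiter, 1-byte length.
 * The outer frame fields are simplified for clarity. */
int main(void) {
    uint8_t inner[] = { 0x00, 0x00, 0x00, 0x00, 0xA7,   /* preamble + SFD */
                        0x05,                           /* inner length   */
                        0xDE, 0xAD, 0xBE, 0xEF, 0x99 }; /* inner payload  */

    uint8_t outer[64];
    size_t n = 0;
    outer[n++] = 0x41; outer[n++] = 0x88;    /* simplified outer MAC header */
    memcpy(outer + n, inner, sizeof(inner)); /* attacker-controlled payload */
    n += sizeof(inner);

    /* If a bit error destroys the OUTER sync header, the radio keeps
     * hunting for a preamble, finds the embedded one, and delivers the
     * inner bytes as a brand-new frame at Layer 1. */
    for (size_t i = 0; i < n; i++)
        printf("%02x ", outer[i]);
    printf("\n");
    return 0;
}
[/code]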
-
CVE-2012-0769, the case of the perfect info leak
Author: Fermin J. Serna - fjserna gmail.com | fjserna google.com - @fjserna
URL: http://zhodiac.hispahack.com/my-stuff/security/Flash_ASLR_bypass.pdf
Code: http://zhodiac.hispahack.com/my-stuff/security/InfoLeak.as
SWF: http://zhodiac.hispahack.com/my-stuff/security/InfoLeak.swf
Date: 23/Feb/2012
TL;DR: Flash is vulnerable to a reliable info leak that allows ASLR to be bypassed, making exploitation of other vulnerabilities in browsers, Acrobat Reader, MS Office and any other process that can host Flash trivial, like in the old days when no security mitigations were available. Patch immediately.
1. Introduction
Unless you use wget and vi to download and parse web content, the odds are high that you may be exposed to a vulnerability that will render useless nearly all the security mitigations developed in recent years. Nowadays, security relies heavily on exploitation mitigation technologies. Over the past years there has been significant investment in the development of several mechanisms such as ASLR, DEP/NX, SEHOP, heap metadata obfuscation, etc. The main goal of these is to decrease the exploitability of a vulnerability. The key component of this strategy is ASLR (Address Space Layout Randomization) [1]. Most other mitigation techniques depend on the operation of ASLR. Without it, and based on previous research from the security industry: DEP can be defeated with return-to-libc or ROP gadget chaining, SEHOP can be defeated by constructing a valid chain, and so on. Put simply, if you defeat ASLR, we are going to party like it’s 1999. And this is what happened: a vulnerability was found in Adobe’s Flash player (according to Adobe [2], installed on 99% of user computers) that, with some magic explained later, resulted in a multiplatform, highly stable and highly efficient info leak that could be combined with any other vulnerability for trivial exploitation. This vulnerability, CVE-2012-0769, along with another one that my colleague Tavis Ormandy found, was patched in version 11.1.102.63 [3], released on 05/Mar/2012. According to Adobe, all versions earlier than 11.1.102.63 are affected by this vulnerability. Flash users can check their current version and the latest available one at Adobe’s website [4]. Download: http://zhodiac.hispahack.com/my-stuff/security/Flash_ASLR_bypass.pdf
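To see why a single leaked pointer is enough to neutralize ASLR, consider this generic C sketch of the rebasing arithmetic (an editor's illustration with made-up offsets, not the actual Flash exploit): ASLR randomizes only the module base, while offsets inside the module stay fixed.
[code]
#include <stdint.h>
#include <stdio.h>

/* Generic illustration: deriving ROP gadget addresses from one leaked
 * pointer. All offsets here are hypothetical; a real exploit reads them
 * from the target module on disk. */
int main(void) {
    uintptr_t leaked_func   = 0x6a2b1430; /* address leaked at runtime        */
    uintptr_t func_offset   = 0x0001430;  /* its fixed offset in the module   */
    uintptr_t gadget_offset = 0x0024e11;  /* fixed offset of a pop/ret gadget */

    uintptr_t module_base = leaked_func - func_offset;
    uintptr_t gadget      = module_base + gadget_offset;

    /* One leak pins down every byte of the module's layout. */
    printf("base=%#lx gadget=%#lx\n",
           (unsigned long)module_base, (unsigned long)gadget);
    return 0;
}
[/code]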
-
The story of CVE-2011-2018 exploitation Mateusz “j00ru” Jurczyk February - April 2012 Abstract Exploitation of Windows kernel vulnerabilities is recently drawing more and more attention, as observed in both monthly Microsoft advisories and technical talks presented at public security events. One of the most recent security flaws fixed in the Windows kernel was CVE-2011-2018, a vulnerability which could potentially allow a local attacker to execute arbitrary code with system privileges. The problem affected all - and only - 32-bit editions of the Windows NT-family line, up to Windows 8 Developer Preview. In this article, I present how certain novel exploitation techniques can be used on different Windows platforms to reach an elevation of privileges through this specific kernel vulnerability. Download: j00ru.vexillium.org/blog/20_05_12/cve_2011_2018.pdf
-
[h=3]A Tale Of Two Pwnies (Part 2)[/h]Monday, June 11, 2012 When we wrapped up our recent Pwnium event, we praised the creativity of the submissions and resolved to provide write-ups on how the two exploits worked. We already covered Pinkie Pie’s submission in a recent post, and this post will summarize the other winning Pwnium submission: an amazing multi-step exploit from frequent Chromium Security Reward winner Sergey Glazunov. From the start, one thing that impressed us about this exploit was that it involved no memory corruption at all. It was based on a so-called “Universal Cross-Site Scripting” (or UXSS) bug. The UXSS bug in question (117226) was complicated and actually involved two distinct bugs: a state corruption and an inappropriate firing of events. Individually there was a possible use-after-free condition, but the exploit -- perhaps because of various memory corruption mitigations present in Chromium -- took the route of combining the two bugs to form a “High” severity UXSS bug. However, a Pwnium prize requires demonstrating something “Critical”: a persistent attack against the local user’s account. A UXSS bug alone cannot achieve that. So how was this UXSS bug abused more creatively? To understand Sergey’s exploit, it’s important to know that Chromium implements some of its built-in functions using special HTML pages (called WebUI), hosted at origins such as chrome://about. These pages have access to privileged JavaScript APIs. Of course, a normal web page or web renderer process cannot just iframe or open a chrome:// URL due to strict separation between http:// and chrome:// URLs. However, Sergey discovered that iframing an invalid chrome-extension:// resource would internally host an error page in the chrome://chromewebdata origin (117230). Furthermore, this error page was one of the few internal pages that did not have a Content Security Policy (CSP) applied. A CSP would have blocked the UXSS bug in this context. At this point, multiple distinct issues had been abused to gain JavaScript execution in the chrome://chromewebdata origin. The exploit still had a long way to go, though, as there are plenty of additional barriers:
1. chrome://chromewebdata does not have any sensitive APIs associated with it.
2. chrome://a is not same-origin with chrome://b.
3. chrome://* origins only have privileges when the backing process is tagged as privileged by the browser process, and this tagging only happens as a result of a top-level navigation to a chrome:// URL.
4. The sensitive chrome://* pages generally have CSPs applied that prevent the UXSS bug in question.
The exploit became extremely creative at this point. To get around the defenses, the compromised chrome://chromewebdata origin opened a window to chrome://net-internals, which had an iframe in its structure. Another WebKit bug -- the ability to replace a cross-origin iframe (117583) -- was used to run script that navigated the popped-up window, simply “back” to chrome://net-internals (117417). This caused the browser to reassess the chrome://net-internals URL as a top-level navigation -- granting limited WebUI permissions to the backing process as a side-effect (117418). The exploit was still far from done. It was now running JavaScript inside an iframe inside a process with limited WebUI permissions. It then popped up an about:blank window and abused another bug (118467) -- this time in the JavaScript bindings -- to confuse the top-level chrome://net-internals page into believing that the new blank window was a direct child.
The blank window could then navigate its new “parent” without losing privileges (113496). The first navigation was to chrome://downloads, which gained access to additional privileged APIs. You probably get a sense of where the exploit was headed now. It finished off by abusing privileged JavaScript APIs to download an attack DLL. The same APIs were used to cleverly “download” and run wordpad.exe from the local disk (thus avoiding the system-level prompt for executing downloads from the internet zone). A design quirk of the Windows operating system caused the attack DLL to be loaded into the trusted executable. As you can imagine, it took us some time to dissect all of this. Distilling the details into a blog post was a further challenge; we’ve glossed over the use of the UXSS bug to bypass pop-up window restrictions. The UXSS bug was actually used three separate times in the exploit. We also omitted details of various other lockdowns we applied in response to the exploit chain. What’s clear is that Sergey certainly earned his $60k Pwnium reward. He chained together a whopping 14 bugs, quirks and missed hardening opportunities. Looking beyond the monetary prize, Sergey has helped make Chromium significantly safer. Besides fixing the array of bugs, we’ve landed hardening measures that will make it much tougher to abuse chrome:// WebUI pages in the future. Posted by Ken Buchanan, Chris Evans, Charlie Reis and Tom Sepez, Software Engineers Sursa: Chromium Blog: A Tale Of Two Pwnies (Part 2)
-
[h=3]A Tale of Two Pwnies (Part 1)[/h]Tuesday, May 22, 2012 Just over two months ago, Chrome sponsored the Pwnium browser hacking competition. We had two fantastic submissions, and successfully blocked both exploits within 24 hours of their unveiling. Today, we’d like to offer an inside look into the exploit submitted by Pinkie Pie. So, how does one get full remote code execution in Chrome? In the case of Pinkie Pie’s exploit, it took a chain of six different bugs in order to successfully break out of the Chrome sandbox. Pinkie’s first bug (117620) used Chrome’s prerendering feature to load a Native Client module on a web page. Prerendering is a performance optimization that lets a site provide hints for Chrome to fetch and render a page before the user navigates to it, making page loads seem instantaneous. To avoid sound and other nuisances from preloaded pages, the prerenderer blocks plug-ins from running until the user chooses to navigate to the page. Pinkie discovered that navigating to a pre-rendered page would inadvertently run all plug-ins—even Native Client plug-ins, which are otherwise permitted only for installed extensions and apps. Of course, getting a Native Client plug-in to execute doesn’t buy much, because the Native Client process’ sandbox is even more restrictive than Chrome’s sandbox for HTML content. What Native Client does provide, however, is a low-level interface to the GPU command buffers, which are used to communicate accelerated graphics operations to the GPU process. This allowed Pinkie to craft a special command buffer to exploit the following integer underflow bug (117656) in the GPU command decoding:
[code]
static uint32 ComputeMaxResults(size_t size_of_buffer) {
  return (size_of_buffer - sizeof(uint32)) / sizeof(T);
}
[/code]
The issue here is that if size_of_buffer is smaller than sizeof(uint32), the result would be a huge value, which was then used as input to the following function:
[code]
static size_t ComputeSize(size_t num_results) {
  return sizeof(T) * num_results + sizeof(uint32);
}
[/code]
This calculation then overflowed and made the result of this function zero, instead of a value at least equal to sizeof(uint32). Using this, Pinkie was able to write eight bytes of his choice past the end of his buffer. The buffer in this case is one of the GPU transfer buffers, which are mapped in both processes’ address spaces and used to transfer data between the Native Client and GPU processes. The Windows allocator places the buffers at relatively predictable locations; and the Native Client process can directly control their size as well as certain object allocation ordering. So, this afforded quite a bit of control over exactly where an overwrite would occur in the GPU process. The next thing Pinkie needed was a target that met two criteria: it had to be positioned within range of his overwrite, and the first eight bytes needed to be something worth changing. For this, he used the GPU buckets, which are another IPC primitive exposed from the GPU process to the Native Client process. The buckets are implemented as a tree structure, with the first eight bytes containing pointers to other nodes in the tree. By overwriting the first eight bytes of a bucket, Pinkie was able to point it to a fake tree structure he created in one of his transfer buffers. Using that fake tree, Pinkie could read and write arbitrary addresses in the GPU process. Combined with some predictable addresses in Windows, this allowed him to build a ROP chain and execute arbitrary code inside the GPU process.
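The wrap-around is easy to verify in a standalone program. Below is our own reconstruction of the two helpers above (with sizeof(T) and sizeof(uint32) both fixed at 4, written as explicit constants so the 32-bit arithmetic of the affected build is reproduced on any host):
[code]
#include <stdint.h>
#include <stdio.h>

typedef uint32_t uint32;

/* Reconstruction of the two Chromium helpers, specialized to 4-byte
 * results and 32-bit size arithmetic. */
static uint32 ComputeMaxResults(uint32 size_of_buffer) {
    return (size_of_buffer - 4u) / 4u;  /* underflows when buffer < 4 */
}

static uint32 ComputeSize(uint32 num_results) {
    return 4u * num_results + 4u;       /* can wrap back past zero */
}

int main(void) {
    uint32 max = ComputeMaxResults(0);  /* 0 - 4 underflows: 0x3fffffff */
    uint32 sz  = ComputeSize(max);      /* 4 * 0x3fffffff + 4 wraps to 0 */
    printf("max_results=0x%x computed_size=%u\n", max, sz);
    return 0;
}
[/code]
With the computed size wrapped to zero, downstream bounds checks pass trivially, which is how the crafted command buffer could write past the end of the transfer buffer.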
The GPU process is still sandboxed well below a normal user, but it’s not as strongly sandboxed as the Native Client process or the HTML renderer. It has some rights, such as the ability to enumerate and connect to the named pipes used by Chrome’s IPC layer. Normally this wouldn’t be an issue, but Pinkie found that there’s a brief window after Chrome spawns a new renderer where the GPU process could see the renderer’s IPC channel and connect to it first, allowing the GPU process to impersonate the renderer (bug 117627). Even though Chrome’s renderers execute inside a stricter sandbox than the GPU process, there is a special class of renderers that have IPC interfaces with elevated permissions. These renderers are not supposed to be navigable by web content, and are used for things like extensions and settings pages. However, Pinkie found another bug (117417) that allowed an unprivileged renderer to trigger a navigation to one of these privileged renderers, and used it to launch the extension manager. So, all he had to do was jump on the extension manager’s IPC channel before it had a chance to connect. Once he was impersonating the extensions manager, Pinkie used two more bugs to finally break out of the sandbox. The first bug (117715) allowed him to specify a load path for an extension from the extension manager’s renderer, something only the browser should be allowed to do. The second bug (117736) was a failure to prompt for confirmation prior to installing an unpacked NPAPI plug-in extension. With these two bugs Pinkie was able to install and run his own NPAPI plug-in that executed outside the sandbox at full user privilege. So, that’s the long and impressive path Pinkie Pie took to crack Chrome. All the referenced bugs were fixed some time ago, but some are still restricted to ensure our users and Chromium embedders have a chance to update. However, we’ve included links so when we do make the bugs public, anyone can investigate in more detail. In an upcoming post, we’ll explain the details of Sergey Glazunov’s exploit, which relied on roughly 10 distinct bugs. While these issues are already fixed in Chrome, some of them impact a much broader array of products from a range of companies. So, we won’t be posting that part until we’re comfortable that all affected products have had an adequate time to push fixes to their users. Posted by Jorge Lucangeli Obes and Justin Schuh, Software Engineers Sursa: Chromium Blog: A Tale of Two Pwnies (Part 1)
-
[h=1]Reverse-Engineered Irises Look So Real, They Fool Eye-Scanners[/h]By Kim Zetter July 25, 2012 Researchers reverse-engineered iris codes to create synthetic eye images that tricked an iris-recognition system into thinking they were authentic. Can you tell if this is the real image or the synthetic one? All images courtesy of Javier Galbally LAS VEGAS — Remember that scene in Minority Report when the spider robots stalk Tom Cruise to his apartment and scan his iris to identify him? Things could have turned out so much better for Cruise had he been wearing a pair of contact lenses embossed with an image of someone else’s iris. New research being released this week at the Black Hat security conference by academics in Spain and the U.S. may make that possible. The academics have found a way to recreate iris images that match digital iris codes that are stored in databases and used by iris-recognition systems to identify people. The replica images, they say, can trick commercial iris-recognition systems into believing they’re real images and could help someone thwart identification at border crossings or gain entry to secure facilities protected by biometric systems. The work goes a step beyond previous work on iris-recognition systems. Previously, researchers have been able to create wholly synthetic iris images that had all of the characteristics of real iris images — but weren’t connected to real people. The images were able to trick iris-recognition systems into thinking they were real irises, though they couldn’t be used to impersonate a real person. But this is the first time anyone has essentially reverse-engineered iris codes to create iris images that closely match the eye images of real subjects, creating the possibility of stealing someone’s identity through their iris. “The idea is to generate the iris image, and once you have the image you can actually print it and show it to the recognition system, and it will say ‘okay, this is the guy,’” says Javier Galbally, who conducted the research with colleagues at the Biometric Recognition Group-ATVS, at the Universidad Autonoma de Madrid, and researchers at West Virginia University. Or is this? Is this real? Iris-recognition systems are rapidly growing in use around the world by law enforcement agencies and the commercial sector. They’re touted as faster, more sanitary and more accurate than fingerprint systems. Fingerprint systems measure about 20-40 points for matching while iris recognition systems measure about 240 points. Schiphol Airport in the Netherlands allows travelers to enter the country without showing a passport if they participate in its Privium iris recognition program. When travelers enroll in the program, their eyes are scanned to produce binary iris codes that are stored on a Privium card. At the border crossing, the details on the card are matched to a scan taken of the cardholder’s eye to allow the person passage. Since 2004, airports in the United Kingdom have allowed travelers registered in its iris-recognition program to pass through automated border gates without showing a passport, though authorities recently announced they were dropping the program because passengers had trouble properly aligning their eyes with the scanner to get automated gates to open. Google also uses iris scanners, along with other biometric systems, to control access to some of its data centers. And the FBI is currently testing an iris-recognition program on federal prison inmates in 47 states.
Inmate iris scans are stored in a database managed by a private firm named BI2 Technologies and will be part of a program aimed at quickly identifying repeat offenders when they’re arrested as well as suspects who provide false identification. When someone participates in an iris-recognition system, his or her eyes are scanned to create iris codes, which are binary representations of the image. The iris code, which consists of about 5,000 bits of data, is then stored in a database for matching. The iris code is stored instead of the iris image for security and privacy reasons. When that person then later goes before an iris-recognition scanner – to obtain access to a facility, to cross a border or to access a computer, for example – their iris is scanned and measured against the iris code stored in the database to authenticate the person’s identity. It’s long been believed that it wasn’t possible to reconstruct the original iris image from an iris code stored in a database. In fact, BI2 Technologies says on its web site that biometric templates “cannot be reconstructed, decrypted, reverse-engineered or otherwise manipulated to reveal a person’s identity. In short, biometrics can be thought of as a very secure key: Unless a biometric gate is unlocked by using the right key, no one can gain access to a person’s identity.” But the researchers showed that this is not always the case. And this? What about this? Their research involved taking iris codes that had been created from real eye scans as well as synthetic iris images created wholly by computers and modifying the latter until the synthetic images matched real iris images. The researchers used a genetic algorithm to achieve their results. Genetic algorithms are tools that improve results over several iterations of processing data. In this case, the algorithm examined the synthetic images against the iris code and altered the images until it achieved one that would produce a near identical iris code as the original iris image when scanned. “At each iteration it uses the synthetic images of the previous iteration to produce a new set of synthetic iris images that have an iris code which is more similar (than the synthetic images of the previous iteration) to the iris code being reconstructed,” Galbally says. It takes the algorithm between 100-200 iterations to produce an iris image that is “sufficiently similar” to one the researchers are trying to reproduce. Since no two images of the same iris produce the same iris code, iris recognition systems use a “similarity score” to match an image to the iris code. The owner of the scanner can set a threshold that determines how similar an image needs to be to the iris code to call it a match. The genetic algorithm examines the similarity score given by the recognition system after each iteration and then improves the next iteration to obtain a better score. “The genetic algorithm applies four … rules inspired by natural evolution to combine the synthetic iris images of one iteration in such a way … that they produce new and better synthetic iris images in the next generation — the same way that natural species evolve from generation to generation to adapt better to their habitat but in this case it is a little bit faster and we don’t have to wait millions of years, just a few minutes,” Galbally says. Galbally says it takes about 5-10 minutes to produce an iris image that matches an iris code.
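The feedback loop is simple to sketch. The following self-contained C program is our own toy model, not the researchers' code: the "matcher" is reduced to a bit-vector similarity score and the search to a single-candidate hill climb, whereas the actual attack ran a genetic algorithm with a whole population of synthetic images against a commercial matcher.
[code]
#include <stdio.h>
#include <stdlib.h>

/* Toy model of iris-code reconstruction driven only by score feedback. */
#define NBYTES 640        /* ~5,000-bit iris code */
#define THRESHOLD 0.95    /* operator-chosen match threshold (assumed) */

static unsigned char target[NBYTES];  /* enrolled code, hidden from the search */

static double similarity(const unsigned char *cand) {
    int match = 0;
    for (int i = 0; i < NBYTES; i++)
        for (int b = 0; b < 8; b++)
            if ((((cand[i] ^ target[i]) >> b) & 1) == 0) match++;
    return (double)match / (NBYTES * 8);
}

int main(void) {
    unsigned char cand[NBYTES] = {0};
    for (int i = 0; i < NBYTES; i++) target[i] = (unsigned char)rand();

    double best = similarity(cand);
    long queries = 0;
    while (best < THRESHOLD) {
        int i = rand() % NBYTES;
        unsigned char old = cand[i];
        cand[i] ^= (unsigned char)(1 << (rand() % 8)); /* mutate one bit */
        double s = similarity(cand);
        if (s > best) best = s;       /* keep improving mutations */
        else          cand[i] = old;  /* revert harmful ones      */
        queries++;
    }
    printf("reached %.3f similarity after %ld queries\n", best, queries);
    return 0;
}
[/code]
Even this crude single-candidate version converges; the genetic algorithm described above is far more query-efficient because each of its 100-200 iterations evaluates and recombines an entire population of candidates.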
He noted, though, that about 20 percent of the iris codes they attempted to recreate were resistant to the attack. He thinks this may be due to the algorithm settings. Once the researchers perfected the synthetic images, they then scanned them against a commercial iris recognition system, and found that the scanner accepted them as matching iris images more than 80 percent of the time. They tested the images against the VeriEye iris recognition system made by Neurotechnology. VeriEye’s algorithm is licensed to makers of iris-recognition systems and recently ranked among the top four in accuracy out of 86 algorithms tested in a competition by the National Institute of Standards and Technology. A Neurotechnology spokeswoman said there are currently 30-40 products using VeriEye technology and more are in development. The iris codes the researchers used came from the Bio Secure database, a database of multiple kinds of biometric data collected from 1,000 subjects in Europe for research use by academics and others. The synthetic images were obtained from a database developed at West Virginia University. After the researchers had successfully tricked the VeriEye system, they wanted to see how the reconstructed images would fare against real people. So they showed 50 real iris images and 50 images reconstructed from iris codes to two groups of people — those who have expertise in biometrics and those who are untrained in the field. The images tricked the experts only 8 percent of the time, but the non-experts were tricked 35 percent of the time on average, a rate that is very high given there is a 50/50 chance of guessing correctly. It should be noted that even with their high rate of error, the non-expert group still scored better than the VeriEye algorithm. The study assumes that someone conducting this kind of attack would have access to iris codes in the first place. But this might not be so hard to achieve if an attacker can trick someone into having their iris scanned or hacks into a database containing iris codes, such as the one that BI2 Technologies maintains for the FBI. BI2 states on its web site that the iris images in its database are “encrypted using strong cryptographic algorithms to secure and protect them,” but the company could not be reached to obtain details about how exactly it secures these images. Even if BI2’s database is secure, other databases containing iris codes may not be. Solution: The picture at the top of the post is a synthetic iris image. In the first set of images below that, the one on the left is real, the other synthetic. In the second set of images, the one on left is real, the one on right synthetic. And this final one? Authentic. Look hard, and you can even see the contact lens surrounding the iris. Sursa: Reverse-Engineered Irises Look So Real, They Fool Eye-Scanners | Threat Level | Wired.com
-
PHP 6.0 openssl_verify() Local Buffer Overflow PoC
Nytro replied to DarkyAngel's topic in Exploituri
EIP 00410041. It looks suspicious; it might be exploitable, but what are those zeros between each character? It looks like a stack overflow.
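For context, interleaved zeros are the classic signature of a Unicode overflow. A minimal C illustration (an editor's sketch, not part of the original reply) of where they come from:
[code]
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* An ANSI string widened to UTF-16 gains a 0x00 byte after every ASCII
 * character: "AA" becomes 41 00 41 00, which little-endian x86 reads
 * back as 0x00410041 when those bytes land in EIP. */
int main(void) {
    const char ansi[] = "AA";
    uint16_t utf16[2];
    for (int i = 0; i < 2; i++)
        utf16[i] = (uint16_t)ansi[i];   /* widen each character */

    const uint8_t *p = (const uint8_t *)utf16;
    for (int i = 0; i < 4; i++)
        printf("%02x ", p[i]);          /* prints: 41 00 41 00 */

    uint32_t eip;
    memcpy(&eip, utf16, sizeof(eip));
    printf("-> EIP = 0x%08x\n", eip);   /* prints: 0x00410041 */
    return 0;
}
[/code]
-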
[h=1]Japan’s Finance Ministry Spied On by Trojan for Two Years[/h]By: Liviu Arsene July 25, 2012 Japan’s Finance Ministry recently discovered a data-leaking Trojan on its computers that had been running for almost two years. During a security sweep of its network infrastructure, a third-party security firm found the Trojan and notified the institution. Of the 2,000 computers checked, 123 were infected with the Trojan, which appears to have been present since January 2010. The Finance Ministry said no taxpayer details were exposed and that it’s likely only documents regarding meetings were exposed. “It is not that the personal information that we have was widely leaked,” one official told reporters. The most recent infection detected took place in November 2011, but that doesn’t mean information was not accessed via previously installed Trojans. No other attacks were reported after November 2011, indicating that interest may have subsided after the two-year spree. The antivirus solution on the infected machines seems to have been ineffective in detecting the Trojan, indicating a higher level of sophistication in the attack. Japan’s government stated only that the infected computers belonged to junior staff, implying that access to vital information was severely restricted. Infected hard disk drives were removed, severing all Trojan communication with attacker-controlled servers. Although the official report depicts Anonymous as the primary suspect, the modus operandi of the organization has always been DDoS attacks, not hidden Trojans. Sursa: Japan’s Finance Ministry Spied On by Trojan for Two Years | HOTforSecurity
-
[h=1]Apple wants you "blind": it pulled the Clueful app from the App Store[/h]by Liviu Petrescu | July 25, 2012 Apple has removed the Clueful application, created by BitDefender, from the App Store. The company gave no reason for its decision, after having previously accepted the application without any problems. The situation is controversial because Clueful is the only application capable of examining all the other applications installed on an iPhone, iPad or iPod Touch and informing the user about the personal data each application accesses, according to Capital. Catalin Cosoi, Chief Security Researcher at BitDefender, said he was surprised by Apple's decision, which deprives many users of an important tool for defending their rights. The Clueful application for iOS still works on the devices on which it was installed and offers information about more than 65,000 analyzed applications and their access to the personal data on the phone. Apple admitted a few months ago that there are security problems in iOS 5 that allow App Store applications to access address book contacts or personal photos without the user's permission, but promised these would be fixed in the next version of its mobile operating system, Apple iOS 6. Sursa: Apple vrea sa fii "orb": a scos din App Store aplicatia Clueful | Hit.ro
-
Cuckoo Sandbox 0.4! July 25, 2012 By Mayuresh Our first post regarding the Cuckoo Sandbox can be found here. A few hours ago, an update, Cuckoo Sandbox version 0.4, was released! This release is a milestone in the project’s history and the best release produced so far. It is a complete rewrite of every single component from scratch with modularity, scalability and flexibility in mind. “Cuckoo Sandbox is a malware analysis system. Its goal is to provide you a way to automatically analyze files and collect comprehensive results describing and outlining what such files do while executed inside an isolated environment. It’s mostly used to analyze Windows executables, DLL files, PDF documents, Office documents, PHP scripts, Python scripts, Internet URLs and almost anything else you can imagine. But it can do much more!”
[h=2]Cuckoo Sandbox 0.4 official change log:[/h]
- Modules for performing custom post-analysis processing of the results and generating reports: being able to customize the interpretation of the results and the generation of reports in any format you want, you can easily integrate Cuckoo Sandbox into any existing framework or environment you already have in place.
- Default support for KVM and the ability to create new, or modify existing, Python modules that will instruct Cuckoo Sandbox on how to interact with your virtualization solution of choice.
- A signatures engine that you can use to identify and isolate any pattern or event of interest: contextualize the analysis results, quickly identify known malware or look for events particularly interesting to you or your company.
- Improved scripting capabilities, further customizing the sandbox to your analysis needs. You can now customize Cuckoo’s analysis process to the best extent by simply writing Python modules that define how Cuckoo Sandbox should interact with the malware and the analysis environment.
- Last but not least, the Cuckoo Sandbox analysis core was completely re-engineered. This will significantly improve the quality of the analysis, giving much more detailed and explanatory information about the malware you’re analyzing.
[h=3]Download Cuckoo Sandbox:[/h]Cuckoo Sandbox v0.4 - cuckoo_0.4.tar.gz Sursa: Cuckoo Sandbox version 0.4! — PenTestIT
-
XMLCoreServices Vulnerability Analysis Authored by Minsu Kim This document is an analysis of the XMLCoreServices vulnerability as noted in CVE-2012-1889. 1. Executive Summary Recently, malicious web pages exploiting the XMLCoreServices vulnerability have been frequently observed, and since Microsoft has released only a temporary fix for this vulnerability, many Internet Explorer users are exposed to this security threat. This document provides a detailed analysis of the XMLCoreServices (CVE-2012-1889) vulnerability. This vulnerability can be exploited by abusing an uninitialized memory section of Microsoft XML Core Services 3.0, 4.0, 5.0 and 6.0, and ultimately executes malicious code injected by the attacker. This vulnerability can be temporarily mitigated by Fix It (Microsoft Security Advisory: Vulnerability in Microsoft XML Core Services could allow remote code execution), which disables XML Core Services; however, Microsoft should release an official patch for this vulnerability as soon as possible. This vulnerability has been analyzed on a machine with Windows XP SP2, Internet Explorer 6, and Microsoft XML Core Services 3.0. The vulnerability exists in msxml3.dll, which provides Core Services. The structure of the memory where the exploitation of the vulnerability takes place is shown in Figure 1 below. Download: http://packetstormsecurity.org/files/download/114977/CSRC-12-03-006.pdf Sursa: XMLCoreServices Vulnerability Analysis - Packet Storm
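As a generic illustration of the bug class (uninitialized memory use, not the actual msxml3.dll code), the following C sketch shows why stale heap contents are dangerous: if attacker-controlled data was previously written where an object's function pointer is later left uninitialized, the vulnerable call jumps to an attacker-chosen address. Whether the allocation is actually reused depends on the allocator; the browser equivalent is grooming the heap with a spray.
[code]
#include <stdio.h>
#include <stdlib.h>

/* Bug-class illustration only: an object field used before it is
 * initialized picks up whatever the previous owner of that heap
 * chunk left behind. */
struct handler {
    void (*callback)(void);  /* left uninitialized on one code path */
    long state;
};

static void attacker_chosen(void) { puts("control-flow hijacked"); }

int main(void) {
    /* Step 1: "spray" a same-sized chunk with attacker data, then free it. */
    void **spray = malloc(sizeof(struct handler));
    spray[0] = (void *)attacker_chosen;
    free(spray);  /* the stale bytes stay behind in the freed chunk */

    /* Step 2: the vulnerable allocation may reuse that exact chunk. */
    struct handler *h = malloc(sizeof *h);
    h->state = 1;                 /* ...but callback is never set */
    if (h->callback)
        h->callback();            /* jumps through the stale pointer */
    free(h);
    return 0;
}
[/code]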
-
Reverse engineering. You attach the debugger to a process and execute each instruction step by step. Preferably, learn what a Portable Executable actually is and how such a file is structured before randomly pressing buttons in OllyDbg.
-
I think they started at least two years ago. The US has stated that it created Stuxnet.
-
Emails from Iran Over the weekend, I received a series of emails from Iran. They were sent by a scientist working at the Atomic Energy Organization of Iran (AEOI). The scientist reached out to publish information about Iranian nuclear systems getting struck by yet another cyber attack. He wrote: I am writing you to inform you that our nuclear program has once again been compromised and attacked by a new worm with exploits which have shut down our automation network at Natanz and another facility Fordo near Qom. According to the email our cyber experts sent to our teams, they believe a hacker tool Metasploit was used. The hackers had access to our VPN. The automation network and Siemens hardware were attacked and shut down. I only know very little about these cyber issues as I am scientist not a computer expert. There was also some music playing randomly on several of the workstations during the middle of the night with the volume maxed out. I believe it was playing 'Thunderstruck' by AC/DC. I'm not sure what to think about this. We can't confirm any of the details. However, we can confirm that the researcher was sending and receiving emails from within the AEOI. Mikko Sursa: Emails from Iran - F-Secure Weblog : News from the Lab
-
Flame Cyber War Against Iran Description: In this video, experts talk about the Flame malware and the many political issues surrounding it. The Flame malware is also known as Flamer, Skywiper, and sKyWIper. It was discovered in 2012 and targets the Microsoft Windows operating system. Security experts say this malware was created for cyber war and for covert information gathering. Now, Japan is blaming Israel for the same virus that hit their nuclear computers. Flame can record data files, remotely change settings on computers, turn on PC microphones to record conversations, take screenshots and log instant messaging chats. For more information about the Flame malware: Flame (malware) - Wikipedia, the free encyclopedia On the 28th of May 2012, Iran announced that the most crippling computer virus ever written had been detected on computers in Iranian government offices. Iran's Computer Emergency Response Team, MAHER, announced on its official website that the Flame virus had been intercepted and its infection components posted on their page. A few days later, on June 1st, the New York Times published an article by David Sanger that blew the lid off a lot of secret information. It revealed just how far along the US and Israel are in the global cyber war, especially when it comes to the cyber attacks against Iran. The article explained how US president Barack Obama ordered a covert cyber attack on Iran a few months after taking office. The article was the result of 18 months of research and information gathered from interviews with top current and former American, European and Israeli officials. The first wave of digital attacks against Iran happened during the presidency of George W. Bush. They were codenamed "The Olympic Games". According to participants in the program, it was the first sustained use of cyber weapons. Since then the Stuxnet virus has invaded systems at Iran's nuclear power plant, and now Flame. In this edition of the show we will be looking at the cyber war being waged as we speak. Disclaimer: We are an infosec video aggregator and this video is linked from an external website. The original author may be different from the user re-posting/linking it here. Please do not assume the authors to be the same without verifying. Original Source: Sursa: Flame Cyber War Against Iran
-
The most interesting content from here is posted almost every day on the Facebook page: https://www.facebook.com/rstforum Recommend the page to friends interested in hacking and IT security.
-
[h=1]Is Hacking in Self-Defense Legal? It Depends, Says Army Cyber Lawyer[/h]By Jordan Robertson | July 23, 2012 When Robert Clark meets with large corporations and government agencies that have been hacked, many express the same feeling. They want revenge. But the impulse to strike back is fraught with legal danger, said Clark, operational attorney for the U.S. Army Cyber Command, who plans to deliver that message on Thursday in a speech at the Black Hat security conference in Las Vegas. “I’ve been involved in this field in-depth for 10 years, and the first thing everybody asks is, ‘How do I hack back? I want to smack somebody,’” he said in an interview. “And my response is always the same: Why? Because you’re mad? What do you want to get out of it?” The allure of hacking back is growing as digital espionage and trade-secret theft have become rampant. Shawn Henry, formerly the FBI’s top cyber crime official, has said that organizations increasingly want to go on the offensive with hackers. Henry is now president of CrowdStrike, a startup that is focused on proactive anti-hacking measures. Companies are taking a cue from elected leaders. Two pieces of malicious software show that governments are taking a more active role in cyber attacks. The New York Times reported last month that the U.S. and Israel jointly developed Stuxnet, which damaged nearly 1,000 centrifuges in an Iranian nuclear plant. The Washington Post reported that the countries also built Flame, a piece of eavesdropping software, to slow Iran’s nuclear ambitions. Clark’s position is a conflicted one, as the military and civilian organizations play by different rules. He wouldn’t comment on Stuxnet or Flame, and emphasized that he was speaking in a personal capacity at Black Hat. But he did have advice for organizations considering hacking in self-defense. Some companies are discussing whether it’s legal to place a tracking bug inside computer files that are at risk of being stolen, Clark said. The law may be on their side in some instances. Clark pointed to a 1992 case where a driver working for the U.S. Postal Service was caught stealing envelopes stuffed with money on his route. The driver, Ervin Charles Jones, pleaded guilty but argued that investigators’ use of a small transmitter to track one of the envelopes — the key to making the arrest — led to an unlawful search of his van. The courts disagreed, and Jones was sentenced to 11 months in prison. It’s not that different from companies trying to chase stolen computer files. But in the digital realm, it’s easy to go too far, Clark said. Because of the powerful capabilities of spying software, organizations might be tempted to do more than simply track their purloined goods. Placing malicious software on attackers’ machines would violate anti-hacking laws, Clark said. A grayer area, though, is whether probing attackers’ networks violates the law. Breaking into computers to recover stolen intellectual property is illegal, but doing light reconnaissance to map attackers’ networks to learn about their systems might not be, Clark said. The law generally favors those that pursue prevention, such as the use of heavy encryption, over post-theft recovery, like a burglary victim who aggressively goes around looking for his stolen goods, Clark said. Planting disinformation is another strategy that’s gaining popularity, he said.
Placing fake blueprints or software code in a place where hackers could steal them could be a legal, effective diversion. But spreading flawed airplane designs or pharmaceutical formulas that make their way into products and hurt people might not be, he said. “If I’m talking about the new secret formula for a soda, and I’m just making it taste bad, that’s no big deal,” he said. “But what if my disinformation gets to the point that it harms somebody? That’s what could happen if disinformation is pushed to its ultimate end.” A bizarre case from 1967 shows some limitations on self-defense that could apply to the cyber realm, Clark said. The case involved Iowa landowners, Edward and Bertha Briney, who rigged a shotgun to fire on anyone who entered a bedroom in a vacant farm house that was being repeatedly burglarized. An intruder broke in to scavenge old bottles and fruit jars and had most of his leg blown off. A jury awarded the intruder $30,000 in damages, which would be more than $200,000 in today’s dollars. Hacking attacks can now cause damage in the physical world, as the Stuxnet worm showed. Hackers have an array of non-PC targets to attack now, from the computers that run water facilities to automobiles to insulin pumps, as shown in this Bloomberg.com slide show. Aggressive counterattacks could be justified in cases where personal safety is in danger, Clark said. But organizations that engage in a counterattack would have to prove that their response was proportional to the threat, he added. Of course, the odds of a victim of a counterattack coming forward are slim, Clark said. “Who’s going to complain?” he asked with a laugh. Sursa: Is Hacking in Self-Defense Legal? It Depends, Says Army Cyber Lawyer - Bloomberg
-
[h=1]Spooky: How NSA’s Surveillance Algorithms See Into Your Life[/h] 24 Jul, 2012 by Radu Tyrsina On the ViewPoint talk show with Eliot Spitzer, three whistleblowers from the National Security Agency (NSA) - Thomas Drake, Kirk Wiebe and William Binney - expressed their allegations about the NSA’s illegal domestic surveillance measures. The whistleblowers specifically refer to 9/11 as the date after which electronic surveillance reached new heights. This means that enormous amounts of email and cell phone conversations have been stored and surveilled, as Eliot Spitzer puts it. When asked whether they knew about the electronic surveillance used by the NSA, Kirk Wiebe said that they didn’t even believe the U.S. government could go that far. http://www.youtube.com/watch?feature=player_embedded&v=AQalspt90AU [h=3]Google seems a joke when compared to NSA’s data[/h] William Binney confirms Spitzer’s assumptions by agreeing that there is, indeed, a dossier on almost every American, filled with data aggregated by the National Security Agency. Looking at how much data the NSA could possibly have piled up, Eliot says that, compared to that, Google “seems like a joke”. William goes on to say something really spooky: The data is resident in programs that can pull it together in timelines and things like that and let them (the Government) see into your life, to see what you’re doing in your life. By using satellites and the huge amount of data the NSA currently holds, they can even create algorithms to work out who’s talking to whom, thus being able to dissect our private lives. Eliot also says that it is being done without any regard for the Fourth Amendment to the United States Constitution. To get a better understanding, here’s what the Fourth Amendment provides: When police conduct a search, the amendment requires that the warrant establishes probable cause to believe that the search will uncover criminal activity or contraband. They must have legally sufficient reasons to believe a search is necessary. It is also important to know this aspect of the U.S. Constitution: The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized. [h=3]The government has an electronic spying program[/h]The algorithms that the NSA has created, according to whistleblower William Binney, are actually part of the Big Data Research and Development Initiative, which has the official purpose of improving the tools and techniques needed to access, organize, and glean discoveries from huge volumes of digital data. William goes on to say that the algorithms will go through the database looking at everybody. The basic and most obvious question that comes to our mind, as well as the mind of the talk show host: hasn’t anybody thought about the direct violation of the Constitution? Normally, for things so critically important to a nation, there needs to be a court approval. According to Kirk Wiebe, it seems that nobody cares about that. [h=4]Secret spying room inside AT&T facility[/h] As always, this is nothing new; it’s just that thanks to this talk show, some of us have gotten the chance to find out about these things and do our duty of informing others.
It is only now that I’ve discovered that the EFF (Electronic Frontier Foundation) has a lawsuit against the U.S. government’s massive spying program. Here’s what the EFF had to say about this: In a motion filed today (July 2), the three former intelligence analysts confirm that the NSA has, or is in the process of obtaining, the capability to seize and store most electronic communications passing through its U.S. intercept centers, such as the “secret room” at the AT&T facility in San Francisco first disclosed by retired AT&T technician Mark Klein in early 2006. There have been numerous reports about the secret room at AT&T; Wired and Ars Technica wrote about this 6 years ago. Another interesting article also shows that the same EFF has filed a suit against AT&T over NSA spying, accusing them of diverting customer traffic to the NSA for years as a means of aiding the NSA’s covert surveillance program. If you still think we are talking fiction here, you might want to read this Wikipedia article, which refers to this room as Room 641A. The same William Binney, guest of the ViewPoint show, said that there could be up to 20 such “secret rooms” across the entire country. The octopus of secret surveillance is getting even bigger as we find out about the President’s Surveillance Program, which constitutes a series of secret intelligence activities authorized by then President of the United States George W. Bush after the September 11 attacks in 2001 as part of the War on Terrorism. [h=3]Is there still hope?[/h]The surveillance program seems to have appeared as a direct effect of the Patriot Act and its surveillance procedures. And here’s where we make the link with the Fourth Amendment. I’m no legal expert, but even to me it makes sense, and this explains it all: Removed was the statutory requirement that the government prove a surveillance target under FISA is a non-U.S. citizen and agent of a foreign power, though it did require that any investigations must not be undertaken on citizens who are carrying out activities protected by the First Amendment. The title also expanded the duration of FISA physical search and surveillance orders and gave authorities the ability to share information gathered before a federal grand jury with other agencies. To put it bluntly, by using the Patriot Act’s official purpose of fighting terrorism, the freedom of U.S. citizens appears to be greatly impaired; by using the Big Data initiative’s official purpose of further enhancing the role of technology in our lives, they’re actually creating intrusive algorithms that are spying on our private lives. But there’s still hope, as it seems that, recently, the Office of the Director of National Intelligence has admitted that the government’s spying efforts have exceeded the legal limits, at least once. Let’s hope that all this will come to an end and our freedoms will be preserved. As Benjamin Franklin said: “They who can give up essential liberty to obtain a little temporary safety, deserve neither liberty nor safety.” Sursa: Spooky: How NSA's Surveillance Algorithms See Into Your Life - Technically Personal!
-
Bypass Antivirus software using backdoor encoding http://www.youtube.com/watch?feature=player_embedded&v=ilZ1CjCB1jc Description: In this video you will learn how to bypass antivirus software using backdoor encoding. Usually a backdoor is encoded with only a single encoder module, but here the author shows how to encode a backdoor with three different encoders chained together in order to bypass antivirus software: 1st encoding with shikata_ga_nai, 2nd call4_dword_xor, 3rd countdown. Disclaimer: We are an infosec video aggregator and this video is linked from an external website. The original author may be different from the user re-posting/linking it here. Please do not assume the authors to be the same without verifying. Original Source: https://www.youtube.com/watch?v=ilZ1CjCB1jc Sursa: Bypass Anti-Virus
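Conceptually, each encoder pass rewrites the payload bytes so that a static signature written against the previous layer no longer matches, and only the outermost decoder stub is visible in the final binary. A minimal C sketch of one such layer (a plain XOR-feedback transform, far simpler than Metasploit's actual shikata_ga_nai, call4_dword_xor or countdown encoders, shown only to illustrate the layering idea):
[code]
#include <stddef.h>
#include <stdio.h>

/* One encoding layer: XOR each byte with a rolling key seeded per layer.
 * Applying several layers with different seeds mimics chaining encoders. */
static void xor_feedback(unsigned char *buf, size_t len, unsigned char key) {
    for (size_t i = 0; i < len; i++) {
        unsigned char plain = buf[i];
        buf[i] ^= key;
        key = plain;  /* feedback: next key is the previous plaintext byte */
    }
}

int main(void) {
    /* Example payload bytes (a stand-in for a shellcode stub). */
    unsigned char payload[] = { 0xfc, 0xe8, 0x82, 0x00, 0x00, 0x00 };

    xor_feedback(payload, sizeof payload, 0x41);  /* layer 1 */
    xor_feedback(payload, sizeof payload, 0x99);  /* layer 2 */
    xor_feedback(payload, sizeof payload, 0x7f);  /* layer 3 */

    for (size_t i = 0; i < sizeof payload; i++)
        printf("%02x ", payload[i]);  /* bytes no longer match the original */
    printf("\n");
    return 0;
}
[/code]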
-
[h=3]Android DNS Poisoning: Randomness gone bad (CVE-2012-2808)[/h] Recently we discovered a very interesting vulnerability in Android’s DNS resolver: a weakness in its pseudo-random number generator (PRNG), which makes DNS poisoning attacks feasible. DNS poisoning attacks endanger the confidentiality and integrity of the target victim’s machine. For instance, DNS poisoning can be used to steal the victim’s cookies, or tamper with weak applications’ update mechanisms in order to execute malicious code. The official advisory with full details can be found here. This blog post summarizes the advisory. A very short background on DNS poisoning Each DNS request holds a unique identifier (‘nonce’) which consists of an attribute called ‘TXID’ and the UDP source port. Together they combine into a 32-bit value. Ideally these 32 bits are random. In order to conduct a DNS poisoning attack, an attacker must be able to inject a forged response before the legitimate one arrives from the server. The forged response must include the correct nonce (which the attacker must correctly guess), otherwise it is dropped by the resolver. If the nonce is a 32-bit random value, an attack takes years to succeed (on average). Not that feasible! But with every bit of randomness we are able to shave off the nonce, the expected time drops by a factor of two. How does Android resolve DNS? The code that is in charge of DNS resolution can be found in Android’s libc implementation (aka ‘bionic’). Android provides source port and TXID randomization by calling the function res_randomid, which returns a 16-bit integer:
[code]
u_int
res_randomid(void) {
        struct timeval now;

        gettimeofday(&now, NULL);
        return (0xffff & (now.tv_sec ^ now.tv_usec ^ getpid()));
}
[/code]
It can be seen that the returned value is a XOR operation of the fraction of the current time in microseconds, the current time in seconds and the process ID. This method is used twice in close succession in order to produce the TXID and source port values. The dominant factor which makes this value hard to predict is the microseconds fraction. Why is it vulnerable? Remember that the res_randomid function is used twice, once for the TXID and once for the source port. The Achilles' heel and the crux of the attack is the fact that there are two subsequent calls to the res_randomid function within a very short time. Since res_randomid is a function of the current time, the TXID and source port values become very much correlated with each other: given that the attacker guessed one value correctly, the probability is high that the other value will also be guessed correctly. This means that instead of 32 random bits, you get much less. In fact, our research shows that in some environments, the 32 bits contain fewer than 21 random ones. The expected time for a successful attack is brought down from years to minutes! The attack is feasible regardless of whether or not the attacker knows the process ID. See our whitepaper for the complete analysis. Looking at a packet capture of some (source port, TXID) couples, it can be seen even with the naked eye that the TXID and source port values are not that different. Remember that ideally each of them is chosen out of 65536 values.
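The correlation is easy to reproduce on any POSIX system. Here is a small C test harness (our own, modeled on the code above and not part of the advisory) that calls the same construction twice back to back, the way the resolver derives the TXID and then the source port:
[code]
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

/* Same construction as Android's res_randomid(). */
static unsigned int weak_id(void) {
    struct timeval now;
    gettimeofday(&now, NULL);
    return 0xffff & (now.tv_sec ^ now.tv_usec ^ getpid());
}

int main(void) {
    /* Between two back-to-back calls, tv_sec and the PID are identical
     * and tv_usec advances by only a few microseconds, so the two
     * "random" 16-bit values differ in just a few low-order bits. */
    for (int i = 0; i < 8; i++) {
        unsigned int txid = weak_id();
        unsigned int port = weak_id();
        printf("txid=0x%04x port=0x%04x xor=0x%04x\n", txid, port, txid ^ port);
    }
    return 0;
}
[/code]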
Why should you care? What is the impact? As usual, DNS poisoning attacks may endanger the integrity and confidentiality of the attacked system. For example, in Android, the Browser app can be attacked in order to steal the victim's cookies for a domain of the attacker's choice. If the attacker manages to lure the victim into browsing a web page under his/her control, the attacker can use JavaScript to start resolving non-existent sub-domains. Upon success, a sub-domain points to the attacker's IP, which enables the latter to steal wildcard cookies of the attacked domain, and even insert ones (see this for more details on the impact of subdomain poisoning). In addition, a malicious app may instantiate the Browser app on the attacker's malicious web page. If the attacker knows the process ID (for example, a malicious app can access that information), the expected time for a successful attack can be reduced, as explained in the whitepaper. A video demo of the attack How was the issue fixed? The random sample is now taken from /dev/urandom, which should have enough entropy when the call is made. Which versions are vulnerable? Android 4.0.4 and below. Which versions are non-vulnerable? Android 4.1.1. Disclosed by: Roee Hay and Roi Saltzman
Disclosure timeline:
07/24/2012 Public disclosure
06/05/2012 Issue confirmed by Android Security Team and patch provided to partners
05/21/2012 Disclosed to Android Security Team by Roee Hay and Roi Saltzman
Posted by Roee Hay on July 24, 2012 Sursa: IBM Application Security Insider: Android DNS Poisoning: Randomness gone bad (CVE-2012-2808)
-
Look: [TABLE="class: table table-bordered table-striped"] [TR] [TD]AntiVir[/TD] [TD="class: text-red"]KIT/Bandook[/TD] [TD]20111127[/TD] [/TR] [/TABLE] It really is detected as exactly what it is. It's a RAT; what do you want the antivirus to tell you, that it's a butterfly?