Everything posted by Nytro
-
[h=3]Patching an Android Application to Bypass Custom Certificate Validation[/h]
By Gursev Kalra.

One of the important tasks while performing mobile application security assessments is being able to intercept the traffic (man-in-the-middle, MiTM) between the mobile application and the server with a web proxy like Fiddler, Burp, etc. This allows the penetration tester to observe application behavior, modify the traffic and overcome the input restrictions enforced by the application's user interface in order to perform a holistic penetration test.

Mobile applications exchanging sensitive data typically use the HTTPS protocol for data exchange, as it allows them to perform server authentication and ensure a secure communication channel. The client authenticates the server by verifying the server's certificate against its trusted root certificate authority (CA) store and also checks the certificate's common name against the domain name of the server presenting the certificate. To proxy (MiTM) the HTTPS traffic of a mobile application, the web proxy's certificate is imported into the trusted root CA store; otherwise the application may not function due to certificate errors.

For most mobile application assessments you can just set up a web proxy to intercept the mobile application's SSL traffic by importing its certificate into the device's trusted root CA store. To ensure that the imported CA certificate works fine, it's common to use the Android browser to visit a couple of SSL-based websites; the browser should accept the MiTM'ed traffic without complaint. Typically, native Android applications also use the common trusted root CA store to validate server certificates, so no extra work is required to intercept their traffic. However, some applications behave differently - let's take a look at how to handle those apps.

[h=1]Analyzing the Unsuccessful MiTM[/h]
When you launch the application under test and attempt to pass its traffic through the web proxy, the application will likely display an error screen indicating that it could not connect to the remote server because there is no internet connection, or that it could not establish a connection for unknown reasons. If you're confident in your setup, the next step is to analyze the system logs and SSL cipher suite support.

[h=2]logcat[/h]
logcat is Android's logging mechanism and is used to view application debug messages and logs. First run "adb logcat" to check whether the application under test produces any stack trace indicating the cause of the error. The application may also leave debug logs indicating that the developers did a good job with the error handling, or write debug messages that could potentially expose the application's internal workings to prying eyes.

[h=2]Common SSL Cipher Suites[/h]
When a web proxy acts as a MiTM between the client and the server, it establishes two SSL communication channels. One channel is with the client, to receive requests and return responses; the second channel is with the server, to forward application requests and receive server responses. To establish these channels, the web proxy has to agree on common SSL cipher suites with both the client and the server, and these cipher suites may not be the same, as shown in the image below. SSL proxying errors occur in one or both of the following scenarios, which lead to failures while establishing a communication channel:
1. The Android application and the web proxy do not share any common SSL cipher suite.
2. The web proxy and the server do not share any common SSL cipher suite.
In both scenarios, the communication channel cannot be established and the application does not work. To analyze the above-mentioned scenarios, fire up Wireshark and analyze the SSL handshake between the application and the web proxy. If you don't see any issues in Wireshark between the application and the proxy, issue an HTTPS request to the server from within the web proxy to see if there are any issues on that side. If not, then you know the web proxy is capable of performing MiTM for the test application and there is something else going on under the hood.

[h=1]Custom Certificate Validation[/h]
At this point you should start to look into the possibility of the application performing custom certificate validation to prevent a MiTM from monitoring or modifying its traffic. HTTPS clients can perform custom certificate validation by implementing the X509TrustManager interface and then using it for their HTTPS connections. The process of creating HTTPS connections with custom certificate validation is summarized below:
1. Implement the methods of the X509TrustManager interface as required. The server certificate validation code will live inside the checkServerTrusted method. This method throws an exception if the certificate validation fails and returns void otherwise.
2. Obtain an SSLContext instance.
3. Create an instance of the X509TrustManager implementation and use it to initialize the SSLContext.
4. Obtain an SSLSocketFactory from the SSLContext instance.
5. Provide the SSLSocketFactory instance to the setSSLSocketFactory method of HttpsURLConnection.
The HttpsURLConnection instance will then communicate with the server and invoke the checkServerTrusted method to perform custom server certificate validation. So, if you can decompile the code and search through it, you'll likely find the X509TrustManager implementation in one of the core security classes of the application. The next step is to patch the code preventing the MiTM and deploy it for testing. The image below shows two methods implemented for X509TrustManager from an example application.

[h=1]Modifying checkServerTrusted Implementation[/h]
The example image above shows the implementation of two X509TrustManager methods, checkServerTrusted and checkClientTrusted. At this point it is important to point out that in the example above both methods behave in a similar way, except that the former is used by client-side code and the latter is used by server-side code. If the certificate validation fails, the methods throw an exception; otherwise they return void. The checkClientTrusted implementation above allows the server-side code to validate the client certificate. Since this functionality is not required inside the mobile application, this method is empty and returns void for the test application, which is equivalent to successful validation. However, checkServerTrusted contains a significant chunk of code performing the custom certificate validation, which needs to be bypassed. To bypass the certificate validation code inside the checkServerTrusted method for this example, I replaced its Dalvik code with the code from the checkClientTrusted method to return void, effectively bypassing the custom certificate check as shown in the image below (a Java sketch of what the patched trust manager effectively becomes is included below).

[h=1]Recompiling and Deploying the Modified Application[/h]
Once every checkServerTrusted invocation is set up to succeed, recompile the application with apktool, sign it with SignApk and deploy it on the device.
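For reference, this is a minimal Java sketch of the permissive trust manager that the patched application effectively ends up with (the class name and the install helper are hypothetical, not the app's actual code; this disables certificate validation and is only appropriate for a test build):

import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

// Hypothetical example: a trust manager that accepts any server certificate,
// equivalent to a checkServerTrusted method that simply returns.
public class PermissiveTrustManager implements X509TrustManager {
    @Override
    public void checkClientTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        // No client certificate validation is required in the mobile app.
    }

    @Override
    public void checkServerTrusted(X509Certificate[] chain, String authType)
            throws CertificateException {
        // Returning without throwing is treated as successful validation.
    }

    @Override
    public X509Certificate[] getAcceptedIssuers() {
        return new X509Certificate[0];
    }

    // Hypothetical helper: make every HttpsURLConnection use this trust manager.
    public static void install() throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, new TrustManager[] { new PermissiveTrustManager() }, null);
        HttpsURLConnection.setDefaultSSLSocketFactory(ctx.getSocketFactory());
    }
}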
If you did it all right, the web proxy MiTM will work like a charm and you will be able to view, modify and fuzz application traffic. Posted by OpenSecurity Research at 5:00 AM Sursa: Open Security Research: Patching an Android Application to Bypass Custom Certificate Validation
-
Cloud-Based Sandboxing: An Elevated Approach to Network Security
By Aviv Raff on November 04, 2013

While the concept of sandboxing isn't new, it wasn't until a few years ago that it entered the mainstream network security vocabulary and enterprises began using on-premises sandboxing appliances to test suspicious executables for malware. And while the results have been impressive -- especially compared to systems that rely exclusively on signature-based antivirus software -- there has been some criticism of late with respect to sandboxing's potential and, especially, its limitations.

This criticism was nicely articulated in a recent article by IOActive Inc.'s CTO Gunter Ollmann, in which he argued that vendors who claim that this approach will stop all APTs and targeted attacks are flat out wrong because sandboxing appliances:
· Stumble when it comes to stopping targeted threats that require users to perform certain actions;
· Aren't effective against threats that target less common desktop environments, such as those running 64-bit systems or are comprised of Apple, Android, Linux or non-Windows legacy devices;

In response to this, the first thing to note is that Ollmann is correct and his warning is significant: sandboxing appliances cannot replace an existing security solution, and vendors who suggest otherwise are guilty of overly-aggressive marketing at best, and outright deception at worst. In truth, sandboxing can only augment an existing system – it cannot replace one. Yet with that being said, rumors of sandboxing's demise are indeed greatly exaggerated. It certainly can and should be a big piece of the network security puzzle -- provided that enterprises elevate their approach. How high should they aim? To the cloud, of course!

There are four key reasons why cloud-based sandboxes are qualitatively more effective than on-premise appliances:
1. Cloud-based sandboxes are free of hardware limitations, and therefore scalable and elastic. As a result, they can track malware over a period of hours or days -- instead of seconds or minutes -- to build robust malware profiles of targeted threats (such as the one that used a fake Mandiant APT1 report), or to uncover "Time Bomb" attacks that need to be simulated with custom times and dates (such as Shamoon).
2. Cloud-based sandboxes can be easily updated with any OS type and version, including those that aren't part of a sandboxing appliance's default set of images. Enterprises can also upload images and create their own custom environment.
3. Cloud-based sandboxes aren't limited by geography. For example, attackers often target offices that are located in a different region than where the on-premise sandbox is running (typically the enterprise's headquarters). As such, the attacker will not respond to the malware since it communicates from a different region. However, cloud-based sandboxes avoid this by allowing the malware to run from different locations worldwide.

Based on these reasons, it's easy to see why cloud-based sandboxing makes sense for CISOs and everyone else who is tasked with defending the network (and since there's no new hardware or software to purchase, it also makes CFOs and those holding the enterprise's purse strings quite happy). Yet, the beneficial picture I've painted here should not give rise to the error that some vendors have committed -- and that Ollmann has rightly criticized -- which is to suggest that cloud-based sandboxing will magically prevent 100% of APTs and targeted threats.
That’s simply not possible, and likely will never be. Ultimately, the best approach is to rely on cloud-based sandboxing, alongside botnet interception, cloud-based traffic log analysis, security appliances, tools and technologies, to create a comprehensive network security system; one that levels the playing field, and gives enterprises a fighting chance against today’s surprisingly sophisticated, well-funded and insidious cyber criminal. Sursa: Cloud-Based Sandboxing: An Elevated Approach to Network Security | SecurityWeek.Com
-
Anatomy of a password disaster - Adobe's giant-sized cryptographic blunder
by Paul Ducklin on November 4, 2013

One month ago today, we wrote about Adobe's giant data breach. As far as anyone knew, including Adobe, it affected about 3,000,000 customer records, which made it sound pretty bad right from the start. But worse was to come, as recent updates to the story bumped the number of affected customers to a whopping 38,000,000. We took Adobe to task for a lack of clarity in its breach notification.

Our complaint

One of our complaints was that Adobe said that it had lost encrypted passwords, when we thought the company ought to have said that it had lost hashed and salted passwords. As we explained at the time: [T]he passwords probably weren't encrypted, which would imply that Adobe could decrypt them and thus learn what password you had chosen. Today's norms for password storage use a one-way mathematical function called a hash that [...] uniquely depends on the password. [...] This means that you never actually store the password at all, encrypted or not. [...And] you also usually add some salt: a random string that you store with the user's ID and mix into the password when you compute the hash. Even if two users choose the same password, their salts will be different, so they'll end up with different hashes, which makes things much harder for an attacker. It seems we got it all wrong, in more than one way. Here's how, and why.

The breach data

A huge dump of the offending customer database was recently published online, weighing in at 4GB compressed, or just a shade under 10GB uncompressed, listing not just 38,000,000 breached records, but 150,000,000 of them. As breaches go, you may very well see this one in the book of Guinness World Records next year, which would make it astonishing enough on its own. But there's more. We used a sample of 1,000,000 items from the published dump to help you understand just how much more.

Note: our sample wasn't selected strictly randomly. We took every tenth record from the first 300MB of the compressed dump until we reached 1,000,000 records. We think this provided a representative sample without requiring us to fetch all 150 million records.

The dump looks like this: By inspection, the fields are as follows: Fewer than one in 10,000 of the entries have a username - those that do are almost exclusively limited to accounts at adobe.com and stream.com (a web analytics company). The user IDs, the email addresses and the usernames were unnecessary for our purpose, so we ignored them, simplifying the data as shown below. We kept the password hints, because they were very handy indeed, and converted the password data from base64 encoding to straight hexadecimal, making the length of each entry more obvious, like this:

Encryption versus hashing

The first question is, "Was Adobe telling the truth, after all, calling the passwords encrypted and not hashed?" Remember that hashes produce a fixed amount of output, regardless of how long the input is, so a table of the password data lengths strongly suggests that they aren't hashed: The password data certainly looks pseudorandom, as though it has been scrambled in some way, and since Adobe officially said it was encrypted, not hashed, we shall now take that claim at face value.

The encryption algorithm

The next question is, "What encryption algorithm?" We can rule out a stream cipher such as RC4 or Salsa20, where encrypted strings are the same length as the plaintext.
Stream ciphers are commonly used in network protocols so you can encrypt one byte at a time, without having to keep padding your input length to a multiple of a fixed number of bytes. With all data lengths a multiple of eight, we're almost certainly looking at a block cipher that works eight bytes (64 bits) at a time. That, in turn, suggests that we're looking at DES, or its more resilient modern derivative, Triple DES, usually abbreviated to 3DES.

Note: other 64-bit block ciphers, such as IDEA, were once common, and the ineptitude we are about to reveal certainly doesn't rule out a home-made cipher of Adobe's own devising. But DES or 3DES are the most likely suspects.

The use of a symmetric cipher here, assuming we're right, is an astonishing blunder, not least because it is both unnecessary and dangerous. Anyone who computes, guesses or acquires the decryption key immediately gets access to all the passwords in the database. On the other hand, a cryptographic hash would protect each password individually, with no "one size fits all" master key that could unscramble every password in one go - which is why UNIX systems have been storing passwords that way for about 40 years already.

The encryption mode

Now we need to ask ourselves, "What cipher mode was used?" There are two modes we're interested in: the fundamental 'raw block cipher mode' known as Electronic Code Book (ECB), where patterns in the plaintext are revealed in the ciphertext; and all the others, which mask input patterns even when the same input data is encrypted by the same key. The reason that ECB is never used other than as the basis for the more complex encryption modes is that the same input block encrypted with the same key always gives the same output. Even repetitions that aren't aligned with the blocksize retain astonishingly recognisable patterns, as the following images show. We took an RGB image of the Sophos logo, where each pixel (most of which are some sort of white or some sort of blue) takes three bytes, divided it into 8-byte blocks, and encrypted each one using DES in ECB mode. Treating the resulting output file as another RGB image delivers almost no disguise:

Cipher modes that disguise plaintext patterns require more than just a key to get them started - they need a unique initialisation vector, or nonce (number used once), for each encrypted item. The nonce is combined with the key and the plaintext in some way, so that the same input leads to a different output every time. If the shortest password data length above had been, say, 16 bytes, a good guess would have been that each password data item contained an 8-byte nonce and then at least one block's worth - another eight bytes - of encrypted data. Since the shortest password data blob is exactly one block length, leaving no room for a nonce, that clearly isn't how it works. Perhaps the encryption used the User ID of each entry, which we can assume is unique, as a counter-type nonce? But we can quickly tell that Adobe didn't do that by looking for plaintext patterns that are repeated in the encrypted blobs. Because there are 2^64 - close to 20 million million million - possible 64-bit values for each ciphertext block, we should expect no repeated blocks anywhere in the 1,000,000 records of our sample set. That's not what we find, as the following repetition counts reveal: Remember that if ECB mode were not used, each block would have been expected to appear just once every 2^64 times, for a minuscule prevalence of about 5 x 10^-18%.
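The repeated-block check described above is easy to reproduce yourself. Here is a minimal Java sketch that counts how often each 8-byte ciphertext block appears in a dump; the input file name and the "one hex-encoded password blob per line" format are assumptions made for illustration, not the actual layout of the leaked database:

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;

public class EcbBlockCounter {
    public static void main(String[] args) throws Exception {
        Map<String, Integer> counts = new HashMap<>();
        // Assumed input: one hex-encoded password blob per line.
        for (String line : Files.readAllLines(Paths.get("password_blobs.hex"))) {
            String hex = line.trim();
            // Walk the blob in 8-byte (16 hex character) cipher blocks.
            for (int i = 0; i + 16 <= hex.length(); i += 16) {
                counts.merge(hex.substring(i, i + 16), 1, Integer::sum);
            }
        }
        // Under ECB, identical plaintext blocks give identical ciphertext blocks,
        // so popular passwords stand out as heavily repeated 8-byte blocks.
        counts.entrySet().stream()
              .filter(e -> e.getValue() > 1000)
              .sorted((a, b) -> b.getValue() - a.getValue())
              .forEach(e -> System.out.println(e.getValue() + "  " + e.getKey()));
    }
}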
Password recovery

Now let's work out, "What is the password that encrypts as 110edf2294fb8bf4 and the other common repeats?" If the past, all other things being equal, is the best indicator of the present, we might as well start with some statistics from a previous breach. When Gawker Media got hacked three years ago, for example, the top passwords that were extracted from the stolen hashes came out like this: (The word lifehack is a special case here - Lifehacker being one of Gawker's brands - but the others are easily-typed and commonly chosen, if very poor, passwords.) This previous data, combined with the password hints leaked by Adobe, makes building a crib sheet pretty easy: Note that the 8-character passwords 12345678 and password are actually encrypted into 16 bytes, denoting that the plaintext was at least 9 bytes long. It should come as no surprise to discover that this is because the input text consisted of: the password, followed by a zero byte (ASCII NUL), used to denote the end of a string in C; followed by seven NUL bytes to pad the input out to a multiple of 8 bytes to match the encryption's block size. In other words, we now know that e2a311ba09ab4707 is the ciphertext that signals an input block of eight zero bytes. That data shows up in the second ciphertext block of a whopping 27% of all passwords, which immediately leaks to us that all those 27% are exactly eight characters long.

The scale of the blunder

With very little effort, we have already recovered an awful lot of information about the breached passwords, including: identifying the top five passwords precisely, plus the 2.75% of users who chose them; and determining the exact password length of nearly one third of the database. So, now that we've shown you how to get started in a case like this, you can probably imagine how much more is waiting to be squeezed out of "the greatest crossword puzzle in the history of the world," as satirical IT cartoon site XKCD dubbed it. Bear in mind that salted hashes - the recommended programmatic approach here - wouldn't have yielded up any such information - and you appreciate the magnitude of Adobe's blunder. There's more to concern yourself with. Adobe also described the customer credit card data and other PII (Personally Identifiable Information) that was stolen in the same attack as "encrypted." And, as fellow Naked Security writer Mark Stockley asked, "Was that data encrypted with similar care and expertise, do you think?" If you were on Adobe's breach list (and the silver lining is that all passwords have now been reset, forcing you to pick a new one), why not get in touch and ask for clarification? Sursa: Anatomy of a password disaster – Adobe's giant-sized cryptographic blunder | Naked Security
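As a footnote to the article above, which points to salted hashes as the recommended approach, here is a minimal Java sketch of salted, iterated password hashing using PBKDF2 from the standard library (the iteration count and key length are illustrative assumptions, not tuned recommendations):

import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public class SaltedHashDemo {
    public static void main(String[] args) throws Exception {
        char[] password = "correct horse battery staple".toCharArray();

        // A fresh random salt per user: identical passwords still hash differently.
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);

        // Illustrative parameters: 100,000 iterations, 256-bit output.
        PBEKeySpec spec = new PBEKeySpec(password, salt, 100000, 256);
        SecretKeyFactory skf = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        byte[] hash = skf.generateSecret(spec).getEncoded();

        // Only the salt and the hash are stored; there is no master key that
        // could unscramble every password in one go.
        System.out.println("salt: " + Base64.getEncoder().encodeToString(salt));
        System.out.println("hash: " + Base64.getEncoder().encodeToString(hash));
    }
}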
-
Utilizing DNS to Identify Malware - Nathan Magniez
Hack3rcon 4

DNS logs are an often overlooked asset in identifying malware in your network. The purpose of this talk is to identify malware in the network through establishing DNS query and response baselines, analysis of NXDOMAIN responses, analysis of successful DNS lookups, and identifying domain name anomalies. This talk will give you the basics of what to look for in your own unique environment. Speakers: Nathan Magniez Hack3rcon 4 Videos (Hacking Illustrated Series InfoSec Tutorial Videos) Sursa: ANOTHER Log to Analyze - Utilizing DNS to Identify Malware - Nathan Magniez Hack3rcon 4 (Hacking Illustrated Series InfoSec Tutorial Videos)
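As one concrete example of the "domain name anomalies" the talk mentions, algorithmically generated domains often show unusually high character entropy compared to human-chosen names. A hedged Java sketch of such a check (the 3.5 bits-per-character threshold and the sample domains are arbitrary and purely illustrative):

import java.util.HashMap;
import java.util.Map;

public class DomainEntropy {
    // Shannon entropy in bits per character of a domain label.
    static double entropy(String label) {
        Map<Character, Integer> freq = new HashMap<>();
        for (char c : label.toCharArray()) {
            freq.merge(c, 1, Integer::sum);
        }
        double h = 0.0;
        for (int count : freq.values()) {
            double p = (double) count / label.length();
            h -= p * Math.log(p) / Math.log(2);
        }
        return h;
    }

    public static void main(String[] args) {
        String[] queried = { "google.com", "x3k9qz7vb2mplf.info" };
        for (String domain : queried) {
            String label = domain.split("\\.")[0];
            double h = entropy(label);
            // 3.5 bits/char is an arbitrary threshold chosen for this example.
            System.out.printf("%-25s entropy=%.2f %s%n", domain, h,
                    h > 3.5 ? "suspicious" : "ok");
        }
    }
}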
-
[h=3]Exploiting CVE-2013-3881: A Win32k NULL Page Vulnerability[/h]
Microsoft Security Bulletin MS13-081 announced an elevation of privilege vulnerability [Microsoft Security Bulletin MS13-081 - Critical : Vulnerabilities in Windows Kernel-Mode Drivers Could Allow Remote Code Execution (2870008)]. Several days later Endgame published [Microsoft Win32k NULL Page Vulnerability Technical Analysis | Endgame] some further details on the vulnerability in question, but did not provide full exploitation details. In this post we will discuss how to successfully exploit CVE-2013-3881.

[h=3]The Vulnerability[/h]
The vulnerability resides in xxxTrackPopupMenuEx. This function is responsible for displaying shortcut menus and tracking user selections. During this process it will try to get a reference to the GlobalMenuState object via a call to xxxMNAllocMenuState; if the object is in use, for example when another pop-up menu is already active, this function will try to create a new instance. If xxxMNAllocMenuState fails it will return False, but it will also set the pGlobalMenuState thread global variable to NULL. The caller verifies the return value, and in case of failure it will try to do some cleanup in order to fail gracefully. During this cleanup the xxxEndMenuState procedure is called. This function's main responsibility is to free and unlock all the resources acquired and saved for the MenuState object, but it does not check that the pGlobalMenuState variable is not NULL before using it. As a result, a bunch of kernel operations are performed on a kernel object whose address is zero and thus potentially controlled from userland memory on platforms that allow it. Triggering the vulnerability is relatively easy by just creating and displaying two popup instances and exhausting GDI objects for the current session, as explained by Endgame. However, actually getting code execution is not trivial.

[h=3]Exploitation[/h]
Usually a NULL dereference vulnerability in the kernel can be exploited by mapping memory at address zero in userland (when allowed by the OS), creating a fake object inside this null page, and then triggering the vulnerability in the kernel from the context of your exploit process, which has the null page mapped with attacker-controlled data. With some luck we get a function pointer of some sort called from our controlled object data and we achieve code execution with kernel privileges (e.g. this was the case with MS11-054). As such, NULL dereference vulnerabilities have for many years provided a simple and straightforward route to kernel exploitation and privilege escalation in scenarios where you are allowed to map at zero. Unfortunately, in the case of CVE-2013-3881 life is not that simple, even on platforms that allow the null page to be allocated. When xxxTrackPopupMenuEx calls xxxMNAllocMenuState and fails, it will directly jump to destroy the (non-existent) MenuState object, and after some function calls, it will inevitably try to free the memory. This means that it does not matter if we create a perfectly valid object at region zero. At some point before xxxEndMenuState returns, a call to ExFreePoolWithTag(0x0, tag) will be made. This call will produce a system crash as it tries to access the pool headers, which are normally located just before the pool address - in this case address 0. Thus the kernel tries to fetch from zero minus something, which is unallocated and/or uncontrolled memory, and we trigger a BSOD.
This means the only viable exploitation option is to try to get code execution before this free occurs.

[h=3]Situational Awareness[/h]
At this point we try to understand the entire behavior of xxxEndMenuState, and all of the structures and objects being manipulated, before we trigger any fatal crash. The main structure we have to deal with is the one that is being read from address zero, which is referenced from the pGlobalMenuState variable:

win32k!tagMENUSTATE
+0x000 pGlobalPopupMenu : Ptr32 tagPOPUPMENU
+0x004 fMenuStarted : Pos 0, 1 Bit
+0x004 fIsSysMenu : Pos 1, 1 Bit
+0x004 fInsideMenuLoop : Pos 2, 1 Bit
+0x004 fButtonDown : Pos 3, 1 Bit
+0x004 fInEndMenu : Pos 4, 1 Bit
+0x004 fUnderline : Pos 5, 1 Bit
+0x004 fButtonAlwaysDown : Pos 6, 1 Bit
+0x004 fDragging : Pos 7, 1 Bit
+0x004 fModelessMenu : Pos 8, 1 Bit
+0x004 fInCallHandleMenuMessages : Pos 9, 1 Bit
+0x004 fDragAndDrop : Pos 10, 1 Bit
+0x004 fAutoDismiss : Pos 11, 1 Bit
+0x004 fAboutToAutoDismiss : Pos 12, 1 Bit
+0x004 fIgnoreButtonUp : Pos 13, 1 Bit
+0x004 fMouseOffMenu : Pos 14, 1 Bit
+0x004 fInDoDragDrop : Pos 15, 1 Bit
+0x004 fActiveNoForeground : Pos 16, 1 Bit
+0x004 fNotifyByPos : Pos 17, 1 Bit
+0x004 fSetCapture : Pos 18, 1 Bit
+0x004 iAniDropDir : Pos 19, 5 Bits
+0x008 ptMouseLast : tagPOINT
+0x010 mnFocus : Int4B
+0x014 cmdLast : Int4B
+0x018 ptiMenuStateOwner : Ptr32 tagTHREADINFO
+0x01c dwLockCount : Uint4B
+0x020 pmnsPrev : Ptr32 tagMENUSTATE
+0x024 ptButtonDown : tagPOINT
+0x02c uButtonDownHitArea : Uint4B
+0x030 uButtonDownIndex : Uint4B
+0x034 vkButtonDown : Int4B
+0x038 uDraggingHitArea : Uint4B
+0x03c uDraggingIndex : Uint4B
+0x040 uDraggingFlags : Uint4B
+0x044 hdcWndAni : Ptr32 HDC__
+0x048 dwAniStartTime : Uint4B
+0x04c ixAni : Int4B
+0x050 iyAni : Int4B
+0x054 cxAni : Int4B
+0x058 cyAni : Int4B
+0x05c hbmAni : Ptr32 HBITMAP__
+0x060 hdcAni : Ptr32 HDC__

This is the main object which xxxEndMenuState deals with: it performs a couple of actions using the object and finally attempts to free it with the call to ExFreePoolWithTag. The interactions with the object that occur prior to the free are the ones we have to analyze deeply, as they are our only hope of getting code execution before the imminent crash. xxxEndMenuState is a destructor, and as such it will first call the destructors of all the objects contained inside the main object before actually freeing their associated allocated memory, for example:

_MNFreePopup(pGlobalMenuState->pGlobalPopupMenu)
_UnlockMFMWFPWindow(pGlobalMenuState->uButtonDownHitArea)
_UnlockMFMWFPWindow(pGlobalMenuState->uDraggingHitArea)
_MNDestroyAnimationBitmap(pGlobalMenuState->hbmAni)

The _MNFreePopup call is very interesting, as PopupMenu objects contain several WND objects and these have Handle references. This is relevant because if such a WND object has its lock count equal to one when _MNFreePopup is called, at some point it will try to destroy the object that the Handle is referencing. These objects are global to a user session. This means that we can force the deletion of any object within the current Windows session, or at the very least decrement its reference count.
win32k!tagPOPUPMENU
+0x000 fIsMenuBar : Pos 0, 1 Bit
+0x000 fHasMenuBar : Pos 1, 1 Bit
+0x000 fIsSysMenu : Pos 2, 1 Bit
+0x000 fIsTrackPopup : Pos 3, 1 Bit
+0x000 fDroppedLeft : Pos 4, 1 Bit
+0x000 fHierarchyDropped : Pos 5, 1 Bit
+0x000 fRightButton : Pos 6, 1 Bit
+0x000 fToggle : Pos 7, 1 Bit
+0x000 fSynchronous : Pos 8, 1 Bit
+0x000 fFirstClick : Pos 9, 1 Bit
+0x000 fDropNextPopup : Pos 10, 1 Bit
+0x000 fNoNotify : Pos 11, 1 Bit
+0x000 fAboutToHide : Pos 12, 1 Bit
+0x000 fShowTimer : Pos 13, 1 Bit
+0x000 fHideTimer : Pos 14, 1 Bit
+0x000 fDestroyed : Pos 15, 1 Bit
+0x000 fDelayedFree : Pos 16, 1 Bit
+0x000 fFlushDelayedFree : Pos 17, 1 Bit
+0x000 fFreed : Pos 18, 1 Bit
+0x000 fInCancel : Pos 19, 1 Bit
+0x000 fTrackMouseEvent : Pos 20, 1 Bit
+0x000 fSendUninit : Pos 21, 1 Bit
+0x000 fRtoL : Pos 22, 1 Bit
+0x000 iDropDir : Pos 23, 5 Bits
+0x000 fUseMonitorRect : Pos 28, 1 Bit
+0x004 spwndNotify : Ptr32 tagWND
+0x008 spwndPopupMenu : Ptr32 tagWND
+0x00c spwndNextPopup : Ptr32 tagWND
+0x010 spwndPrevPopup : Ptr32 tagWND
+0x014 spmenu : Ptr32 tagMENU
+0x018 spmenuAlternate : Ptr32 tagMENU
+0x01c spwndActivePopup : Ptr32 tagWND
+0x020 ppopupmenuRoot : Ptr32 tagPOPUPMENU
+0x024 ppmDelayedFree : Ptr32 tagPOPUPMENU
+0x028 posSelectedItem : Uint4B
+0x02c posDropped : Uint4B
...

In order to understand why this is so useful, let's analyze what happens when a WND object is destroyed:

pWND __stdcall HMUnlockObject(pWND pWndObject)
{
    pWND result = pWndObject;

    pWndObject->cLockObj--;
    if (!pWndObject->cLockObj)
        result = HMUnlockObjectInternal(pWndObject);
    return result;
}

The first thing done is a decrement of the cLockObj counter, and if the counter is then zero the function HMUnlockObjectInternal is called.

pWND __stdcall HMUnlockObjectInternal(pWND pWndObject)
{
    pWND result;
    unsigned int entryIndex;
    pHandleEntry entry;

    result = pWndObject;
    entryIndex = pWndObject->handle & 0xFFFF;
    entry = gSharedInfo.aheList + gSharedInfo.HeEntrySize * entryIndex;
    if ( entry->bFlags & HANDLEF_DESTROY )
    {
        if ( !(entry->bFlags & HANDLEF_INDESTROY) )
        {
            HMDestroyUnlockedObject(entry);
            result = 0;
        }
    }
    return result;
}

Once it knows that the reference count has reached zero, it has to actually destroy the object. For this task it gets the handle value and applies a mask in order to get the index of the HandleEntry into the handle table. Then it validates some state flags, and calls HMDestroyUnlockedObject. The HandleEntry contains information about the object type and state. This information will be used to select between the different destructor functions.

int __stdcall HMDestroyUnlockedObject(pHandleEntry handleEntry)
{
    int index;

    index = 0xC * handleEntry->bType;
    handleEntry->bFlags |= HANDLEF_INDESTROY;
    return (gahti[index])(handleEntry->phead);
}

The handle type information table (gahti) holds properties specific to each object type, as well as their Destroy functions. So this function will use the bType value from the handleEntry in order to determine which Destroy function to call. At this point it is important to remember that we have full control over the MenuState object, and that means we can create and fully control its inner PopupMenu object, and in turn the WND objects inside this PopupMenu. This implies that we have control over the handle value in the WND object. Another important fact is that entry zero in the gahti table is always zero, and it represents the FREE object type.
So our strategy for getting code execution here is to, by some means, create an object whose HandleEntry in the handle table has bType = 0x0 and bFlags = 0x1. If we can manage to do so, we can then create a fake WND object with a handle that references this object of bType = 0x0. When HMDestroyUnlockedObject is called it will end up in a call to gahti[0x0]. As the first element in the gahti table is zero, this ends up as a "call 0". In other words, we can force a path that will execute our controlled data at address zero.

[h=3]What we need[/h]
We need to create a user object with bType = FREE (0x0) and bFlags = HANDLEF_DESTROY (0x1). This is not possible directly, so we first focus on getting an object with the bFlags value equal to 0x1. For this purpose we create a Menu object, set it to a window, and then destroy it. The internal reference count for the object does not reach zero because it is still being accessed by the window object, so it is not actually deleted but instead flagged as HANDLEF_DESTROY in the HandleEntry. This means the bFlags will equal 0x1. The bType value is directly associated with the object type. In the case of a menu object the value is 0x2 and there is no way of creating an object of type 0x0. So we focus on what ways we have to alter this value using some of the functions being called before destroying the WND object. As you can probably remember from the PopupMenu structure shown before, it contains several WND objects, and one of the first actions performed when HMUnlockObject(pWnd) is called is to decrement the lock count. So we simply set up two fake WND objects in such a way that the lockCount field will be pointing to the HandleEntry->bType field. When each of those fake WND objects is destroyed it will actually perform a "dec" operation on the bType of our menu object, thus decrementing it from 0x2 to 0x0. We now have a bFlags of 0x1 and a bType of 0x0. Using this little trick we are able to create a User object with the needed values in the HandleEntry.

[h=3]Summary[/h]
First we will create a MenuObject and force it to be flagged as HANDLEF_DESTROY. Then we will trigger the vulnerability, where xxxEndMenuState will get a reference to the menuState structure from a global thread pointer, and its value will be zero. So we map this address and create a fake MenuState structure at zero. xxxEndMenuState will call FreePopup(..) on a popup object instance we created, and will in turn try to destroy its internal objects. Three of these objects will be fake WND objects which we also create. The first two will serve the purpose of decrementing the bType value of our menu object, and the third one will trigger a HMDestroyUnlockedObject on this same object. This will result in code execution being redirected to address 0x0, as previously discussed. We have to remember that when we redirect execution to address 0, this memory also serves as a MenuState object. In particular the first field is a pointer to the PopupMenu object that we need to use. So what we do is choose the address of this popup menu object in such a way that the least significant bytes of the address also represent a valid x86 jump opcode (e.g. 0x04eb translates to eb 04 in little-endian memory ordering, which represents a jump of 4).

[h=3]Finish him![/h]
Once we achieve execution at ring 0 we patch the Enabled field in the _SEP_TOKEN_PRIVILEGES structure of the MOSDEF callback process in order to enable all the privileges for the process.
We fix up the HandleEntry we modified before, and restore the stack in order to return after the PoolFree, thus skipping the BSOD. Once all of this is done we return to user land, but now our MOSDEF process has all the privileges; this allows us, for example, to migrate to LSASS and get SYSTEM privileges.

[h=3]-- Matias Soler[/h]
Sursa: Immunity Products: Exploiting CVE-2013-3881: A Win32k NULL Page Vulnerability
-
[h=3]Sandbox Overloading with GetSystemTimeAdjustment[/h]
Lately we came across an interesting sample (MD5: b4f310f5cc7b9cd68d919d50a8415974) we would like to share with you. An initial analysis spotted: to summarize, the sample does not seem to show any interesting behavior at all. However, a closer look revealed: the process calls GetSystemTimeAdjustment more than 1.8M times. Since Joe Sandbox captures this API, which introduces some additional computation time, the overall sample execution slows down dramatically, and due to the limited execution time the payload is never reached. We already outlined this technique, named "Sandbox overloading", in a previous blog post. Function 4011B4 shows that GetSystemTimeAdjustment is called 7.8M times: After the loop some anti-emulation routines follow and finally the payload is executed. Since overloading techniques are generic, they affect a wide range of dynamic malware analysis systems and thus are very powerful. Therefore it is important to have technologies to quickly detect and prevent such evasion techniques. Full analysis available: - Joe Sandbox 8.0.0 Analysis b4f310f5cc7b9cd68d919d50a8415974 Posted by Joe Monday, November 4, 2013 2:45 PM Sursa: Automated Malware Analysis Blog: Sandbox Overloading with GetSystemTimeAdjustment
-
Ruxcon slides [TABLE=class: roundTable] [TR] [TH]Presentation[/TH] [TH]Speaker[/TH] [TH][/TH] [TH]Slides[/TH] [TH][/TH] [/TR] [TR] [TD] Amateur Satellite Intelligence: Watching North Korea[/TD] [TD] Dave Jorm[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [TR] [TD] Payment Applications Handle Lots of Money. No, Really: Lots Of It.[/TD] [TD] Mark Swift & Alberto Revelli[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [TR] [TD] 50 Shades of Oddness - Inverting the Anti-Malware Paradigm[/TD] [TD] Peter Szabo[/TD] [TD] [/TD] [TD] Not available[/TD] [/TR] [TR] [TD] Visualization For Reverse Engineering and Forensics[/TD] [TD] Danny Quist[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [TR] [TD] Electronic Voting Security, Privacy and Verifiability[/TD] [TD] Vanessa Teague[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [TR] [TD] Cracking and Analyzing Apple iCloud Protocols: iCloud Backups, Find My iPhone, Document Storage[/TD] [TD] Vladimir Katalov[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [TR] [TD] Buried by time, dust and BeEF[/TD] [TD] Michele Orru[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [TR] [TD] Under the Hood Of Your Password Generator[/TD] [TD] Michael Samuel[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [TR] [TD] Malware, Sandboxing and You: How Enterprise Malware and 0day Detection is About To Fail (Again)[/TD] [TD] Jonathan Brossard[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [TR] [TD] Espionage: Everything Old Is New Again[/TD] [TD] Kayne Naughton[/TD] [TD] [/TD] [TD] Not available[/TD] [/TR] [TR] [TD] VoIP Wars: Return of the SIP[/TD] [TD] Fatih Ozavci[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [TR] [TD] BIOS Chronomancy: Fixing the Static Root of Trust for Measurement[/TD] [TD] John Butterworth[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [TR] [TD] The BYOD PEAP Show: Mobile Devices Bare Auth[/TD] [TD] Josh Yavor[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [TR] [TD] Bypassing Content-Security-Policy[/TD] [TD] Alex "kuza55" Kouzemtchenko[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [TR] [TD] Deus Ex Concolica - Explorations in end-to-end automated binary exploitation[/TD] [TD] Mark 'c01db33f' Brand[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [TR] [TD] Mining Mach Services within OS X Sandbox[/TD] [TD] Meder Kydyraliev [/TD] [TD] [/TD] [TD] Not available[/TD] [/TR] [TR] [TD] Top of the Pops: How to top the charts with zero melodic talent and a few friendly computers[/TD] [TD] Peter Fillmore[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [TR] [TD] A Beginner's Journey into the World of Hardware Hacking[/TD] [TD] Silvio Cesare[/TD] [TD] [/TD] [TD] Not available[/TD] [/TR] [TR] [TD] Underground: The Julian Assange story (with Q&A)[/TD] [TD] Suelette Dreyfus & Ken Day[/TD] [TD] [/TD] [TD] Not available[/TD] [/TR] [TR] [TD] Wardriving in the cloud: A closer look at Apple and Google location services[/TD] [TD] Hubert Seiwert[/TD] [TD] [/TD] [TD] Not available[/TD] [/TR] [TR] [TD] AntiTraintDroid - Escaping Taint Analysis on Android for Fun and Profit[/TD] [TD] Golam 'Babil' Sarwar[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [TR] [TD] The Art Of Facts - A look at Windows Host Based Forensic Investigation[/TD] [TD] Adam Daniel[/TD] [TD] [/TD] [TD] Not available[/TD] [/TR] [TR] [TD] Introspy : Security Profiling for Blackbox iOS and Android[/TD] [TD] Alban Diquet & Marc Blanchou[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [TR] [TD] Inside Story Of Internet Banking: Reversing The Secrets Of Banking Malware[/TD] [TD] Sean Park[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [TR] [TD] Edward Snowden: It's Complicated[/TD] [TD] Patrick Gray[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [TR] [TD] 
Schoolin' In: How to Build Better Hackers[/TD] [TD] Brendan Hop[/TD] [TD] [/TD] [TD] Not available[/TD] [/TR] [TR] [TD] Roll the Dice and Take Your Chances[/TD] [TD] Thiébaud Weksteen[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [TR] [TD] Cracking, CUDA and the Cloud – Cracking Passwords Has Never Been So Simple, Fast and Cheap[/TD] [TD] John Bird[/TD] [TD] [/TD] [TD] Slides[/TD] [/TR] [/TABLE] Sursa: Slides
-
[h=3]Creating a simple and fast packet sniffer in C++[/h]
I have seen several articles on the web about writing packet sniffers using C and C++ which suggest using raw sockets and dealing with protocol headers and endianness yourself. This is not only very tedious, but there are also libraries which already deal with that for you and make it extremely easy to work with network packets. One of them is libtins, a library I have been actively developing for the past two years. It has support for several protocols, including Ethernet II, IP, IPv6, TCP, UDP, DHCP, DNS and IEEE 802.11, and it works on GNU/Linux, Windows, OSX and FreeBSD. It even works on different architectures such as ARM and MIPS, so you could go ahead and develop some application which could be executed inside routers and other devices. Let's see how you would sniff some TCP packets and print their source and destination port and addresses:

#include <iostream>
#include <tins/tins.h>

using namespace Tins;

bool callback(const PDU &pdu) {
    const IP &ip = pdu.rfind_pdu<IP>();
    const TCP &tcp = pdu.rfind_pdu<TCP>();
    std::cout << ip.src_addr() << ':' << tcp.sport() << " -> "
              << ip.dst_addr() << ':' << tcp.dport() << std::endl;
    return true;
}

int main() {
    // Sniff on interface eth0
    // Maximum packet size, 2000 bytes
    Sniffer sniffer("eth0", 2000);
    sniffer.sniff_loop(callback);
}

This is the output I get when executing it: As you can see, it's fairly simple. Let's go through the snippet and see what it's doing: The callback function is the one that libtins will call for us each time a new packet is sniffed. It returns a boolean, which indicates whether sniffing should go on or not, and takes a parameter of type PDU, which will hold the sniffed packet. This library represents packets as a series of Protocol Data Units (PDUs) stacked over each other. So in this case, every packet would contain an EthernetII, IP and TCP PDU. Inside callback's body, you can see that we're calling PDU::rfind_pdu. This is a member function template which looks for the provided PDU type inside the packet, and returns a reference to it. So in the first two lines we're retrieving the IP and TCP layers, and then we're simply printing the addresses and ports. Finally, in main an object of type Sniffer is constructed. When constructing it, we indicate that we want to sniff on interface eth0 and we want a maximum packet capture size of 2000 bytes. After that, Sniffer::sniff_loop is called, which will start sniffing packets and calling our callback for each of them. Note that this example will run successfully on any of the supported operating systems (as long as you use the right interface name, of course). The endianness of each of the printed fields is handled internally by the library, so you don't even have to worry about making your code work on big-endian architectures. Now, you may be wondering whether using libtins will make your code significantly slower. If that is your concern, then you should not worry about it at all! This library was designed keeping efficiency in mind at all times. As a consequence it's the fastest packet sniffing and interpretation library I've tried out. Go ahead and have a look at these benchmarks to see how fast it actually works. If you want to learn more about libtins, please visit this tutorial, which covers everything you should know before starting to develop your network sniffing application! Posted by Matias Fontanini at 12:41 Sursa: Average coder: Creating a simple and fast packet sniffer in C++
-
Date: Mon, 4 Nov 2013 06:11:22 +0400
From: Solar Designer <solar@...nwall.com>
To: announce@...ts.openwall.com
Subject: [openwall-announce] php_mt_seed went beyond PoC

Hi,

With the functionality added in October, our php_mt_seed PHP mt_rand() seed cracker is no longer just a proof-of-concept, but is a tool that may actually be useful, such as for penetration testing. It is now a maintained project with its own homepage: php_mt_seed - PHP mt_rand() seed cracker

Changes implemented in October, leading up to version 3.2, include addition of AVX2 and Intel MIC (Xeon Phi) support, and more importantly support for advanced invocation modes, which allow matching of multiple, non-first, and/or inexact mt_rand() outputs to possible seed values. The revised README file provides php_mt_seed usage examples (both trivial and advanced), as well as benchmarks on a variety of systems (ranging from quad-core CPU to 16-core server and to Xeon Phi): php_mt_seed: README

With the new AVX2 support, php_mt_seed searches the full 32-bit seed space on a Core i7-4770K CPU in 48 seconds. On Xeon Phi 5110P, it does the same in 7 seconds. In advanced invocation modes, the running times are slightly higher, but are still very acceptable. For example, let's generate 10 random numbers in the range 0 to 9:

$ php5 -r 'mt_srand(1234567890); for ($i = 0; $i < 10; $i++) { echo mt_rand(0, 9), " "; } echo "\n";'
6 6 4 1 1 2 8 4 5 8

and find the seed(s) based on these 10 numbers using our HPC Village machine's CPUs (2x Xeon E5-2670):

[solar@...er php_mt_seed-3.2]$ GOMP_CPU_AFFINITY=0-31 time ./php_mt_seed 6 6 0 9 6 6 0 9 4 4 0 9 1 1 0 9 1 1 0 9 2 2 0 9 8 8 0 9 4 4 0 9 5 5 0 9 8 8 0 9
Pattern: EXACT-FROM-10 EXACT-FROM-10 EXACT-FROM-10 EXACT-FROM-10 EXACT-FROM-10 EXACT-FROM-10 EXACT-FROM-10 EXACT-FROM-10 EXACT-FROM-10 EXACT-FROM-10
Found 0, trying 1207959552 - 1241513983, speed 222870766 seeds per second
seed = 1234567890
Found 1, trying 4261412864 - 4294967295, speed 222760735 seeds per second
Found 1
615.57user 0.00system 0:19.28elapsed 3192%CPU (0avgtext+0avgdata 3984maxresident)k
0inputs+0outputs (0major+292minor)pagefaults 0swaps

We found the correct seed (and there turned out to be only one such seed) in under 20 seconds. What if we did not know the very first mt_rand() output (had only 9 known values out of 10, in this example)? Let's specify "0 0 0 0" to have php_mt_seed skip the first output:

[solar@...er php_mt_seed-3.2]$ GOMP_CPU_AFFINITY=0-31 time ./php_mt_seed 0 0 0 0 6 6 0 9 4 4 0 9 1 1 0 9 1 1 0 9 2 2 0 9 8 8 0 9 4 4 0 9 5 5 0 9 8 8 0 9
Pattern: SKIP EXACT-FROM-10 EXACT-FROM-10 EXACT-FROM-10 EXACT-FROM-10 EXACT-FROM-10 EXACT-FROM-10 EXACT-FROM-10 EXACT-FROM-10 EXACT-FROM-10
Found 0, trying 469762048 - 503316479, speed 203360193 seeds per second
seed = 485860777
Found 1, trying 637534208 - 671088639, speed 203036371 seeds per second
seed = 641663289
Found 2, trying 1073741824 - 1107296255, speed 202975770 seeds per second
seed = 1091847690
Found 3, trying 1207959552 - 1241513983, speed 203018412 seeds per second
seed = 1234567890
Found 4, trying 3388997632 - 3422552063, speed 203177316 seeds per second
seed = 3414448749
Found 5, trying 4261412864 - 4294967295, speed 203117867 seeds per second
Found 5
675.08user 0.00system 0:21.14elapsed 3192%CPU (0avgtext+0avgdata 4000maxresident)k
0inputs+0outputs (0major+291minor)pagefaults 0swaps

We found 4 extra seeds, and the speed is slightly lower (by the way, there's much room for optimization in handling of cases like this - maybe later).
The original seed value was found as well. Other (and possibly more) mt_rand() outputs could be specified and/or skipped as well, and/or ranges of possible values could be specified. The mt_rand() output range does not have to be 0 to 9, too - any other range supported by PHP's mt_rand() is also supported in php_mt_seed. Enjoy, and please spread the word. Alexander Sursa: announce - [openwall-announce] php_mt_seed went beyond PoC
-
[h=1]Hey, Google! It's me, the user. Do you still remember me?[/h]
Liviu Mihai - 5 nov 2013

Dear Google, I'm sorry to tell you, but this time Bill Gates was right. I know, he gets it wrong sometimes too, like back in 2004, when he promised us that in two years we would be rid of spam for good. But about you, I regret to say, he nailed it. It was also sometime in the two-thousands, right when you were becoming our favorite and the antithesis of everything we disliked about Microsoft. Bill Gates said back then - and we all hoped it would be just another failed prediction - that you were going through the so-called golden age that every big company crosses at some point, but that a time would come when you, too, would be blamed. It seemed inconceivable then, yet here we are 10 years later, with you forcing us to admit he was right.

You started out more than promisingly, as a search engine much better than the anemic competition. Over time you constantly improved your algorithm and added numerous features around search. Then you began the great assault. April 1, 2004. It seemed like a joke at first, and that's probably what Microsoft and Yahoo! thought too. You launched Gmail for a limited number of users, who would later send (or sell) the invitations everyone seemed to covet. 1 GB of mail storage, what madness! When Yahoo! and Microsoft were giving you a measly 2 MB!?! You launched and bought services on an assembly line. You took Writely and turned it into Google Docs. You launched Google Reader with a monstrous interface which, at its first and last facelift, turned out to be a winning bet. YouTube. Picasa. FeedBurner. Blogger and many others. You even launched a web browser, Chrome, although you always insisted you weren't a software company. Then, with Android, you began the assault on mobile. Apple whetted your appetite, but you had plenty of merit of your own. We liked you. You cared about us.

And then, all of a sudden, you stopped caring. You became full of yourself. You became Microsoft. Or much worse. You started shutting down services, deaf to our complaints. You shut down Wave, Answers, iGoogle, Buzz, Dictionary and topped it off with Reader. I'm still grumbling at you over Reader. But that's not the problem. I get it, it's about money. Your business, your decisions. The problem is that you no longer care; you have no scruples left toward me. You no longer improve your services as often as you used to, or at all. You have no reason to. You have no rival to motivate you. And that's bad for you, and worse for us. You've come to make changes that affect me directly, without taking any account of my needs and my feedback. Because you can, because you're big and mighty and you couldn't care less about me. Because you're evil. You, the very one who used to pass itself off as good.

You're no longer the Google that made us prick up our ears every time something was written about you. You're no longer the one who made us exclaim: "wow, did you see that cool thing Google just released?". Now it's more like: "great, Google screwed us over again". You upset us more and more often, and we end up regretting that we helped you reach the position you're in today. We're starting to look for alternatives to your services, simply because your policies bother us. We're starting to get fed up.

Before we part ways for good, I'm giving you one more chance. For me, a hope. Two years, three at most, to pull yourself together. To remember me. To remember yourself, the way you were when you were you. To make me care about you again, like 10 years ago. Like when you were fair to me. When you were Google.

Sursa: Hei, Google! Sunt eu, utilizatorul. Mă mai ții minte?

I found this interesting.
-
[h=1]BitDefender Internet Security 2014 – FREE license![/h]
By Radu FaraVirusi(com) on November 5, 2013

For several years BitDefender has dominated the specialist tests, impressing with excellent detection and minimal impact on system resources. Now you can get BitDefender Internet Security 2014 with a FREE license through a special promotion. Visit the following site to receive a free one-year license: Product Page

Sursa: BitDefender Internet Security 2014 – licenta GRATUITA!
-
"Hello World" For Windbg This post is an introductory to Windbg from an Ollydbg user's perspective. It contains an example of the "Hello World" of malware analysis; which is unpacking UPX and bypassing IsDebuggerPresent. I'm still learning Windbg but I'm hoping this post will be useful to others. I assume the reader has the Debugging Tools for Windows installed and symbols setup. The quickest way to verify that the symbols are setup properly is to try the following in Windbg File > Open Executable > notepad.exe. Once the executable loads execute !peb from the command line. We can see the symbols are not properly set up by the warning in the banner. If this is the case, the following link can be helpful in setting up the symbols and fixing them. If the symbols are setup properly we would see the following output. One of the difficult parts of learning Windbg is enumerating useful commands. Ollydbg is great because you can see all the useful commands by selecting a drop down. This isn't the case for Windbg. The best place to look up commands is Windbg Help file. It's surprisingly useful. Belowis a list of commands that I enumerated that are commonly used in debugging malware. .tlist list all running processes [*]lm list all loaded modules [*]lmf list all loaded modules - full path [*]!dlls list all loaded modules - more detailed [*]!dh address displays the headers for the specified image [*]!dh -options address no options, display all -f display file headers -s display sections headers -a display all header [*]@$exentry location of entry point [*]u unassemble [*]!SaveModule startaddress path [*]~ thread status for all threads [*]| proces status [*]!gle get last error [*]r dump registers [*]r reg=value assign register value [*]rF dump Floating point [*]k display call stack for current thread [*]!peb dump process block [*]!address [*].lastevent [*].imgscan dump al [*]bl list breakpoints [*]bc clear breakpoint, * or # [*]bd disable breakpoints [*]bp breakpoint [*]ba r/write/execute (r,w,e) size addr [*]sxe cpr break on process creation [*]sxe epr break on process exit [*]sxe ct break on thread creation [*]sxe et break on thread exit [*]sxe ld break on loading of module [*]sxe ud break on unloading of module [*]$$ print string [*]p step over [*] t step into [*]restart restarts the debugging of the executable process [*]q quit Let's explore these commands on some UPX packed C code. #include <windows.h>#include <stdio.h> int main(int argc, char *argv[]) { if (IsDebuggerPresent() == TRUE) { MessageBox(NULL, TEXT("Please close your debugging application and restart the program"), TEXT("Debugger Found!"), 0); ExitProcess(0); } MessageBox(NULL, TEXT("Hello World!"), TEXT("Bypassed"), 0); ExitProcess(0); return 0; } Source The compiled code can be downloaded from here (MD5: 4F6B57487986FD7A40CFCFA424FDB7B8). The first thing to do is File > Open Executable.. to load the sample into the debugger. Do not use the folder icon via the menu to load the executable. This will only open the executable as if we were opening it in Notepad.exe. Windbg will not break at the entry point of the executable but at what OllyDbg labels as the system breakpoint. Except Windbg breaks one instruction before OllyDbg sets it's system breakpoint. To get the address of the entry point we can read it from the portable executable (PE) header but in order to get it we have to know the base address of the executable. This is a common theme when using Windbg. 
In order to run one command, we have to calculate or parse the results of another command and then use that as an argument. The base address is present in the output (as seen above) when the executable is first loaded in Windbg. If we accidentally cleared the screen with the command .cls, the easiest way to get the base address is to execute lm (list modules) or lmf (list modules with file path). To read the PE header we can use the command !dh (dump headers) with an argument of the base address. The entry point of the executable will be found in the OPTIONAL HEADER VALUES. Since we have the base address (012c0000) and the entry point (79A0) we can add the two and print the assembly of the entry point using u with an argument of the address. A much quicker approach to access the address of the entry point is @$exentry. Since we are at the system breakpoint, we need to set a breakpoint using the command bp with an address as the argument and then execute until the breakpoint is hit by typing g and then Enter in the command window. It is sometimes useful to dump the registers to get an idea of where we are. Dumping the registers can be done using the command r. The pushad instruction is a good indicator that we are at the entry point of UPX. Let's dump the section headers using !dh -s address. The -s is short for section. For safe practice we should remove the breakpoint. To remove a breakpoint we first show a list of all breakpoints by using the command bl (break list), and to remove it we use the command bc (break clear) with an argument of the index of the breakpoint.

UPX is very easy to unpack. The classic technique is to step over pushad, set a break on access on the contents of ESP, execute till the breakpoint, set a breakpoint on the jump after the restoring of the stack (SUB ESP -0x80), then execute until the breakpoint and single step into the jump. Stepping into is executed by the command t. To create a break on access we use the command ba with an argument of r/w/x, size and address. To execute till the breakpoint the g command is used. Note: the base address has changed to 01330000 due to restarting. At address 01337b54 we can see the JMP to the original entry point of the executable. First we need to remove the breakpoint via the bl and bc combination of commands. Once removed we will set a breakpoint by using the command bp, execute g till the breakpoint, step in with t, and we will be at the original entry point.

Now it's time to bypass IsDebuggerPresent. The technique I'm going to use is from the mmmmmm blog. Once you grasp the concepts presented in this post I'd highly recommend reading his post Games for Windows – Live. It's an excellent article on exploring anti-debugging using Windbg. What would happen if we executed our compiled code? Quick recap to unpack UPX and get back to the original entry point:

.restart
bp @$exentry
g
t
bl
bc 0
ba r 4 @esp
bl
bc 0
bp address of JMP
g
bl
bc 0
t

IsDebuggerPresent can be easily bypassed by patching the second byte of the PEB with 0.

typedef struct _PEB {
    BYTE Reserved1[2];
    BYTE BeingDebugged;
    BYTE Reserved2[1];
    PVOID Reserved3[2];
    PPEB_LDR_DATA Ldr;
    PRTL_USER_PROCESS_PARAMETERS ProcessParameters;
    BYTE Reserved4[104];
    PVOID Reserved5[52];
    PPS_POST_PROCESS_INIT_ROUTINE PostProcessInitRoutine;
    BYTE Reserved6[128];
    PVOID Reserved7[1];
    ULONG SessionId;
} PEB, *PPEB;

To patch the byte we can use eb (edit byte) with an argument of the address and the value. We can see that BeingDebugged now has a value of No.
If we press g to execute, we can see that we bypass IsDebuggerPresent(). As previously mentioned, I'm still learning; if you have any recommendations on commands or examples, please leave a comment or ping me on Twitter. Cheers.

Resources

windbg | mmmmmm
http://www.codeproject.com/Articles/29469/Introduction-Into-Windows-Anti-Debugging
http://www.codeproject.com/Articles/6084/Windows-Debuggers-Part-1-A-WinDbg-Tutorial
http://windbg.info/doc/1-common-cmds.html
http://www.exploit-db.com/download_pdf/18576/

Sursa: Hooked on Mnemonics Worked for Me: "Hello World" For Windbg
-
[h=2]Sample Code – Dictionary Zip Cracker[/h]
Posted by Adam on November 4, 2013

After reading Violent Python, I decided to try my hand at making a basic dictionary zip cracker just for fun. Some of the other free open source tools out there are great, but this one does work. I'm primarily posting it for fun and to test the blog's new syntax highlighting. It can generate a biographical dictionary from a specified file's ASCII strings as well as populate it with a recursive directory listing. I got the idea while studying for my AccessData cert. Their Password Recovery Toolkit does this in hopes of increasing the likelihood that the dictionary will contain a relevant password. The idea is that a user either used the word in the past or that it can be found elsewhere on his or her computer. A very cool idea that's helped me on forensics challenges. I've designed the code below for Python 2.7.5 on Windows 7. It uses the Strings binary from Picnix Utils. You can also click here to download a copy.

import argparse
import zipfile
import subprocess
import os

print '''
SYNTAX:

Dictionary:               zipdict.py -f (zip) -d (dict)
Bio Dictionary Generator: zipdict.py -f (zip) -s (file with desired strings)
'''

parser = argparse.ArgumentParser(description='Zip file dictionary attack tool.')
parser.add_argument('-f', help='Specifies input file (ZIP)', required=True)
parser.add_argument('-d', help='Specifies the dictionary.', required=False)
parser.add_argument('-s', help='Build ASCII strings dictionary.', required=False)
args = parser.parse_args()

zipfile = zipfile.ZipFile(args.f)

print '{*} Cracking: %s' % args.f
print '{*} Dictionary: %s' % args.d

def biodictattack():
    print '{*} Generating biographical dictionary...'
    stringsdict = open('stringsdict', 'w')
    # Pull ASCII strings out of the target file and add them to the dictionary
    stringsout = subprocess.Popen(['strings', args.f], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
    for string in stringsout.stdout:
        stringsdict.write(string)
    stringsout.wait()
    # Add a recursive directory listing (file and directory names) to the dictionary
    walkpath = raw_input("Directory listing starting where? [ex. C:\] ")
    for root, dirs, files in os.walk(walkpath):
        for name in files:
            filenames = os.path.join(name)
            stringsdict.write(filenames + '\n')
    for root, dirs, files in os.walk(walkpath):
        for name in dirs:
            dirlisting = os.path.join(name)
            stringsdict.write(dirlisting + '\n')
    stringsdict.close()
    print '{*} Done. Re-run to crack with zipdict.py -f (zip) -d stringsdict'

def dictattack():
    with open(args.d, 'r') as dict:
        for x in dict.readlines():
            dictword = x.strip('\n')
            try:
                zipfile.extractall(pwd=dictword)
                print '{*} Password found = ' + dictword + '\n'
                print '{*} File contents extracted to zipdict path.'
                exit(0)
            except Exception, e:
                pass

if args.s:
    biodictattack()
else:
    dictattack()

My next post will be on analyzing Volume Shadow Copies on Linux and some cool methods that I used on the 2013 DC3 Forensic Challenge.

Sursa: Sample Code - Dictionary Zip Cracker | fork()
-
Understanding Session Fixation

1. Introduction

A session ID is used to identify the user of a web application. It can be sent with the GET method. An attacker can send a link to the user with a predefined session ID. When the user logs in, the attacker can impersonate him, because the user uses the predefined session ID, which is known to the attacker. This is how session fixation works. As we can see, there is no need to guess the session ID, because the attacker simply chooses the session ID that will be used by the victim.

2. Environment

Let's analyze session fixation step by step in one of the lessons available in WebGoat [1]. WebGoat is a web application that is intentionally vulnerable. It can be useful for those who want to play with web application security. The goal of WebGoat is to teach web application security lessons. WebGoat is part of the Samurai Web Testing Framework [2]. The Samurai Web Testing Framework is a Linux-based environment for web penetration testing. The aforementioned lesson is entitled "Session Fixation" (part of "Session Management Flaws"). It was created by Reto Lippuner and Marcel Wirth.

3. Session Fixation Lesson from WebGoat

The attacker first sends a mail to the victim with a predefined session ID (SID). It has the value 12345 for the purpose of demonstration. The attacker has to convince the user to click the link. The victim gets the mail and is going to click the link to log in. As we can see, the link has a predefined session ID. The victim logs into the web application and is recognized by the attacker's predefined session ID. The attacker knows the predefined session ID and is able to impersonate the user.

4. Summary

Users can be impersonated when they use links with predefined session ID values chosen by the attacker. Session fixation was described and the lesson from WebGoat ("Session Fixation" from "Session Management Flaws", created by Reto Lippuner and Marcel Wirth) was presented to analyze session fixation step by step. The mitigation for session fixation would be session ID regeneration after the user successfully logs in. Then the predefined session ID would no longer be of any use to the attacker.

References:
[1] WebGoat https://www.owasp.org/index.php/Category:OWASP_WebGoat_Project (access date: 22 October 2013)
[2] Samurai Web Testing Framework (access date: 22 October 2013)

By Dawid Czagan|October 31st, 2013

Sursa: Understanding Session Fixation - InfoSec Institute
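To make the mitigation mentioned in the summary above concrete, here is a minimal, framework-agnostic sketch of regenerating the session ID on successful login. The in-memory SESSIONS store and the function name are illustrative assumptions, not WebGoat code or any particular framework's API.

import secrets

SESSIONS = {}  # illustrative in-memory store: session_id -> session data

def on_successful_login(presented_session_id, username):
    # Never keep a session ID that existed before authentication: drop it and
    # issue a fresh, unpredictable one, so an attacker-fixated ID becomes useless.
    SESSIONS.pop(presented_session_id, None)   # invalidate the pre-login session
    new_session_id = secrets.token_hex(16)     # cryptographically random replacement
    SESSIONS[new_session_id] = {"user": username}
    return new_session_id                      # send this back in the session cookie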
-
Reverse Engineering with OllyDbg

Abstract

The objective of writing this paper is to explain how to crack an executable without peeping at its source code, by using the OllyDbg tool. Although there are many tools that can achieve the same objective, the beauty of OllyDbg is that it is simple to operate and freely available. We have already done a lot of reverse engineering of .NET applications. This time, we are confronted with an application whose origin is unknown altogether. In simple terms, we don't have the actual source code; we have only the executable version, which makes reverse engineering a more tedious task.

Essentials

The security researcher must have rigorous knowledge of assembly programming language. It is expected that the machine is configured with the following tools:

OllyDbg
Assembly programming knowledge
CFF Explorer

Patching Native Binaries

When the source code is not provided, it is still possible to patch the corresponding software binaries in order to remove various security restrictions imposed by the vendor, as well as to fix inherent bugs in the software. A familiar type of restriction built into software is copy protection, which is normally enforced by the software vendor. In copy protection, the user is typically obliged to register the product before use. The vendor stipulates a time restriction on the beta software in order to avoid license misuse, and permits the product to run only in a reduced-functionality mode until the user registers.

Executable Software

The following sample shows a way of bypassing or removing the copy protection in order to use the product without extending the trial duration or, in fact, without purchasing the full version. The copy protection mechanism often involves a process in which the software checks whether it should run and, if it should, which functionality should be allowed. One type of copy protection common in trial or beta software allows a program to run only until a certain date.

In order to explain reverse engineering, we have downloaded a beta version of software from the Internet that is operative for 30 days. As you can see, the following trial software application has expired and no longer works; it shows an error message when we try to execute it.

We don't know in which programming language or on which platform this software was developed, so the first task is to identify its origin. We can engage CFF Explorer, which displays some significant information, such as the fact that this software was developed using VC++, as shown below. We can easily conclude that this is a native executable and that it is not executing under the CLR. We can't use ILDASM or Reflector to analyze its opcodes. This time, we have to choose a different approach to crack the native executable.

Disassembling with OllyDbg

When we attempt to load the SoftwareExpiration.exe file, it will refuse to run because the current date is past the date on which the authorized trial expired. How can we use this software despite the expiration of the trial period? The following section illustrates the steps for removing the copy protection restriction.

The Road Map

Load the expired program in order to understand what is happening behind the scenes.
Debug this program with OllyDbg.
Trace the code backward to identify the code path.
Modify the binary to force all code paths to succeed and to never hit the trial expiration code path again.
Test the modifications.

Such tasks can also be accomplished with a powerful tool, IDA Pro, but it is commercial and not freely available. OllyDbg is not as powerful as IDA Pro, but it is useful in some scenarios. First download OllyDbg from its official website and configure it properly on your machine. Its interface looks like this:

Now open the SoftwareExpiration.exe program in the OllyDbg IDE via the File > Open menu and it will disassemble the binary file. Don't be afraid of the bizarre assembly code, because all the modifications are performed directly on the native assembly code. Here the red box shows the entry point instructions of the program, referred to as 00401204. The CPU main thread window displays the software code in the form of assembly instructions that are executed in top-to-bottom fashion. That is why, as we stated earlier, assembly programming knowledge is necessary when reverse engineering a native executable.

Unfortunately, we don't have the actual source code, so how can we find the relevant assembly code? Here the error message "Sorry, this trial software has expired" might help us to solve this problem, because with the help of this error message we can identify the actual code path that leads to it. While the error dialog box is still displayed, start debugging by pressing F9 or from the Debug menu. Now you can find the time limit code. Next, press F12 in order to pause the code execution so that we can find the code that causes the error message to be displayed.

Okay. Now view the call stack by pressing Alt+K. Here, you can easily figure out that the trial error text is a parameter of MessageBoxA, as follows:

Select the USER32.MessageBoxA entry near the bottom of the call stack, right click, and choose "Show call":

This shows the starting point, with the assembly call to MessageBoxA selected. Notice the greater-than symbol (>) next to some of the lines of code, which indicates that another line of code jumps to that location. Directly before the call to MessageBoxA (in red in the right pane), four parameters are pushed onto the stack. Here the PUSH 10 instruction has the > sign, meaning it is referenced by another line of code. Select the PUSH 10 instruction located at address 004011C0; the line of code that references the selected line is displayed in the text area below the top pane of the CPU window, as follows:

Select the code in that text area and right click to open the shortcut menu. It allows you to easily navigate to the code that refers to the selected line of code, as shown:

We have now identified the actual line of code that is responsible for producing the error message. Now it is time to make some modifications to the binary code. The context menu in the previous figure shows that both 00401055 and 00401063 contain JA (jump if above) instructions that lead to the PUSH 10 used for the message box. First select Go to JA 00401055 from the context menu. You should now be on the code at location 0x00401055. Your ultimate objective is to prevent the program from hitting the error code path. This can be accomplished by changing the JA instruction to NOP (no operation), which actually does nothing.
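At the byte level, a JA short jump is encoded as opcode 0x77 followed by a one-byte displacement, and NOP is 0x90, so the "Fill with NOPs" operation used in the next step simply overwrites those two bytes with 0x90 0x90. The same patch can also be applied outside the debugger; here is a minimal Python sketch of the idea, where the raw file offsets are hypothetical placeholders (not values from the article) that you would have to locate first, for example with a hex editor.

# a sketch only: patch two hypothetical JA (0x77) short jumps to NOPs in a copy of the binary
data = bytearray(open("SoftwareExpiration.exe", "rb").read())

for file_offset in (0x455, 0x463):                   # hypothetical raw offsets of the two JA instructions
    assert data[file_offset] == 0x77                 # 0x77 = JA rel8 opcode
    data[file_offset:file_offset + 2] = b"\x90\x90"  # overwrite opcode + displacement with NOP, NOP

open("SoftwareExpiration_patched.exe", "wb").write(bytes(data))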
Right click the 0x00401055 instruction inside the CPU window, select "Binary", and click Fill with NOPs, as shown below:

This operation fills the entire instruction at 0x00401055 with NOPs:

Go back to PUSH 10 by pressing the hyphen (-) key and repeat the previous process for the instruction at 0x00401063, as follows:

Now save the modifications by right-clicking in the CPU window, clicking Copy to Executable, and then clicking All Modifications. Then hit the Copy all button in the next dialog box, as shown below:

Right after hitting the "Copy all" button, a new window named "SoftwareExpiration.exe" will appear. Right-click in this window and choose Save File:

Finally, save the modified or patched binary with a new name. Now load the modified program; you can see that no expiration error message is shown. We have successfully defeated the trial expiration restriction.

Final Note

This article demonstrates one way to challenge the strength of a copy protection measure using OllyDbg and to identify ways to make your software more secure against unauthorized consumption. By attempting to defeat the copy protection of your application, we can learn a great deal about how robust the protection mechanism is. By doing this testing before the product becomes publicly available, we can modify the code to make circumvention of the copy protection more difficult before its release.

By Ajay Yadav|November 1st, 2013

Sursa: Reverse Engineering with OllyDbg - InfoSec Institute
-
Android 4.4 arrives with new security features - but do they really matter?

Stefan Tanase
Kaspersky Lab Expert
Posted November 04, 15:53 GMT

Last week, Google released version 4.4 (KitKat) of their omni-popular Android OS. Among the improvements, some have noticed several security-related changes. So, how much more secure is Android 4.4?

Android 4.4's (KitKat) major security improvements can be divided into two categories:

1. Digital certificates

Android 4.4 will warn the user if a Certificate Authority (CA) is added to the device, making it easy to identify man-in-the-middle attacks inside local networks. At the same time, Google certificate pinning will make it harder for sophisticated attackers to intercept network traffic to and from Google services, by making sure only whitelisted SSL certificates can connect to certain Google domains.

2. OS hardening

SELinux is now running in enforcing mode instead of permissive mode. This helps enforce permissions and thwart privilege escalation attacks, such as exploits that want to gain root access. Android 4.4 also comes compiled with FORTIFY_SOURCE set at level 2, making buffer overflow exploits harder to implement. Privilege escalation and buffer overflows are techniques used for rooting mobile phones, so this makes it harder for Android 4.4 users to get root access on their devices. On the bright side, it also makes it harder for malware to do the same, and gaining root access is an important step in infecting Android-based terminals.

From the point of view of malware threats, these enhancements do not really make a big difference. The most common Android infection source remains the same: unofficial apps downloaded from third-party stores. Nothing has changed here.

One of the biggest problems in the Android ecosystem is the large number of different versions of the OS, including ancient ones, that are still running on users' mobile devices - this is known as version fragmentation. For instance, more than 25% of users are still running Android 2.3, which was released years ago. This, among other things, represents a big security issue. Therefore, perhaps the most important change in KitKat is the lowered resource usage. Android 4.4 can run on devices with just 512MB of RAM, which for high-end hardware means faster operation and better battery life, while for devices with fewer resources it means the chance to use a modern, more secure OS.

Power users have always wanted to use the latest versions of Android on their devices - that's why phone rooting has become so popular and that's why community projects such as CyanogenMod have evolved into fully-fledged companies. The real problem here is that most non-technical users will have to rely on hardware vendors to get an Android update. For instance, I have an old smartphone from a leading mobile phone maker from South Korea that stopped receiving updates at Android 2.3.3. Sadly, many mobile phone makers prefer to withhold updates as a method of forcing users to purchase newer terminals. At the same time, this effectively increases the risk across their entire user base. It's a pity this problem is not discussed more widely.

Sursa: https://www.securelist.com/en/blog/208214116/Android_4_4_arrives_with_new_security_features_but_do_they_really_matter
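The certificate pinning mentioned in the first category above boils down to comparing the certificate a server presents against a whitelist known in advance. The following minimal Python sketch illustrates the general idea; it is not Android's implementation, and the pinned fingerprint value is a placeholder you would have to obtain out-of-band.

import hashlib
import ssl

HOST = "www.google.com"
PINNED_SHA256 = {"<expected certificate sha256 hex digest>"}  # placeholder, obtained out-of-band

# Fetch the certificate the server presents and compare its fingerprint to the pinned set.
pem_cert = ssl.get_server_certificate((HOST, 443))
der_cert = ssl.PEM_cert_to_DER_cert(pem_cert)
fingerprint = hashlib.sha256(der_cert).hexdigest()

if fingerprint not in PINNED_SHA256:
    raise SystemExit("certificate is not pinned - possible man-in-the-middle")
print("pinned certificate verified")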
-
Notacon 10 - Encryption For Everyone
Dru Streicher (_Node)

Description: Encryption protects your privacy and is essential for communication. However, encryption is sometimes complicated and hard to use. I want to discuss what encryption is and how it is used, and make it easy for everyone to use. This will be a very n00b-friendly talk about how to actually use encryption in email, in web surfing, and on your hard drive.

Bio: I am a hardware hacker, chiptune musician (_node), and a system admin. I am a system administrator for Hurricane Labs (www.hurricanelabs.com), an information security company based in Cleveland. I am interested in security and all things open source. I attended Notacon last year for the first time and performed at Pixeljam. I really had a good time last year and I would enjoy presenting a paper this year.

For more information please visit: Notacon 11: April 10-13, 2014 in Cleveland, OH

Notacon 10 (2013) Videos (Hacking Illustrated Series InfoSec Tutorial Videos)

Sursa: Notacon 10 - Encryption For Everyone Dru Streicher (_Node)
-
Blackhat Eu 2013 - The Sandbox Roulette: Are You Ready For The Gamble?

Description: What comes inside an application sandbox always stays inside the sandbox. Is it REALLY so? This talk is focused on the exploit vectors used to evade commercially available sandboxes, Las Vegas-style: we'll spin a "Sandbox Roulette" with various vulnerabilities on the Windows operating system and then show how various application sandboxes hold up to each exploit. Each exploit will be described in detail, along with how it affects the sandbox.

There is a growing trend in enterprise security practices to decrease the attack surface of vulnerable endpoints through the use of application sandboxing. Many different sandbox environments have been introduced by vendors in the security industry, including OS vendors and even application vendors. The lack of sandboxing standards has led to the introduction of a range of solutions without consistent capabilities or compatibility and with their own inherent limitations. Moreover, some application sandboxes are used by malware analysts to analyze malware, and this could pose risks if the sandbox were breached.

This talk will present an in-depth, security-focused, technical analysis of the application sandboxing technologies available today. It will provide a comparison framework for different vendor technologies that is consistent, measurable, and understandable by both IT administrators and security specialists. In addition, we will explore each of the major commercially available sandbox flavors and evaluate their ability to protect enterprise data and the enterprise infrastructure as a whole. We will provide an architectural decomposition of sandboxing to highlight its advantages and limitations, and we will interweave the discussion with examples of exploit vectors that are likely to be used by sophisticated malware to actively target sandboxes in the future.

For More Information please visit: Black Hat | Europe 2013 - Briefings

Sursa: Blackhat Eu 2013 - The Sandbox Roulette: Are You Ready For The Gamble?
-
Error Based SQL Injection - Tricks In The Trade

Trigger an error

In this article I am going to describe some simple tips and tricks which are useful for finding and/or exploiting error-based SQL injection. The tips/tricks will be for MySQL and PHP, because these are the most common systems you will encounter.

Detect if database errors are displayed:

Knowing whether database errors are displayed is really valuable information, because it simplifies the process of detecting injection points and exploiting a SQL injection vulnerability; we will discuss this in more detail later. But how do you provoke an error, even if everything is escaped correctly? Look for the integers.

Example: http://vulnsite.com/news.php?id=1

Let us assume id is used internally as an integer in a MySQL query. Test vectors like id=1' or id=2-1 will not provoke any errors, nor will the parameter appear to be vulnerable to injection. To provoke an error you can use the following values for id:

1) ?id=0X01
2) ?id=99999999999999999999999999999999999999999

The first example is a valid integer in PHP but not in MySQL, because of the uppercase X (there are even more differences; compare PHP: Integers - Manual with the MySQL documentation). That's why this value provokes an error in the database. The second example will be converted by PHP to INF (which PHP still accepts as a numeric value), but it is definitely not a valid integer for the MySQL database. As an example, the query will look like this:

SELECT title,text from news where id=INF

By using this method, it is easy to determine if error reporting is enabled. This method will only work if the value is used internally as an integer. It won't provoke a database error if the value is used as a string!

Using error reporting to our advantage:

After learning that database errors are displayed, how can we use them to our advantage? In MySQL it is not as easy as in other DBMSs to extract information via error reports. But there are two methods to do so:

o) UPDATEXML and extractValue
o) an insane select statement

Personally, I prefer using UPDATEXML (it has been available since MySQL 5.1.5). As its name suggests, it is used to modify an XML fragment by specifying an XPath expression. It has three parameters: the first one is the XML fragment, the second one is the XPath expression, and the third one specifies the new fragment which will be inserted.

A "normal" example:

SELECT UpdateXML('<a><b>ccc</b><d></d></a>', '/b', '<e>fff</e>')
Output: <a><b>ccc</b><d></d></a>

What do you think will happen if you specify an illegal XPath expression, like @@version? Let's take a real-life example to see what happens. Let us assume that the num parameter in the following URL:

http://example.com/author.php?num=2

ends up unescaped in the following query:

SELECT name,date,username from author where number=2

Normally you would try to find the number of columns to construct a valid UNION SELECT. But let's assume none of the data is passed back to the webpage. You would need techniques like time-based (sleep) or out-of-band (DNS, etc.) extraction to get information. If error reporting is enabled, UPDATEXML can shorten this process a lot.
To extract the version of the database, the following value for num would be enough:

http://example.com/author.php?num=2 and UPDATEXML(null,@@version,null)

==> SELECT name,date,username from author where number=2 and UPDATEXML(null,@@version,null)

This will produce an XPATH syntax error containing the value of @@version.

It is also possible to use a complete select statement:

UPDATEXML(null,(select schema_name from information_schema.schemata),null)

Although UPDATEXML seems like a really awesome function, it has a drawback too: it can only extract the last 20 bytes at a time. If you want to extract more bytes and still use error-based extraction, you have to use the second method. The next example will create a query which causes a duplicate entry error. The duplicate entry will be the name of a table:

select 1 from dual where 1=1 AND(SELECT COUNT(*) FROM (SELECT 1 UNION SELECT null UNION SELECT !1)x GROUP BY CONCAT((SELECT table_name FROM information_schema.tables LIMIT 1 OFFSET 1),FLOOR(RAND(0)*2)))

That's all for now, but if you want to read on, here are some interesting links regarding SQL injection:

-) The SQL Injection Knowledge Base (examples were taken from there; really the best SQL injection cheat sheet IMHO)
-) Methods of Quick Exploitation of Blind SQL Injection
-) SQLi filter evasion cheat sheet (MySQL) | Reiners' Weblog

About The Author

Alex Infuhr is an independent security researcher. His core areas of research include malware analysis and WAF bypassing.

Sursa: Error Based SQL Injection - Tricks In The Trade | Learn How To Hack - Ethical Hacking and security tips
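As a small follow-up to the integer-probing trick described at the start of this post, the detection step is easy to script. The sketch below is an illustration only: it assumes the third-party requests module, reuses the hypothetical URL from the post, and matches on a few common MySQL/PHP error fragments.

import requests  # assumption: third-party module (pip install requests)

URL = "http://vulnsite.com/news.php"   # hypothetical target from the post
PROBES = ["0X01", "9" * 41]            # valid numbers for PHP, invalid integers for MySQL
ERROR_MARKERS = [
    "You have an error in your SQL syntax",
    "mysql_fetch",
    "XPATH syntax error",
]

for probe in PROBES:
    body = requests.get(URL, params={"id": probe}, timeout=10).text
    if any(marker in body for marker in ERROR_MARKERS):
        print("database errors are displayed (probe %r worked)" % probe)
        break
else:
    print("no visible database errors; fall back to time-based or out-of-band techniques")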
-
The answer is simple: a hacker is NOT a criminal.

There are 2 types of people:
1. People who discover/create new things: hackers
2. People who use them: others

In other words:
- you discovered a new SQL Injection method: time based, error based... You are a hacker.
- you used such a method to gain access and to brag about how cool you are: you are a limp dick

The fact that someone discovers a new exploitation method, say time-based SQLi, or creates something useful, say Metasploit/nmap... does not make them a criminal, it makes them a hacker.

The fact that someone else uses that discovery to obtain email addresses, usernames and passwords does not make them a hacker; it makes them a criminal.

Likewise, the fact that someone uses Metasploit to obtain something illegal (generally speaking) does not make them a hacker, but a criminal.

That's the idea in short.
-
Is it your money?
-
Mori tigane.