Everything posted by Nytro
-
Registry Dumper – Find and Dump Hidden Registry Keys
Posted on December 6, 2014 by darryl

The cybercriminals behind Poweliks implemented two clever techniques in their malware. The first was leveraging rundll32.exe to execute JavaScript and the second was using a method to hide/protect their registry keys. I'll be focusing on the second method.

The technique of hiding/protecting registry keys using an embedded null character goes back over a decade. It's remarkable, in a sense, that after all these years it still works on the latest Windows platform. Here we see the built-in Windows Registry Editor choke on the hidden/protected key after infecting the computer with Poweliks. Clicking past the error dialog, you should see something like this.

This default key is exposed and fully downloadable/viewable. However, there's another key that contains the "trigger" that's not visible. If we need to research what this particular malware is doing, we ought to find out what else is hiding there. For that we need to find a tool to help us view these hidden registry keys.

With online registry viewers/editors, you can get mixed results. Some seem to work well but lack basic functionality like exporting keys as text. Others get confused and display the wrong key. Offline registry viewers/editors fare much better and offer consistent results. However, you will need to log into a separate account on the computer and use the tool there, or copy the registry off of the infected machine and view it on a computer with the tool installed. I prefer to do an initial triage on the live machine and get to the data as quickly as possible. Since I couldn't find a portable, online tool that had the features I wanted, I figured I would try my hand at creating one.

The tool is called Registry Dumper and uses a DLL, written by Hoang Khanh Nguyen, which interacts with the registry via NT native APIs. This tool allows you to scan for null characters in a given path. It will iterate through the path to find all the keys with nulls in them. If you click on the "Show in Hex" checkbox, you can see the key names in hex. Here you will notice that the second entry's name is "010001", which is equivalent to 0x01 0x00 0x01. This is impossible to view, edit, or delete using the Windows Registry Editor.

From here you can copy/paste the path over to the left side and dump the keys to a text file. Here's the text file containing all the key values in the given path. With this tool you can also create hidden keys for testing purposes. And if you want to delete that impossible-to-remove key, you can use this tool by entering "[x01][null][x01]" as the key name.

The obfuscated data you see there is the result of running it through Microsoft Script Encoder. To deobfuscate it, you can use an online decoder or download a VBS decoder. A fellow by the name of Lewis E. Moten III wrote a decoder program; I repackaged his function in the following tool. Here is the decoded version. You will notice that I didn't have to strip away everything except the encoded string; the decoder program looks for the start and end markers of the encoded text and replaces it with the decoded result.

Just recently, a newer variant of Poweliks was found. It uses a different registry hiding technique, based on user permissions. You can read about it here. If you use this tool to access one of these keys, you will get an error message saying that the key doesn't exist.
It does exist; it's just that the current user doesn't have the rights to view it. Here are the permission properties of the key as shown by the Windows Registry Editor. Notice that the current user has no read permissions.

You can still use this tool to dump the keys, but you first need to grant permission to the user account that's running the tool. Just click on the "Set Permission to User" button and the permissions are changed to give the current user the necessary rights. Now you can access the key. Here is the dump of the keys, and the decoded string.

By the way, the JavaScript in the "(Default)" key can be deobfuscated easily using Converter. You will see that the value in between the quotes is shifted over by one character (e.g. the word hello becomes ifmmp). Just enter the value "-1" and click on the SHIFTx button (or you can click once on the minus button on the right).

You can download both tools here.

Sursa: Registry Dumper – Find and Dump Hidden Registry Keys | Kahu Security
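As a quick illustration of that shift-by-one deobfuscation, here is a minimal Python sketch (my own addition, not part of the original tools; the sample string is just the "hello"/"ifmmp" example from the article):

# Shift every character of a string by a fixed offset, the same idea as Converter's SHIFTx button.
def shift_string(s, offset):
    return "".join(chr(ord(c) + offset) for c in s)

obfuscated = "ifmmp"                      # example value pulled from between the quotes
print(shift_string(obfuscated, -1))       # prints "hello"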
-
US lawmaker pushes back against FBI backdoor calls
By David Meyer, Dec. 5, 2014 - 12:58 AM PST

U.S. Senator Ron Wyden (D-OR) has introduced a bill that would stymie almost any attempt by a government agency to force device manufacturers and app developers to install backdoors for surveillance purposes.

Wyden's Secure Data Act, introduced on Thursday, follows calls by FBI chief James Comey for companies such as Apple and Google to give his agents a way through their encryption mechanisms, which have been tightened in the wake of Edward Snowden's NSA revelations and episodes such as the celebrity iCloud hack. Apple's most recent move, for example, makes it impossible for the company to bypass the passcode on a user's iPhone for the benefit of law enforcement or intelligence agencies.

Wyden's bill gives an exemption to CALEA, the U.S. law that already compels carriers and router manufacturers to install "lawful intercept" capabilities, but beyond that it states:

"… no agency may mandate that a manufacturer, developer, or seller of covered products design or alter the security functions in its product or service to allow the surveillance of any user of such product or service, or to allow the physical search of such product, by any agency."

"Covered products" means any hardware or software made available to the general public, so the bill would arguably not cover, say, flawed random number generators.

Wyden's main impetus for this move, the NSA critic said in a statement, was that backdoors inherently weaken the security of the systems they're installed in. He also reckons that backdoor mandates are a disincentive to innovation in "strong new data security technologies", and harmful to trust in American products and services.

"Strong encryption and sound computer security is the best way to keep Americans' data safe from hackers and foreign threats," he said in the statement. "It is the best way to protect our constitutional rights at a time when a person's whole life can often be found on his or her smartphone. And strong computer security can rebuild consumer trust that has been shaken by years of misstatements by intelligence agencies about mass surveillance of Americans."

It's interesting, if unsurprising, that Wyden's bill gives a get-out to CALEA. His own statement cites the 2005 case of senior Greek politicians being illicitly tapped, using an Ericsson lawful intercept feature, as an example of how backdoors can compromise a system's security for the benefit of more people than they're supposed to. Earlier this year, security researchers also identified critical weaknesses in some companies' lawful intercept products.

Sursa: https://gigaom.com/2014/12/05/us-lawmaker-pushes-back-against-fbi-backdoor-calls/
-
Kaspersky: That 2 years we took to warn you about Regin? We had GOOD REASON
Security community: We only saw fragments...
By John Leyden, 5 Dec 2014

Kaspersky Lab has responded to criticism that security vendors took years too long to spot Regin, a recently discovered strain of ultra-sophisticated (and probably state-sponsored) spyware. Regin is a software framework rather than an individual malicious code sample, and security vendors have until recently only seen fragments of the whole, making analysis difficult.

Kaspersky Lab explained the two-year delay in releasing info about the Regin cyberweapon by comparing its work to an investigation by police:

"Security research - not unlike law enforcement investigations - requires meticulous scrutiny and analysis, and in many cases, it's important to watch the crime unfold in real-time to build a proper case. In our case, without unlimited resources and the fact that we're tracking multiple APT actors simultaneously (Careto/Mask, EpicTurla, Darkhotel, Miniduke/Cosmicduke, to name a few), this becomes a process that takes months, even years, to gain a full understanding of a cyber-operation."

Sean Sullivan from F-Secure compares APT research to the work of paleontologists who find some bones of a dinosaur: everyone may have a bone, but nobody has the full skeleton. Kaspersky picks up this analogy and runs with it. "In the case of Regin, what we first discovered in 2012 was a slightly damaged bone from an unknown part of a monster living in a mysterious mountain lake," the firm said in a post on its official Securelist blog.

The Russian security firm goes on to firmly deny withholding information about, or detections of, Regin at the request of governments, customers or anyone else.

Security firm Symantec was the first to publish research about Regin around two weeks ago. The cyber-espionage tool has been used for the past six years to spy on business and private targets. As previously reported, Symantec has also come out swinging at accusations it was tardy in releasing information about Regin.

Neither Kaspersky's nor Symantec's denials are likely to silence either conspiracy theorists or anti-virus naysayers, of course. It's only possible to note that the offenders have a big advantage over defenders in cyber-espionage operations, and huge resources at their disposal, so the length of time taken to detect Regin is poor evidence of complicity between security software firms and cyber-spies.

There are precedents for the delay in releasing information about Regin, as Kaspersky Lab points out:

"Like Regin, sometimes we find that we had been detecting pieces of malware for several years before realizing that it was a part of a global cyber-espionage campaign. One good example is the story of RedOctober. We had been detecting components of RedOctober long before we figured out that it was being used in targeted attacks against diplomatic, governmental and scientific research organisations."

Regin is most likely the work of an advanced nation state, using multiple levels of encryption to obfuscate itself and other trickery in order to avoid detection, say securobods. Advanced functionality in Regin includes the ability to directly monitor mobile phone traffic, with Symantec reporting that 28 per cent of the samples seen attacked telecoms backbone infrastructure. Once installed on a computer, Regin can carry out a variety of malign actions, including capturing screenshots, monitoring keystrokes, stealing passwords and even recovering deleted files. ISPs, energy companies, airlines and research-and-development labs are among its victims.

What really marks the Regin platform out as something special is its ability to attack GSM and take over the management functions of mobile networks. The attackers were able to obtain credentials that would allow them to control GSM cells in the network of a large cellular operator, according to Kaspersky Lab. This gave attackers access to information about which calls are processed by a particular cell, along with the ability to redirect these calls to other cells, activate neighbouring cells and perform other offensive actions.

Samples of Regin were injected into systems at Belgian telecoms outfit Belgacom around 2010, and builds of the spyware have been circulating for at least six years. Security firm G Data said it was aware of attacks on targets in 18 countries, including Germany, Russia, Syria and India.

The Belgacom link is evidence that GCHQ might have had a hand in Regin's creation, but this is circumstantial, and who created Regin remains something of a mystery. This, and the fact that some modules carry names such as LEGSPIN, could be a diversionary tactic. What is curious is that none of the "Five Eyes" countries (Australia, Canada, New Zealand, the UK, and the United States) make an appearance in the list of victims. ®

Sursa: Kaspersky: That 2 years we took to warn you about Regin? We had GOOD REASON • The Register
-
In your free time, or as part of your job?
-
Good guy spam: pics of girls
-
It's more of a curiosity of mine. Let's assume the following data about you is public:
1. Full name
2. Personal numeric code (CNP)
3. ID card series and number
4. Home address
5. Email address
6. IP address
7. IBAN account number
8. PayPal account

What could happen to you if someone has this data and wants to harm you? Answering point by point:
1. They find your Facebook and maybe other information
2. They request information about your bank account
3. They cancel your phone subscription
4. They show up at your place and take a dump on your doorstep
5. They find information about you: accounts, interests
6. They figure out the area where you live and look for access logs
7. They find out which bank you have an account with
8. They find out your name by sending you 1 euro (I think)

These are just a few opinions. What do you think?
-
How to Stop DNS Hijacking

You have (probably more than once in your life) keyed in a familiar domain name and ended up on an entirely different page that was not even close to what you had expected. Chances are that you never even noticed the abnormality, and you went ahead retyping the domain name or making a custom search for your preferred destination on Google. Well, what you never realized is that you may have been a victim of Domain Name System hijacking, or redirection.

Apparently, DNS hijacking is a growing threat, and no organization is too large to be a target of DNS attacks. Not long ago, a hackers' group known as the Iranian Cyber Army took Twitter by storm, after having successfully managed to redirect domain requests from Twitter.com to its own hosted IP addresses. Similarly, on Thanksgiving day, the Syrian Electronic Army hijacked network traffic to major media outlets' sites including the Independent, the Telegraph and the Canadian Broadcasting Corporation, just to mention a few.

So, What is DNS Hijacking?

Technically, Domain Name System hijacking intercepts legitimate DNS requests from the user and matches them to compromised IP addresses hosted on the attacker's servers. Ordinarily, websites on the World Wide Web are identified using combinations of numbers known as IP addresses, which are unique to every single site. To save you from the hassle of having to remember each IP address the way you do with all your passwords, every IP address is given a custom domain name, which is easy to remember. In simple terms, you can reach a website by its domain name or through its IP address, if you can recall it (although you probably won't).

When it comes to DNS hijacking, the attacker launches a man-in-the-middle-like attack, which subverts the user's DNS requests and directs them to his own compromised DNS server. The basic function of a DNS server is to match the user's DNS request with the correct IP address. However, the attacker's compromised DNS server uses a DNS-switching Trojan to attach the wrong IP address to the user's DNS request, therefore directing him to a spoofed website. Such attacks are known as pharming and could be employed by scammers in a phishing campaign aimed at stealing personal information.

Notably, DNS hijacking is not practiced by malicious hackers only. Legitimate ISPs also engage in this questionable activity to suit their own interests, including placing ads or collecting statistics for big data analysis. Ordinarily, you should get a "server not found" error message every time you query a non-existent domain name. However, ISPs have perfected the art of manipulating DNS (NXDOMAIN) responses and directing users to their ad-ridden IP addresses. This kind of DNS hijacking is not only irritating, but could also expose the user to potentially dangerous cross-site scripting attacks, besides violating the RFC standard for DNS (NXDOMAIN) responses. A fine example of an ISP "behaving badly" and manipulating Internet users without their consent (otherwise, it would not be manipulation, right?) is that of Verizon and its perma-cookie. The EFF has heavily criticized the abuse of personal information by Verizon Wireless, and this is just a drop in the ocean of non-consensual traffic redirection and tracking occurring daily on the web.

Shore Up DNS Security

Whether it's DNS cache poisoning or simple DNS blocking by ISPs, no one wants to traverse the web at gunpoint.
In a cache poisoning attack, the hacker secretly injects false addressing data into DNS resolvers, enabling the attacker to redirect legitimate DNS requests away from legitimate websites to compromised DNS servers. The clandestine activity can go undetected for ages, allowing the attacker to siphon huge chunks of sensitive information, including passwords and usernames.

The first step in fortifying your DNS security is to deploy Domain Name System Security Extensions (DNSSEC). This is a security standard that allows domain owners to sign their domains' zones, enabling DNS resolvers to verify the authenticity of all DNS responses. Anyone with a .org domain can now enable this through domain registrars such as GoDaddy. DNSSEC will also enable you to manage customer identities.

Configure Your DNS Settings

In reality, the solution to your DNS problem lies within. If your ISP's DNS server does not live up to your security expectations, ditch it for an alternative third-party DNS service, such as OpenDNS, NortonDNS or DNSResolvers. Here is how to configure the DNS settings of your operating system and prevent DNS hijacking.

For Windows users:
1. Open the Control Panel. Under "Network and Internet" click on "Network status and tasks" and proceed to the wireless connection button on the far right of your window.
2. Under "Wireless Network Connection Status", click on "Properties" and go ahead to select "Internet Protocol Version 4 (TCP/IPv4)" properties.
3. Now you should be able to give an alternative DNS address of your choice. Near the bottom of the dialog box, select the "Use the following DNS server addresses" button and fill out your alternate DNS server IP information (e.g. 8.8.8.8 for Google or 205.210.42.205 for DNSResolvers).
4. Click OK and your alternate DNS will now be used by default.

For Ubuntu users:
1. In the system menu, click on "Network connections" and select "Preferences". Now you're at a dialog box with three optional tabs.
2. If you want to configure the Ethernet connection, click on the "Wired" button and then specify your network from the interface list. For a wireless connection, click on the "Wireless connection" button and select your network. Then click on the "Edit" button.
3. Select the IPv4 Settings tab and, if Automatic (DHCP) is the selected method, proceed to select "Automatic (DHCP) addresses only".
4. In the dialog box that appears, key in your DNS IP information (e.g. for Google type 8.8.8.8 8.8.4.4) and then apply the changes. You may be required to enter your password.
5. Repeat the above steps for any network you wish to modify.

After having followed the guidelines provided above to amend your settings, you will see that the alternative DNS servers you use work wonders for your privacy. Without having the proper information displayed on your computer for anyone to track, DNS hijacking cannot happen. Instead, your connection will be completed using the alternative details, and therefore you will be thoroughly protected at all times while surfing the web!

By Ali Qamar | December 5th, 2014

Sursa: How to Stop DNS Hijacking - InfoSec Institute
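One quick way to sanity-check a resolver for the NXDOMAIN manipulation described above is to compare its answer for a non-existent name against a known third-party resolver. A minimal sketch using the third-party dnspython library (an assumption; any DNS client library would do, and the probe domain and ISP resolver IP are made up):

import dns.resolver  # pip install dnspython (2.x)

def lookup(nameserver, name):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    try:
        return sorted(r.address for r in resolver.resolve(name, "A"))
    except dns.resolver.NXDOMAIN:
        return "NXDOMAIN"            # the honest answer for a non-existent name
    except Exception as exc:
        return f"error: {exc}"

# A random, almost certainly unregistered name should return NXDOMAIN.
# If your ISP's resolver returns an IP address here, it is rewriting NXDOMAIN responses.
probe = "this-domain-should-not-exist-abc123xyz.com"
print("ISP resolver :", lookup("192.168.1.1", probe))   # replace with your ISP's DNS IP
print("Google DNS   :", lookup("8.8.8.8", probe))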
-
[h=2]Improving Your Malware Forensics Skills[/h]
Wednesday, June 25, 2014. Posted by Corey Harrell

"By failing to prepare, you are preparing to fail." ~ Benjamin Franklin

In many ways preparation is key to success. Look at any sporting event and the team that comes out on top is usually the one that was better prepared. I'm not just referring to game day; I'm also talking about the coaching schemes and building a roster. Preparation is a significant factor in one's success in the Digital Forensic and Incident Response field. This applies to the entire field and not just malware forensics, which is the focus of this post.

When you are confronted with a system potentially impacted with malware, your ability to investigate the system successfully depends on your knowledge, experience, and toolset. This is where there is a conundrum. There is a tendency for people not to do malware cases (either through being hired or within an organization) due to a lack of knowledge and experience, but people are unable to gain knowledge and experience without working malware cases. The solution is that through careful preparation one can acquire the knowledge, experience, and toolset that can eventually lead to working malware cases.

This post outlines the process I used and currently use to improve my malware forensics skills. The process described in this post helped me develop the skills I have today. I even use a similar process to create test systems to help develop malware forensic skills in others (if you read this blog then you have already seen the benefits of doing this). With that said, what I am describing is a very time consuming process. However, if you decide to replicate the path I took it will be worth it, and along the way you'll improve your malware forensic skills. This post is specifically directed at those wanting to take this step using their own resources and time. Those wanting to lose themselves in the DFIR music.

[h=2]Process, Process, Process[/h]

Malware forensics is the process of examining a system to: find malicious code, determine how it got there, and what changes it caused on the system. The first place to start for improving one's skills is by exploring the process one should use. The purpose of starting with the process is twofold. First and foremost it is to understand the techniques, examination steps, and knowing what to look for. The second reason is to explore the various tools to use to carry out the process.

There are only a few resources available specific to this craft, such as Malware Forensics Field Guide for Windows Systems and Windows Forensic Analysis Toolkit, Fourth Edition. In addition, this has been an area on my radar to add one more book to the discussion, but in the meantime my jIIr methodology page outlines my process and links to various posts. My suggestion is to first review the methodology I put together, which is further explained in the posts Overall DF Investigation Process and End to End Digital Investigation. Afterwards, review one or two of the books I mentioned. As you work your way through this material, pay attention to the malware forensic process the author uses or references. When it is all said and done and you have completed reviewing what you set out to, document a malware forensic process you want to use.

If this sounds familiar then you either started reading jIIr from the beginning or you are one of my early followers. This is exactly what I described in my second and third posts, Where to start? and Initial Examination Steps & First Challenge, which I wrote almost four years ago. However, a lot of time has passed since I wrote those posts and I have improved my process as outlined below:

- Examine the master boot record
- Obtain information about the operating system and its configuration
- Examine the volatile data
- Examine the files on the system that were identified in volatile data
- Hash the files on the system
- Examine the programs ran on the system
- Examine the auto-start locations
- Examine the host-based logs
- Examine file system artifacts
- Malware searches
- Perform a timeline analysis
- Examine web browsing history
- Examine specific artifacts
- Perform a keyword search
- Examine suspected malicious files

[h=3]Tools, Tools, Tools[/h]

After the process you want to use is documented, the next step is to identify the tools you will use in each examination step. There are numerous tools you can use: the tools mentioned by the authors in the reference material, tools talked about by the bloggers in my blog roll, or tools you already have experience with. To be honest, what tools someone should use depends. It really depends on what you prefer and are comfortable with. The tools I started out with are not the same ones I use today; the important thing is each tool helped me learn and grow. Pick any tools you want as a starting point and over time you will start to see the pros and cons of various tools.

[h=2]Testing Environment[/h]

With your process and tools selected, now it is finally time to stop the researching/documenting and actually use the process you documented and the tools you selected. To do this you first have to set up a testing environment. There is an inherent risk to using virtualization for the testing environment: the malware may be virtualization aware and behave differently than on a real computer. However, despite this risk I highly recommend using virtualization as your testing environment. It's a lot faster to create multiple test systems (by copying virtual machines) and the snapshot feature makes it easier to revert mistakes. There are various virtualization options available with great documentation, such as VirtualBox and VMware. Pick a virtualization platform and install it using the provided instructions.

[h=3]Creating your VMs[/h]

Another decision you'll need to make is what operating systems to perform your testing on. This not only includes the operating system version (e.g. Windows 7 vs Windows 8) but what architecture to use as well (32-bit vs 64-bit). I ended up selecting VMware for the virtualization software and Windows 7 32-bit as the testing platform. You will need to create your first VM by installing the operating system of your choice.

After the installation, try to make things easier for the system to be compromised. First, disable security features. This includes the built-in firewall and the user account control. Next, make sure the account you are using has administrative privileges. Then you will want to make your test system a very juicy target. To do this you'll need to install vulnerable client-side applications, including: Adobe Flash, Adobe Reader, Java, Silverlight, Microsoft Office, and Internet Explorer, on top of a non-patched operating system. One place to grab these applications is Old Apps, and to determine what versions to install, pick the ones targeted by exploit kits. At a minimum, make sure you don't patch the OS and do install Java, Silverlight, Adobe Reader, and Adobe Flash. This will make your VM a very juicy target.
After the VM is created and configured, you'll want to make multiple copies of it. Using copies makes things easier during analysis without having to deal with snapshots.

[h=2]Manually Infecting Systems[/h]

The first approach to improving your skills is a manual method to help show the basics. The purpose is to familiarize yourself with the artifacts associated with malware executing in the operating system you picked. These artifacts are key to being successful in performing malware forensics on a compromised system. The manual method involves you infecting your test VM and then analyzing it to identify the artifacts. The manual method consists of two parts: using known and unknown samples. However, before proceeding there is a very important configuration change: the virtual machine's network configuration needs to be isolated to prevent the malware from calling home or attacking other systems.

[h=3]Using Known (to you) Samples[/h]

Starting out, it is better to practice with a sample that is known. By known I mean documented, so that you can reference the documentation in order to help determine what the malware did. Again, we are trying to improve our ability to investigate a system potentially impacted with malware and not trying to reverse the malware. The documentation is just to help you account for what the malware did, to make it easier to spot the other artifacts associated with the malware running in the operating system.

The way to find known samples really depends. You could find them using information on antivirus websites since they list reports using their malware naming conventions. For example, Symantec's Threat Listing, Symantec's Response blog, Microsoft's Threat Reports, or Microsoft's Malware Encyclopedia, to name a few. These are only a few but there are a lot more out there; just look at antivirus websites. The key is to find malware with a specific name that you can search on, such as Microsoft's Backdoor:Win32/Bergat.B. Once you find one you like, review the technical information to see the changes the malware makes.

I suggested finding known malware by name first because there are more options to do this. A better route, if you can find it, is to use a hash of a known malware sample. Some websites share the hash of the sample they are discussing, but this doesn't occur frequently. A few examples are: Contagio Malware Dump, KernelMode.info, or the MxLab blog. Another option is to look at the public sandboxes for samples that people submitted, such as Joe Sandbox or one listed on Lenny Zeltser's automated malware analysis services list.

After you pick a malware name or hash to use, the next step is to actually find the malware. Lenny Zeltser has another great list outlining different malware sample sources for researchers. Any one of these could be used; it just needs the ability to search by detection name or hash. I had great success using: VirusShare, Open Malware, Contagio Malware Dump, and KernelMode.info. Remember, the purpose of going through all of this is to improve your malware forensic skills and not your malware analysis skills. We are trying to find malware and determine how the infection happened, not reversing malware to determine its functionality.

Now that you have your sample, just infect your virtual machine (VM) with it and then power it down. If the VM has any snapshots then delete them to make it easier. Now that you have an infected image (i.e. the vmdk file), you can analyze it using the process you outlined and the tools you selected.

At this point you are making sure the process and tools work for you. You are also looking to explore the artifacts created during the infection. You know the behavior of the known malware so don't focus on this; each piece of malware is different and so are its artifacts. Focus on the artifacts created by a program executing in the operating system you selected, such as program execution, logs, and file system artifacts.

[h=3]Using Unknown (to you) Samples[/h]

Using a known sample is helpful to get your feet wet but it gets old pretty quick. After you have used a few different known samples it is not as challenging to find the artifacts. This is where you take the next step by using an unknown (to you) sample. Just download a random sample from one of the sources listed at malware sample sources for researchers. Infect your virtual machine (VM) with it and then power it down. If the VM has any snapshots then delete them to make it easier. Now you can start your examination using the same process and tools you used with a known malware sample. This method makes it a little more challenging because you don't know what the malware did to the operating system.

[h=2]Automatically Infecting Systems[/h]

The manual method is an excellent way to explore the malware forensic process. It allows you to get familiar with an examination process, tools, and artifacts associated with an infection. One important aspect of performing malware forensics is to identify the initial infection vector which was used to compromise the system in the first place. The manual method's infections always trace back to you executing malware samples, so you need to use a different method: automatically infecting systems to simulate how real infections appear.

Before proceeding there is a very important configuration change. The virtual machine's network configuration needs to be connected to the Internet. This can be done through the NAT or bridged configuration, but you will want to be in a controlled environment (aka not your company's production network). There are some risks with doing this, so you will need to take that into consideration. Personally, I accept this risk since improving my skills to help protect organizations is worth the trade-off.

[h=3]Using Known Websites Serving Malware[/h]

In the automatically infecting systems approach, the first method is to use a known website serving malware. There are different ways to identify these websites. I typically start by referring to Scumware.org (FYI, the site hates Internet Explorer), the Malc0de database, and the Malware Domain List. In each case I look for URLs that point to a malicious binary. Other sources you can use are the ones listed by Lenny Zeltser on his Blocklists of Suspected Malicious IPs and URLs page. Again, you are trying to find a URL to a site hosting a malicious binary. Another source you shouldn't overlook is your email SPAM/Junk folder. I have found some nice emails with either malicious attachments or malicious links. Lastly, if you pay attention to the current trends being used to spread malware, then you can find malicious sites leveraging what you read and Google. This is a bit harder to pull off but it's worth it since you see a technique currently being used in attacks.

Inside your VM, open a web browser, enter the URL you identified, and if necessary click any required buttons to execute the binary. Wait a minute or two for the malware to run and then power the VM down. If the VM has any snapshots then delete them to make it easier.

Now you can start your examination using the same process and tools you used with the manual approach. The purpose is to find the malware, the artifacts associated with the infection, and the initial infection vector. Infecting a VM in this manner simulates a social engineering attack where a user is tricked into infecting themselves. If a SPAM email was used, then it simulates an email-based attack. To see how beneficial this method is for creating test images to analyze, you can check out my posts Examining IRS Notification Letter SPAM and Coming To A System Near You. The first post simulates a phishing email while the latter simulates social engineering through Google image search.

[h=3]Using Potentially Malicious Websites[/h]

The second method in the automatically infecting systems approach is to use potentially malicious websites. This method tries to infect the VM through software vulnerabilities present in either the operating system or the installed client-side applications. This is the most time consuming of all the methods I described in this post. It's pretty hard to infect a VM on purpose, so you will end up going through numerous URLs before hitting one that works. This is where you may need to use the VM snapshot feature.

To find potentially malicious URLs you can look at the Malware Domain List. Look for any URLs from the current or previous day that are described as exploit or exploit kit as shown below. You can ignore the older URLs since they are most likely no longer active. Another option I recently discovered but haven't tried yet is using information posted at Malware-Traffic-Analysis.net. The site doesn't obfuscate the websites used, so you may be able to use it to find active websites serving up exploits. The last option I'm sharing is the one I use the most: the URLs others are submitting to URLQuery.net. Just keep in mind, there are certain things that can't be unseen and there are some really screwed up people submitting stuff to URLQuery. When reviewing the submitted URLs you want to pay attention to those that have detections as shown below. After you see a URL with detections, you'll need to examine it closer by reviewing the URLQuery report. To save yourself time, focus on any URLs whose reports mention: malicious iframes, exploits, exploit kits, or names of exploit kits. These are the better candidates to infect your VM. The pictures below show what I am referring to.

Before proceeding, make sure your VM is powered on and you have created a snapshot. The snapshot comes in handy when you want to start with a clean slate after visiting numerous URLs with no infection. An easy way to determine if an infection occurred is to monitor the program execution artifacts. One way I do this is by opening the C:\Windows\Prefetch folder with the items sorted by last modification time. If an infection occurs then prefetch files are modified, which lets me know. Now you can open a web browser inside your VM, enter the URL you identified, and monitor the program execution artifacts (i.e. prefetch files). If nothing happens then move on to the next URL. Continue going through URLs until one successfully exploits your VM. Upon infection, wait a minute or two for the malware to run and then power the VM down. Make sure you delete any snapshots to make it easier. Now you can start your examination using the same process and tools you have been using. The purpose is to find the malware, the artifacts associated with the infection, and the initial infection vector.
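As a rough illustration of the prefetch-monitoring trick just described, here is a minimal Python sketch (my own addition, not part of the original post) that lists the most recently modified prefetch files so new program execution stands out:

from pathlib import Path
from datetime import datetime

# List prefetch files sorted by last modification time, newest first.
# Run this inside the analysis VM; reading C:\Windows\Prefetch may need admin rights.
prefetch_dir = Path(r"C:\Windows\Prefetch")

entries = sorted(prefetch_dir.glob("*.pf"),
                 key=lambda p: p.stat().st_mtime,
                 reverse=True)

for pf in entries[:15]:  # the 15 most recently modified entries
    mtime = datetime.fromtimestamp(pf.stat().st_mtime)
    print(f"{mtime:%Y-%m-%d %H:%M:%S}  {pf.name}")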
The initial infection vector will be a bit of a challenge since your VM has various vulnerable programs. Infecting a VM in this manner simulates a drive-by, which is a common attack vector used to load malware onto a system. To see how beneficial this method is for creating test images, you can check out my post Mr Silverlight Drive-by Meet Volatility Timelines (FYI, I suspended the VM to capture the vmem file in addition to powering it down to get the disk image).

[h=2]Summary[/h]

Benjamin Franklin said "by failing to prepare, you are preparing to fail." To be successful when confronted with a system potentially impacted with malware, we should be preparing for this moment now: taking the time to improve our malware forensic skills, including our process, tools, and knowledge of artifacts, and making the right preparations so when game day approaches we will come out on top.

The process I use to improve my malware forensic skills, and the one I described in this post, is not for everyone. It takes time and a lot of work; I've spent countless days working through it. However, by working your way through this process you will attain something that can't be bought with money. There is no book, training, college course, or workshop that can replicate or replace the skills, knowledge, and experience you gain through careful preparation by training yourself.

Sursa: Journey Into Incident Response: Improving Your Malware Forensics Skills
-
Python Kerberos Exploitation Kit

PyKEK (Python Kerberos Exploitation Kit) is a Python library to manipulate KRB5-related data. (Still in development.) For now, only a few functionalities have been implemented (in a quite quick'n'dirty way) to exploit MS14-068 (CVE-2014-6324). More is coming...

Author: Sylvain Monné
Contact: sylvain dot monne at solucom dot fr
http://twitter.com/bidord
Special thanks to: Benjamin DELPY (gentilkiwi)

Library content:
- kek.krb5: Kerberos V5 (RFC 4120) ASN.1 structures and basic protocol functions
- kek.ccache: Credential Cache Binary Format (ccache)
- kek.pac: Microsoft Privilege Attribute Certificate Data Structure (MS-PAC)
- kek.crypto: Kerberos and MS-specific cryptographic functions

Exploits:
ms14-068.py exploits the MS14-068 vulnerability on an unpatched domain controller of an Active Directory domain to get a Kerberos ticket for an existing domain user account with the privileges of the following domain groups:
- Domain Users (513)
- Domain Admins (512)
- Schema Admins (518)
- Enterprise Admins (519)
- Group Policy Creator Owners (520)

Sursa: https://github.com/bidord/pykek
-
Android Malware Evasion Techniques - Emulator Detection

Most modern malware tries to escape from being analysed, and one of the first things it does is check whether it is running in a controlled environment. In the world of Android malware, the controlled environment refers to an emulator. If the malware runs on an emulator, that means it is most probably being investigated by a researcher. There are various methods that malware writers use to detect the emulated environment.

1.) Check Product Name: In the Android emulator, the product name of the device contains the "sdk" string, so it is a useful clue for detecting whether the app is running on an emulator. In order to check the product name, you can use the following code snippet.

2.) Check Model Name: The default model name of the Android emulator also contains the "sdk" string, so it is worth checking the model name in order to detect emulator use.

3.) Check SIM Operator Name: In Android emulators, the SIM operator name comes with the default "Android" string. This is not the case on regular physical devices, even when there is no SIM card installed in the device.

4.) Check Network Operator Name: Similar to the SIM operator name, the network operator name also comes with the default "Android" string. It is a good idea to check the network operator name in order to decide whether the app is running on an emulator.

By combining these four techniques, you can write a basic Android app that shows these values. In order to compare whether they really work, you can install the app both on the emulator and on a real device. The picture on the left-hand side is a screenshot taken from a Samsung Galaxy S4 phone and the one on the right is a screenshot of an emulator. You can see the difference clearly.

5.) Check the ro.kernel.qemu and ro.secure Properties: Additionally, you can check the Android system properties to detect an emulated environment. There are various property files in the Android filesystem:
/default.prop
/system/build.prop
/data/local.prop

Properties are stored in a key-value pair format in these property files. You can see the values of the properties by typing the adb shell getprop <key> command. There are some critical properties indicating an emulator environment:
ro.secure
ro.kernel.qemu

If the value of ro.secure is "0", or the value of ro.kernel.qemu is "1", the ADB shell runs as root, and that means the environment in which the app is running is an emulator, because on a physical device the ADB shell runs with regular user rights, not as root. In order to check these properties you can use the code snippets below.

I uploaded sample detection code to my GitHub page that combines all the methods above: https://github.com/oguzhantopgul/Android-Emulator-Detection

In this blog post I tried to cover my favourite Android emulator detection methods. If you know better techniques, please send me a comment.

Edit: I've just noticed Tim Strazzere's (@timstrazz) Android project on emulator detection. You can find it in his GitHub repo link below: https://github.com/strazzere/anti-emulator

Published by Oguzhan Topgul

Sursa: {ouz}: Android Malware Evasion Techniques - Emulator Detection
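To experiment with the property checks from the analyst side (rather than inside an app), here is a minimal Python sketch that queries a connected device or emulator over adb. It assumes adb is on the PATH, and the particular set of properties checked is my own choice based on the post, not the author's code:

import subprocess

def getprop(key):
    # Return the value of an Android system property via `adb shell getprop <key>`.
    out = subprocess.run(["adb", "shell", "getprop", key],
                         capture_output=True, text=True)
    return out.stdout.strip()

# Typical emulator tells, based on the checks described in the post.
checks = {
    "ro.kernel.qemu":   lambda v: v == "1",            # set to 1 on the emulator
    "ro.secure":        lambda v: v == "0",            # adb runs as root on the emulator
    "ro.product.model": lambda v: "sdk" in v.lower(),  # e.g. "sdk" / "google_sdk"
    "ro.product.name":  lambda v: "sdk" in v.lower(),
}

for key, looks_like_emulator in checks.items():
    value = getprop(key)
    flag = "EMULATOR?" if looks_like_emulator(value) else "ok"
    print(f"{key:18} = {value!r:15} {flag}")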
-
Getting started with SSH security and configuration

Are you a new UNIX administrator who needs to be able to run communication over a network in the most secure fashion possible? Brush up on the basics, learn the intricate details of SSH, and delve into the advanced capabilities of SSH to securely automate your daily system maintenance, remote system management, and use within advanced scripts to manage multiple hosts.

By Roger Hill (unixman@charter.net), Independent author

What is SSH? A basic description

Secure Shell (SSH) was intended and designed to afford the greatest protection when remotely accessing another host over the network. It encrypts the network exchange by providing better authentication facilities as well as features such as Secure Copy (SCP), Secure File Transfer Protocol (SFTP), X session forwarding, and port forwarding to increase the security of other insecure protocols. Various types of encryption are available, ranging from 512-bit encryption to as high as 32768 bits, inclusive of ciphers such as Blowfish, Triple DES, CAST-128, Advanced Encryption Scheme (AES), and ARCFOUR. Higher-bit encryption configurations come at a cost of greater network bandwidth use.

Figure 1 and Figure 2 show how easily a telnet session can be casually viewed by anyone on the network using a network-sniffing application such as Wireshark.

Figure 1. Telnet protocol sessions are unencrypted.

Frequently used acronyms: API (application programming interface), FTP (File Transfer Protocol), IETF (Internet Engineering Task Force), POSIX (Portable Operating System Interface for UNIX), RFC (Request for Comments), VPN (virtual private network).

When using an unsecured, "clear text" protocol such as telnet, anyone on the network can pilfer your passwords and other sensitive information. Figure 1 shows user fsmythe logging in to a remote host through a telnet connection. He enters his user name fsmythe and password r@m$20!0, which are both then viewable by any other user on the same network as our hapless and unsuspecting telnet user.

Figure 2. SSH protocol sessions are encrypted.

Figure 2 provides an overview of a typical SSH session and shows how the encrypted protocol cannot be viewed by any other user on the same network segment. Every major Linux® and UNIX® distribution now comes with a version of the SSH packages installed by default (typically the open source OpenSSH packages), so there is little need to download and compile from source. If you're not on a Linux or UNIX platform, a plethora of open source and freeware SSH-based tools are available that enjoy a large following for support and practice, such as WinSCP, PuTTY, FileZilla, TTSSH, and Cygwin (POSIX software installed on top of the Windows® operating system). These tools offer a UNIX- or Linux-like shell interface on a Windows platform. Whatever your operating system, SSH touts many positive benefits for commonplace, everyday computing. Not only is it dependable, secure, and flexible, but it is also simple to install, use, and configure, not to mention feature laden.

SSH architecture

IETF RFCs 4251 through 4256 define SSH as the "Secure Shell Protocol for remote login and other secure network services over an insecure network." The shell consists of three main elements (see Figure 3):

- Transport Layer Protocol: This protocol accommodates server authentication, privacy, and integrity with perfect forward secrecy. This layer can provide optional compression and is run over a TCP/IP connection but can also be used on top of any other dependable data stream.
- User Authentication Protocol: This protocol authenticates the client to the server and runs over the transport layer.
- Connection Protocol: This protocol multiplexes the encrypted tunnel into numerous logical channels, running over the User Authentication Protocol.

Figure 3. SSH protocol logical layers

The transport layer is responsible for key exchange and server authentication. It sets up encryption, integrity verification, and (optionally) compression, and exposes to the upper layer an API for sending and receiving plain text packets. A user authentication layer provides authentication for clients as well as several authentication methods. Common authentication methods include password, public key, keyboard-interactive, GSSAPI, SecureID, and PAM. The connection layer defines channels, global requests, and the channel requests through which SSH services are provided. A single SSH connection can host multiple channels concurrently, each transferring data in both directions. Channel requests relay information such as the exit code of a server-side process. The SSH client initiates a request to forward a server-side port.

This open architecture design provides extensive flexibility. The transport layer is comparable to Transport Layer Security (TLS), and you can employ custom authentication methods to extend the user authentication layer. Through the connection layer, you can multiplex secondary sessions into a single SSH connection (see Figure 4).

Full article: Getting started with SSH security and configuration
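As a small taste of the scripted remote management the article's introduction alludes to, here is a minimal sketch using the third-party paramiko library; the host name, user, and key path are placeholders, and key-based authentication is assumed to be set up already:

import paramiko

# Connect to a remote host with an existing private key and run a single command.
client = paramiko.SSHClient()
client.load_system_host_keys()                                # trust hosts already in known_hosts
client.set_missing_host_key_policy(paramiko.RejectPolicy())   # refuse unknown hosts

client.connect("server.example.com",                          # placeholder host
               username="fsmythe",                            # placeholder user
               key_filename="/home/fsmythe/.ssh/id_rsa")      # placeholder key path

stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode().strip())

client.close()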
-
Finding Zero-Day XSS Vulns via Doc Metadata
Posted by eskoudis. Filed under Methodology, web pen testing

[Editor's Note: Chris Andre Dale has a nice article for us about cross-site-scripting attacks, and he's found a ton of them in various high-profile platforms on the Internet, especially in sites that display or process images. He even found one in WordPress and responsibly disclosed it, resulting in a fix for the platform released just a few weeks ago. In this article, Chris shares his approach and discoveries, with useful lessons for all pen testers. Oh... and if you are going to test systems, make sure you have appropriate permission and don't do anything that could break a target system or harm its users. Thanks for the article, Chris! --Ed.]

By Chris Andre Dale

XSS Here, XSS There, XSS Everywhere!

Today Cross-Site Scripting (XSS) is very widespread. While it is not a newly discovered attack vector, we still see it all the time in the wild. Do you remember back in the day, when you would click on a website's guestbook and suddenly have tons of pop-ups or redirections happen? Yeah, that's often XSS for you. Today I see XSS vulnerabilities in almost all of the penetration testing engagements that I conduct. Even to this very day, there is evidence of old XSS worms stuck on the web. Remember MySpace? Yeah, me neither. Do a Google search for "Samy is my hero site:myspace.com". You will see thousands of ghostly remains of an XSS worm from 2006! The infamous Samy worm does not still linger, but what you are seeing is the remains of MySpace profiles that were victims of this worm back in 2006.

XSS is usually ranked only as a medium impact when exploited. For instance, OWASP has rated this vulnerability as a moderate impact. I disagree with this. In many cases XSS can be truly brutal and potentially life threatening. What do I mean? When XSS is bundled with other vulnerabilities, such as Cross-Site Request Forgery (CSRF), we can quickly imagine some very nasty scenarios. What if your XSS exploit hooks an IT operations administrator, and through the XSS you add your CSRF payloads to perform administrative functions on their HVAC systems? Alternatively, consider the unfortunate event where an attacker has successfully compromised thousands of hosts, using them all to DDoS an unsuspecting victim.

New XSS attack vectors arise all the time, however we don't often see something truly new or untraditional. Wouldn't it be cool to see something other than just your ordinary filter bypass? In this article, I'll cover how I've successfully found 0-day exploits in WordPress, public sites and plugins for popular CMS systems, merely by using this technique.

Let's take a look at embedding XSS payloads into image metadata, more specifically EXIF data in JPEG images. This can be accomplished in several ways. If you are old school (or perhaps just old), you can accomplish this by modifying your camera settings. The camera type used here is a Canon(1) camera. Any hacker with any respect for themselves uses ExifTool(2) by Phil Harvey to accomplish the task. The following command allows us to add/overwrite an EXIF tag, specifically the camera type that has allegedly been used to take this photograph:

exiftool.exe -"Camera Model Name"="// " "C:\research.jpg"

Let's not just add the model name, let's extend it to other values as well. As you can see, we've added the standard JavaScript alert code to a whole set of different EXIF data fields.
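If you would rather script the embedding step than drive ExifTool by hand, the same idea can be expressed with the third-party piexif library. This is a sketch under that assumption (the payload string and file name are illustrative), not the method used in the article:

import piexif

# Write an XSS test marker into the ImageDescription (0x010E) EXIF tag of a JPEG.
payload = b"<script>alert('xss-via-exif')</script>"   # harmless proof-of-concept payload

exif_dict = piexif.load("research.jpg")               # keep whatever EXIF data already exists
exif_dict["0th"][piexif.ImageIFD.ImageDescription] = payload
piexif.insert(piexif.dump(exif_dict), "research.jpg") # rewrite the file in place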
Now we'll create a simple PHP script that will mimic a real-world example of a system that uses EXIF data:

<?php
$filename = $_GET['filename'];
echo $filename . " \n";
$exif = exif_read_data('tests/' . $filename, 'IFD0');
echo $exif===false ? "No header data found. \n" : "Image contains headers \n";
$exif = exif_read_data('tests/' . $filename, 0, true);
foreach ($exif as $key => $section) {
    foreach ($section as $name => $val) {
        echo "$key.$name: $val \n";
    }
}
?>

The above script will simply iterate through all the EXIF data keys it finds and output the respective values. For testing purposes, this is exactly what we want. PHP's EXIF parser does not have filtering in place by default, making it very interesting to test this attack vector. In cases where a developer has forgotten to do sanitization on these fields, we may have a successful attack. Developers think, in many cases, that some of their data is read-only, so why would they EVER need to sanitize it?! Common mistake...

Using the script above, and armed with our metadata-bombed picture, we can try to attack ourselves through the demo script. In the following picture, we have told the script to fetch our metadata-bombed picture, simply illustrating the attack with a JavaScript pop-up.

So what? We've successfully attacked ourselves with a pop-up message? Well, there is so much more to this than just attacking ourselves. First of all, we've verified that there is no built-in filtering in the PHP exif_read_data function. That means developers need to remember to apply filtering manually, and as we covered before, we all know that developers always remember this... Secondly, we've verified that we can gain executable JavaScript in someone's browser. From here, we can simply rewrite our pop-up payload into something much more subtle and evil, such as introducing a BeEF hook. More on this later.

Scouring For 0-Days

Armed with a fully metadata-bombed picture, I set sail into the Wild Wild Web. I had to check whether my assumption that developers fail to sanitize EXIF data was true or not. From there, I roamed into the depths of picture upload sites... I started googling "upload picture", "picture sharing", "photograph sharing" and much more. On the sites that I found interesting, I registered an account and started uploading my pictures. On a side note, Mailinator(3) comes in handy when doing this kind of research. In fact, I registered with the account no-reply@mailinator.com for most of the sites; however, to my great surprise, one of the sites already had an account with this username! What?! Someone had actually registered with this account before? Then undoubtedly, I could do a password reset! Sure enough, doing a password reset, I gained access to someone else's account. Whew...

Now, who would EVER register an account on a Mailinator address for their private pictures? Another security researcher? Criminals? Where do I go from here? Do I really want to venture into someone else's account? If so, what will I find? Regardless of my questions and doubt, I decided to continue, knowing surely that there is no turning back from what I might be about to see. To my surprise, and more importantly, to my relief, the site contained a bunch of family vacation pictures from a trip to Indonesia.

Many of the sites required registration, while many of them did not show any metadata at all.
Out of 21 sites tested, 11 sites did not have a feature to display EXIF data, 7 sites had at least rudimentary filtering, and 3 sites were found to be vulnerable. Not amazing numbers, but still fun to see it working in the wild, outside of my lab. What do I mean by rudimentary filtering? Well, it just means I didn't try to bypass the filtering. Additionally, I tested the attack vector on 3 WordPress plugins, of which 2 were found vulnerable and one had the appropriate filtering in place. Responsible disclosure against the sites and plugins has been conducted. Some of the examples in this article have been anonymized because, as of the launch of the article, they have still not patched the issue. Keep in mind, many of the sites that were applying filtering could still be vulnerable. I did not conduct any filter bypass in my testing; my gut feeling is that the filters were very rudimentary and could easily be bypassed.

First, here is an example from 500px.com, which was not found to be vulnerable. You can see the payload present in the title, and the camera field is automatically populated by the site. That means that instead of prompting me to set a title for my picture, the site used one of the EXIF data fields to pre-populate it for me. Interesting... this was something I saw as a repeating characteristic during my testing. Flickr also did appropriate filtering, keeping in mind no filter bypass has been tried.

One particular site did not like my testing at all. When trying to upload my picture, it seemed to break something.

Anyway, we're not here for the failures, are we? We're here for the success stories! Ahh, this is the wonderful world of hacking... gaining success through other people's failures... *evil grin* Here is a site where we can see our attack manifest itself. Just by uploading the picture and then viewing it, the vulnerability triggers. I also found the same vulnerability on other sites. We can see the image I've uploaded in the background -- a princess and a unicorn. Sadly, no farting rainbows...

Many of the big sites were also tested, such as Google Plus, DeviantArt, and Photobucket. These were all applying some filtering. A site that did not apply the necessary filtering, however, was WordPress. In the screenshot above I've successfully uploaded an image and triggered the payload by accessing it through its respective attachment page. Remember, I am using a harmless payload, just alerting a text message. This could be a completely stealthy attack payload if I wanted it to be. Let's dive further into the WordPress finding.

The WordPress Exploit

WordPress is the most popular blogging platform on the internet today, running on more than 60 million websites in 2012(4). Finding working exploits in such a platform can be very interesting for many actors, hence they also have a working bug bounty program(5). The vulnerability I'm demonstrating in this paper has been submitted to WordPress through responsible disclosure, and we held this article until they had properly patched the issue.

The WordPress vulnerability manifests when an administrator, or editor, uploads an image with the ImageDescription EXIF data tag set to a JavaScript payload. The exploit works only for these user roles, as stricter filtering is applied to the other accounts. This has sparked some controversy about this vulnerability; however, as I will show in this article, we will create an attack that is fully stealthy, allowing the attack to take place without an administrator knowing what is going on. Why the controversy?
With WordPress, and other CMS systems such as SharePoint, some roles are allowed to upload HTML elements. In WordPress, administrators and editors are allowed to post unfiltered HTML (6). The other side of the controversy is how the attack can be made super stealthy. The administrator has very limited ways of realizing that he is doing something wrong and is actually uploading malware to his own site. Now, that's cool! This is also why WordPress chose to patch the issue.

Embedding some JavaScript into the tag and then uploading the image will trigger the vulnerability once a user views the image's attachment page. Using Exiftool, you can accomplish this with the following command:

exiftool.exe -"ImageDescription"="<script src=\"http://pentesting.securesolutions.no/js.js\">" paramtest1.jpg

Here I've changed my JavaScript payload to a reference instead of embedding the JavaScript itself in the image. This will give us increased flexibility when creating working payloads.

The following example is one of my first runs of the attack. It is not stealthy, as the administrator can easily pick up that something is wrong simply by looking at the title element of the page. WordPress uses the ImageDescription element to populate the title element, and properly filters it before doing so. We'll see soon how to bypass this. The attack works when you navigate to the attachment page; however, any WordPress editor with an IQ higher than their shoe size would most likely realize that something is fishy and immediately delete the picture. If we stopped at this point, I don't think the issue would warrant a patch or much attention at all; however, the next step allows us to go into stealth mode.

If I could figure out a way for the payload to be embedded without the title element being overridden, I could make the attack feasible. Luckily, I discovered a small artifact while testing. Trying different types of encoding and other obfuscation techniques produced some really long strings, and when producing a long enough string, I noticed that WordPress suddenly defaulted to using the filename as the title element! Nice! The following Exiftool command makes WordPress ignore the ImageDescription, allowing a more stealthy attack:

exiftool.exe -"ImageDescription"="                                        <script src=\"http://pentesting.securesolutions.no/js.js\"></script>" paramtes1.jpg

Notice all the extra spaces. This extra padding makes WordPress think the value is too long for the title field, so it defaults to simply using the filename. The attack now manifests more beautifully when we upload the picture: the picture loads normally and our XSS vector is invisible. Here is what happens when someone, e.g. the administrator, visits the picture: The screenshot shows how I've successfully included my malicious JavaScript. This could be a simple BeEF(7) hook, allowing us a very high level of control over the victims. From here, it's game over.

Best Regards, Cross-Site Scripting

Why stop at EXIF data? Other types of metadata may not be parsed online in the same magnitude as EXIF, but let's look at embedding XSS into them as well. What if a webpage allowed you to upload a Word document, automatically extracted the Author field of the document, and embedded it on the site? That could definitely lead to a vulnerability. It sounds like a good vector for XSS attacks, or even other types of attacks such as SQL injection if they store the information in a database.
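As an illustration of that last scenario, here is a minimal sketch of how such a site might pull the Author field out of an uploaded .docx file. The upload path and the regex-based extraction are assumptions made for brevity; the point is that the dc:creator value is attacker-controlled and must be encoded (or parameterized, if it goes into SQL) just like any other user input:

<?php
$zip = new ZipArchive();
if ($zip->open('uploads/document.docx') === true) {   // hypothetical upload path
    // A .docx file is a ZIP archive; the author lives in docProps/core.xml.
    $core = $zip->getFromName('docProps/core.xml');
    $zip->close();
    if ($core !== false && preg_match('#<dc:creator>(.*?)</dc:creator>#s', $core, $m)) {
        // Encode before embedding in HTML -- the author field is user-controlled.
        echo 'Uploaded by: ' . htmlspecialchars($m[1], ENT_QUOTES, 'UTF-8');
    }
}
?>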
When I look at the document I'm writing right now, I can see the following metadata information: Without a doubt, many of these parameters can easily be changed by the user, either through Exiftool or by using the word processor itself. The following example shows editing the username to an XSS payload. I do apologize for the Norwegian text; I've been cursed with a Norwegian installation of Windows and Office by my IT department.

Pictures and documents. What about audio? Here is an example of adding XSS to an mp3 file through the awesome free and open-source tool Audacity (8):

There are probably tons of other situations where we can add these types of attacks. It's up to us to find them before the bad guys do.

Conclusion

Let's consider the future. The data we embed in metadata today might, sometime in the future, exploit services that have not yet been developed. Perhaps we'll see XSS shooting out of projectors, chat services, glasses (e.g. Google Glass), or robots going crazy with alert(1)'s all over the place. Or, perhaps even cooler, your files embedded with XSS today might someday trigger a callback connection straight back to your BeEF hook...

The bottom line is that data coming from a third party, whether a user or another system, should be sanitized! You know that whole concept of garbage in, garbage out? Let's stop that. Additionally, it is important for pen testers to have this information in their arsenal when doing their testing. Testers need to think outside the box and cover as much testing surface as possible. Also, Ed Skoudis had a student who mentioned some great research that has been done on sites processing metadata. I recommend checking out the research done at embeddedmetadata.org (9). It might spark some further testing and research for some of our readers. Now, go onward my friends and...

References:
1. http://www.amazon.com/Canon-CMOS-Digital-Camera-3-0-Inch/dp/B0040JHVCC
2. http://www.sno.phy.queensu.ca/~phil/exiftool/
3. http://mailinator.com/
4. With 60 Million Websites, WordPress Rules The Web. So Where's The Money? - Forbes
5. Security — Automattic
6. WordPress Security Bug Bounty Program - White Fir Design
7. BeEF - The Browser Exploitation Framework Project
8. Audacity: Free Audio Editor and Recorder
9. Embedded Metadata Initiative

- Chris Andre Dale

Sursa: SANS Penetration Testing | Finding Zero-Day XSS Vulns via Doc Metadata | SANS Institute
-
The No CAPTCHA problem

When I read about No CAPTCHA for the first time I was really excited. Did we finally find a better solution? Hashcash? Or what? Now that it's finally available, the blog post disappointed me a bit.

Here's the WordPress registration page successfully using No CAPTCHA. Now let's open it in an incognito tab... Wait, an annoying CAPTCHA again? But I'm a human!

So what Google is trying to sell us as a comprehensive bot-detecting algorithm is simply a whitelist based on your previous online behavior and the CAPTCHAs you solved. Essentially - your cookies. Under the hood they replaced challenge/response pairs with the token "g-recaptcha-response". Good guys get it "for free", bad guys still have to solve a challenge.

Does it make the bots' job harder? Not at all. The legacy flow is still available and old OCR bots can keep recognizing. But what about the new "find a similar image" challenges? Bots can't do that! As long as $1 per hour is acceptable pay for many people in the third world, bots won't need to solve the new challenges. No matter how complex they are, bots simply need to grab the JS code of the challenge, show it to another human being (working for cheap, or just a visitor on a popular website) and use the answer that human provided.

The thing is, No CAPTCHA actually introduces a new weakness! Abusing clickjacking, we can make the user (a good guy) generate a g-recaptcha-response for us - with a single click (demo bot for wordpress). Then we can use this g-recaptcha-response to make a valid request to the victim site (from our server or from the user's browser). It's pretty much a serious weakness of the new reCAPTCHA - instead of making everyone recognize those images, we can make a bunch of good "trustworthy" users generate g-recaptcha-response tokens for us. The bots' job just got easier!

You're probably surprised: how can we use a third-party data-sitekey on our website? Don't be - the Referrer-based protection was pretty easy to bypass with <meta name="referrer" content="never">.

P.S. Many developers still think you need to wait a while to get a new challenge.

@homakov I've used them in the past, accuracy is about 80% and response time about 10 seconds per attempt. Still too slow for some attacks. — Stephen de Vries (@stephendv) December 4, 2014

In fact you can prepare as many challenges as you want and then start spamming later. It's another reCAPTCHA weakness that will never be fixed.

Author: Egor Homakov on 3:52 AM

Sursa: Egor Homakov: The No CAPTCHA problem
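To see why a harvested token is enough, here is a minimal sketch of the server-side check that consumes a g-recaptcha-response (the same token a clickjacked "trustworthy" visitor ends up generating for the attacker). It assumes PHP with cURL, the documented siteverify endpoint, a placeholder secret key, and a hypothetical register_user() function. The endpoint only confirms that some human solved a challenge for the given site key, which is exactly what the abuse described above relies on:

<?php
$token  = $_POST['g-recaptcha-response'];
$secret = 'YOUR_SECRET_KEY';   // placeholder

$ch = curl_init('https://www.google.com/recaptcha/api/siteverify');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
    'secret'   => $secret,
    'response' => $token,
    'remoteip' => $_SERVER['REMOTE_ADDR'],
)));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = json_decode(curl_exec($ch), true);
curl_close($ch);

if (!empty($result['success'])) {
    // The token is accepted regardless of *who* solved the challenge,
    // which is what the clickjacking abuse relies on.
    register_user();   // hypothetical application logic
}
?>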
-
FFmpeg 2.5 Officially Released

By Silviu Stahie

The developers have added some new features

A new FFmpeg version is out

FFmpeg is a complete solution to record, convert, and stream audio and video, and it has just been upgraded to a new major version, 2.5. It comes with a lot of new features and it's pretty interesting. FFmpeg 2.5 has been dubbed "Bohr" and comes just 2.5 months after the previous release. There are not as many changes as you might think, but there are more than enough to keep users interested.

"2.5 was released on 2014-12-04. It is the latest stable FFmpeg release from the 2.5 release branch, which was cut from master on 2014-12-04. Amongst lots of other changes, it includes all changes from ffmpeg-mt, libav master of 2014-12-03, libav 11 as of 2014-12-03," reads the official announcement.

Some of the updated packages in the latest FFmpeg framework include libavutil, libavcodec, libavformat, libavdevice, libavfilter, libavresample, libswscale, libswresample, and libpostproc. The devs have also said that an STL subtitle decoder is now supported, the XCB-based screen grabber is now working properly, a SUP/PGS subtitle demuxer is now available, and a number of fixes have been implemented as well.

FFmpeg is the leading multimedia framework able to decode, encode, transcode, mux, demux, stream, filter, and play pretty much any media that humans and machines have created. A complete list of updates, features, and other fixes can be found in the official announcement. You can download the FFmpeg 2.5 source package right now from Softpedia.

Sursa: FFmpeg 2.5 Officially Released - Softpedia
-
GCHQ boffins quantum-busted its OWN crypto primitive

'Soliloquy' only ever talked to itself

By Richard Chirgwin, 3 Dec 2014

While the application of quantum computers to cracking cryptography is still, for now, a futuristic scenario, crypto researchers are already taking that future seriously. It came as a surprise to Vulture South to find that in October of this year, researchers at GCHQ's information security arm, the CESG, abandoned work on a security primitive because they discovered a quantum attack against it.

Presented to the ETSI here, with the full paper here, the documents outline the birth and death of a primitive the CESG called Soliloquy. Primitives are building blocks in the dizzyingly-complex business of assembling a cryptosystem: individual modules that are expected to be very well-characterised before they're accepted into security standards (and, in the case of crypto like RC4, dropped when they're no longer safe). Given that improving computer power is one of the ways a primitive can be broken, there's a constant background research effort into both creating the primitives of the future, and testing them before they're adopted – and that's where Soliloquy comes in.

As the CESG paper states, Soliloquy was first proposed in 2007 as a cyclic-lattice key exchange primitive supporting between 3,000 and 10,000 bits for the public key. Between 2010 and 2013 – presumably as part of their effort to case-harden the primitive before releasing it into the wild – the boffins (Peter Campbell, Michael Groves and Dan Shepherd) developed what they call "a reasonably efficient quantum attack on the primitive", and as a result, they cancelled the project.

The quantum algorithm they describe would work by creating a quantum fingerprint of the lattice Soliloquy creates; "discretise and bound" the control space needed; and run a quantum Fourier transform over that control space, iteratively, to get lots of samples approximating the lattice. That's where the quantum attack is complete: after that, the samples would get fed into a classical lattice-based algorithm to recover the values you want – in other words, the key. The main challenge, the authors write, is "to define a suitable quantum fingerprinter" that could handle the control space.

As the researchers drily note in their conclusion, "designing quantum-resistant cryptography is a difficult task", and while researchers are starting to create such algorithms for deployment, "we caution that much care and patience will be required" to provide a thorough security assessment of any such protocol. ®

Sursa: GCHQ boffins quantum-busted its OWN crypto primitive • The Register
-
An Analysis of the “Destructive” Malware Behind FBI Warnings

4:06 pm (UTC-7) | by Trend Micro

TrendLabs engineers were recently able to obtain a malware sample of the “destructive malware” described in reports about the Federal Bureau of Investigation (FBI) warning to U.S. businesses last December 2. According to Reuters, the FBI issued a warning to businesses to remain vigilant against this new “destructive” malware in the wake of the recent Sony Pictures attack. As of this writing, the link between the Sony breach and the malware mentioned by the FBI has yet to be verified.

The FBI flash memo titled “#A-000044-mw” describes an overview of the malware behavior, which reportedly has the capability to overwrite all data on the hard drives of computers, including the master boot record, which prevents them from booting up. Below is an analysis of our own findings:

Analysis of the BKDR_WIPALL Malware

Our detection for the malware detailed in the FBI report is BKDR_WIPALL. Below is a quick overview of the infection chain for this attack. The main installer here is diskpartmg16.exe (detected as BKDR_WIPALL.A). BKDR_WIPALL.A's overlay is encrypted with a set of user names and passwords, as seen in the screenshot below:

Figure 1. BKDR_WIPALL.A's overlay contains encrypted user names and passwords

These user names and passwords are found to be encrypted by XOR 0x67 in the overlay of the malware sample and are then used to log into the shared network. Once logged in, the malware attempts to grant full access to everyone that will access the system root.

Figure 2. Code snippet of the malware logging into the network

The dropped net_var.dat contains a list of targeted hostnames:

Figure 3. Targeted host names

The next related malware is igfxtrayex.exe (detected as BKDR_WIPALL.B), which is dropped by BKDR_WIPALL.A. It sleeps for 10 minutes (or 600,000 milliseconds, as seen below) before it carries out its actual malware routines:

Figure 4. BKDR_WIPALL.B (igfxtrayex.exe) sleeps for 10 minutes

Figure 5. Encrypted list of usernames and passwords also present in BKDR_WIPALL.B

Figure 6. Code snippet of the main routine of igfxtrayex.exe (BKDR_WIPALL.B)

This malware's routines, aside from deleting users' files, include stopping the Microsoft Exchange Information Store service. After it does this, the malware sleeps for another two hours. It then forces the system to reboot.

Figure 7. Code snippet of the forced reboot

It also executes several copies of itself named taskhost{random 2 characters}.exe with the following parameters:

taskhost{random 2 characters}.exe -w – to drop and execute the component Windows\iissvr.exe
taskhost{random 2 characters}.exe -m – to drop and execute Windows\Temp\usbdrv32.sys
taskhost{random 2 characters}.exe -d – to delete files in all fixed or remote (network) drives

Figure 8. The malware deletes all the files (format *.*) in fixed and network drives

The malware components are encrypted and stored in the resource below:

Figure 9. BKDR_WIPALL.B malware components

Additionally, BKDR_WIPALL.B accesses the physical drive that it attempts to overwrite:

Figure 10. BKDR_WIPALL.B overwrites physical drives

We will be updating this post with our additional analysis of the WIPALL malware.

Analysis by Rhena Inocencio and Alvin Bacani

Update as of December 3, 2014, 5:30 PM PST

Upon analysis of the same WIPALL malware family, its variant BKDR_WIPALL.D drops BKDR_WIPALL.C, which in turn drops the file walls.bmp in the Windows directory. The .BMP file is as pictured below:

Figure 11.
Dropped wallpaper

This appears to be the same wallpaper described in reports about the recent Sony hack last November 24, bearing the phrase “hacked by #GOP.” Therefore we have reason to believe that this is the same malware used in the recent attack on Sony Pictures. Note that BKDR_WIPALL.C is also dropped as igfxtrayex.exe in the same directory as BKDR_WIPALL.D. We will update this blog entry with more developments.

Additional analysis by Joie Salvio

Sursa: An Analysis of the "Destructive" Malware Behind FBI Warnings
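As a footnote to the credential encoding mentioned above: a single-byte XOR like 0x67 is trivial to undo, since XOR is its own inverse. A minimal sketch, assuming PHP and a hypothetical byte string dumped out of the sample's overlay:

<?php
// Applying XOR 0x67 again recovers the plaintext user names and
// passwords embedded in the overlay.
function xor_decode($data, $key = 0x67) {
    $out = '';
    for ($i = 0, $len = strlen($data); $i < $len; $i++) {
        $out .= chr(ord($data[$i]) ^ $key);
    }
    return $out;
}

$encoded = file_get_contents('overlay.bin');   // hypothetical dump of the overlay
echo xor_decode($encoded);
?>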
-
How the NSA Hacks Cellphone Networks Worldwide By Ryan Gallagher @rj_gallagher In March 2011, two weeks before the Western intervention in Libya, a secret message was delivered to the National Security Agency. An intelligence unit within the U.S. military’s Africa Command needed help to hack into Libya’s cellphone networks and monitor text messages. For the NSA, the task was easy. The agency had already obtained technical information about the cellphone carriers’ internal systems by spying on documents sent among company employees, and these details would provide the perfect blueprint to help the military break into the networks. The NSA’s assistance in the Libya operation, however, was not an isolated case. It was part of a much larger surveillance program—global in its scope and ramifications—targeted not just at hostile countries. According to documents contained in the archive of material provided to The Intercept by whistleblower Edward Snowden, the NSA has spied on hundreds of companies and organizations internationally, including in countries closely allied to the United States, in an effort to find security weaknesses in cellphone technology that it can exploit for surveillance. The documents also reveal how the NSA plans to secretly introduce new flaws into communication systems so that they can be tapped into—a controversial tactic that security experts say could be exposing the general population to criminal hackers. Codenamed AURORAGOLD, the covert operation has monitored the content of messages sent and received by more than 1,200 email accounts associated with major cellphone network operators, intercepting confidential company planning papers that help the NSA hack into phone networks. One high-profile surveillance target is the GSM Association, an influential U.K.-headquartered trade group that works closely with large U.S.-based firms including Microsoft, Facebook, AT&T, and Cisco, and is currently being funded by the U.S. government to develop privacy-enhancing technologies. Karsten Nohl, a leading cellphone security expert and cryptographer who was consulted by The Intercept about details contained in the AURORAGOLD documents, said that the broad scope of information swept up in the operation appears aimed at ensuring virtually every cellphone network in the world is NSA accessible. “Collecting an inventory [like this] on world networks has big ramifications,” Nohl said, because it allows the NSA to track and circumvent upgrades in encryption technology used by cellphone companies to shield calls and texts from eavesdropping. Evidence that the agency has deliberately plotted to weaken the security of communication infrastructure, he added, was particularly alarming. “Even if you love the NSA and you say you have nothing to hide, you should be against a policy that introduces security vulnerabilities,” Nohl said, “because once NSA introduces a weakness, a vulnerability, it’s not only the NSA that can exploit it.” NSA spokeswoman Vanee’ Vines told The Intercept in a statement that the agency “works to identify and report on the communications of valid foreign targets” to anticipate threats to the United States and its allies.
Vines said: “NSA collects only those communications that it is authorized by law to collect in response to valid foreign intelligence and counterintelligence requirements—regardless of the technical means used by foreign targets, or the means by which those targets attempt to hide their communications.” Network coverage The AURORAGOLD operation is carried out by specialist NSA surveillance units whose existence has not been publicly disclosed: the Wireless Portfolio Management Office, which defines and carries out the NSA’s strategy for exploiting wireless communications, and the Target Technology Trends Center, which monitors the development of new communication technology to ensure that the NSA isn’t blindsided by innovations that could evade its surveillance reach. The center’s logo is a picture of the Earth overshadowed by a large telescope; its motto is “Predict – Plan – Prevent.” The NSA documents reveal that, as of May 2012, the agency had collected technical information on about 70 percent of cellphone networks worldwide—701 of an estimated 985—and was maintaining a list of 1,201 email “selectors” used to intercept internal company details from employees. (“Selector” is an agency term for a unique identifier like an email address or phone number.) From November 2011 to April 2012, between 363 and 1,354 selectors were “tasked” by the NSA for surveillance each month as part of AURORAGOLD, according to the documents. The secret operation appears to have been active since at least 2010. The information collected from the companies is passed onto NSA “signals development” teams that focus on infiltrating communication networks. It is also shared with other U.S. Intelligence Community agencies and with the NSA’s counterparts in countries that are part of the so-called “Five Eyes” surveillance alliance—the United Kingdom, Canada, Australia, and New Zealand. Aside from mentions of a handful of operators in Libya, China, and Iran, names of the targeted companies are not disclosed in the NSA’s documents. However, a top-secret world map featured in a June 2012 presentation on AURORAGOLD suggests that the NSA has some degree of “network coverage” in almost all countries on every continent, including in the United States and in closely allied countries such as the United Kingdom, Australia, New Zealand, Germany, and France. One of the prime targets monitored under the AURORAGOLD program is the London-headquartered trade group, the GSM Association, or the GSMA, which represents the interests of more than 800 major cellphone, software, and internet companies from 220 countries. The GSMA’s members include U.S.-based companies such as Verizon, AT&T, Sprint, Microsoft, Facebook, Intel, Cisco, and Oracle, as well as large international firms including Sony, Nokia, Samsung, Ericsson, and Vodafone. The trade organization brings together its members for regular meetings at which new technologies and policies are discussed among various “working groups.” The Snowden files reveal that the NSA specifically targeted the GSMA’s working groups for surveillance. Claire Cranton, a spokeswoman for the GSMA, said that the group would not respond to details uncovered by The Intercept until its lawyers had studied the documents related to the spying. “If there is something there that is illegal then they will take it up with the police,” Cranton said. 
By covertly monitoring GSMA working groups in a bid to identify and exploit security vulnerabilities, the NSA has placed itself into direct conflict with the mission of the National Institute of Standards and Technology, or NIST, the U.S. government agency responsible for recommending cybersecurity standards in the United States. NIST recently handed out a grant of more than $800,000 to GSMA so that the organization could research ways to address “security and privacy challenges” faced by users of mobile devices. The revelation that the trade group has been targeted for surveillance may reignite deep-seated tensions between NIST and NSA that came to the fore following earlier Snowden disclosures. Last year, NIST was forced to urge people not to use an encryption standard it had previously approved after it emerged NSA had apparently covertly worked to deliberately weaken it. Jennifer Huergo, a NIST spokeswoman, told The Intercept that the agency was “not aware of any activities by NSA related to the GSMA.” Huergo said that NIST would continue to work towards “bringing industry together with privacy and consumer advocates to jointly create a robust marketplace of more secure, easy-to-use, privacy-enhancing solutions.”

GSMA headquarters in London (left)

Encryption attack

The NSA focuses on intercepting obscure but important technical documents circulated among the GSMA’s members known as “IR.21s.” Most cellphone network operators share IR.21 documents among each other as part of agreements that allow their customers to connect to foreign networks when they are “roaming” overseas on a vacation or a business trip. An IR.21, according to the NSA documents, contains information “necessary for targeting and exploitation.” The details in the IR.21s serve as a “warning mechanism” that flags new technology used by network operators, the NSA’s documents state. This allows the agency to identify security vulnerabilities in the latest communication systems that can be exploited, and helps efforts to introduce new vulnerabilities “where they do not yet exist.” The IR.21s also contain details about the encryption used by cellphone companies to protect the privacy of their customers’ communications as they are transmitted across networks. These details are highly sought after by the NSA, as they can aid its efforts to crack the encryption and eavesdrop on conversations. Last year, the Washington Post reported that the NSA had already managed to break the most commonly used cellphone encryption algorithm in the world, known as A5/1. But the information collected under AURORAGOLD allows the agency to focus on circumventing newer and stronger versions of A5 cellphone encryption, such as A5/3. The documents note that the agency intercepts information from cellphone operators about “the type of A5 cipher algorithm version” they use, and monitors the development of new algorithms in order to find ways to bypass the encryption. In 2009, the British surveillance agency Government Communications Headquarters conducted a similar effort to subvert phone encryption under a project called OPULENT PUP, using powerful computers to perform a “crypt attack” to penetrate the A5/3 algorithm, secret memos reveal. By 2011, GCHQ was collaborating with the NSA on another operation, called WOLFRAMITE, to attack A5/3 encryption. (GCHQ declined to comment for this story, other than to say that it operates within legal parameters.)
The extensive attempts to attack cellphone encryption have been replicated across the Five Eyes surveillance alliance. Australia’s top spy agency, for instance, infiltrated an Indonesian cellphone company and stole nearly 1.8 million encryption keys used to protect communications, the New York Times reported in February. The NSA’s documents show that it focuses on collecting details about virtually all technical standards used by cellphone operators, and the agency’s efforts to stay ahead of the technology curve occasionally yield significant results. In early 2010, for instance, its operatives had already found ways to penetrate a variant of the newest “fourth generation” smartphone-era technology for surveillance, years before it became widely adopted by millions of people in dozens of countries. The NSA says that its efforts are targeted at terrorists, weapons proliferators, and other foreign targets, not “ordinary people.” But the methods used by the agency and its partners to gain access to cellphone communications risk significant blowback. According to Mikko Hypponen, a security expert at Finland-based F-Secure, criminal hackers and foreign government adversaries could be among the inadvertent beneficiaries of any security vulnerabilities or encryption weaknesses inserted by the NSA into communication systems using data collected by the AURORAGOLD project. “If there are vulnerabilities on those systems known to the NSA that are not being patched on purpose, it’s quite likely they are being misused by completely other kinds of attackers,” said Hypponen. “When they start to introduce new vulnerabilities, it affects everybody who uses that technology; it makes all of us less secure.” In December, a surveillance review panel convened by President Obama concluded that the NSA should not “in any way subvert, undermine, weaken, or make vulnerable generally available commercial software.” The panel also recommended that the NSA should notify companies if it discovers previously unknown security vulnerabilities in their software or systems—known as “zero days” because developers have been given zero days to fix them—except in rare cases involving “high priority intelligence collection.” In April, White House officials confirmed that Obama had ordered NSA to disclose vulnerabilities it finds, though qualified that with a loophole allowing the flaws to be secretly exploited so long as there is deemed to be “a clear national security or law enforcement” use. Vines, the NSA spokeswoman, told The Intercept that the agency was committed to ensuring an “open, interoperable, and secure global internet.” “NSA deeply values these principles and takes great care to honor them in the performance of its lawful foreign-intelligence mission,” Vines said. She declined to discuss the tactics used as part of AURORAGOLD, or comment on whether the operation remains active.
——— Documents published with this article: AURORAGOLD – Project Overview AURORAGOLD Working Group IR.21 – A Technology Warning Mechanism AURORAGOLD – Target Technology Trends Center support to WPMO NSA First-Ever Collect of High-Interest 4G Cellular Signal AURORAGOLD Working Aid WOLFRAMITE Encryption Attack OPULENT PUP Encryption Attack NSA/GCHQ/CSEC Network Tradecraft Advancement Team ——— Photo: Cell tower: Justin Sullivan/Getty Images; GSMA headquarters: Google Maps Sursa: https://firstlook.org/theintercept/2014/12/04/nsa-auroragold-hack-cellphones/
-
WebSocket Security Issues

Overview

In this article, we will dive into the concept of WebSocket introduced in HTML5, the security issues around the WebSocket model, and the best practices that should be adopted to address them. Before going straight to security, let's refresh our understanding of WebSocket.

Why WebSocket and Not HTTP?

In the traditional client-server model, the client requests a resource and the server responds with it. The Web was built for this kind of model, and HTTP was sufficient to handle these requests. However, as technology advanced, online gaming and real-time applications created the need for a protocol that could provide a bidirectional connection between client and server to allow live streaming. Web applications have grown up a lot, and are now consuming more data than ever before. The biggest thing holding them back was the traditional HTTP model of client-initiated transactions. To overcome this, a number of different strategies were devised to allow servers to push data to the client. One of the most popular of these strategies was long-polling. This involves keeping an HTTP connection open until the server has some data to push down to the client.

The problem with all of these solutions is that they carry the overhead of HTTP. Every time you make an HTTP request, a bunch of headers and cookie data are transferred to the server. Initially, the idea was to modify HTTP to create a bidirectional channel between client and server, but this model could not be sustained because of the HTTP overhead and the latency it would introduce. In real-time applications, especially gaming applications, latency cannot be tolerated. Because of this shortcoming of HTTP, a new protocol known as WebSocket, which runs over the same TCP/IP model, was designed.

How WebSockets Work

WebSockets provide a persistent connection between client and server that both parties can use to start sending data at any time. The connection is initiated by the client through a WebSocket handshake. This happens over a normal HTTP request with an "Upgrade" header. A sample connection is shown below:

If the server supports WebSocket connections, it responds with an "Upgrade" header in the response. A sample is below:

After an exchange of these request and response messages, a persistent WebSocket connection is established between the client and the server. WebSockets can transfer as much data as you like without incurring the overhead associated with traditional HTTP requests. Data is transferred through a WebSocket as messages, each of which consists of one or more frames containing the data you are sending (the payload). In order to ensure the message can be properly reconstructed when it reaches the client, each frame is prefixed with 4-12 bytes of data about the payload. Using this frame-based messaging system helps to reduce the amount of non-payload data that is transferred, leading to significant reductions in latency.

Note: the "Upgrade" header tells the server that the client wants to initiate a WebSocket connection.

WebSocket Security Issues

WebSocket has some inherent security issues. Some of them are listed below:

Open to DoS attacks: WebSocket allows an unlimited number of connections to the target server, so server resources can be exhausted by a denial-of-service attack.

The WebSocket protocol does not provide any particular way for a server to authenticate the client during the handshake process.
It has to rely on the mechanisms available to normal HTTP connections, such as cookies, HTTP authentication or TLS authentication.

It has been seen that during the upgrade handshake from HTTP to WebSocket (WS), the authentication information available to HTTP is passed along to WS. This attack has been termed Cross-Site WebSocket Hijacking (CSWSH).

WebSockets can be used over unencrypted TCP channels, which can lead to major flaws such as those listed in OWASP Top 10 A6 - Sensitive Data Exposure.

WebSockets are vulnerable to malicious input data attacks, leading to attacks like Cross-Site Scripting (XSS).

The WebSocket protocol implements data masking, which is there to prevent proxy cache poisoning. But it has a dark side: masking inhibits security tools from identifying patterns in the traffic. Products such as Data Loss Prevention (DLP) software and firewalls are typically not aware of WebSockets, so they can't do data analysis on WebSocket traffic, and therefore can't identify malware, malicious JavaScript, and data leakage in WebSocket traffic.

The WebSocket protocol doesn't handle authorization and/or authentication. Application-level protocols should handle these separately, especially when sensitive data is being transferred.

It's relatively easy to tunnel arbitrary TCP services through a WebSocket, for example to tunnel a database connection directly through to the browser. This is high risk, as it would give an in-browser attacker access to those services in the case of a Cross-Site Scripting attack, thus escalating an XSS attack into a complete remote breach.

Recommendations around WebSocket Security Flaws

Below are the recommendations / best practices around the security flaws listed above:

The WebSocket standard defines an Origin header field, which Web browsers set to the URL that originates a WebSocket request. This can be used to differentiate between WebSocket connections from different hosts, or between those made from a browser and some other kind of network client. However, the Origin header is essentially advisory: non-browser clients can easily set the Origin header to any value, and thus "pretend" to be a browser. Origin headers are roughly analogous to the X-Requested-With header used by AJAX requests. Web browsers send a header of X-Requested-With: XMLHttpRequest, which can be used to distinguish between AJAX requests made by a browser and those made directly. However, this header is easily set by non-browser clients, and thus isn't trusted as a source of authentication. (A minimal handshake sketch that includes an Origin check appears at the end of this post.)

Use session-specific random tokens (like CSRF tokens) on the handshake request and verify them on the server.

WebSockets must be configured to use a secure channel; a URI with the wss:// scheme indicates a secure (TLS-protected) WebSocket connection.

Any data from untrusted sources must not be trusted. All input must be sanitized before it enters an execution context. You should apply equal suspicion to data returned from the server as well. Always process messages received on the client side as data. Don't try to assign them directly to the DOM, nor evaluate them as code. If the response is JSON, always use JSON.parse() to safely parse the data.

Avoid tunneling if at all possible, instead developing more secure and well-checked protocols on top of WebSockets.

References
https://devcenter.heroku.com/articles/websocket-security
An Introduction to WebSockets | Treehouse Blog

By Lohit Mehta | December 4th, 2014

Sursa: WebSocket Security Issues - InfoSec Institute
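To make the handshake and the Origin recommendation concrete, here is a minimal PHP sketch. It assumes a raw socket server that has already parsed the client's request headers into a $headers array (example values shown), plus a hypothetical whitelist of allowed origins; the accept-key computation follows RFC 6455, but this is an illustration, not a complete server:

<?php
// Example values; a real server parses these from the client's request.
$headers = array(
    'Origin'            => 'https://example.com',
    'Sec-WebSocket-Key' => 'dGhlIHNhbXBsZSBub25jZQ==',
);
$allowedOrigins = array('https://example.com');   // hypothetical whitelist

$origin = isset($headers['Origin']) ? $headers['Origin'] : '';
if (!in_array($origin, $allowedOrigins, true)) {
    // Advisory only: browsers set Origin honestly, but non-browser
    // clients can forge it, so this is not authentication.
    $response = "HTTP/1.1 403 Forbidden\r\n\r\n";
} else {
    // RFC 6455: accept key = base64( SHA-1( client key + fixed GUID ) )
    $key    = $headers['Sec-WebSocket-Key'];
    $accept = base64_encode(sha1($key . '258EAFA5-E914-47DA-95CA-C5AB0DC85B11', true));
    $response = "HTTP/1.1 101 Switching Protocols\r\n"
              . "Upgrade: websocket\r\n"
              . "Connection: Upgrade\r\n"
              . "Sec-WebSocket-Accept: $accept\r\n\r\n";
}
// In a real server, $response would be written back to the client socket.
echo $response;
?>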
-
New TLS/SSL Version Ready In 2015

Kelly Jackson Higgins

One of the first steps in making encryption the norm across the Net is an update to the protocol itself and a set of best practices for using encryption in applications. The Internet's standards body next year will release the newest version of the Transport Layer Security (TLS) protocol, which, among other things, aims to reduce the chance of implementation errors that have plagued the encryption space over the past year.

The more streamlined version 1.3 of TLS (TLS is the newest generation of its better-known predecessor, SSL) trims out unnecessary features and functions that ultimately could lead to buggy code. The goal is a streamlined yet strong encryption protocol that's easier to implement and less likely to leave the door open to implementation flaws. "Having options in there that are a smoking gun and one developer gets wrong… could lead to a huge security problem," Russ Housley, chair of the Internet Architecture Board (IAB), says of the problem that TLS 1.3 aims to solve.

That's the kind of scenario that led to the Heartbleed bug in the OpenSSL implementation of the encryption protocol. Heartbleed came out of an error in OpenSSL's implementation of the "heartbeat" extension in TLS. The bug, if exploited, could allow an attacker to leak the contents of memory from the server to the client and vice versa. That could leave passwords and even the SSL server's private key potentially exposed in an attack.

[The era of encrypted communications may have finally arrived. Internet Architecture Board chairman Russ Housley explains what the IAB's game-changing statement about encryption means for the future of the Net: Q&A: Internet Encryption As The New Normal.]

Aside from the updated TLS protocol, the Internet Engineering Task Force (IETF), which crafts the protocols, also is looking at how to better deploy encryption in applications. The IETF's Using TLS in Applications (UTA) working group will offer best practices for using TLS in applications, as well as guidance on how certain applications should use the encryption protocol, which also will promote interoperability among encrypted systems.

Pete Resnick, the IETF's applications area director, says among the best practices are the use of the latest crypto algorithms and avoiding the use of weak (or no) encryption, as well as eliminating the use of older TLS/SSL versions. "This will end up making things more secure in the long run by providing common guidelines across implementations," he says. UTA also is working on guidance for using TLS with the instant messaging protocol XMPP (a.k.a. Jabber), and on using TLS with the email client protocols POP, IMAP, and SMTP Submission. The goal is to make encryption more interoperable among messaging servers to help propel the use of encrypted communications, according to Resnick.

Kelly Jackson Higgins is Executive Editor at DarkReading.com...

Sursa: New TLS/SSL Version Ready In 2015
-
Windows Password Kracker is free software to recover a lost or forgotten Windows password. It can quickly recover the original Windows password from either an LM (LAN Manager) or NTLM (NT LAN Manager) hash.

Windows stores the login password as an LM or NTLM hash. Since these are one-way hash algorithms, we cannot directly decrypt the hash to get back the original password. In such cases 'Windows Password Kracker' can help recover the Windows password using the simple dictionary-crack method (a tiny sketch of the idea appears at the end of this post). Before that, you need to dump the password hashes from the live or remote Windows system using a pwdump tool (more details below), then feed the hash (LM/NTLM) for the corresponding user into 'Windows Password Kracker' to recover the password for that user.

In forensic scenarios, an investigator can dump the hashes from a live/offline system and then crack them using 'Windows Password Kracker' to recover the original password. This is very useful, as such a password can then be used to decrypt stored credentials as well as encrypted volumes on that system.

'Windows Password Kracker' uses a simple and quick dictionary-based password recovery technique. By default it comes with a sample password file, but good collections of password dictionaries (also called wordlists) can be found online. Though it supports only the dictionary-crack method, you can easily use tools like Crunch or Cupp to generate a brute-force-based or custom password list file and then use it with 'Windows Password Kracker'. It works on both 32-bit and 64-bit Windows systems, from Windows XP to Windows 8.

Features

- Free tool to quickly recover the Windows login password.
- Supports Windows password recovery from both LM & NTLM hashes.
- Uses the simple dictionary-crack method.
- Displays detailed statistics during the cracking operation.
- Stop the password cracking operation at any time.
- Very easy to use with a cool GUI interface.
- Generates a Windows password recovery report in HTML/XML/TEXT format.
- Includes an installer for local installation & uninstallation.

Installation & Un-installation

Windows Password Kracker comes with an installer to help with local installation & un-installation. The installer has an intuitive wizard which guides you through the installation steps. At any point of time, you can uninstall the product using the uninstaller located at the following (default) location:

[Windows 32 bit] C:\Program Files\SecurityXploded\WindowsPasswordKracker
[Windows 64 bit] C:\Program Files (x86)\SecurityXploded\WindowsPasswordKracker

How to Dump LM/NTLM Hash & Crack it?

'Windows Password Kracker' is very easy to use for any generation of users. Here are the simple steps:

1. Install 'Windows Password Kracker' on any system (preferably a faster, high-end system).
2. Use a pwdump tool to recover the password hashes from the live or offline Windows system.
Sample output will be as shown below:

Administrator:500:D702A1D01B6BC2418112333D93DFBB4C:C8DBB1CFF1970C9E3EC44EBE2BA7CCBC:::
ASPNET:1001:359E64F7361B678C283B72844ABF5707:49B784EF1E7AE06953E7A4D37A3E9529:::
Guest:501:NO PASSWORD*********************:NO PASSWORD*********************:::
Test:1002:D702A1D01B6BC2418112333D93DFBB4C:C8DBB1CFF1970C9E3EC44EBE2BA7CCBC:::

Each dumped user account is in the following format:

Username : User ID : LM hash : NTLM hash :::

On newer operating systems (such as Vista, Windows 7, etc.) the LM hash will be absent, as it is disabled by default.

3. Once you get the password hash, copy either the LM (preferred) or NTLM hash into 'Windows Password Kracker'.
4. Select the type of hash, LM or NTLM, from the drop-down box.
5. Select the password dictionary file by clicking on the Browse button, or simply drag & drop it. You can find a sample dictionary file in the installation folder.
6. Finally, click on 'Start Crack' to start the Windows password recovery. During the operation, you will see all statistics displayed on the screen, and a message box will be displayed on success.
7. At the end, you can generate a detailed report in HTML/XML/Text format by clicking on the 'Report' button and selecting the file type from the drop-down box of the 'Save File' dialog.

Screenshots

Screenshot 1: Windows Password Kracker showing the recovered password from an NTLM hash.

Screenshot 2: Detailed Windows password recovery report generated by Windows Password Kracker.

Test Results

Windows Password Kracker has been successfully tested from Windows XP to the latest operating system, Windows 8. It can successfully recover passwords from both LM and NTLM hash values.

Disclaimer

'Windows Password Kracker' is designed with the good intention of recovering a lost Windows password. Like any other tool, its use for good or bad depends upon the user. However, neither the author nor SecurityXploded is in any way responsible for damages or impact caused by misuse of WindowsPasswordKracker. Read the complete 'License & Disclaimer' policy here.

Release History

Version 2.6: 3rd Dec 2014
- Removed false positives with various antivirus solutions.

Version 2.5: 31st Mar 2014
- Improved GUI interface with magnifying icon effects and about dialog changes.

Version 2.0: 21st Feb 2013
- Quick help link on dumping the LM/NTLM hash from a system and cracking it.
- Fix for screen refresh problem and a few UI improvements.

Version 1.5: 28th Oct 2012
- Added support to automatically remember and restore user settings.

Version 1.0: 3rd Aug 2012
- First public release of Windows Password Kracker.

Download

FREE Download Windows Password Kracker v2.6
License: Freeware
Platform: Windows XP, 2003, Vista, Windows 7, Windows 8

Sursa: Windows Password Kracker : Free Windows Password Recovery Software.
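The dictionary-crack idea mentioned above is simple enough to sketch in a few lines: an NTLM hash is the MD4 digest of the UTF-16LE-encoded password, so each wordlist candidate is hashed and compared against the target. A minimal illustration in PHP (not how the tool itself is implemented); the target hash is taken from the sample pwdump output above, and the wordlist path is hypothetical:

<?php
$target   = strtoupper('C8DBB1CFF1970C9E3EC44EBE2BA7CCBC'); // NTLM hash from the pwdump sample above
$wordlist = 'wordlist.txt';                                  // hypothetical dictionary file

if (!is_readable($wordlist)) {
    exit("Wordlist not found.\n");
}
foreach (file($wordlist, FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $word) {
    // NTLM = MD4 over the UTF-16LE encoding of the candidate password.
    $ntlm = strtoupper(hash('md4', iconv('UTF-8', 'UTF-16LE', $word)));
    if ($ntlm === $target) {
        echo "Password found: $word\n";
        exit;
    }
}
echo "Password not in wordlist.\n";
?>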
-
.: XNTSV :.

XNTSV is a utility that displays detailed information about Windows system structures.

Download XNTSV (32-bit) ver. 1.8 (OS Windows)
Download XNTSV (64-bit) ver. 1.8 (OS Windows)

XNTSV is absolutely free for commercial and non-commercial use.

Sursa: .:NTInfo:.
-
.: PDBRipper :.

PDBRipper is a utility for extracting information from PDB files. PDBRipper can extract:

- Enumerations
- User-defined types (structures, unions, ...)
- Type defines

Download PDBRipper ver. 1.12 (OS Windows)

PDBRipper is absolutely free for commercial and non-commercial use.

Sursa: .:NTInfo:.
-
.: Detect It Easy :.

Detect It Easy is a packer identifier.

Download DIE ver. 0.93 (Mac OS X)
Download DIE ver. 0.93 (Windows)
Download DIE ver. 0.93 (Linux Ubuntu 32-bit (x86))
Download DIE ver. 0.93 (Linux Ubuntu 64-bit (x64))
For other Linux distributions you can try to compile DIE from the sources.
Download DIE DLL (Windows)
Download DieSort (Windows)
Plugin for HIEW (author exet0l) more info
Plugin for CFF Explorer (32 bits only!) (author exet0l) more info
GITHUB signatures
GITHUB engine
Executable Image Viewer (this program uses the DIE DLL) more info (EN, PL)

Detect It Easy is absolutely free for commercial and non-commercial use.

Sursa: .:NTInfo:.
-
Behavior Analysis Stops Romanian Data-Stealing Campaign

By Ankit Anubhav, Christiaan Beek on Dec 03, 2014

In a recent press announcement, McAfee and Europol's European Cybercrime Centre announced a cooperation of our talents to fight cybercrime. In general these joint operations are related to large malware families. Writing or spreading malware, even in small campaigns, is a crime. McAfee Labs doesn't hesitate to reach out to its partners and contacts in CERTs and law enforcement. In the following case, a new Romanian-based data-stealing campaign was caught early thanks to behavioral and data analytics.

In our sample behavioral database, we found a new site, hxxp://virus-generator.hi2.ro. Visiting the link revealed an open directory that allowed us to browse the content: Often we observe that malware authors become overzealous in attacking victims and forget to protect their own malware servers. Despite this campaign's effectiveness, the malware authors took very little care to ensure that they themselves were not breached.

The binaries, which help us to understand how this campaign works, are injector.exe and blurmotion.exe. As the name suggests, injector.exe compromises the victim's system via code injection in Internet Explorer. It first disables the firewall to ensure a smooth connection to the malware control server. With the help of the mget command, the malware connects to the control site and downloads the payload blurmotion.exe. The fact that the malware site doesn't use any authentication makes sense, because it allows a swift connection between the victim and the attacker.

Once the payload is downloaded, the dropped script root.vbs takes over. This script is dropped by injector.exe and ensures that blurmotion.exe is executed. We see the use of wscript.sleep 30000, a 30-second pause before anything happens. This could be an attempt to deceive malware analyzers into thinking that the sample won't do anything. The necessary Run entries make sure root.vbs runs, and after that a misspelled "restartt" is forced. After this step, the system goes into a forced restart, and by this time the work of injector.exe (to download and install the payload) is done. From here the payload takes over.

Blurmotion.exe, like its parent, drops a batch file to perform its malicious activities. Blurmotion takes the username of the victim and dumps all the processes running in the victim's system to a file named %usename%.ini. Once the stolen data is logged, the malware uploads it to the control server via the mput command. We can see "echo cd BM" used in the commands. This is the same BM folder on the malware control server that stores the logs of all victims. Like the payload, this stolen data is exposed to anyone who finds the malware control server.

Our test virtual machine "victim" was named Klone, and we found it quickly uploaded to the control server. The size of Klone.ini is zero because we had reverted the virtual machine before the malware could steal data. In all the other infected user logs, we can see the malware executable blurmotion.exe running, confirming that those systems had been compromised. We can also see repeated connections made to a specific site (mygarage.ro), possibly an attempt to increase its traffic. The author is so aggressive that he or she even tried to overclock the CPU to bring more traffic to this site. The author succeeded in these attempts. In our internal behavioral database we found a lot of redirects to this site. McAfee detects these payloads as Rodast.
McAfee SiteAdvisor also warns against connecting to this site: Because the campaign was based in Romania, McAfee Labs contacted the Romanian CERT. After we discussed the approach and strategy with them, the Romanian team took the appropriate actions, and gave us permission to publish our analysis of the campaign in this article. Malware authors sometimes act carelessly, and assume that they are safe if no one detects them. But data from behavioral analysis, along with cooperation with CERTs and law enforcement, can find live campaigns and stop them. Sursa: Behavior Analysis Stops Romanian Data-Stealing Campaign | McAfee
-
Facebook Partners With ESET to Fight Malware

By Brian Prince on December 03, 2014

Facebook is teaming with security vendor ESET to improve defenses against malware. The move follows a partnership Facebook announced in May involving F-Secure and Trend Micro.

"[F-Secure and Trend Micro] built free versions of their products directly into Facebook so that people could get the help they need without additional hassle," blogged Chetan Gowda, a software engineer on the Site Integrity team at Facebook. "Today, we are expanding those capabilities by adding the anti-malware technology of another IT security vendor, ESET," he wrote. "A larger number of providers increases the chances that malware will get caught and cleaned up, which will help people on Facebook keep their information more secure."

According to Facebook, if the device a user is using to access its services is behaving suspiciously and shows signs of a possible malware infection, a message will appear offering the user an anti-malware scan for their device. The user can run the scan, see the results and disable the software without logging out of Facebook.

"Glancing through headlines in recent months reveals that malware continues to be a persistent problem for governments, companies, and individuals," Gowda noted. "With the potential to remain undetected on devices for months, malicious code can collect personal information and even spread to other computers in some cases. Compounding the challenges for defense, most people lack basic anti-malware programs that could protect their devices or clean up infections more quickly.

"We've worked with ESET to incorporate their finely tuned security software directly into our existing abuse detection and prevention systems, similarly to what we did earlier this year with the other providers," Gowda continued. "Together, these three systems will help us block malicious links and harmful sites from populating the News Feeds and Messages of the 1.35 billion people who use Facebook."

Sursa: Facebook Partners With ESET to Fight Malware | SecurityWeek.Com