Leaderboard
Popular Content
Showing content with the highest reputation on 04/16/20 in all areas
-
Hi, We are looking for a new colleague for our security team. We are looking for someone senior who knows web security very well, but other areas too (e.g. Windows, networking, cloud). More precisely, a person who knows advanced things about exploiting vulnerabilities, tips & tricks and bypasses, and who doesn't mind doing pentests with access to the source code - in other words, code review. You can apply here: https://www.linkedin.com/jobs/view/1699417011/ Or you can send me a private message. I'm also happy to answer any questions about the position. Thanks, // Nytro 2 points
-
What you found is not exactly critical, and it's normal that they don't bother to fix it, especially since they probably don't have an internal security team. I don't see any "client information disclosure" there, if you're referring to GDPR. What you're doing is useful, but it's also risky for you - you could be sued; I hope, though, that "communist"-style companies that would do such a thing no longer exist nowadays. Also, you shouldn't stress over "why don't they fix it?". That's their business; you did the right thing by telling them about the problems. And finally, you shouldn't have any expectations from them, that they'll offer you something or pay you. Some companies might offer you one of their products or something similar - it would be a nice gesture - but the chances are fairly small, especially since the vulnerabilities are not exactly critical. 1 point
-
Worth remembering. I know I had seen another demo in which the command/file/binary was stored in other registry keys. 1 point
-
Probably - it looks like, for persistence, it uses that registry key to run PowerShell (11:45). 1 point
-
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run - that is where it is set to launch a cmd. 1 point
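For anyone who wants to inspect that persistence location on their own machine, here is a minimal C# sketch (an illustration added for this thread, not part of the original post) that lists the values under the HKLM Run key. It is read-only; the class name and the choice of the HKLM hive are assumptions for the example.

using System;
using Microsoft.Win32;

class RunKeyDump
{
    static void Main()
    {
        // Autorun entries commonly abused for persistence live under this key.
        const string runKeyPath = @"SOFTWARE\Microsoft\Windows\CurrentVersion\Run";

        using (RegistryKey run = Registry.LocalMachine.OpenSubKey(runKeyPath))
        {
            if (run == null)
            {
                Console.WriteLine("Run key not found.");
                return;
            }

            // Print every value name and the command line it launches at logon.
            foreach (string name in run.GetValueNames())
                Console.WriteLine($"{name} = {run.GetValue(name)}");
        }
    }
}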
-
Bypassing Xamarin Certificate Pinning on Android by Alexandre Beaulieu | Apr 6, 2020

Xamarin is a popular open-source and cross-platform mobile application development framework owned by Microsoft, with more than 13M total downloads. This post describes how we analyzed an Android application developed in Xamarin that performed HTTP certificate pinning in managed .NET code. It documents the method we used to understand the framework and the Frida script we developed to bypass the protections and man-in-the-middle (MITM) the application. The script's source code, as well as a sample Xamarin application, are provided for testing and further research.

When No Known Solution Exists

During a recent mobile application engagement, we ran into a challenging hurdle while setting up an HTTPS man-in-the-middle with Burp. The application under test was developed with the Xamarin framework and all our attempts at bypassing the certificate pinning implementation seemed to fail. Using one of the several available pinning bypass Frida scripts, we were able to intercept traffic to some telemetry sites, but the actual API calls of interest were not intercepted. Searching the Internet for similar work led us to a Frida library, frida-mono-api, which adds basic capabilities to interface with the Mono runtime, and an article describing how to exfiltrate request signing keys in Xamarin/iOS applications. With the lack of an end-to-end solution, it quickly started to feel like a DIY moment.

Building a Test Environment

The first step taken to tackle the problem was to learn as much as possible about Xamarin, Mono and Android by re-creating a very simple application using the Visual Studio 2019 project template and implementing certificate pinning. This approach is interesting for multiple reasons: Learn Xamarin from a developer's perspective; Solidify understanding of the framework; Reading documentation will be required regardless; Sources are available for debugging. An additional benefit was that the application developed as part of this exploration phase could be used for demonstration purposes and to reliably validate our attempts to bypass certificate pinning. For this reason alone, the time spent upfront on development was more than worth it.

The logical progression towards a working bypass can be outlined as follows: Identify the interfaces that allow customization of the certificate validation routines; Identify how they are used by typical code bases; Determine how to alter them at runtime in a stable fashion; Write a proof-of-concept script and test it against the demo application. Another important objective that we had with this work was that any improvements towards Mono support in Frida should be a contribution to existing projects.

Down the Rabbit Hole

After setting up an Android development environment inside a Windows VM and following along with the Xamarin Getting Started guide, we were able to build and sign a basic Android application. With the application working, we implemented code simulating a certificate pinning routine as shown in listing 1: a handler that flags all certificates as invalid. If we're able to bypass this handler, then it implies that we should also be able to bypass a handler that verifies the public key against a hardcoded one.

Listing 1 – The simplest certificate "validation" handler.
static class App {
    // Global HttpClient as per MSDN:
    // https://docs.microsoft.com/en-us/dotnet/api/system.net.http.httpclient
    public static HttpClient Http { get; private set; }

    static App() {
        var h = new HttpClientHandler();
        h.ServerCertificateCustomValidationCallback = ValidateCertificate;
        Http = new HttpClient(h);
    }

    // This would normally check the public key against a hardcoded key.
    // Here we simulate an invalid certificate by always returning false.
    private static bool ValidateCertificate(object sender, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors) => false;
}

// ...
// Elsewhere in the code.
private async void MakeHttpRequest(object obj) {
    // ...
    var r = await App.Http.GetAsync("https://www.example.org");
    // ...
}

Xamarin Concepts

Xamarin is designed to provide cross-platform support for Android and iOS and to minimize code duplication as much as possible. The UI code uses Microsoft's Windows Presentation Foundation (WPF), which is an arguably nice way to program frontend code. There are two major components in any given Xamarin application: a shared library with the common functionality that does not rely on native operating system features, and a native application launcher project specific to each supported target operating system. In practice, this means that there are at least three projects in most Xamarin applications: the shared library, an Android launcher, and an iOS launcher. Application code is written in C# and uses the .NET Framework implementation provided by Mono. The code output is generated as a regular .NET assembly (with the .dll extension) and can be decompiled reliably (barring obfuscation) with most type information kept intact using a decompiler such as dnSpy. Xamarin has support for three compilation models provided by the underlying Mono framework: Just-in-Time (JIT): code is compiled lazily as required; Partial Ahead-of-Time (AOT) compilation: code is natively compiled ahead of time during build for compiler-selected methods; Full AOT: all Intermediate Language (IL) code is compiled to native machine code (required for iOS).

The Mono Runtime

The Mono runtime is responsible for managing the memory heaps, performing garbage collection, JIT compiling methods when needed, and providing managed C# code with access to native functionality. The runtime tracks metadata about all managed classes, methods, objects, fields, and other state from the Xamarin application. It also exports a native API that enables native code to interact with managed code. While most of these methods are documented, some of them have empty or incomplete documentation strings, and diving into the codebase proved necessary multiple times while developing the Frida script. Mono uses a tiered compilation process, which will become relevant later as we describe the implementation of certificate pinning. In the pure JIT case, a method starts off as IL bytecode, which gets a compilation pass on the initial call. The resulting native code is referred to as the tier0 code and is cached in memory for re-use. When a method is deemed critical, the JIT compiler can decide to optimize it and recompile it using more aggressive optimizations. Mono is in fact much more complex than described here, but this overview covers the basics needed to understand the Frida script.
Hijacking Certificate Validation Callbacks

.NET has evolved over time and there are two entry points for overriding certificate validation routines, depending on whether .NET Framework or .NET Core is being used. Mono has recently moved to the .NET Core APIs, which rendered the .NET Framework method ineffective. Prior to .NET Core (and Mono 6.0), validation occurs through System.Net.ServicePointManager.ServerCertificateValidationCallback, which is a static property containing the function to call when validating a certificate. All HttpClient instances will call the same function, so only one function needs to be hooked. Starting with .NET Core, however, the HTTP stack has been refactored such that each HttpClient has its own HttpClientHandler exposing a ServerCertificateCustomValidationCallback property. This handler is injected into the HttpClient at construction time and is frozen after the first HTTP call to prevent modification. This scenario is much more difficult, as it requires knowledge of every HttpClient instance and their location in memory at runtime.

Listing 2 – Certificate validation callback setter preventing callback hijacking

// https://github.com/mono/mono/blob/mono-6.8.0.96/mcs/class/System.Net.Http/HttpClientHandler.cs#L93
class HttpClientHandler {
    public Func<HttpRequestMessage, X509Certificate2, X509Chain, SslPolicyErrors, bool> ServerCertificateCustomValidationCallback {
        get {
            return (_delegatingHandler.SslOptions.RemoteCertificateValidationCallback?
                .Target as ConnectHelper.CertificateCallbackMapper)?
                .FromHttpClientHandler;
        }
        set {
            ThrowForModifiedManagedSslOptionsIfStarted (); // <---- Validation here
            _delegatingHandler.SslOptions.RemoteCertificateValidationCallback = value != null
                ? new ConnectHelper.CertificateCallbackMapper(value).ForSocketsHttpHandler
                : null;
        }
    }
}

As seen in the previous listing, setting the callback after a request has been sent will throw an exception and most likely cause the application to crash. Fortunately for us, the base class of HttpClient is HttpMessageInvoker, which contains a mutable reference to the HttpClientHandler that will perform the certificate validation, so it is possible to safely swap out the whole handler:

Listing 3 – HttpMessageInvoker request dispatch mechanism

// https://github.com/mono/mono/blob/mono-6.8.0.96/mcs/class/System.Net.Http/System.Net.Http/HttpMessageInvoker.cs
public class HttpMessageInvoker : IDisposable {
    protected private HttpMessageHandler handler;
    readonly bool disposeHandler;
    // ...
    public virtual Task SendAsync (HttpRequestMessage request, CancellationToken cancellationToken) {
        return handler.SendAsync (request, cancellationToken);
    }
}

Hooking Managed Methods

In the ServicePointManager case, intercepting the callback is as simple as hooking the static property's get and set methods, so it will not be covered explicitly, but it is included with the bypass Frida script we are providing. Let's focus on the more interesting HttpClientHandler case, which requires more than just method hooking. The idea is to replace the HttpClientHandler instance with one that we control and that restores the default validation routine. To do this, we can hook the HttpMessageInvoker.SendAsync implementation and replace the handler immediately before it gets called.
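To make the idea concrete before getting into the native details, here is a short managed-code sketch of the same swap. It is purely illustrative and not taken from the article; a real bypass has to perform this from Frida because we do not control the target application's code. The private field name is an assumption tied to the runtime version (it appears as "handler" in the Mono source shown in Listing 3 and as "_handler" in other versions).

using System;
using System.Net.Http;
using System.Reflection;

static class HandlerSwapSketch
{
    // Illustrative only: swap an HttpClient's pinned handler for a default one via reflection.
    public static void UnpinInPlace(HttpClient client)
    {
        // HttpMessageInvoker keeps the handler in a private field ("handler" in the Mono
        // source from Listing 3, "_handler" in other runtime versions).
        FieldInfo field = typeof(HttpMessageInvoker).GetField("handler",
                              BindingFlags.Instance | BindingFlags.NonPublic)
                          ?? typeof(HttpMessageInvoker).GetField("_handler",
                              BindingFlags.Instance | BindingFlags.NonPublic);
        if (field == null)
            throw new InvalidOperationException("Handler field not found on this runtime.");

        // A fresh HttpClientHandler has no custom validation callback, so standard
        // certificate chain validation applies and the pinning check is gone.
        field.SetValue(client, new HttpClientHandler());
    }
}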
Now, SendAsync is a managed method, so it could be in any given state at any given moment: not yet JIT compiled (the native code to hook does not exist), tier0 compiled (we can hook the method if we can find its address), or AOT compiled (the method is in a memory-mapped native image). To make matters trickier, if the Mono runtime were to decide to optimize a method that we hooked, it is likely that our hook would be removed in the newly generated code. Thankfully, the native function mono_compile_method allows us to take a class method and force the JIT compilation process. However, it is not clear whether the resulting method is tier0 compiled or optimized, so there could still potentially be issues with optimizations. The return value of mono_compile_method is a pointer to the cached native code corresponding to the original method, making it very straightforward to patch using existing Frida APIs.

Putting the Pieces Together

We forked the frida-mono-api project as a starting point and added some new export signatures, along with the JIT compilation export and a MonoApiHelper method to wrap the boilerplate required to hook managed methods. The resulting code is very clean and in theory should allow us to hook any managed method:

Listing 4 – Support for managed method hooking in frida-mono-api

function hookManagedMethod(klass, methodName, callbacks) {
    // Get the method descriptor corresponding to the method name.
    let md = MonoApiHelper.ClassGetMethodFromName(klass, methodName);
    if (!md) throw new Error('Method not found!');
    // Force a JIT compilation to get a pointer to the cached native code.
    let impl = MonoApi.mono_compile_method(md);
    // Use the Frida interceptor to hook the native code.
    Interceptor.attach(impl, {...callbacks});
}

With the ability to hook managed methods, we can implement the approach described above and test the script on a rooted Android device.

Listing 5 – Final certificate pinning bypass script

import { MonoApiHelper, MonoApi } from 'frida-mono-api'
const mono = MonoApi.module

// Locate System.Net.Http.dll
let status = Memory.alloc(0x1000);
let http = MonoApi.mono_assembly_load_with_partial_name(Memory.allocUtf8String('System.Net.Http'), status);
let img = MonoApi.mono_assembly_get_image(http);
let hooked = false;

let kHandler = MonoApi.mono_class_from_name(img, Memory.allocUtf8String('System.Net.Http'), Memory.allocUtf8String('HttpClientHandler'));
if (kHandler) {
    let ctor = MonoApiHelper.ClassGetMethodFromName(kHandler, 'CreateDefaultHandler');
    // Static method -> instance = NULL.
    let pClientHandler = MonoApiHelper.RuntimeInvoke(ctor, NULL);
    console.log(`[+] Created Default HttpClientHandler @ ${pClientHandler}`);

    // Hook HttpMessageInvoker.SendAsync
    let kInvoker = MonoApi.mono_class_from_name(img, Memory.allocUtf8String('System.Net.Http'), Memory.allocUtf8String('HttpMessageInvoker'));
    MonoApiHelper.Intercept(kInvoker, 'SendAsync', {
        onEnter: (args) => {
            console.log(`[*] HttpClientHandler.SendAsync called`);
            let self = args[0];
            let handler = MonoApiHelper.ClassGetFieldFromName(kInvoker, '_handler');
            let cur = MonoApiHelper.FieldGetValueObject(handler, self);
            if (cur.equals(pClientHandler)) return; // Already bypassed.
            MonoApi.mono_field_set_value(self, handler, pClientHandler);
            console.log(`[+] Replaced with default handler @ ${pClientHandler}`);
        }
    });
    console.log('[+] Hooked HttpMessageInvoker.SendAsync');
    hooked = true;
} else {
    console.log('[-] HttpClientHandler not found');
}

Running the script gives the following output:

$ frida -U com.test.sample -l dist/xamarin-unpin.js --no-pause
Frida 12.8.7 - A world-class dynamic instrumentation toolkit
Commands:
    help      -> Displays the help system
    object?   -> Display information about 'object'
    exit/quit -> Exit
More info at https://www.frida.re/docs/home/
Attaching...
[+] Created Default HttpClientHandler @ 0xa0120fc8
[+] Hooked HttpMessageInvoker.SendAsync with DefaultHttpClientHandler technique
[-] ServicePointManager validation callback not found.
[+] Done! Make sure you have a valid MITM CA installed on the device and have fun.
[*] HttpClientHandler.SendAsync called
[+] Replaced with default handler @ 0xa0120fc8

As seen above, the SendAsync hook worked as expected and the HttpClientHandler got replaced by a default handler. Subsequent SendAsync calls will check the handler object and avoid replacing it if it has already been hijacked. The screen capture below shows the sample application making a request before and after running the bypass script. The first request raises an SSL exception (as expected) because of the installed callback that always returns false. The second request triggers the hook, which replaces the client handler and returns execution to the HTTP client, hijacking the validation process generically for any HttpClient instance without having to scan memory to find them.

Conclusion

Xamarin and Mono are quickly evolving projects. This technique appears to work very well with the current (Mono 6.0+) framework versions but might require some modifications to work with older or future versions. We hope that sharing the method used to understand and tackle the problem will be useful to the security community in developing similar methods when performing mobile testing engagements. The complete repository containing the code and pre-built Frida scripts can be found on GitHub.

Future Work

The Frida script has been tested on our sample application with regular build options, without ahead-of-time compilation and with the .NET Core method (HttpClientHandler), and it works reliably. There are however many scenarios that can occur with Xamarin and we were not able to test all of them. More specifically, the following have not been tested and could be areas of future development: .NET Framework applications which use ServicePointManager; iOS applications with Full AOT; Android applications with Partial AOT; Android applications with Full AOT. If you try the script and run into issues, please open a bug on our issue tracker so we can improve it. Even better, if you end up fixing some issues, we'd be happy to merge your pull requests. And lastly, if you have APKs for one of the untested scenarios and feel like sharing them with us, it will help us ensure that the script works in more cases.
References
Mono Runtime Documentation: https://www.mono-project.com/docs/advanced/runtime/docs/
Mono Compilation Modes: https://www.mono-project.com/docs/advanced/aot/
Mono on GitHub: https://github.com/mono/mono
ServicePointManager deprecation: https://github.com/xamarin/xamarin-android/issues/3682#issuecomment-535679023
Mono Tiered Compilation: https://github.com/mono/mono/issues/16018
Code Release: https://github.com/GoSecure/frida-xamarin-unpin
Fridax – A Xamarin hacking framework: https://github.com/NorthwaveNL/fridax
Source: https://www.gosecure.net/blog/2020/04/06/bypassing-xamarin-certificate-pinning-on-android/ 1 point
-
Or: var httpClient = new HttpClient(new HttpLoggingHandler(/*new NativeMessageHandler()*/)){ BaseAddress = new Uri(baseUrl)}; And you can take the HttpLoggingHandler class from here: https://gist.github.com/dbacinski/5bd2793e33b0377ecfbcd980d6841f1e Tested about half a year ago and it works okay. 1 point
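For readers who do not want to follow the gist link, here is a minimal sketch of the kind of logging DelegatingHandler the snippet above refers to. It is an illustrative stand-in, not the code from the gist; the class name ConsoleLoggingHandler is made up, and it simply dumps each request and response line to the console.

using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// A pass-through handler that logs traffic before and after the inner handler runs.
public class ConsoleLoggingHandler : DelegatingHandler
{
    public ConsoleLoggingHandler(HttpMessageHandler inner = null)
        : base(inner ?? new HttpClientHandler()) { }

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        Console.WriteLine($"--> {request.Method} {request.RequestUri}");
        HttpResponseMessage response = await base.SendAsync(request, cancellationToken);
        Console.WriteLine($"<-- {(int)response.StatusCode} {request.RequestUri}");
        return response;
    }
}

// Usage mirrors the snippet above:
// var httpClient = new HttpClient(new ConsoleLoggingHandler()) { BaseAddress = new Uri(baseUrl) };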
-
Red Team Tactics: Utilizing Syscalls in C# - Prerequisite Knowledge
Jack Halon - I like to break into things; both physically and virtually. United States Email Twitter LinkedIn Github YouTube

Over the past year, the security community - specifically Red Team Operators and Blue Team Defenders - has seen a massive rise in both public and private utilization of system calls in Windows malware for post-exploitation activities, as well as for the bypassing of EDR, or Endpoint Detection and Response. Now, to some, the utilization of this technique might seem foreign and brand new, but that's not really the case. Many malware authors, developers, and even game hackers have been utilizing system calls and in-memory loading for years, with the initial goal of bypassing certain restrictions and securities put into place by tools such as anti-virus and anti-cheat engines. A good example of how these syscall techniques can be utilized was presented in a few blog posts, such as how to Bypass EDR's Memory Protection, Introduction to Hooking by Hoang Bui, and the greatest example of them all - Red Team Tactics: Combining Direct System Calls and sRDI to bypass AV/EDR by Cneelis, which initially focused on utilizing syscalls to dump LSASS undetected.

As a Red Teamer, the usage of these techniques was critical to covert operations - it allowed us to carry out post-exploitation activities within networks while staying under the radar. Implementation of these techniques was mostly done in C++ so as to easily interact with the Win32 API and the system. But there was always one caveat to writing tools in C++, and that's the fact that when our code compiled, we had an EXE. Now, for covert operations to succeed, we as operators always wanted to avoid having to "touch the disk" - meaning that we didn't want to blindly copy and execute files on the system. What we needed was a way to inject these tools into memory that was more OPSEC (Operational Security) safe. While C++ is an amazing language for anything malware related, I seriously started to look at attempting to integrate syscalls into C# as some of my post-exploitation tools began transitioning in that direction. This goal became more desirable to me after FuzzySec and The Wover released their BlueHatIL 2020 talk - Staying # and Bringing Covert Injection Tradecraft to .NET.

After some painstaking research, failed trial attempts, long sleepless nights, and a lot of coffee - I finally succeeded in getting syscalls to work in C#. While the technique itself was beneficial to covert operations, the code itself was somewhat cumbersome - you'll understand why later. Overall, the point of this blog post series will be to explore how we can use direct system calls in C# by utilizing unmanaged code to bypass EDR and API hooking. But before we can start writing the code to do that, we must first understand some basic concepts, such as how system calls work, and some .NET internals - specifically managed vs unmanaged code, P/Invoke, and delegates. Understanding these basics will really help us in understanding how and why our C# code works. Alright, enough of my ramblings - let's get into the basics!

Understanding System Calls

In Windows, the process architecture is split between two processor access modes - user mode and kernel mode. The idea behind the implementation of these modes was to protect user applications from accessing and modifying any critical OS data. User applications such as Chrome, Word, etc.
all run in user mode, whereas OS code such as the system services and device drivers runs in kernel mode. Kernel mode specifically refers to a mode of execution in a processor that grants access to all system memory and all CPU instructions. Some x86 and x64 processors differentiate between these modes by using another term known as ring levels. Processors that utilize the ring level privilege model define four privilege levels - otherwise known as rings - to protect system code and data. An example of these ring levels can be seen below. Windows only utilizes two of these rings - Ring 0 for kernel mode and Ring 3 for user mode. Now, during normal processor operations, the processor will switch between these two modes depending on what type of code is running on the processor.

So what's the reason behind this "ring level" of security? Well, when you start a user-mode application, Windows will create a new process for the application and will provide that application with a private virtual address space and a private handle table. This "handle table" is a kernel object that contains handles. Handles are simply abstract reference values to specific system resources, such as a memory region or location, an open file, or a pipe. Their initial goal is to hide the real memory address from the API user, thus allowing the system to carry out certain management functions like reorganizing physical memory. Overall, a handle's job is to track internal structures, such as tokens, processes, threads, and more. An example of a handle can be seen below.

Because an application's virtual address space is private, one application can't alter the data that belongs to another application - unless the process makes part of its private address space available as a shared memory section via file mapping or via the VirtualProtect function, or unless one process has the right to open another process to use cross-process memory functions, such as ReadProcessMemory and WriteProcessMemory. Now, unlike user mode, all the code that runs in kernel mode shares a single virtual address space called system space. This means that kernel-mode drivers are not isolated from other drivers or from the operating system itself. So if a driver accidentally writes to the wrong address space or does something malicious, it can compromise the system or the other drivers. There are protections in place to prevent messing with the OS - like Kernel Patch Protection, aka PatchGuard - but let's not worry about those.

Since the kernel houses most of the internal data structures of the operating system (such as the handle tables), anytime a user-mode application needs to access these data structures or needs to call an internal Windows routine to carry out a privileged operation (such as reading a file), it must first switch from user mode to kernel mode. This is where system calls come into play. For a user application to access these data structures in kernel mode, the process utilizes a special processor instruction called a "syscall". This instruction triggers the transition between the processor access modes and allows the processor to access the system service dispatching code in the kernel. This in turn calls the appropriate internal function in Ntoskrnl.exe or Win32k.sys, which house the kernel and OS application-level logic. An example of this "switch" can be observed in any application.
For example, by utilizing Process Monitor on Notepad, we can view specific read/write operation properties and their call stack. In the image above, we can see the switch from user mode to kernel mode. Notice how the Win32 API CreateFile function call comes directly before the Native API NtCreateFile call. But if we pay close attention we will see something odd. Notice how there are two different NtCreateFile function calls: one from the ntdll.dll module and one from the ntoskrnl.exe module. Why is that? Well, the answer is pretty simple. The ntdll.dll DLL exports the Windows Native API. These native APIs from ntdll are implemented in ntoskrnl - you can view these as being the "kernel APIs". Ntdll specifically supports functions and system service dispatch stubs that are used for executive functions. Simply put, they house the "syscall" logic that allows us to transition our processor from user mode to kernel mode!

So what does this syscall CPU instruction actually look like in ntdll? Well, for us to inspect this, we can utilize WinDbg to disassemble and inspect the call functions in ntdll. Let's begin by starting WinDbg and opening up a process like notepad or cmd. Once done, in the command window, type the following:

x ntdll!NtCreateFile

This simply tells WinDbg that we want to examine (x) the NtCreateFile symbol within the loaded ntdll module. After executing the command, you should see the following output.

00007ffd`7885cb50 ntdll!NtCreateFile (NtCreateFile)

The output provided to us is the memory address of where NtCreateFile is in the loaded process. From here, to view the disassembly, type the following command:

u 00007ffd`7885cb50

This command tells WinDbg that we want to unassemble (u) the instructions at the beginning of the memory range specified. If run correctly, we should now see the following output.

Overall, the NtCreateFile function from ntdll is first responsible for setting up the function call's arguments on the stack. Once done, the function then needs to move its relevant system call number into eax, as seen in the 2nd instruction mov eax, 55. In this case the syscall number for NtCreateFile is 0x55. Each native function has a specific syscall number. Now, these numbers tend to change with every update - so at times it's very hard to keep up with them. But thanks to j00ru from Google Project Zero, who constantly updates his Windows X86-64 System Call Table, you can use that as a reference anytime a new update comes out.

After the syscall number has been moved into eax, the syscall instruction is then called. Here is where the CPU will jump into kernel mode and carry out the specified privileged operation. To do so it will copy the function call's arguments from the user-mode stack into the kernel-mode stack. It then executes the kernel version of the function call, which will be ZwCreateFile. Once finished, the routine is reversed and all return values are returned to the user-mode application. Our syscall is now complete!

Using Direct System Calls

Alright, so we know how system calls work and how they are structured, but now you might be asking yourself... How do we execute these system calls? It's simple really. To directly invoke the system call, we will build the system call using assembly and execute it in our application's memory space! This will allow us to bypass any hooked functions that are being monitored by EDRs or anti-virus.
Of course syscalls can still be monitored, and executing syscalls via C# still gives off a few hints - but let's not worry about that, as it's not in scope for this blog post. For example, if we wanted to write a program that utilizes the NtCreateFile syscall, we could build some simple assembly like so:

mov r10, rcx
mov eax, 0x55   ; NtCreateFile syscall identifier
syscall
ret

Alright, so we have the assembly of our syscall... now what? How do we execute it in C#? Well, in C++ this would be as simple as adding it to a new .asm file, enabling the MASM build dependency, defining the C function prototype of our assembly, and simply initializing the variables and structures needed to invoke the syscall. As easy as that sounds, it's not that simple in C#. Why? Two words - managed code.

Understanding C# and the .NET Framework

Before we dive any deeper into understanding what this "managed code" is and why it's going to cause us headaches, we need to understand what C# is and how it runs on the .NET Framework. Simply put, C# is a type-safe object-oriented language that enables developers to build a variety of secure and robust applications. Its syntax simplifies many of the complexities of C++ and provides powerful features such as nullable types, enumerations, delegates, lambda expressions, and direct memory access. C# also runs on the .NET Framework, which is an integral component of Windows that includes a virtual execution system called the Common Language Runtime, or CLR, and a unified set of class libraries. The CLR is Microsoft's commercial implementation of the Common Language Infrastructure, known as the CLI. Source code written in C# is compiled into an Intermediate Language (IL) that conforms to the CLI specification. The IL code and resources, such as bitmaps and strings, are stored on disk in an executable file called an assembly, typically with an extension of .exe or .dll. When a C# program is executed, the assembly is loaded into the CLR, and the CLR then performs Just-In-Time (JIT) compilation to convert the IL code to native machine instructions. The CLR also provides other services such as automatic garbage collection, exception handling, and resource management.

Code that's executed by the CLR is sometimes referred to as "managed code", in contrast to "unmanaged code", which is compiled directly into native machine code for a specific system. To put it very simply, managed code is just that: code whose execution is managed by a runtime - in this case, the Common Language Runtime. In terms of unmanaged code, it simply relates to C/C++, where the programmer is in charge of pretty much everything. The actual program is, essentially, a binary that the operating system loads into memory and starts. Everything else, from memory management to security considerations, is a burden on the programmer. A good visual example of how the .NET Framework is structured and how it compiles C# to IL and then to machine code can be seen below.

Now, if you actually read all that, then you would have noticed that I mentioned that the CLR provides other services such as "garbage collection". In the CLR, the garbage collector, also known as the GC, serves as the automatic memory manager by essentially... you know, "freeing the garbage" that is your used memory. It also provides benefits such as allocating objects on the managed heap, reclaiming objects, clearing memory, and providing memory safety by preventing known memory corruption issues like use-after-free.
Now, while C# is a great language and provides some amazing features and interoperability with Windows - like in-memory execution - it does have a few caveats and downsides when it comes to coding malware or trying to interact with the system. Some of these issues are:
1. It's easy to disassemble and reverse engineer C# assemblies via tools like dnSpy, all because they are compiled into IL and not native code.
2. It requires .NET to be present on the system for it to execute.
3. It's harder to do anti-debugging tricks in .NET than in native code.
4. It requires more work and code to interoperate (interop) between managed and unmanaged code.

In the case of this blog post, #4 is the one that will cause us the most pain when coding syscalls in C#. Whatever we do in C# is "managed" - so how are we able to efficiently interact with the Windows system and processor? This question is especially important for us since we want to execute assembly code, and unfortunately for us, there is no inline ASM in C# like there is in C++ with the MASM build dependencies. Well, thankfully for us, Microsoft provided a way for us to do that! And it's all thanks to the CLR! Thanks to how the CLR was constructed, it actually allows us to cross the boundary between the managed and unmanaged worlds. This process is known as interoperability, or interop for short. With interop, C# supports pointers and the concept of "unsafe" code for those cases in which direct memory access is critical - that would be us! 😉 Overall this means that we can now do the same things C++ can, and we can also utilize the same Windows API functions... but with some major - I mean... minor headaches and inconveniences... heh. 😅 Of course, it is important to note that once the code passes the boundaries of the runtime, the actual management of the execution is again in the hands of unmanaged code, and thus falls under the same restrictions as it would when we code in C++. Thus we need to be careful about how we allocate, deallocate, and manage memory as well as other objects. So, knowing this, how are we able to enable this interoperability in C#? Well, let me introduce you to the person of the hour - P/Invoke (short for Platform Invoke)!

Understanding Native Interop via P/Invoke

P/Invoke is a technology that allows you to access structs, callbacks, and functions in unmanaged libraries (meaning DLLs and such) from your managed code. Most of the P/Invoke API that allows this interoperability is contained within two namespaces - specifically System and System.Runtime.InteropServices. So let's see a simple example. Let's say you wanted to utilize the MessageBox function in your C# code - which usually you can't call unless you're building a UWP app. For starters, let's create a new .cs file and make sure we include the two P/Invoke namespaces.

using System;
using System.Runtime.InteropServices;

public class Program
{
    public static void Main(string[] args)
    {
        // TODO
    }
}

Now, let's take a quick look at the C MessageBox syntax that we want to use.

int MessageBox(
    HWND    hWnd,
    LPCTSTR lpText,
    LPCTSTR lpCaption,
    UINT    uType
);

Now, for starters, you must know that the data types in C++ do not match those used in C#, meaning that data types such as HWND (handle to a window) and LPCTSTR (long pointer to a constant TCHAR string) are not valid in C#. We'll briefly go over converting these data types for MessageBox now so you get the general idea - but if you want to learn more, I suggest you go read about C# Types and Variables.
For any handle objects related to C++, such as HWND, the C# equivalent of that data type (and of any pointer in C++) is the IntPtr struct, which is a platform-specific type that is used to represent a pointer or a handle. Any string or pointer-to-string data type in C++ can be set to the C# equivalent - which is simply string. And UINT, or unsigned integer, stays the same in C#. Alright, now that we know the different data types, let's go ahead and call the unmanaged MessageBox function in our code. Our code should now look something like this.

using System;
using System.Runtime.InteropServices;

public class Program
{
    [DllImport("user32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    private static extern int MessageBox(IntPtr hWnd, string lpText, string lpCaption, uint uType);

    public static void Main(string[] args)
    {
        // TODO
    }
}

Take note that before we import our unmanaged function, we apply the DllImport attribute. This attribute is crucial to add because it tells the runtime that it should load the unmanaged DLL. The string passed in is the target DLL that we want to load - in this case user32.dll, which houses the function logic of MessageBox. Additionally, we specify which character set to use for marshalling the strings, and we specify that this function calls SetLastError so that the runtime captures that error code and the user can retrieve it via Marshal.GetLastWin32Error() if the function were to fail. Finally, you see that we declare a private and static MessageBox function with the extern keyword. This extern modifier is used to declare a method that is implemented externally. Simply put, this tells the runtime that when you invoke this function, the runtime should find it in the DLL specified in the DllImport attribute - which in our case will be user32.dll. Once we have all that, we can finally go ahead and call the MessageBox function within our main program.

using System;
using System.Runtime.InteropServices;

public class Program
{
    [DllImport("user32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    private static extern int MessageBox(IntPtr hWnd, string lpText, string lpCaption, uint uType);

    public static void Main(string[] args)
    {
        MessageBox(IntPtr.Zero, "Hello from unmanaged code!", "Test!", 0);
    }
}

If done correctly, this should now display a new message box with the title "Test!" and a message of "Hello from unmanaged code!". Awesome, so we just learned how to import and invoke unmanaged code from C#! It's actually pretty simple when you look at it... but don't let that fool you! This was just a simple function - what happens if the function we want to call is a little more complex, such as the CreateFileA function? Let's take a quick look at the C syntax for this function.

HANDLE CreateFileA(
    LPCSTR                lpFileName,
    DWORD                 dwDesiredAccess,
    DWORD                 dwShareMode,
    LPSECURITY_ATTRIBUTES lpSecurityAttributes,
    DWORD                 dwCreationDisposition,
    DWORD                 dwFlagsAndAttributes,
    HANDLE                hTemplateFile
);

Let's look at the dwDesiredAccess parameter, which specifies the access permissions of the file we create using generic values such as GENERIC_READ and GENERIC_WRITE. In C++ we could simply use these values and the system would know what we mean, but not in C#. Looking into the documentation, we see that the generic access rights used for the dwDesiredAccess parameter use an access mask format to specify what privileges we are giving the file.
Now, since this parameter accepts a DWORD, which is a 32-bit unsigned integer, we quickly learn that the GENERIC_* constants are actually flags which map each constant to a specific access mask bit value. In C#, to do the same, we would have to create a new enumeration type with the [Flags] attribute that contains the same constants and values that C++ has, for this function to work properly. Now you might be asking me - where would I get such details? Well, the best resource for you to utilize in this case - and in any case where you have to deal with unmanaged code in .NET - is the PInvoke Wiki. You'll pretty much find anything and everything that you need there. If we were to invoke this unmanaged function in C# and have it work properly, a sample of the code would look something like this:

using System;
using System.Runtime.InteropServices;

public class Program
{
    [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
    public static extern IntPtr CreateFile(
        string lpFileName,
        EFileAccess dwDesiredAccess,
        EFileShare dwShareMode,
        IntPtr lpSecurityAttributes,
        ECreationDisposition dwCreationDisposition,
        EFileAttributes dwFlagsAndAttributes,
        IntPtr hTemplateFile);

    [Flags]
    enum EFileAccess : uint
    {
        Generic_Read    = 0x80000000,
        Generic_Write   = 0x40000000,
        Generic_Execute = 0x20000000,
        Generic_All     = 0x10000000
    }

    public static void Main(string[] args)
    {
        // TODO Code Here for CreateFile
    }
}

Now do you see what I meant when I said that utilizing unmanaged code in C# can be cumbersome and inconvenient? Good, so we're on the same page now 😁 Alright, so we've covered a lot of material already. We understand how system calls work, we know how C# and the .NET Framework function at a lower level, and we now know how to invoke unmanaged code and Win32 APIs from C#. But we're still missing a critical piece of information. What could that be... 🤔 Oh, that's right! Even though we can call Win32 API functions in C#, we still don't know how to execute our "native code" assembly. Well, you know what they say - "If there's a will, there's a way"! And thanks to C#, even though we can't execute inline assembly like we can in C++, we can do something similar thanks to something lovely called delegates!

Understanding Delegates and Native Code Callbacks

Can we just stop for a second and actually admire how cool the CLR really is? I mean, to manage code and to allow interop between the GC and the Windows APIs is actually pretty cool. The runtime is so cool that it also allows communication to flow in both directions, meaning that you can call back into managed code from native functions by using function pointers! Now, the closest thing to a function pointer in managed code is a delegate, which is a type that represents references to methods with a particular parameter list and return type. And this is what is used to allow callbacks from native code into managed code. Simply put, delegates are used to pass methods as arguments to other methods. The use of this feature is similar to how one would go from managed to unmanaged code. A good example of this is given by Microsoft.

using System;
using System.Runtime.InteropServices;

namespace ConsoleApplication1
{
    public static class Program
    {
        // Define a delegate that corresponds to the unmanaged function.
        private delegate bool EnumWindowsProc(IntPtr hwnd, IntPtr lParam);

        // Import user32.dll (containing the function we need) and define
        // the method corresponding to the native function.
[DllImport("user32.dll")] private static extern int EnumWindows(EnumWindowsProc lpEnumFunc, IntPtr lParam); // Define the implementation of the delegate; here, we simply output the window handle. private static bool OutputWindow(IntPtr hwnd, IntPtr lParam) { Console.WriteLine(hwnd.ToInt64()); return true; } public static void Main(string[] args) { // Invoke the method; note the delegate as a first parameter. EnumWindows(OutputWindow, IntPtr.Zero); } } } So this code might look a little complex, but trust me - it’s not! Before we walk though this example, let’s make sure we review the signatures of the unmanaged functions that we need to work with. As you can see, we are importing the native code function EnumWindows which enumerates all top-level windows on the screen by passing the handle to each window, and in turn, passing it to an application-defined callback function. If we take a peek at the C syntax for the function type we will see the following: BOOL EnumWindows( WNDENUMPROC lpEnumFunc, LPARAM lParam ); If we look at the lpEnumFunc parameter in the documentation, we will see that it accepts a pointer to an application-defined callback - which should follow the same structure as the EnumWindowsProc callback function. This callback is simply a placeholder name for the application-defined function. Meaning that we can call it anything we want in the application. If we take a peek at this function C syntax we will see the following. BOOL CALLBACK EnumWindowsProc( _In_ HWND hwnd, _In_ LPARAM lParam ); As you can see this function parameters accept a HWND or pointer to a windows handle, and a LPARAM or Long Pointer. And the return value for this callback is a boolean - either true or false to dictate when enumeration has stopped. Now, if we look back into our code, on line #9, we define our delegate that matches the signature of the callback from unmanaged code. Since we are doing this in C#, we replaced the C++ pointers with IntPtr - which is the the C# equivalent of pointers. On lines #13 and #14 we introduce the EnumWindows function from user32.dll. Next on line #17 - 20 we implement the delegate. This is where we actually tell C# what we want to do with the data that is returned to us from unmanaged code. Simply here we are saying to just print out the returned values to the console. And finally, on line #24 we simply call our imported native method and pass our defined and implemented delegate to handle the return data. Simple! Alright, so this is pretty cool. And I know… you might be asking me right now - “Jack, what’s this have to do with executing our native assembly code in C#? We still don’t know how to accomplish that!” And all I have to say for myself is this meme… There’s a reason why I wanted to teach you about delegates and native code callbacks before we got here, as delegates are a very important part to what we will cover next. Now, we learned that delegates are similar to C++ function pointers, but delegates are fully object-oriented, and unlike C++ pointers to member functions, delegates encapsulate both an object instance and a method. We also know that they allow methods to be passed as parameters and can also be used to define callback methods. Since delegates are so well versed in the data they can accept, there’s something cool that we can do with all this data. 
For example, let’s say we execute a native windows function such as VirtualAlloc which allows us to reserve, commit, or change the state of a region of pages in the virtual address space of the calling process. This function will return to us a base address of the allocated memory region. Let’s say, for this example, that we allocated some… oh you know… shellcode per say 😏- see where I’m going with this? No!? Fine… let me explain. So if we were able to allocate a memory region in our process that contained shellcode and returned that to our delegate, then we can utilize something called type marshaling to transform incoming data types to cross between managed and native code. This means that we can go from an unmanaged function pointer to a delegate! Meaning that we can execute our assembly or byte array shellcode this way! So with this general idea, let’s jump into this a little deeper! Type Marshaling & Unsafe Code and Pointers As stated before, Marshaling is the process of transforming types when they need to cross between managed and native code. Marshaling is needed because the types in the managed and unmanaged code are different as we’ve already seen and demonstrated. By default, the P/Invoke subsystem tries to do type marshaling based on the default behavior. But, for those situations where you need extra control with unmanaged code, you can utilize the Marshal class for things like allocating unmanaged memory, copying unmanaged memory blocks, and converting managed to unmanaged types, as well as other miscellaneous methods used when interacting with unmanaged code. A quick example of how this marshaling works can be seen below. In our case, and for this blog post, the most important Marshal method will be the Marshal.GetDelegateForFunctionPointer method, which allows us to convert an unmanaged function pointer to a delegate of a specified type. Now there are a ton of other types you can marshal to and from, and I highly suggest you read up on them as they are a very integral part of the .NET framework and will come in handy whenever you write red team tools, or even defensive tools if you are a defender. Alright, so we know that we can marshal our memory pointers to delegates - but now the question is, how are we able to create a memory pointer to our assembly data? Well in fact, it’s quite easy. We can do some simple pointer arithmetic to get a memory address of our ASM code. Since C# does not support pointer arithmetic, by default, what we can do is declare a portion of our code to be unsafe. This simply denotes an unsafe context, which is required for any operation involving pointers. Overall, this allows us to carry out pointer operations such as doing pointer dereferencing. Now the only caveat is that to compile unsafe code, you must specify the -unsafe compiler option. So knowing this, let’s go over a quick example. If we wanted to - let’s say - execute the syscall for NtOpenProcess, what we would do is start by writing the assembly into a byte array like so. using System; using System.ComponentModel; using System.Runtime.InteropServices; namespace SharpCall { class Syscalls { static byte[] bNtOpenProcess = { 0x4C, 0x8B, 0xD1, // mov r10, rcx 0xB8, 0x26, 0x00, 0x00, 0x00, // mov eax, 0x26 (NtOpenProcess Syscall) 0x0F, 0x05, // syscall 0xC3 // ret }; } } Once we have our byte array completed for our syscall, we would then proceed to call the unsafe keyword and denote an area of code where unsafe context will occur. 
Within that unsafe context, we can do some pointer arithmetic to initialize a new byte pointer called ptr and set it to the value of syscall, which houses our byte-array assembly. As you will see below, we utilize the fixed statement, which prevents the garbage collector from relocating a movable variable - in our case, the syscall byte array. Without a fixed context, garbage collection could relocate the variable unpredictably and cause errors later during execution. Afterwards, we simply cast the byte array pointer into a C# IntPtr called memoryAddress. Doing this allows us to obtain the memory location of where our syscall byte array is located. From here we can do multiple things: use this memory region in a native API call, pass it to other managed C# functions, or even use it in delegates! An example of what I explained above can be seen below.

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

namespace SharpCall
{
    class Syscalls
    {
        // NtOpenProcess syscall ASM
        static byte[] bNtOpenProcess =
        {
            0x4C, 0x8B, 0xD1,               // mov r10, rcx
            0xB8, 0x26, 0x00, 0x00, 0x00,   // mov eax, 0x26 (NtOpenProcess syscall)
            0x0F, 0x05,                     // syscall
            0xC3                            // ret
        };

        public static NTSTATUS NtOpenProcess(
            // Fill NtOpenProcess parameters
        )
        {
            // Set the bNtOpenProcess byte array to a new byte array called syscall.
            byte[] syscall = bNtOpenProcess;

            // Specify unsafe context.
            unsafe
            {
                // Create a new byte pointer and set its value to our syscall byte array.
                fixed (byte* ptr = syscall)
                {
                    // Cast the byte array pointer into a C# IntPtr called memoryAddress.
                    IntPtr memoryAddress = (IntPtr)ptr;
                }
            }
        }
    }
}

And that about does it! We now know how we can take shellcode from a byte array and execute it within our C# application by using unmanaged code, unsafe contexts, delegates, marshaling and more! I know this was a lot to cover, and honestly it's a little complex at first - so take your time to read this through and make sure you understand the concepts. In our next blog post, we will focus on actually writing the code to execute a valid syscall by utilizing everything that we learned here! In addition to writing the code, we'll also go over some concepts for managing your "tools" code and how we can prepare it for future integration with other tools. Thanks for reading, and stay tuned for Part 2! Source: https://jhalon.github.io/utilizing-syscalls-in-csharp-1/ 1 point
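To round out the walkthrough above, here is a hedged sketch of the step the article defers to Part 2: marshaling the stub's address into something callable via Marshal.GetDelegateForFunctionPointer. It is an illustration rather than the author's final code; the delegate signature is deliberately simplified (no real NtOpenProcess parameter structures), the syscall number is build-specific, and the stub is x64-only. It also copies the bytes into executable memory allocated with VirtualAlloc first, since a managed byte array is not executable.

using System;
using System.Runtime.InteropServices;

namespace SharpCall
{
    class SyscallInvoke
    {
        // Simplified delegate shape for the stub; a real NtOpenProcess delegate would
        // mirror its four parameters (handle out-pointer, access mask, attributes, client ID).
        [UnmanagedFunctionPointer(CallingConvention.StdCall)]
        delegate uint NtOpenProcessStub(IntPtr processHandle, uint desiredAccess,
                                        IntPtr objectAttributes, IntPtr clientId);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr VirtualAlloc(IntPtr lpAddress, UIntPtr dwSize,
                                          uint flAllocationType, uint flProtect);

        const uint MEM_COMMIT = 0x1000, MEM_RESERVE = 0x2000, PAGE_EXECUTE_READWRITE = 0x40;

        static void Main()
        {
            byte[] stub =
            {
                0x4C, 0x8B, 0xD1,               // mov r10, rcx
                0xB8, 0x26, 0x00, 0x00, 0x00,   // mov eax, 0x26 (NtOpenProcess on this particular build)
                0x0F, 0x05,                     // syscall
                0xC3                            // ret
            };

            // The stub must live in executable memory; a managed array is not executable.
            IntPtr mem = VirtualAlloc(IntPtr.Zero, (UIntPtr)stub.Length,
                                      MEM_COMMIT | MEM_RESERVE, PAGE_EXECUTE_READWRITE);
            Marshal.Copy(stub, 0, mem, stub.Length);

            // Marshal the unmanaged function pointer into a callable delegate.
            var ntOpenProcess = Marshal.GetDelegateForFunctionPointer<NtOpenProcessStub>(mem);

            // Calling it with empty arguments simply yields a failure NTSTATUS; the point
            // here is only that the syscall stub executes without touching any hooked ntdll export.
            uint status = ntOpenProcess(IntPtr.Zero, 0, IntPtr.Zero, IntPtr.Zero);
            Console.WriteLine($"NTSTATUS: 0x{status:X8}");
        }
    }
}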
-
Hunter of Default Logins (Web/HTTP) 2020-04-07

We all like them, don't we? Those easy default credentials. Surely we do, but looking for them during penetration tests is not always fun. It's actually hard work! Especially when we have a large environment. How can we check all those different web interfaces? Is there a viable automation? In this article we present our HTTP default login hunter.

Introduction

Checking administrative interfaces for weak and default credentials is a vital part of every VAPT exercise. But doing it manually can quickly become exhausting. The problem with web interfaces is that they are all different, and so developing a universal automation that could do the job across multiple interfaces is very hard. Although there are some solutions for this, they are mostly commercial and the functionality is not even that great. Luckily there is a free and open source solution that can help us.

NNdefaccts alternate dataset

The NNdefaccts dataset made by nnposter is an alternate fingerprint dataset for the Nmap http-default-accounts.nse script. The NNdefaccts dataset can test more than 380 different web interfaces for default logins. For comparison, the latest Nmap 7.80 default dataset only supports 55. Here are some examples of the supported web interfaces: Network devices (3Com, Asus, Cisco, D-Link, F5, Nortel..) Video cameras (AXIS, GeoVision, Hikvision, Sanyo..) Application servers (Apache Tomcat, JBoss EAP..) Monitoring software (Cacti, Nagios, OpenNMS..) Server management (Dell iDRAC, HP iLO..) Web servers (WebLogic, WebSphere..) Printers (Kyocera, Sharp, Xerox..) IP Phones (Cisco, Polycom..) Citrix, NAS4Free, ManageEngine, VMware.. See the following link for a full list: https://github.com/InfosecMatter/http-default-logins/blob/master/list.txt

The usage is quite simple - we simply run the Nmap script with the alternate dataset as a parameter, like this:

nmap --script http-default-accounts --script-args http-default-accounts.fingerprintfile=~/http-default-accounts-fingerprints-nndefaccts.lua -p 80 192.168.1.1

This is already pretty great as it is.

Nmap script limitations

Now, the only caveat with this solution is that the http-default-accounts.nse script works only for web servers running on common web ports such as tcp/80, tcp/443 or similar. This is because the script contains a port rule which matches only common web ports. So what if we find a web server running on a different port - say tcp/9999? Unfortunately the Nmap script will not run because of the port rule.. ..unless we modify the port rule in the Nmap script to match our web server port! And that's exactly where our new tool comes in handy.

Introducing default-http-login-hunter

The default-http-login-hunter tool, written in Bash, is essentially a wrapper around the aforementioned technologies, built to unlock their full potential and to make things easy for us. The tool simply takes a URL as an argument:

default-http-login-hunter.sh <URL>

First it will make a local temporary copy of the http-default-accounts.nse script and modify the port rule so that it matches the web server port given in the URL. Then it will run the Nmap command for us and display the output nicely.
Here’s an example: From the above screenshot we can see that we found a default credentials for Apache Tomcat running on port tcp/9999. Now we could deploy a webshell on it and obtain RCE. But that’s another story. Additional features List of URLs The tool also accepts a list of URLs as an input. So for instance, we could feed it with URLs that we extracted from Nessus scan results using our Nessus CSV parser. The tool will go through all the URLs one by one and check for default logins. Like this: default-http-login-hunter.sh urls.txt Here the tool found a default login to the Cisco IronPort running on port https/9443. Resume-friendly Another useful feature is that it saves all the results in the current working directory. So if it gets accidentally interrupted, it will just continue where it stopped. Like in this example: Here we found some Polycom IP phones logins. Staying up-to-date To make sure that we have the latest NNdefacts dataset, run the update command: default-http-login-hunter.sh update And that’s pretty much it. If you want to see more detailed output, use -v parameter in the command line. You can find the tool in our InfosecMatter Github repository here. Fingerprint contribution I encourage everyone to check out the NNdefacts project and consider contributing with fingerprints that you found during your engagements. Contribution is not hard – you can simply record the login procedure in the Fiddler, Burp or ZAP and send the session file to the author. Please see more information on the fingerprint contribution here. You may find these links useful while hunting for default logins manually: https://cirt.net/passwords https://www.routerpasswords.com/ Conclusion This tool can be of a great help not only while performing internal infrastructure penetration tests, but everywhere where we need to test a web interface for default credentials. Its simple design and smart features make it also very easy to use. Hope you will find it useful too! Thanks Lastly, I want to thank nnposter for his awesome NNdefacts dataset without which this would not be possible and also for his contributions to the Nmap project. Thank you nnposter! Sursa: https://www.infosecmatter.com/hunter-of-default-logins-web-http/1 point
-
Give them a deadline to fix the problem and tell them that if they don't fix it, you'll make use of their vulnerability 😂 :joking: 0 points
-
3,4,5,6 - I taught ICT and I sat with my head buried in the grade book just to earn money for food; if you're arrogant and feel the need to comment on my opinion, that's strictly your problem. Do you think I don't read what I post? On topic: what I posted above (SCOTT KELLY'S) is sourced from National Geographic, and you downvote it, which shows that you're a bunch of fools contradicting yourselves. In any case, none of them last long - not anthrax, not AH1N1, not Ebola, and so on. 0 points